
Setting up K8s on LinuxONE, based on RHEL7 (Part 1)

source link: https://blog.51cto.com/shyln/2483115


Kubernetes is red-hot technology right now, and it has become the de facto standard for open-source PaaS management platforms. Most existing articles build Kubernetes on x86; below, I walk through setting up open-source Kubernetes on LinuxONE.
There are two mainstream ways to set up a K8s platform:

  1. Building from binaries: stepping through the build by hand deepens your understanding of each K8s service.
  2. kubeadm, the automated deployment tool officially recommended.
    This article uses the officially recommended kubeadm approach. kubeadm runs K8s's own services as K8s pods; the remaining base services run as systemd services.
    Components installed on the master node:
    docker, kubelet, kubeadm: run as local systemd services
    kube-proxy: a pod dynamically managed by k8s
    api-server, kube-controller, etcd: hosted as pods
    Components on the node:
    docker, kubelet: run as local systemd services
    kube-proxy: a pod dynamically managed by k8s
    flannel: a pod dynamically managed by k8s

    1. Environment
    The environment can be virtual machines or an LPAR; here I used virtual machines on OpenStack, each sized 4 vCPU / 10 GB RAM / 50 GB disk.

OS version                                   IP address      Hostname      K8s version
Red Hat Enterprise Linux Server release 7.4 172.16.35.141 rhel7-master 1.17.4
Red Hat Enterprise Linux Server release 7.4 172.16.35.138 rhel7-node-1 1.17.4

Environment preparation (run on both master and node)

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0  && sysctl -w  net.ipv4.ip_forward=1

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
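The `sed` one-liner above rewrites /etc/selinux/config in place. If you want to verify the expression before touching the real file, a dry run on a scratch copy looks like this (the two config lines in the heredoc are a minimal mock, not the full file):

```shell
#!/bin/sh
# Dry-run the SELinux edit on a scratch copy instead of /etc/selinux/config.
tmpcfg=$(mktemp)
cat > "$tmpcfg" <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
# Same expression as used on the real file.
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$tmpcfg"
grep '^SELINUX=' "$tmpcfg"   # prints SELINUX=permissive
rm -f "$tmpcfg"
```

Note that the config change only applies on the next boot; `setenforce 0` covers the running system, which is why the article runs both.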

[root@rhel7-master ~]# yum install  rng-tools -y
[root@rhel7-master ~]# systemctl enable rngd;systemctl start rngd

2. Install the Docker package

[root@rhel7-master tmp]# wget   ftp://ftp.unicamp.br/pub/linuxpatch/s390x/redhat/rhel7.3/docker-17.05.0-ce-rhel7.3-20170523.tar.gz

[root@rhel7-master tmp]# tar -xvf docker-17.05.0-ce-rhel7.3-20170523.tar.gz
[root@rhel7-master tmp]# cd docker-17.05.0-ce-rhel7.3-20170523
[root@rhel7-master tmp]# cp docker* /usr/local/bin
[root@rhel7-master docker-17.05.0-ce-rhel7.3-20170523]# cat > /etc/systemd/system/docker.service << EOF
[Unit]
Description=docker
[Service]
User=root
#ExecStart=/usr/bin/docker daemon -s overlay
ExecStart=/usr/local/bin/dockerd
EnvironmentFile=-/etc/sysconfig/docker
[Install]
WantedBy=multi-user.target
EOF
[root@rhel7-master docker-17.05.0-ce-rhel7.3-20170523]# cat > /etc/sysconfig/docker <<EOF
OPTIONS="daemon -s overlay"
EOF
[root@rhel7-master docker-17.05.0-ce-rhel7.3-20170523]#
Start the service
systemctl daemon-reload
systemctl restart docker
[root@rhel7-master tmp]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@rhel7-master tmp]#
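Because Docker is installed by hand here rather than from an RPM, a typo in the hand-written unit file fails silently until `systemctl restart docker`. A small sketch that writes the same unit to a scratch path and greps the directives that matter (the scratch path is only for the dry run):

```shell
#!/bin/sh
# Check the key directives of the hand-written unit before installing it
# under /etc/systemd/system and running `systemctl daemon-reload`.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Unit]
Description=docker
[Service]
User=root
ExecStart=/usr/local/bin/dockerd
EnvironmentFile=-/etc/sysconfig/docker
[Install]
WantedBy=multi-user.target
EOF
grep -q '^ExecStart=/usr/local/bin/dockerd$' "$unit" || echo "ExecStart wrong"
grep -q '^WantedBy=multi-user.target$' "$unit"      || echo "WantedBy wrong"
rm -f "$unit"
```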

3. Install the kubelet, kubeadm, and related packages
Add the yum repo (run on both master and node)

[root@rhel7-master ~]# cat > /etc/yum.repos.d/os.repo <<EOF
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-s390x/
gpgcheck=0
enabled=1
[clefos1]
name=cle1
baseurl=http://download.sinenomine.net/clefos/7.6/
gpgcheck=0
enabled=1
EOF
[root@rhel7-master ~]#

Check which package versions the current repos provide

yum list kubelet kubeadm kubectl  --showduplicates|sort -r
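When the repo carries many builds, the newest 1.17.x can be picked out of the `--showduplicates` output mechanically. This is a sketch against simulated output; the three heredoc lines are made up for illustration, standing in for the columns the real command prints:

```shell
#!/bin/sh
# Pick the newest 1.17.x build out of (simulated) `yum list --showduplicates` output.
latest=$(awk '$2 ~ /^1\.17\./ {print $2}' <<'EOF' | sort -V | tail -n 1
kubelet.s390x  1.16.8-0   k8s
kubelet.s390x  1.17.3-0   k8s
kubelet.s390x  1.17.4-0   k8s
EOF
)
echo "$latest"   # prints 1.17.4-0
```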

Now install the 1.17.4 packages

[root@rhel-master ~]# yum install kubeadm-1.17.4  kubelet-1.17.4 kubectl-1.17.4 -y
[root@rhel7-master ~]# systemctl enable --now kubelet

4. Initialize the environment with kubeadm
Before doing so, complete the following preparation yourself.

Initialization prerequisites (on both master and node):
1. Hostname-based communication between nodes
2. Time synchronization
3. Firewall disabled
4. swapoff -a && sysctl -w vm.swappiness=0 && sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
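A quick way to confirm that the step-4 kernel settings actually took effect is to read them back from /proc. This is a tolerant sketch; "missing" for the bridge key usually just means the br_netfilter module is not loaded yet:

```shell
#!/bin/sh
# Read back the kernel settings required above; tolerate missing keys.
for key in vm/swappiness net/ipv4/ip_forward net/bridge/bridge-nf-call-iptables; do
  f="/proc/sys/$key"
  if [ -r "$f" ]; then
    echo "$key=$(cat "$f")"
  else
    echo "$key=missing"
  fi
done
```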

Check which base Docker images kubeadm needs

[root@rhel7-master ~]# kubeadm  config images list
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

The required images are now listed. Due to factors that cannot be discussed, we cannot reach k8s.gcr.io directly,
so we have to download these images ourselves. I have already pushed them to my Docker Hub; feel free to pull them:

 docker pull  erickshi/kube-apiserver-s390x:v1.17.4
 docker pull  erickshi/kube-scheduler-s390x:v1.17.4
 docker pull  erickshi/kube-controller-manager-s390x:v1.17.4 
 docker pull  erickshi/pause-s390x:3.1 
 docker pull  erickshi/coredns:s390x-1.6.5
 docker pull  erickshi/etcd:3.4.3-0
 docker pull  erickshi/pause:3.1

After downloading, retag the images to the exact names listed above, because those are the names kubeadm will try to pull:

 docker tag erickshi/kube-apiserver-s390x:v1.17.4 k8s.gcr.io/kube-apiserver:v1.17.4
 docker tag erickshi/kube-scheduler-s390x:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4
 docker tag erickshi/kube-controller-manager-s390x:v1.17.4 k8s.gcr.io/kube-controller-manager:v1.17.4
docker tag erickshi/pause-s390x:3.1 k8s.gcr.io/pause:3.1
docker tag erickshi/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag erickshi/coredns:s390x-1.6.5 k8s.gcr.io/coredns:1.6.5
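The repeated `docker tag` lines are easy to typo, so they can be driven from one mapping table instead. The loop below is a dry run that only prints the commands; drop the `echo` to execute them (the mapping is abbreviated to three images for illustration):

```shell
#!/bin/sh
# Dry run: print the retag commands so the mapping can be reviewed first;
# drop `echo` to actually run `docker tag`.
while read -r src dst; do
  echo docker tag "$src" "$dst"
done <<'EOF'
erickshi/kube-apiserver-s390x:v1.17.4 k8s.gcr.io/kube-apiserver:v1.17.4
erickshi/kube-scheduler-s390x:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4
erickshi/pause-s390x:3.1 k8s.gcr.io/pause:3.1
EOF
```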

In addition, we also need the flannel image:

docker pull erickshi/flannel:v0.12.0-s390x
docker tag erickshi/flannel:v0.12.0-s390x   k8s.gcr.io/flannel:v0.12.0-s390x

Alternatively, download the images from Baidu Netdisk and import them with docker load:
Link: https://pan.baidu.com/s/1E5YLM8LhPvdo1mlSsNPVdg  Password: vfis

Initialize the cluster

[root@rhel-master ~]# kubeadm  init --pod-network-cidr=10.244.0.0/16
W0327 02:32:15.413161    6817 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on [::1]:53: read udp [::1]:32972->[::1]:53: read: connection refused
W0327 02:32:15.414720    6817 version.go:102] falling back to the local client version: v1.17.4
W0327 02:32:15.414805    6817 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0327 02:32:15.414811    6817 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.05.0-ce. Latest validated version: 19.03
    [WARNING Hostname]: hostname "rhel-master.novalocal" could not be reached
    [WARNING Hostname]: hostname "rhel-master.novalocal": lookup rhel-master.novalocal on [::1]:53: read udp [::1]:33466->[::1]:53: read: connection refused
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [rhel-master.novalocal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.35.141]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [rhel-master.novalocal localhost] and IPs [172.16.35.141 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [rhel-master.novalocal localhost] and IPs [172.16.35.141 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0327 02:32:22.545271    6817 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0327 02:32:22.546219    6817 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.501743 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node rhel-master.novalocal as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node rhel-master.novalocal as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ckun8n.l8adw68yhpcsdmdu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.35.141:6443 --token ckun8n.l8adw68yhpcsdmdu \
    --discovery-token-ca-cert-hash sha256:ea5a0282fa2582d6b10d4ea29a9b76318d1f023109248172e0820531ac1bef5e
[root@rhel-master ~]#
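If this join command is lost, a fresh one can be printed on the master with `kubeadm token create --print-join-command`; the ca-cert-hash part can also be recomputed by hand from the CA certificate. The function below is a sketch; it assumes the standard kubeadm CA path /etc/kubernetes/pki/ca.crt:

```shell
#!/bin/sh
# sha256 over the CA public key in DER form, the value kubeadm expects in
# `--discovery-token-ca-cert-hash sha256:<hash>`.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | sed 's/^.* //'
}
# On the master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```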

Copy the admin kubeconfig as shown above, then check the current nodes and pods

[root@rhel-master ~]# kubectl get node,pod --all-namespaces
NAME                         STATUS     ROLES    AGE     VERSION
node/rhel-master.novalocal   NotReady   master   2m59s   v1.17.4

NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-6955765f44-cbzss                        0/1     Pending   0          2m41s
kube-system   pod/coredns-6955765f44-dxjfb                        0/1     Pending   0          2m41s
kube-system   pod/etcd-rhel-master.novalocal                      1/1     Running   0          2m56s
kube-system   pod/kube-apiserver-rhel-master.novalocal            1/1     Running   0          2m56s
kube-system   pod/kube-controller-manager-rhel-master.novalocal   1/1     Running   0          2m56s
kube-system   pod/kube-proxy-6nmhq                                1/1     Running   0          2m41s
kube-system   pod/kube-scheduler-rhel-master.novalocal            1/1     Running   0          2m56s
[root@rhel-master ~]#

Install flannel

    [root@rhel-master ~]# kubectl  apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check the nodes and pods again

[root@rhel-master ~]#  kubectl get node,pod --all-namespaces
NAME                         STATUS   ROLES    AGE     VERSION
node/rhel-master.novalocal   Ready    master   9m21s   v1.17.4

NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-6955765f44-cbzss                        1/1     Running   0          9m3s
kube-system   pod/coredns-6955765f44-dxjfb                        1/1     Running   0          9m3s
kube-system   pod/etcd-rhel-master.novalocal                      1/1     Running   0          9m18s
kube-system   pod/kube-apiserver-rhel-master.novalocal            1/1     Running   0          9m18s
kube-system   pod/kube-controller-manager-rhel-master.novalocal   1/1     Running   0          9m18s
kube-system   pod/kube-flannel-ds-s390x-zv6xl                     1/1     Running   0          5m54s
kube-system   pod/kube-proxy-6nmhq                                1/1     Running   0          9m3s
kube-system   pod/kube-scheduler-rhel-master.novalocal            1/1     Running   0          9m18s
[root@rhel-master ~]#

The node is now in the Ready state, and all kube-system pods are up. That wraps up the master installation for now; next, install the node.

Install the node
Prerequisites:
see the environment preparation above.
Now join it to the master:

    kubeadm join 172.16.35.141:6443 --token ckun8n.l8adw68yhpcsdmdu \
    --discovery-token-ca-cert-hash sha256:ea5a0282fa2582d6b10d4ea29a9b76318d1f023109248172e0820531ac1bef5e
        W0330 03:22:57.551161    2546 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.05.0-ce. Latest validated version: 19.03
    [WARNING Hostname]: hostname "rhel-node-1.novalocal" could not be reached
    [WARNING Hostname]: hostname "rhel-node-1.novalocal": lookup rhel-node-1.novalocal on [::1]:53: read udp [::1]:48786->[::1]:53: read: connection refused
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@rhel-node-1 ~]#

Check the nodes again

[root@rhel-master ~]# kubectl  get node
NAME                    STATUS     ROLES    AGE     VERSION
rhel-master.novalocal   Ready      master   8m47s   v1.17.4
rhel-node-1.novalocal   NotReady   <none>   8m7s    v1.17.4

The node-1 network is not Ready yet; check the pod status:

[root@rhel-master ~]# kubectl  get pod --all-namespaces -owide
NAMESPACE     NAME                                            READY   STATUS              RESTARTS   AGE     IP              NODE                    NOMINATED NODE   READINESS GATES
kube-system   coredns-6955765f44-9j5mc                        1/1     Running             0          9m14s   10.244.0.3      rhel-master.novalocal   <none>           <none>
kube-system   coredns-6955765f44-sjsjs                        1/1     Running             0          9m14s   10.244.0.2      rhel-master.novalocal   <none>           <none>
kube-system   etcd-rhel-master.novalocal                      1/1     Running             0          9m31s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-apiserver-rhel-master.novalocal            1/1     Running             0          9m31s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-controller-manager-rhel-master.novalocal   1/1     Running             0          9m31s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-flannel-ds-s390x-ftz9h                     1/1     Running             0          8m19s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-flannel-ds-s390x-nl5q4                     0/1     Init:ErrImagePull   0          6m37s   172.16.35.138   rhel-node-1.novalocal   <none>           <none>
kube-system   kube-proxy-5vtcq                                1/1     Running             0          9m14s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-proxy-6qfc6                                1/1     Running             0          8m54s   172.16.35.138   rhel-node-1.novalocal   <none>           <none>
kube-system   kube-scheduler-rhel-master.novalocal            1/1     Running             0          9m31s   172.16.35.141   rhel-master.novalocal   <none>           <none>
[root@rhel-master ~]#

The flannel pod on rhel-node-1 is not ready because of an image pull failure, so I load the image manually:

[root@rhel-node-1 ~]# docker load < flannelv0.12.0-s390x.tar
1f106b41b4d6: Loading layer  5.916MB/5.916MB
271ca11ef489: Loading layer  3.651MB/3.651MB
fbd88a276dca: Loading layer  10.77MB/10.77MB
3b7ae8a9c323: Loading layer  2.332MB/2.332MB
4c4bfa1b47e6: Loading layer  35.23MB/35.23MB
b67de7789e55: Loading layer   5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-s390x
[root@rhel-node-1 ~]#

Check the pod status again

[root@rhel-master ~]# kubectl  get pod --all-namespaces -owide
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE     IP              NODE                    NOMINATED NODE   READINESS GATES
kube-system   coredns-6955765f44-9j5mc                        1/1     Running   0          10m     10.244.0.3      rhel-master.novalocal   <none>           <none>
kube-system   coredns-6955765f44-sjsjs                        1/1     Running   0          10m     10.244.0.2      rhel-master.novalocal   <none>           <none>
kube-system   etcd-rhel-master.novalocal                      1/1     Running   0          11m     172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-apiserver-rhel-master.novalocal            1/1     Running   0          11m     172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-controller-manager-rhel-master.novalocal   1/1     Running   0          11m     172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-flannel-ds-s390x-ftz9h                     1/1     Running   0          9m53s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-flannel-ds-s390x-nl5q4                     1/1     Running   0          8m11s   172.16.35.138   rhel-node-1.novalocal   <none>           <none>
kube-system   kube-proxy-5vtcq                                1/1     Running   0          10m     172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-proxy-6qfc6                                1/1     Running   0          10m     172.16.35.138   rhel-node-1.novalocal   <none>           <none>
kube-system   kube-scheduler-rhel-master.novalocal            1/1     Running   0          11m     172.16.35.141   rhel-master.novalocal   <none>           <none>
[root@rhel-master ~]#

The core k8s components are now fully deployed!

