
Kubernetes Study Notes: Manually Building k8s 1.10.4, Part 1 - A One-Click Deployment Script | 二丫讲梵

Source: http://www.eryajf.net/2231.html

1. Overview.

This script exists today entirely thanks to the excellent https://github.com/opsnull/follow-me-install-kubernetes-cluster project. I had thought about writing a deployment script once I became familiar with k8s, but I never had a good approach until last week, when the deployment flow of that open-source project gave me a real moment of clarity. While following the project to study, I already planned to write this little deployment script.

So this script is essentially a stitched-together version of that project's workflow; I only did a little tidying and debugging on top of it. Once again, my sincere thanks to its author!

That said, putting the script together turned out to be not so simple after all; the point of that effort, though, is to make every future deployment simpler.

A quick description of the servers I used:

All servers run CentOS 7.3; the script has not been tested on other OS versions.

Host            Hostname     Components
192.168.111.3   kube-node1   Kubernetes 1.10.4, Docker 18.03.1-ce, Etcd 3.3.7, Flanneld 0.10.0, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy
192.168.111.4   kube-node2   same as above
192.168.111.5   kube-node3   same as above

2. Preparation.

First upload the whole deployment bundle to the deployment server and unpack it, then complete the following preparation.

The script code has been uploaded to GitHub for reference:

magic-of-kubernetes-scripts

The full installation package has been uploaded to Baidu Cloud and can be downloaded:

  File name: 一键安装脚本所需包 (the packages required by the one-click install script)
  File size: roughly 800-900 MB
  Download notice: most files on this site come from the Internet and are provided for study and research only; they must not be used commercially. If there is a copyright issue, please contact the blogger.
  Download link: https://pan.baidu.com/s/1JbICafwEdIwHnsDlGvPIMw

The extraction code is revealed on the original post after leaving a comment there.

1. Modify the following files.

1. config/environment.sh                                      # change the IPs to those of your own machines
2. config/Kcsh/hosts                                          # change the IPs to those of your own machines
3. config/Ketcd/etcd-csr.json                                 # change the IPs to those of your own machines
4. config/Kmaster/Kha/haproxy.cfg                             # change the IPs to those of your own machines
5. config/Kmaster/Kapi/kubernetes-csr.json                    # change the IPs to those of your own machines
6. config/Kmaster/Kmanage/kube-controller-manager-csr.json    # change the IPs to those of your own machines
7. config/Kmaster/Kscheduler/kube-scheduler-csr.json          # change the IPs to those of your own machines
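If the config files in the bundle still carry the sample addresses, a batch edit can save some typing. The following is only a sketch under that assumption: OLD_IPS holds whatever addresses are currently written in the configs and NEW_IPS your real machines (both arrays below are illustrative values, not part of the original scripts):

# Sketch: replace the sample IPs with your own across the files listed above.
# OLD_IPS must match what is actually in your copy of the configs; note that
# sed treats the dots as regex wildcards, which is harmless for IP addresses.
OLD_IPS=(192.168.111.3 192.168.111.4 192.168.111.5)
NEW_IPS=(10.0.0.11 10.0.0.12 10.0.0.13)
FILES=(config/environment.sh config/Kcsh/hosts config/Ketcd/etcd-csr.json
       config/Kmaster/Kha/haproxy.cfg config/Kmaster/Kapi/kubernetes-csr.json
       config/Kmaster/Kmanage/kube-controller-manager-csr.json
       config/Kmaster/Kscheduler/kube-scheduler-csr.json)
for i in "${!OLD_IPS[@]}"; do
    sed -i "s/${OLD_IPS[$i]}/${NEW_IPS[$i]}/g" "${FILES[@]}"
done

Afterwards, grep the config directory for the old addresses to confirm nothing was missed.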

2. Base configuration.

All of the following commands are run on the kube-node1 host.

Note: follow these steps exactly as written; otherwise the deployment script below may fail to run to completion.

1. ssh-keygen
2. ssh-copy-id 192.168.111.3
3. ssh-copy-id 192.168.111.4
4. ssh-copy-id 192.168.111.5
5. scp config/Kcsh/hosts root@192.168.111.3:/etc/hosts
6. scp config/Kcsh/hosts root@192.168.111.4:/etc/hosts
7. scp config/Kcsh/hosts root@192.168.111.5:/etc/hosts
8. ssh root@kube-node1 "hostnamectl set-hostname kube-node1"
9. ssh root@kube-node2 "hostnamectl set-hostname kube-node2"
10. ssh root@kube-node3 "hostnamectl set-hostname kube-node3"
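Before moving on, it may be worth confirming that the passwordless SSH and hostname steps actually took effect. A small check of my own (not part of the original steps):

# Each line should print the expected hostname without prompting for a
# password; BatchMode makes ssh fail fast instead of prompting.
for ip in 192.168.111.3 192.168.111.4 192.168.111.5; do
    ssh -o BatchMode=yes root@${ip} hostname
done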

3. Deployment.

Deployment itself is very simple: just run the magic.sh script.

A few points deserve a brief note (see also the logging tip after this list):

  • 1. Before starting the deployment, carefully check that every configuration matches your environment; adjust anything that does not.
  • 2. If the deployment stalls, or exits before finishing, troubleshoot according to the stage it stopped at, then re-run the script; it will resume from where it left off.
  • 3. If you have suggestions about the script's shortcomings, feel free to raise them; let's maintain and improve it together!
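One small suggestion of my own: keep a log of each run, so that a failed stage is easy to locate before re-running:

# tee keeps the full deployment output for later troubleshooting.
bash magic.sh 2>&1 | tee -a deploy-$(date +%F).log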

4. Basic verification.

After deployment completes, the following checks give an initial view of the cluster's usability:

1. Check that all services started normally.

#!/bin/bash
#
# author: eryajf
# blog:   www.eryajf.net
# time:   2018-11
#
set -e
source /opt/k8s/bin/environment.sh

## set color ##
echoRed()    { echo $'\e[0;31m'"$1"$'\e[0m'; }
echoGreen()  { echo $'\e[0;32m'"$1"$'\e[0m'; }
echoYellow() { echo $'\e[0;33m'"$1"$'\e[0m'; }
## set color ##

# Check every core service on every node; `grep Active` keeps only
# the status line of each unit.
for node_ip in ${NODE_IPS[@]}
do
    echoGreen ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd | grep Active"
    ssh root@${node_ip} "systemctl status flanneld | grep Active"
    ssh root@${node_ip} "systemctl status haproxy | grep Active"
    ssh root@${node_ip} "systemctl status keepalived | grep Active"
    ssh root@${node_ip} "systemctl status kube-apiserver | grep Active"
    ssh root@${node_ip} "systemctl status kube-controller-manager | grep Active"
    ssh root@${node_ip} "systemctl status kube-scheduler | grep Active"
    ssh root@${node_ip} "systemctl status docker | grep Active"
    ssh root@${node_ip} "systemctl status kubelet | grep Active"
    ssh root@${node_ip} "systemctl status kube-proxy | grep Active"
done
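A terser variant of the same check (my sketch, using systemctl is-active instead of grepping status output): a healthy cluster prints nothing, and any dead unit is reported explicitly.

#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}; do
    for svc in etcd flanneld haproxy keepalived kube-apiserver \
               kube-controller-manager kube-scheduler docker kubelet kube-proxy; do
        # is-active exits non-zero for anything but "active".
        ssh root@${node_ip} "systemctl is-active --quiet ${svc}" \
            || echo "${node_ip}: ${svc} is NOT running"
    done
done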

2. Check the availability of individual services.

1. Verify etcd cluster availability.

cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
# Ask every etcd member to report its own health over TLS.
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
        --endpoints=https://${node_ip}:2379 \
        --cacert=/etc/kubernetes/cert/ca.pem \
        --cert=/etc/etcd/cert/etcd.pem \
        --key=/etc/etcd/cert/etcd-key.pem endpoint health
done
EOF
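Run it with bash magic.sh. Each member should report healthy; the wording below is typical of etcdctl 3.x but may vary slightly by version:

>>> 192.168.111.3
https://192.168.111.3:2379 is healthy: successfully committed proposal: took = ...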

2. Verify the flannel network.

List the allocated Pod subnets:

source /opt/k8s/bin/environment.sh
etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --ca-file=/etc/kubernetes/cert/ca.pem \
    --cert-file=/etc/flanneld/cert/flanneld.pem \
    --key-file=/etc/flanneld/cert/flanneld-key.pem \
    ls ${FLANNEL_ETCD_PREFIX}/subnets

Sample output:

/kubernetes/network/subnets/172.30.84.0-24
/kubernetes/network/subnets/172.30.8.0-24
/kubernetes/network/subnets/172.30.29.0-24

Verify that the nodes can reach each other across the Pod subnets:

Note: replace the subnet addresses below with your own.

cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "ping -c 1 172.30.8.0"
    ssh ${node_ip} "ping -c 1 172.30.29.0"
    ssh ${node_ip} "ping -c 1 172.30.84.0"
done
EOF
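If you would rather not hardcode the subnets, the list can be pulled from etcd and pinged in one pass. A sketch, assuming the same certificate paths as the listing command above:

# Derive each flannel subnet's first address from etcd, then ping it
# from every node. Keys look like: .../subnets/172.30.84.0-24
source /opt/k8s/bin/environment.sh
subnets=$(etcdctl --endpoints=${ETCD_ENDPOINTS} \
    --ca-file=/etc/kubernetes/cert/ca.pem \
    --cert-file=/etc/flanneld/cert/flanneld.pem \
    --key-file=/etc/flanneld/cert/flanneld-key.pem \
    ls ${FLANNEL_ETCD_PREFIX}/subnets | awk -F'/' '{print $NF}' | sed 's/-.*//')
for node_ip in ${NODE_IPS[@]}; do
    echo ">>> ${node_ip}"
    for s in ${subnets}; do
        ssh ${node_ip} "ping -c 1 ${s}"
    done
done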

3. Verify the HA components.

Find which node holds the VIP, and make sure the VIP answers ping:

cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "/usr/sbin/ip addr show ${VIP_IF}"
    ssh ${node_ip} "ping -c 1 ${MASTER_VIP}"
done
EOF
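To see at a glance which node currently holds the VIP, a one-loop sketch of my own (same environment file assumed):

# Print only the node whose ${VIP_IF} interface carries the VIP.
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}; do
    ssh ${node_ip} "/usr/sbin/ip addr show ${VIP_IF}" \
        | grep -qw "${MASTER_VIP}" && echo "VIP ${MASTER_VIP} is on ${node_ip}"
done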

4. High-availability failover test.

Check the current leader:

$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node1_444fbc06-f3d8-11e8-8ca8-0050568f514f","leaseDurationSeconds":15,"acquireTime":"2018-11-29T13:11:21Z","renewTime":"2018-11-29T13:48:10Z","leaderTransitions":0}'
  creationTimestamp: 2018-11-29T13:11:21Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "3134"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 4452bff1-f3d8-11e8-a5a6-0050568fef9b

As shown, the current leader is kube-node1.

Now stop kube-controller-manager on kube-node1:

$ systemctl stop kube-controller-manager
$ systemctl status kube-controller-manager | grep Active
   Active: inactive (dead) since Sat 2018-11-24 00:52:53 CST; 44s ago

About a minute later, check the leader again:

$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node3_45525ae6-f3d8-11e8-a2b8-0050568fbcaa","leaseDurationSeconds":15,"acquireTime":"2018-11-29T13:49:28Z","renewTime":"2018-11-29T13:49:28Z","leaderTransitions":1}'
  creationTimestamp: 2018-11-29T13:11:21Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "3227"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 4452bff1-f3d8-11e8-a5a6-0050568fef9b

As you can see, leadership has automatically migrated to kube-node3.
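If you only care about who holds the lease, there is no need to read the whole object; a quick filter (my sketch):

# Show just the holderIdentity field of the leader annotation.
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml \
    | grep -o '"holderIdentity":"[^"]*"'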

5. Verify kube-proxy.

Check the ipvs routing rules; the cluster Service address 10.254.0.1:443 should be round-robin balanced across the three kube-apiservers:

cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
done
EOF
$ bash magic.sh
>>> 192.168.111.120
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.111.120:6443         Masq    1      0          0
  -> 192.168.111.121:6443         Masq    1      0          0
  -> 192.168.111.122:6443         Masq    1      0          0
>>> 192.168.111.121
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.111.120:6443         Masq    1      0          0
  -> 192.168.111.121:6443         Masq    1      0          0
  -> 192.168.111.122:6443         Masq    1      0          0
>>> 192.168.111.122
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.111.120:6443         Masq    1      0          0
  -> 192.168.111.121:6443         Masq    1      0          0
  -> 192.168.111.122:6443         Masq    1      0          0

(Note: this sample output was captured on a cluster whose node IPs were 192.168.111.120-122; on the example hosts used in this article you would see 192.168.111.3-5 instead.)
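The virtual server 10.254.0.1:443 in the output is the cluster's default kubernetes Service. As a quick functional probe (my sketch), an anonymous request through it should reach an apiserver and come back with a 401/403 JSON error body rather than timing out:

# -k skips TLS verification; any JSON error response proves that ipvs
# actually forwarded the packet to a kube-apiserver backend.
curl -sk https://10.254.0.1:443/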

6. Deploy a test application.

Check the cluster nodes:

$ kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
kube-node1   Ready     <none>    45m       v1.10.4
kube-node2   Ready     <none>    45m       v1.10.4
kube-node3   Ready     <none>    45m       v1.10.4

Create the test application:

cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

Before applying the manifest, you can first pull the image it references onto each node.
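A possible pre-pull loop over all nodes (my sketch, reusing the NODE_IPS list from the environment file):

# Pull the image everywhere first so the DaemonSet Pods start quickly.
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}; do
    ssh root@${node_ip} "docker pull nginx:1.7.9"
done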

$ kubectl create -f nginx-ds.yml
service "nginx-ds" created
daemonset.extensions "nginx-ds" created

Check Pod IP connectivity from every Node:

$ kubectl get pods -o wide | grep nginx-ds
nginx-ds-78lqz   1/1   Running   0   2m   172.30.87.2   kube-node3
nginx-ds-j45zf   1/1   Running   0   2m   172.30.99.2   kube-node2
nginx-ds-xhttt   1/1   Running   0   2m   172.30.55.2   kube-node1

As shown above, the nginx-ds Pod IPs are 172.30.87.2, 172.30.99.2, and 172.30.55.2. Ping these three IPs from every Node to check connectivity:

cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "ping -c 1 172.30.87.2"
    ssh ${node_ip} "ping -c 1 172.30.99.2"
    ssh ${node_ip} "ping -c 1 172.30.55.2"
done
EOF

Check the Service IP and port reachability:

$ kubectl get svc | grep nginx-ds
nginx-ds   NodePort   10.254.110.153   <none>   80:8781/TCP   6h

curl the Service IP from every Node:

cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    # Service IP taken from the `kubectl get svc` output above.
    ssh ${node_ip} "curl 10.254.110.153"
done
EOF

Check the Service's NodePort reachability:

cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    # NodePort 8781 taken from the `kubectl get svc` output above.
    ssh ${node_ip} "curl ${node_ip}:8781"
done
EOF
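Run it with bash magic.sh; if the NodePort is reachable, every node should answer with the default nginx welcome page (the HTML containing "Welcome to nginx!").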
