
Kubernetes Learning Notes: Manually Building k8s 1.10.4, Deploying the kube-proxy Component

Source: http://www.eryajf.net/2211.html

kube-proxy runs on every worker node. It watches the apiserver for changes to Services and Endpoints, and creates routing rules to load-balance service traffic.

This document describes how to deploy kube-proxy in ipvs mode.

1. Create the kube-proxy certificate

Create a certificate signing request:

    cat > kube-proxy-csr.json <<EOF
    {
      "CN": "system:kube-proxy",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "4Paradigm"
        }
      ]
    }
    EOF
  • CN: sets the certificate's User to system:kube-proxy;
  • the predefined RoleBinding system:node-proxier binds User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver proxy-related APIs;
  • kube-proxy uses this certificate only as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

    cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
      -ca-key=/etc/kubernetes/cert/ca-key.pem \
      -config=/etc/kubernetes/cert/ca-config.json \
      -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
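Before distributing the certificate, it is worth confirming that its subject CN really is system:kube-proxy, since RBAC matches on that name. A self-contained sketch follows; because the real kube-proxy.pem only exists on your build host, it generates a throwaway openssl certificate with the same subject fields and inspects it the same way you would inspect the real one:

```shell
#!/bin/bash
# Sketch: create a throwaway key/cert with the same subject fields as
# kube-proxy-csr.json, then print its subject (openssl assumed installed).
set -e
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "${tmpdir}/kube-proxy-key.pem" -out "${tmpdir}/kube-proxy.pem" \
  -subj "/C=CN/ST=BeiJing/L=BeiJing/O=k8s/OU=4Paradigm/CN=system:kube-proxy" \
  2>/dev/null
# On the real certificate, run: openssl x509 -noout -subject -in kube-proxy.pem
subject=$(openssl x509 -noout -subject -in "${tmpdir}/kube-proxy.pem")
echo "${subject}"
rm -r "${tmpdir}"
```

The printed subject line should contain `CN = system:kube-proxy`; if it does not, the CSR was built incorrectly and RBAC authorization will fail.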

2. Create and distribute the kubeconfig file

    source /opt/k8s/bin/environment.sh
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-credentials kube-proxy \
      --client-certificate=kube-proxy.pem \
      --client-key=kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • --embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without this flag, only the certificate file paths are written);
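For reference, the resulting kube-proxy.kubeconfig has roughly the following shape (a sketch in the standard kubeconfig v1 format; the base64 blobs are elided):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64 of ca.pem>
    server: https://<KUBE_APISERVER>
users:
- name: kube-proxy
  user:
    client-certificate-data: <base64 of kube-proxy.pem>
    client-key-data: <base64 of kube-proxy-key.pem>
contexts:
- name: default
  context:
    cluster: kubernetes
    user: kube-proxy
current-context: default
```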

Distribute the kubeconfig file:

    cat > magic.sh << "EOF"
    #!/bin/bash
    source /opt/k8s/bin/environment.sh
    for node_name in ${NODE_NAMES[@]}
    do
      echo ">>> ${node_name}"
      scp kube-proxy.kubeconfig k8s@${node_name}:/etc/kubernetes/
    done
    EOF
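Every magic.sh helper in this article sources /opt/k8s/bin/environment.sh, whose contents are not shown here. For reference, it is assumed to define at least the variables below; the node names and IPs match the ones appearing later in this article, while the KUBE_APISERVER and CLUSTER_CIDR values are hypothetical placeholders to replace with your own:

```shell
#!/bin/bash
# Assumed shape of /opt/k8s/bin/environment.sh (a sketch, not the original file).
NODE_NAMES=(kube-node1 kube-node2 kube-node3)          # worker node hostnames
NODE_IPS=(192.168.106.3 192.168.106.4 192.168.106.5)   # worker node IPs, same order
KUBE_APISERVER="https://192.168.106.3:6443"            # hypothetical apiserver address
CLUSTER_CIDR="172.30.0.0/16"                           # hypothetical pod network CIDR
echo "${#NODE_NAMES[@]} nodes: ${NODE_NAMES[*]}"
```

The two arrays must stay in the same order, because the templating script below pairs NODE_NAMES[i] with NODE_IPS[i].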

3. Create the kube-proxy configuration file

Starting with v1.10, some kube-proxy parameters can be set in a configuration file. You can generate that file with the --write-config-to option, or consult the kubeproxyconfig type definitions: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go

Create the kube-proxy config file template:

    cat > kube-proxy.config.yaml.template <<EOF
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: ##NODE_IP##
    clientConnection:
      kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
    clusterCIDR: ${CLUSTER_CIDR}
    healthzBindAddress: ##NODE_IP##:10256
    hostnameOverride: ##NODE_NAME##
    kind: KubeProxyConfiguration
    metricsBindAddress: ##NODE_IP##:10249
    mode: "ipvs"
    EOF
  • bindAddress: the listen address;
  • clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic originating inside the cluster from external traffic; kube-proxy only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is set;
  • hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules;
  • mode: use ipvs mode;
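Note that ipvs mode only works if the IPVS kernel modules can be loaded on each node. A hedged pre-flight sketch, to be run as root on every worker (the module list is the commonly required set; on kernels 4.19 and later, nf_conntrack replaces nf_conntrack_ipv4):

```shell
#!/bin/bash
# Pre-flight sketch: try to load the IPVS-related kernel modules that
# kube-proxy's ipvs mode commonly needs, then list what is loaded.
ipvs_mods="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
for mod in ${ipvs_mods}; do
  modprobe "${mod}" 2>/dev/null || echo "warning: could not load ${mod}"
done
lsmod | grep -E '^ip_vs' || true
```

If ip_vs fails to load, kube-proxy will not be able to program ipvs rules on that node.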

Create and distribute a kube-proxy configuration file for each node:

    cat > magic.sh << "EOF"
    #!/bin/bash
    source /opt/k8s/bin/environment.sh
    for (( i=0; i < 3; i++ ))
    do
      echo ">>> ${NODE_NAMES[i]}"
      sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy.config.yaml.template > kube-proxy-${NODE_NAMES[i]}.config.yaml
      scp kube-proxy-${NODE_NAMES[i]}.config.yaml root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy.config.yaml
    done
    EOF
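The sed substitution above can be exercised locally without any cluster access. A self-contained sketch for one hypothetical node, using a miniature version of the template:

```shell
#!/bin/bash
# Render a cut-down template for one node and show the substituted result.
set -e
tmpl=$(mktemp)
cat > "${tmpl}" <<'EOF'
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
EOF
rendered=$(sed -e "s/##NODE_NAME##/kube-node1/" -e "s/##NODE_IP##/192.168.106.3/" "${tmpl}")
echo "${rendered}"
rm -f "${tmpl}"
```

Every `##NODE_IP##` and `##NODE_NAME##` placeholder should be replaced in the output; any placeholder that survives rendering will make kube-proxy fail to parse the config.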

4. Create and distribute the kube-proxy systemd unit file

    source /opt/k8s/bin/environment.sh
    cat > kube-proxy.service <<EOF
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target

    [Service]
    WorkingDirectory=/var/lib/kube-proxy
    ExecStart=/opt/k8s/bin/kube-proxy \\
      --config=/etc/kubernetes/kube-proxy.config.yaml \\
      --alsologtostderr=true \\
      --logtostderr=false \\
      --log-dir=/var/log/kubernetes \\
      --v=2
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

Distribute the kube-proxy systemd unit file:

    cat > magic.sh << "EOF"
    #!/bin/bash
    source /opt/k8s/bin/environment.sh
    for node_name in ${NODE_NAMES[@]}
    do
      echo ">>> ${node_name}"
      scp kube-proxy.service root@${node_name}:/etc/systemd/system/
    done
    EOF

5. Start the kube-proxy service

    cat > magic.sh << "EOF"
    #!/bin/bash
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh root@${node_ip} "mkdir -p /var/lib/kube-proxy"
      ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
      ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy"
    done
    EOF

6. Check the startup result

    cat > magic.sh << "EOF"
    #!/bin/bash
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh k8s@${node_ip} "systemctl status kube-proxy|grep Active"
    done
    EOF

If the output looks like the following:

    $ bash magic.sh
    >>> 192.168.106.3
    Active: active (running) since Fri 2018-11-23 19:39:28 CST; 6h ago
    >>> 192.168.106.4
    Active: active (running) since Fri 2018-11-23 19:39:28 CST; 6h ago
    >>> 192.168.106.5
    Active: active (running) since Fri 2018-11-23 19:39:29 CST; 6h ago

then the service started normally. If it failed to start, check the logs:

    journalctl -xu kube-proxy

7. Check the listening ports and metrics

    [k8s@kube-node1 abc]$ sudo netstat -lnpt|grep kube-prox
    tcp 0 0 192.168.106.3:10256 0.0.0.0:* LISTEN 19061/kube-proxy
    tcp 0 0 192.168.106.3:10249 0.0.0.0:* LISTEN 19061/kube-proxy
  • 10249: HTTP Prometheus metrics port;
  • 10256: HTTP healthz port;

8. Check the ipvs routing rules

    cat > magic.sh << "EOF"
    #!/bin/bash
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
    done
    EOF

    $ bash magic.sh
    >>> 192.168.106.3
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr persistent 10800
      -> 192.168.106.3:6443           Masq    1      0          0
      -> 192.168.106.4:6443           Masq    1      0          0
      -> 192.168.106.5:6443           Masq    1      0          0
    >>> 192.168.106.4
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr persistent 10800
      -> 192.168.106.3:6443           Masq    1      0          0
      -> 192.168.106.4:6443           Masq    1      0          0
      -> 192.168.106.5:6443           Masq    1      0          0
    >>> 192.168.106.5
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr persistent 10800
      -> 192.168.106.3:6443           Masq    1      0          0
      -> 192.168.106.4:6443           Masq    1      0          0
      -> 192.168.106.5:6443           Masq    1      0          0

As the output shows, all requests to port 443 of the kubernetes cluster IP (10.254.0.1) are forwarded to port 6443 of the kube-apiserver instances, scheduled round-robin (rr) with 10800-second session persistence.

