
Single-Node etcd

source link: https://zhangrr.github.io/posts/20211021-etcd_docker/

Since we were already using Alibaba's managed Kubernetes service (ACK), I wanted to get something extra for free: use the etcd running on the managed master nodes to store my own data.

The result: no luck!! It simply cannot be used.

Alibaba does offer a separate configuration management service, but that over-complicates things and I didn't want it.

So the solution is: run an etcd pod with a single replica, and persist its data to an OSS bucket (Alibaba's S3-style object storage).

1. Running a single-node etcd in Docker

First, we just want to run a single-node etcd in a test environment, without Kubernetes involved yet. Here's how:

```shell
#!/bin/bash

NODE1=172.18.31.33
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
#REGISTRY=gcr.io/etcd-development/etcd

docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=/data/etcd:/etcd-data \
  --name etcd ${REGISTRY}:latest \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name node1 \
  --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://0.0.0.0:2380 \
  --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://0.0.0.0:2379 \
  --initial-cluster node1=http://${NODE1}:2380
```

That's all it takes. Once the container is up, exec into it and run a quick test:

```shell
docker exec -it 425f26903466 /bin/sh

etcdctl -C http://127.0.0.1:2379 member list
c3511611548b7c7c: name=node1 peerURLs=http://172.18.31.33:2380 clientURLs=http://172.18.31.33:2379 isLeader=true

etcdctl ls --recursive /
```
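
Note that `-C` and `ls` are etcd v2 etcdctl syntax. If the `latest` tag resolves to etcd v3.4 or newer, those subcommands are gone (etcdctl defaults to API v3 there); a rough v3 equivalent would be:

```shell
# v3 etcdctl equivalents of the v2 commands above
etcdctl --endpoints=http://127.0.0.1:2379 member list
etcdctl --endpoints=http://127.0.0.1:2379 get "" --prefix --keys-only
```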

With that, a single-node etcd is up and running, exposing ports 2379 (client) and 2380 (peer).
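
You can also smoke-test it from the host over plain HTTP. A minimal sketch, assuming the v2 keys API is enabled in whatever etcd version the image pulled (the key name `hello` is just an example):

```shell
curl http://172.18.31.33:2379/version
curl http://172.18.31.33:2379/v2/keys/hello -XPUT -d value="world"
curl http://172.18.31.33:2379/v2/keys/hello
```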

2. Running a single-node etcd on Kubernetes

First, write a Deployment manifest, etcd-deploy.yaml:

Download: etcd-deploy.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd-deploy
  labels:
    app: etcd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
      - name: etcd
        image: quay.io/coreos/etcd:latest
        ports:
        - containerPort: 2379
          name: client
          protocol: TCP
        - containerPort: 2380
          name: server
          protocol: TCP
        command:
        - /usr/local/bin/etcd
        - --name
        - etcd
        - --initial-advertise-peer-urls
        - http://etcd:2380
        - --listen-peer-urls
        - http://0.0.0.0:2380
        - --listen-client-urls
        - http://0.0.0.0:2379
        - --advertise-client-urls
        - http://etcd:2379
        - --initial-cluster
        - etcd=http://etcd:2380
        - --data-dir
        - /etcd-data
        volumeMounts:
        - mountPath: /etcd-data
          name: etcd-data
        lifecycle:
          postStart:
            exec:
              command:
              - "sh"
              - "-c"
              - >
                echo "127.0.0.1 etcd" >> /etc/hosts;
      volumes:
      - name: etcd-data
        persistentVolumeClaim:
          claimName: k8s-etcd-20g
      restartPolicy: Always
```

Note the PVC k8s-etcd-20g above: it is mounted at /etcd-data and backed by OSS, so the data is persisted and doesn't vanish whenever the etcd pod restarts. Also note the postStart hook, which appends `127.0.0.1 etcd` to /etc/hosts so the container can resolve the hostname `etcd` used in its own advertise URLs.
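
The PVC manifest itself isn't shown here; a minimal sketch of what k8s-etcd-20g might look like (the storageClassName is an assumption, not from the original post; on ACK it would reference a StorageClass backed by the OSS CSI driver, or bind to a statically provisioned OSS PV):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8s-etcd-20g
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  # Illustrative name only: on ACK, point this at a StorageClass
  # provided by the OSS CSI driver.
  storageClassName: alicloud-oss
```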

Next, we need to expose this Deployment inside the cluster as a Service, so write etcd-svc.yaml:

Download: etcd-svc.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd-svc
spec:
  ports:
  - port: 2379
    name: tcp2379
    protocol: TCP
    targetPort: 2379
  - port: 2380
    name: tcp2380
    protocol: TCP
    targetPort: 2380
  selector:
    app: etcd
  type: ClusterIP
```

Deploy both with kubectl apply and you're done.
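
Spelled out, using the filenames from above:

```shell
kubectl apply -f etcd-deploy.yaml
kubectl apply -f etcd-svc.yaml
kubectl get pods -l app=etcd    # the etcd pod should reach Running
kubectl get svc etcd-svc        # ClusterIP exposing 2379/2380
```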

To test inside Kubernetes, spin up a throwaway busybox pod and poke at the service:

```shell
kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm

curl http://etcd-svc:2379/version

curl http://etcd-svc.default:2379/version

curl http://etcd-svc.default:2379/v2/keys

curl http://etcd-svc.default:2379/v2/keys/?recursive=true

curl http://etcd-svc.default:2379/v2/keys/service/nginx

curl http://etcd-svc.default:2379/v2/keys/service/nginx/127.0.0.1

curl --location --request PUT 'http://etcd-svc:2379/v2/keys/service/nginx/10.240.0.41' --header 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'value=10.240.0.41:9000'

curl http://etcd-svc.default:2379/v2/keys/service/nginx/
```
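
One caveat: the /v2/keys endpoints above rely on etcd's v2 API, which is disabled by default from etcd v3.4 onward. If the latest image pulls a newer etcd, the v3 gRPC gateway is the HTTP alternative; it takes base64-encoded keys and values (Zm9v/YmFy below are "foo"/"bar"):

```shell
# v3 gRPC gateway (etcd >= 3.4); keys and values are base64-encoded
curl -s http://etcd-svc.default:2379/v3/kv/put -X POST \
  -d '{"key": "Zm9v", "value": "YmFy"}'
curl -s http://etcd-svc.default:2379/v3/kv/range -X POST \
  -d '{"key": "Zm9v"}'
```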
