Deploying Apache Zookeeper on Kubernetes

Source: https://fredal.xin/deploy-zk-with-k8s
Updated 2021-01-08

With the industry trend toward cloud native, our infrastructure components are gradually moving onto Kubernetes as well. Apache Zookeeper, currently the most popular distributed coordination component, plays the role of service registry in our microservice architecture. Running a Zookeeper cluster on Kubernetes is well worth doing: it lets us take advantage of Kubernetes-native elastic scaling and high availability.

Deploying Zookeeper with a StatefulSet

The official Kubernetes tutorial provides a StatefulSet-based way to deploy and run Zookeeper. It creates a headless Service, a client-facing cluster Service, a PodDisruptionBudget, and the StatefulSet itself:

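# Headless Service (clusterIP: None) for the quorum (2888) and leader-election (3888) ports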
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
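# Client-facing Service exposing the Zookeeper client port (2181)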
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
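# PodDisruptionBudget: allow at most one Pod to be voluntarily evicted at a time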
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
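# The StatefulSet: 3 replicas, with anti-affinity so no two servers share a node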
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Apply this manifest with kubectl apply. After a short wait, all of the Pods and Services have been created successfully.
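
For example (assuming the manifest above is saved as zk-statefulset.yaml, a hypothetical filename):

kubectl apply -f zk-statefulset.yaml

# Pods and Services all carry the app=zk label, so we can list them together:
kubectl get pods,svc -l app=zk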

Next, let's check the status of the Zookeeper nodes:
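
One way is to run zkServer.sh in every Pod (a minimal sketch, assuming the image puts zkServer.sh on the PATH with its default config pointing at the generated zoo.cfg):

for i in 0 1 2; do
  kubectl exec "zk-$i" -- zkServer.sh status
done

One node should report Mode: leader and the other two Mode: follower.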

A major advantage of running Zookeeper on Kubernetes is easy scaling. Taking a scale-out to 4 nodes as an example: run kubectl edit sts zk, change replicas: 3 to replicas: 4, and change --servers=3 to --servers=4 in the container command. After the rolling update completes, the ensemble has grown to 4 nodes.
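
A non-interactive alternative is to patch the replica count directly (a sketch; the --servers flag in the container command still has to be edited to match, e.g. via kubectl edit):

kubectl patch sts zk --type='json' \
  -p='[{"op": "replace", "path": "/spec/replicas", "value": 4}]'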

Deploying Zookeeper with a Kubernetes Operator

Besides the StatefulSet approach, we can also deploy with a Kubernetes Operator. At the moment, the operator provided by pravega (https://github.com/pravega/zookeeper-operator) is a good one to follow.

First, create the ZookeeperCluster custom resource definition (CRD):

kubectl create -f deploy/crds

Next, create the permission-related resources: a ServiceAccount, a Role, and a RoleBinding. (Note that rbac.yaml needs a tweak: if your current namespace is not default, remove the namespace: default lines, otherwise permission checks will fail.)
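
One quick way to strip those lines is a one-liner like the following (a sketch using GNU sed; on macOS, use sed -i '' instead):

sed -i '/namespace: default/d' deploy/default_ns/rbac.yaml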

kubectl create -f deploy/default_ns/rbac.yaml

Then create a Deployment for the operator:

kubectl create -f deploy/default_ns/operator.yaml

We can see that the operator has been created and is running:
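
For example (the Deployment name and label below are assumptions; check deploy/default_ns/operator.yaml for the actual values):

kubectl get deployment zookeeper-operator
kubectl get pods -l name=zookeeper-operator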

Next, all we need is to write a custom resource (CR) of our own:

apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
spec:
  replicas: 3
  image:
    repository: pravega/zookeeper
    tag: 0.2.9
  storageType: persistence
  persistence:
    reclaimPolicy: Delete
    spec:
      storageClassName: "rbd"
      resources:
        requests:
          storage: 8Gi

Here storageClassName is set to rbd, matching the storage available in our self-hosted cluster. Apply it, wait a moment, and the Zookeeper cluster is fully created.
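
For example (zk-cluster.yaml is a hypothetical filename for the CR above; zk is the short name the CRD registers for ZookeeperCluster, which the patch command below relies on as well):

kubectl apply -f zk-cluster.yaml
kubectl get zk zookeeper
kubectl get pods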

Scaling is also very convenient. Again taking a scale-out to 4 nodes as an example, we just patch the CR we created:

kubectl patch zk zookeeper --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value":4}]'

Deploying Zookeeper with KUDO

KUDO (the Kubernetes Universal Declarative Operator) is a toolkit for assembling Kubernetes operators, and it is also officially recommended.

First, install KUDO. On a Mac:

brew install kudo

After installation, initialize it:

kubectl kudo init

At this point we will find that the KUDO manager has been installed:
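
For example, kubectl kudo init installs the KUDO controller manager into the kudo-system namespace by default, so we can verify with:

kubectl get pods -n kudo-system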

Then just install Zookeeper directly (KUDO has a built-in Zookeeper operator). Note that here we likewise declare the storage class as rbd:

kubectl kudo install zookeeper --instance=zookeeper-instance -p STORAGE_CLASS=rbd

Scaling is just as convenient:

kubectl kudo update --instance=zookeeper-instance -p NODE_COUNT=4
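
To watch the update roll through, the KUDO CLI can report the state of the instance's deploy plan (assuming a recent KUDO release):

kubectl kudo plan status --instance=zookeeper-instance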

