04. Kubernetes Notes: Pod Controllers (Part 3): DaemonSet, Job, CronJob, StatefulSet

source link: https://segmentfault.com/a/1190000040655156

DaemonSet Overview

A DaemonSet ensures that all (or some) Nodes run a copy of a given Pod. When a Node joins the cluster, a Pod is added on it; when a Node is removed from the cluster, that Pod is garbage collected. Deleting a DaemonSet deletes all of the Pods it created.

Some typical uses of a DaemonSet:
  • running a cluster storage daemon on every Node, such as glusterd or ceph
  • running a log collection daemon on every Node, such as fluentd or logstash
  • running a monitoring daemon on every Node, such as Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond

By default one Pod runs on every node (master nodes are excluded by their taint), or only on the nodes selected by label matching, as sketched below.
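A minimal sketch of both knobs, assuming a hypothetical disktype=ssd node label; the tolerations entry additionally lets the Pods land on master nodes:

spec:
  template:
    spec:
      nodeSelector:          # only run on nodes carrying this (hypothetical) label
        disktype: ssd
      tolerations:           # also tolerate the master taint, so masters run a copy too
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule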

DaemonSet configuration reference

apiVersion: apps/v1 # API group and version
kind: DaemonSet # resource type identifier
metadata:
  name <string>  # resource name; must be unique within its scope
  namespace <string> # namespace; DaemonSet is a namespace-level resource
spec:
  minReadySeconds <integer> # seconds a newly ready Pod must run without any container crashing before it counts as available
  selector <object> # label selector; must match the labels of the Pod template in the template field
  template <object> # Pod template object
  revisionHistoryLimit <integer> # number of rolling update history records to keep, defaults to 10
  updateStrategy <Object> # rolling update strategy
    type <string>  # update type; valid values are OnDelete and RollingUpdate
    rollingUpdate <Object>  # rolling update parameters, used only with the RollingUpdate type
      maxUnavailable <string>  # number or percentage of Pods that may fall short of the desired count during an update

Example 1: Create a DaemonSet controller to deploy node-exporter

[root@k8s-master PodControl]# cat daemonset-demo.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-demo
  namespace: default
  labels:
    app: prometheus
    component: node-exporter
spec:
  selector:
    matchLabels:
      app: prometheus
      component: node-exporter
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        app: prometheus
        component: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:v0.18.0
        name: prometheus-node-exporter
        ports:
        - name: prom-node-exp
          containerPort: 9100
          hostPort: 9100
        livenessProbe:
          tcpSocket:
            port: prom-node-exp
          initialDelaySeconds: 3
        readinessProbe:
          httpGet:
            path: '/metrics'
            port: prom-node-exp
            scheme: HTTP
          initialDelaySeconds: 5
      hostNetwork: true  # use the node's network namespace so port 9100 is reachable on the node itself
      hostPID: true      # share the host PID namespace so node-exporter can read host process metrics

[root@k8s-master PodControl]# kubectl apply -f daemonset-demo.yaml 
daemonset.apps/daemonset-demo created

[root@k8s-master PodControl]# kubectl get pod
NAME                                             READY   STATUS      RESTARTS   AGE
deployment-demo-77d46c4794-fhz4l                 1/1     Running     0          16h
deployment-demo-77d46c4794-kmrhn                 1/1     Running     0          16h
deployment-demo-fb544c5d8-5f9lp                  1/1     Running     0          17h


[root@k8s-master PodControl]# cat daemonset-demo.yaml 
...
    spec:
      containers:
      - image: prom/node-exporter:latest  # update to the latest version
...

# The default update strategy is RollingUpdate

[root@k8s-master PodControl]# kubectl apply -f daemonset-demo.yaml && kubectl rollout status daemonset/daemonset-demo
daemonset.apps/daemonset-demo configured
Waiting for daemon set "daemonset-demo" rollout to finish: 0 of 3 updated pods are available...
Waiting for daemon set "daemonset-demo" rollout to finish: 1 of 3 updated pods are available...
Waiting for daemon set "daemonset-demo" rollout to finish: 2 of 3 updated pods are available...
daemon set "daemonset-demo" successfully rolled out
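If you want per-node control over when each Pod is replaced, the update type can be switched to OnDelete, under which the controller only recreates a Pod after you delete it yourself. A minimal sketch of the relevant fields, plus the rollout commands for inspecting and reverting revisions:

spec:
  updateStrategy:
    type: OnDelete   # Pods are replaced only when deleted manually

kubectl rollout history daemonset/daemonset-demo   # list recorded revisions
kubectl rollout undo daemonset/daemonset-demo      # roll back to the previous revision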

Job Overview and Configuration Reference

A Job manages batch tasks, i.e. tasks that are executed once; it guarantees that one or more Pods of the batch task terminate successfully.

Job configuration reference

apiVersion: batch/v1 # API group and version
kind: Job  # resource type identifier
metadata:
  name <string> # resource name; must be unique within its scope
  namespace <string> # namespace; Job is a namespace-level resource
spec:
  selector <object> # label selector; must match the labels of the Pod template in the template field
  template <object> # Pod template object
  completions <integer>  # desired number of successful completions, i.e. Pods that run to successful termination
  ttlSecondsAfterFinished <integer> # lifetime of the Job once it reaches a terminal state; it is deleted when this expires
  parallelism <integer> # maximum parallelism of the Job, defaults to 1
  backoffLimit <integer> # number of retries before the Job is marked Failed, defaults to 6
  activeDeadlineSeconds <integer> # how long the Job may remain active after starting
  restartPolicy <string> # restart policy; a Job only accepts OnFailure or Never
    #1. Always: kubelet restarts the container automatically whenever it fails (not valid for a Job)
    #2. OnFailure: restart only when the container terminates with a non-zero exit code
    #3. Never: never restart the container, regardless of its state

Example 2: Create a Job controller that completes its task twice

[root@k8s-master PodControl]# cat job-demo.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
spec:
  template:
    spec:
      containers:
      - name: myjob
        image: alpine:3.11
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh" , "-c", "sleep 60"]
      restartPolicy: Never
  completions: 2   #完成2次 没有配置并行 所以是单队列 完成1次后在启动一次
  ttlSecondsAfterFinished: 3600 #保存1个小时
  backoffLimit: 3  #重试次数 默认为6改为3
  activeDeadlineSeconds: 300 #启动后的存活时长

[root@k8s-master PodControl]# kubectl apply -f job-demo.yaml 

[root@k8s-master PodControl]# kubectl get pod
NAME                                             READY   STATUS              RESTARTS   AGE
daemonset-demo-4zfwp                             1/1     Running             0          20m
daemonset-demo-j7m7k                             1/1     Running             0          20m
daemonset-demo-xj6wc                             1/1     Running             0          20m
job-demo-w4nkh                                   0/1     ContainerCreating   0          2s

[root@k8s-master PodControl]# kubectl get pod
NAME                                             READY   STATUS      RESTARTS   AGE
daemonset-demo-4zfwp                             1/1     Running     0          22m
daemonset-demo-j7m7k                             1/1     Running     0          22m
daemonset-demo-xj6wc                             1/1     Running     0          22m
job-demo-vfh9r                                   1/1     Running     0          49s  # running serially
job-demo-w4nkh                                   0/1     Completed   0          2m5s # completed

[root@k8s-master PodControl]# cat job-para-demo.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: job-para-demo
spec:
  template:
    spec:
      containers:
      - name: myjob
        image: alpine:3.11
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh" , "-c", "sleep 60"]
      restartPolicy: Never
  completions: 12   #完成12次
  parallelism: 2    #同时运行2个  个2个Pod同时运行
  ttlSecondsAfterFinished: 3600
  backoffLimit: 3
  activeDeadlineSeconds: 1200


[root@k8s-master PodControl]# kubectl apply  -f  job-para-demo.yaml 
job.batch/job-para-demo created

[root@k8s-master PodControl]# kubectl get job
NAME            COMPLETIONS   DURATION   AGE
job-demo        2/2           2m25s      11m
job-para-demo   10/12         6m37s      6m37s

[root@k8s-master PodControl]# kubectl get pod
NAME                                             READY   STATUS              RESTARTS   AGE
daemonset-demo-4zfwp                             1/1     Running             0          25m
daemonset-demo-j7m7k                             1/1     Running             0          25m
daemonset-demo-xj6wc                             1/1     Running             0          25m
deployment-demo-fb544c5d8-lj5gt                  0/1     Terminating         0          17h
deployment-demo-with-strategy-59468cb976-vkxdg   0/1     Terminating         0          16h
job-demo-vfh9r                                   0/1     Completed           0          3m41s
job-demo-w4nkh                                   0/1     Completed           0          4m57s
job-para-demo-9jtnv                              0/1     ContainerCreating   0          7s  # 2 Pods run in parallel at any time, 6 rounds in total
job-para-demo-q2h6g                              0/1     ContainerCreating   0          7s


[root@k8s-master PodControl]# kubectl get pod
NAME                   READY   STATUS      RESTARTS   AGE
daemonset-demo-4zfwp   1/1     Running     0          30m
daemonset-demo-j7m7k   1/1     Running     0          30m
daemonset-demo-xj6wc   1/1     Running     0          30m
job-demo-vfh9r         0/1     Completed   0          9m38s
job-demo-w4nkh         0/1     Completed   0          10m
job-para-demo-8fz78    0/1     Completed   0          3m48s
job-para-demo-9jtnv    0/1     Completed   0          6m4s
job-para-demo-bnw47    0/1     Completed   0          2m42s
job-para-demo-dsmbm    0/1     Completed   0          96s
job-para-demo-j4zw5    1/1     Running     0          30s
job-para-demo-jkbw4    0/1     Completed   0          4m55s
job-para-demo-l9pwc    0/1     Completed   0          96s
job-para-demo-lxxrv    1/1     Running     0          30s
job-para-demo-nljhg    0/1     Completed   0          4m55s
job-para-demo-q2h6g    0/1     Completed   0          6m4s
job-para-demo-rc9qt    0/1     Completed   0          3m48s
job-para-demo-xnzsq    0/1     Completed   0          2m42s
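To block until every completion has finished, for instance in a CI script, kubectl wait can watch the Job's Complete condition; a small sketch:

kubectl wait --for=condition=complete --timeout=15m job/job-para-demo
kubectl delete job job-para-demo   # optional; ttlSecondsAfterFinished cleans it up anyway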

CronJob Overview and Field Format

A CronJob manages time-based Jobs, namely:

  1. running once at a given point in time
  2. running periodically at given points in time

**Prerequisite: the Kubernetes cluster in use must be version >= 1.8 (for CronJob). On clusters older than 1.8, the batch/v2alpha1 API can be enabled by passing --runtime-config=batch/v2alpha1=true when starting the API Server.**
Typical uses look like this:
1. schedule a Job to run at a given point in time
2. create periodically running Jobs, e.g. database backups or sending email

A CronJob completes its work through Jobs; their relationship is similar to that between a Deployment and its ReplicaSets, as the commands below show.
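You can observe this relationship on a live cluster: each scheduled run produces a Job object whose ownerReference points back at the CronJob (the Job name below matches the run shown in Example 3):

kubectl get jobs -o wide    # one Job per scheduled run, e.g. cronjob-demo-1629169560
kubectl get job cronjob-demo-1629169560 \
  -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'   # prints: CronJob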

CronJob configuration reference

apiVersion: batch/v1beta1 # API group and version
kind: CronJob # resource type identifier
metadata:
  name <string> # resource name; must be unique within its scope
  namespace <string> # namespace; CronJob is a namespace-level resource
spec:
  jobTemplate <object> # Job template, required field
    metadata <object> # template metadata
    spec <object> # desired state of the Job
  schedule <string> # schedule specification, required field
  concurrencyPolicy <string> # concurrency policy; valid values are Allow, Forbid and Replace. It decides whether two runs may overlap when the next run comes due while the previous Job is still running
  failedJobsHistoryLimit <integer> # number of failed Jobs to keep in history, defaults to 1
  successfulJobsHistoryLimit <integer> # number of successful Jobs to keep in history, defaults to 3
  startingDeadlineSeconds <integer> # grace period for starting a Job that missed its scheduled time
  suspend <boolean> # whether to suspend subsequent runs; does not affect a run already in progress, defaults to false

Example 3: Create a CronJob that runs a task every 2 minutes

[root@k8s-master PodControl]# cat cronjob-demo.yaml 
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-demo
  namespace: default
spec:
  schedule: "*/2 * * * *"  #第2分种执行一次 和Linux定时任务一样分别是 分 时 日 月 周
  jobTemplate:
    metadata:
      labels:
        controller: cronjob-demo
    spec:  # the Job definition
      parallelism: 1  # parallelism of 1
      completions: 1  # run once
      ttlSecondsAfterFinished: 600  # delete the Job 600 seconds after it finishes
      backoffLimit: 3 # retry at most 3 times
      activeDeadlineSeconds: 60
      template:
        spec:
          containers:
          - name: myjob
            image: alpine
            command:
            - /bin/sh
            - -c
            - date; echo Hello from cronJob, sleep a while...; sleep  10 
          restartPolicy: OnFailure  # restart the container when it exits with a non-zero code
  startingDeadlineSeconds: 300


[root@k8s-master PodControl]# kubectl get cronjob
NAME           SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob-demo   */2 * * * *   False     0        <none>          58s

[root@k8s-master PodControl]# kubectl get pod
NAME                            READY   STATUS              RESTARTS   AGE
cronjob-demo-1629169560-vn8hk   0/1     ContainerCreating   0          12s
daemonset-demo-4zfwp            1/1     Running             0          51m
daemonset-demo-j7m7k            1/1     Running             0          51m
daemonset-demo-xj6wc            1/1     Running             0          51m
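Two handy day-two operations on an existing CronJob: triggering a one-off run from its jobTemplate and pausing future runs. A sketch (the Job name manual-run-1 is arbitrary):

kubectl create job manual-run-1 --from=cronjob/cronjob-demo       # run once, right now
kubectl patch cronjob cronjob-demo -p '{"spec":{"suspend":true}}' # pause scheduling; set back to false to resume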

StatefulSet: the Stateful Application Controller

As a controller, a StatefulSet provides each Pod with a unique identity and guarantees the order of deployment and scaling.
StatefulSet is designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include:

  1. stable persistent storage: a Pod can still reach the same persisted data after being rescheduled, implemented with PVCs
  2. stable network identity: a Pod keeps its PodName and HostName after being rescheduled, implemented with a Headless Service (a Service without a Cluster IP)
  3. ordered deployment and ordered scale-up: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; every preceding Pod must be Running and Ready before the next Pod starts), implemented with init containers
  4. ordered scale-down and ordered deletion (from N-1 down to 0)
  • StatefulSet: a general-purpose controller for stateful applications.
    Each Pod has its own unique identity; on failure it can only be replaced by a new instance carrying the same identity.
    Pod names follow ${STATEFULSET_NAME}-${ORDINAL}: web-0, web-1, web-2. Updates and deletions run in reverse order, starting from the last Pod. A StatefulSet depends strongly on a Headless Service to create the Pods' unique identities, so that each DNS name resolves to exactly one Pod. If necessary, each Pod can be given dedicated storage volumes, which must be PVCs.
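Concretely, the headless Service gives every Pod a stable DNS name of the form $(pod-name).$(service-name).$(namespace).svc.cluster.local; for a StatefulSet named web governed by a Service named nginx (hypothetical names), the per-Pod names look like:

web-0.nginx.default.svc.cluster.local
web-1.nginx.default.svc.cluster.local
web-2.nginx.default.svc.cluster.local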

StatefulSet configuration reference:

apiVersion: apps/v1 # API group and version
kind: StatefulSet # resource type identifier
metadata:
  name <string> # resource name; must be unique within its scope
  namespace <string> # namespace; StatefulSet is a namespace-level resource
spec:
  replicas <integer> # desired number of Pod replicas, defaults to 1
  selector <object> # label selector; must match the labels of the Pod template, required field
  template <object> # Pod template object, required field
  revisionHistoryLimit <integer> # number of rolling update history records to keep, defaults to 10
  updateStrategy <object> # rolling update strategy
    type <string> # update type; valid values are OnDelete and RollingUpdate
    rollingUpdate <object> # rolling update parameters, used only with the RollingUpdate type
      partition <integer> # partition ordinal, defaults to 0; a value of 2 means only Pods with an ordinal of 2 or higher are updated
  serviceName <string> # name of the governing Headless Service, required field
  volumeClaimTemplates <[]Object> # storage volume claim templates
    apiVersion <string> # API group and version of the PVC resource, may be omitted
    kind <string> # PVC resource type identifier, may be omitted
    metadata <object> # claim template metadata
    spec <object> # desired state; same fields as a PVC
  podManagementPolicy <string> # Pod management policy; the default "OrderedReady" creates Pods
                               # in order and deletes them in reverse order; the other value "Parallel" acts in parallel
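The partition field enables a canary-style rollout; a sketch against the sts-demo StatefulSet from Example 4 below (2 replicas):

# Only Pods with an ordinal >= 1 receive the new template; ordinal 0 stays
# on the old revision until the canary has been validated.
kubectl patch sts sts-demo -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
# Resume the full rollout once the canary looks healthy:
kubectl patch sts sts-demo -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'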

Example 4: Create a StatefulSet controller

[root@k8s-master PodControl]# cat statefulset-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: demoapp-sts
  namespace: default
spec:
  clusterIP: None  # makes this a headless Service
  ports:
  - port: 80
    name: http
  selector:
    app: demoapp
    controller: sts-demo
---  
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-demo
spec:
  serviceName: demoapp-sts # bind to the headless Service
  replicas: 2
  selector:
    matchLabels:
      app: demoapp
      controller: sts-demo
  template:  # the Pod template follows
    metadata:
      labels:
        app: demoapp
        controller: sts-demo
    spec:
      containers:
      - name: demoapp
      - image: ikubernetes/demoapp:v1.0  # demoapp is not a stateful service; it merely demonstrates the StatefulSet creation flow
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: appdata
          mountPath: /app/data
  volumeClaimTemplates:
  - metadata:
      name: appdata
    spec:
      accessModes: ["ReadWriteOnce"]    # single-node read-write
      storageClassName: "longhorn"   # use the StorageClass created earlier
      resources:
        requests:
          storage: 2Gi

[root@k8s-master PodControl]# kubectl apply -f statefulset-demo.yaml 
service/demoapp-sts created
statefulset.apps/sts-demo created

# Pods are created in order: the second Pod is created only after the first is fully up
[root@k8s-master PodControl]# kubectl get pod 
NAME         READY   STATUS              RESTARTS   AGE
sts-demo-0   0/1     ContainerCreating   0          12s

[root@k8s-master PodControl]# kubectl get pod 
NAME         READY   STATUS              RESTARTS   AGE
sts-demo-0   1/1     Running             0          29s
sts-demo-1   0/1     ContainerCreating   0          9s

Example 5: StatefulSet scale-up and scale-down

[root@k8s-master PodControl]# cat demodb.yaml 
# demodb, an educational Kubernetes-native NoSQL data store. It is a distributed
# key-value store, supporting persistent read and write operations.
# Environment variables: DEMODB_DATADIR, DEMODB_HOST, DEMODB_PORT
# Default ports: 9907/tcp for clients, 9999/tcp for members.
# Maintainer: MageEdu <mage@magedu.com>
---
apiVersion: v1
kind: Service
metadata:
  name: demodb
  namespace: default
  labels:
    app: demodb
spec:
  clusterIP: None
  ports:
  - port: 9907
  selector:
    app: demodb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demodb
  namespace: default
spec:
  selector:
    matchLabels:
      app: demodb
  serviceName: "demodb"
  replicas: 3
  template:
    metadata:
      labels:
        app: demodb
    spec:
      nodeName: k8s-node3  # pin to a specific node; in this test environment node3 has ample spare resources
      containers:
      - name: demodb-shard
        image: ikubernetes/demodb:v0.1
        ports:
        - containerPort: 9907
          name: db
        env:
        - name: DEMODB_DATADIR
          value: "/demodb/data"
        volumeMounts:
        - name: data
          mountPath: /demodb/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "longhorn"
      resources:
        requests:
          storage: 1Gi


[root@k8s-master PodControl]# kubectl apply -f demodb.yaml

[root@k8s-master PodControl]# kubectl get sts
NAME     READY   AGE
demodb   1/3     31s
[root@k8s-master PodControl]# kubectl get pod   # created in order
NAME                                 READY   STATUS              RESTARTS   AGE
centos-deployment-66d8cd5f8b-9x47c   1/1     Running             1          22h
demodb-0                             1/1     Running             0          33s
demodb-1                             0/1     ContainerCreating   0          5s

[root@k8s-master PodControl]# kubectl get pod -o wide 
NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
demodb-0                             1/1     Running   0          5m22s   10.244.3.69    k8s-node3   <none>           <none>
demodb-1                             1/1     Running   0          5m1s    10.244.3.70    k8s-node3   <none>           <none>
demodb-2                             1/1     Running   0          4m37s   10.244.3.71    k8s-node3   <none>           <none>




[root@k8s-master PodControl]# kubectl get pvc  # PVCs bound successfully
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-demodb-0   Bound    pvc-bef2c61d-4ab4-4652-9902-f4bc2a918beb   1Gi        RWO            longhorn       14m
data-demodb-1   Bound    pvc-2ebb5510-b9c1-4b7e-81d0-f540a1451eb3   1Gi        RWO            longhorn       9m8s
data-demodb-2   Bound    pvc-74ee431f-794c-4b3d-837a-9337a6767f49   1Gi        RWO            longhorn       8m
  • Open another terminal for testing:
[root@k8s-master PodControl]# kubectl run pod-$RANDOM --image=ikubernetes/admin-box:latest -it --rm --command -- /bin/sh

root@pod-31692 # nslookup -query=A demodb   # forward-resolution test
Server:        10.96.0.10
Address:    10.96.0.10#53

Name:    demodb.default.svc.cluster.local
Address: 10.244.3.69
Name:    demodb.default.svc.cluster.local
Address: 10.244.3.71
Name:    demodb.default.svc.cluster.local
Address: 10.244.3.70
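Besides these round-robin A records on the Service name, each Pod also gets its own stable record; a sketch (demodb-0 resolved to 10.244.3.69 above):

root@pod-31692 # nslookup -query=A demodb-0.demodb.default.svc.cluster.local
# resolves to exactly one Pod IP, and the name survives Pod restarts because
# it is tied to the ordinal, not to the IP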


root@pod-31692 # nslookup -query=PTR  10.244.3.70  # reverse-resolve the IP
Server:        10.96.0.10
Address:    10.96.0.10#53

70.3.244.10.in-addr.arpa    name = demodb-1.demodb.default.svc.cluster.local. # resolves to a unique Pod

root@pod-31692 # nslookup -query=PTR  10.244.3.71 
Server:        10.96.0.10
Address:    10.96.0.10#53

71.3.244.10.in-addr.arpa    name = demodb-2.demodb.default.svc.cluster.local.  # resolves to a unique Pod



root@pod-31692 # echo "www.google.com" >/tmp/data
root@pod-31692 # curl -L -XPUT -T /tmp/data http://demodb:9907/set/data   # write data through the Service: set uploads, get downloads
WRITE completed
root@pod-31692 # curl http://demodb:9907/get/data  # although backed by different PVs, the Pods replicate data among themselves to keep it consistent
www.google.com
  • Update: to change the image you can use kubectl set image, edit the YAML file, or apply a patch with kubectl patch.
# edit the YAML file, setting the image version to v0.2

[root@k8s-master PodControl]# kubectl apply -f demodb.yaml
service/demodb unchanged
statefulset.apps/demodb configured


[root@k8s-master PodControl]# kubectl get sts -o wide
NAME     READY   AGE   CONTAINERS     IMAGES
demodb   2/3     18m   demodb-shard   ikubernetes/demodb:v0.2     # as noted earlier, the update runs in reverse order, so demodb-2 updates first
[root@k8s-master PodControl]# kubectl get pod -o wide
NAME                                 READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
centos-deployment-66d8cd5f8b-9x47c   1/1     Running             1          22h   10.244.1.160   k8s-node1   <none>           <none>
demodb-0                             1/1     Running             0          18m   10.244.3.69    k8s-node3   <none>           <none>
demodb-1                             1/1     Running             0          18m   10.244.3.70    k8s-node3   <none>           <none>
demodb-2                             0/1     ContainerCreating   0          16s   <none>         k8s-node3   <none>           <none>
  • Scale-up creates Pods in order; scale-down deletes them in reverse order.
# use kubectl patch to change the replica count
[root@k8s-master PodControl]# kubectl patch sts/demodb -p '{"spec":{"replicas":5}}' # scale up to 5 replicas
statefulset.apps/demodb patched

[root@k8s-master PodControl]# kubectl get pod
NAME                                 READY   STATUS              RESTARTS   AGE
demodb-0                             1/1     Running             0          6m43s
demodb-1                             1/1     Running             0          7m3s
demodb-2                             1/1     Running             0          7m53s
demodb-3                             0/1     ContainerCreating   0          10s


[root@k8s-master PodControl]# kubectl get pod
NAME                                 READY   STATUS              RESTARTS   AGE
demodb-0                             1/1     Running             0          6m52s
demodb-1                             1/1     Running             0          7m12s
demodb-2                             1/1     Running             0          8m2s
demodb-3                             0/1     ContainerCreating   0          19s

[root@k8s-master PodControl]# kubectl patch sts/demodb -p '{"spec":{"replicas":2}}'  # scale down to 2 replicas; deletion runs in reverse order

[root@k8s-master PodControl]# kubectl get pod   
NAME                                 READY   STATUS        RESTARTS   AGE
demodb-0                             1/1     Running       0          9m19s
demodb-1                             1/1     Running       0          6m9s
demodb-2                             1/1     Running       0          5m25s
demodb-3                             1/1     Running       0          2m18s
demodb-4                             0/1     Terminating   0          83s

[root@k8s-master PodControl]# kubectl get pod
NAME                                 READY   STATUS        RESTARTS   AGE
demodb-0                             1/1     Running       0          10m
demodb-1                             1/1     Running       0          7m34s
demodb-2                             1/1     Running       0          6m50s
demodb-3                             1/1     Terminating   0          3m43s

[root@k8s-master PodControl]# kubectl get pod 
NAME                                 READY   STATUS    RESTARTS   AGE
demodb-0                             1/1     Running   0          12m
demodb-1                             1/1     Running   0          9m46s
pod-31692                            1/1     Running   0          85m
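Note that scaling down does not delete the PVCs created for the removed Pods; they stay Bound so a later scale-up reattaches the same data. A quick check, and the manual cleanup once the data is disposable:

kubectl get pvc                     # claims for the removed Pods remain Bound
kubectl delete pvc data-demodb-4    # delete manually only if the data is no longer needed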

# Test data consistency: although the Pods sit on different PVs, they replicate data among themselves to stay consistent

root@pod-31692 # curl http://demodb:9907/get/data
www.google.com
