
Using a GlusterFS StorageClass in k8s

source link: https://www.cnblogs.com/v-fan/p/16261777.html

Using GlusterFS in Kubernetes

Introduction to GlusterFS

GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace to provide shared file storage.

Features:

  • Scales to several petabytes of capacity
  • Handles thousands of clients
  • POSIX-compatible interface
  • Runs on commodity hardware; ordinary servers are enough
  • Works on file systems that support extended attributes, such as ext4 and XFS
  • Supports industry-standard protocols such as NFS and SMB
  • Provides many advanced features such as replication, quotas, geo-replication, snapshots, and bitrot detection
  • Can be tuned for different workloads

Some GlusterFS terminology:

  • Brick: the basic storage unit in GlusterFS, usually a directory on a server in the trusted storage pool, identified by hostname and directory as 'SERVER:EXPORT'.
  • Node: a host that owns one or more bricks.
  • Volume: a logical collection of bricks.
  • Client: a machine that has mounted a GlusterFS volume.
  • GFID: every file or directory in a GlusterFS volume is associated with a unique 128-bit identifier, which is used to emulate an inode.
  • Namespace: each Gluster volume exports a single namespace as a POSIX mount point.
  • RDMA: Remote Direct Memory Access; allows direct memory access between hosts without going through either side's operating system.
  • RRDNS: round-robin DNS, a load-balancing technique in which DNS responses rotate through different servers.
  • Self-heal: a background process that detects and repairs inconsistencies between files and directories in replicated volumes.
  • Split-brain: the state in which replicas of the same file have diverged and cannot be reconciled automatically.
  • Volfile: the configuration file of a GlusterFS process, usually located under /var/lib/glusterd/vols/volname.

GlusterFS volume types:

A volume is a collection of bricks. A GlusterFS cluster can contain multiple volumes, which clients mount to store data. Several volume types are available (a creation example follows the list):

  1. Distributed volume (the default, DHT): files are placed on different bricks according to a hash algorithm; losing a single brick loses the files stored on it; no separate metadata server is needed.
  2. Striped volume: files are split into chunks that are spread across different bricks; recommended only for very large files.
  3. Replicated volume (AFR): the same data is kept in sync on multiple bricks, so data remains available when a single node fails; writes are transactional to keep the replicas consistent.
  4. Distributed replicated volume: a combination of AFR and DHT; requires at least 4 servers; reads can be load-balanced across replicas.
  5. Striped replicated volume: a combination of striping and AFR; requires at least 4 servers.
  6. Distributed striped replicated volume: requires at least 8 servers, grouped in sets of 4.
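
For example, a 3-way replicated volume can be created from one brick per node. This is only an illustrative sketch: it assumes three peered nodes named node1, node2, and node3, each with an empty /data/brick1 directory on an ext4 or XFS file system.

    ## Create and start a 3-way replicated volume (add 'force' only if the
    ## bricks live on the root file system)
    # gluster volume create rep-vol replica 3 \
        node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
    # gluster volume start rep-vol
    # gluster volume info rep-vol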

Setting up GlusterFS and understanding its processes

Hostname   IP address       Role
master     192.168.72.100   k8s master, glusterfs, heketi
node1      192.168.72.101   k8s worker node
node2      192.168.72.102   k8s worker node
  1. Obtain the gluster rpm packages. I am using version 7.5 here; the following rpm packages need to be downloaded:

    # ls
    glusterfs-7.5-1.el7.x86_64.rpm                 glusterfs-libs-7.5-1.el7.x86_64.rpm
    glusterfs-api-7.5-1.el7.x86_64.rpm             glusterfs-server-7.5-1.el7.x86_64.rpm
    glusterfs-api-devel-7.5-1.el7.x86_64.rpm       psmisc-22.20-15.el7.x86_64.rpm
    glusterfs-cli-7.5-1.el7.x86_64.rpm             userspace-rcu-0.10.0-3.el7.x86_64.rpm
    glusterfs-client-xlators-7.5-1.el7.x86_64.rpm  userspace-rcu-0.7.16-1.el7.x86_64.rpm
    glusterfs-fuse-7.5-1.el7.x86_64.rpm
    

    Download location: https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-9/Packages/; if the hosts have internet access, you can simply configure a yum repo and install the version you need.

  2. Install the rpm packages

    ## Install the dependency packages first
    yum -y install attr psmisc rpcbind
    
    ## Install glusterfs
    rpm -ivh glusterfs-libs-7.5-1.el7.x86_64.rpm 
    rpm -ivh glusterfs-7.5-1.el7.x86_64.rpm
    rpm -ivh glusterfs-api-7.5-1.el7.x86_64.rpm  glusterfs-cli-7.5-1.el7.x86_64.rpm  glusterfs-client-xlators-7.5-1.el7.x86_64.rpm
    rpm -ivh glusterfs-fuse-7.5-1.el7.x86_64.rpm
    rpm -ivh userspace-rcu-0.10.0-3.el7.x86_64.rpm
    rpm -ivh glusterfs-server-7.5-1.el7.x86_64.rpm
    rpm -ivh glusterfs-api-devel-7.5-1.el7.x86_64.rpm
    

    Because the packages depend on each other, installing them in the order above goes through cleanly; if you do not want to bother with the order, you can force the installation with --force --nodeps.
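
    Alternatively, yum can work out the ordering between local packages by itself (a small sketch, assuming all the downloaded rpm files sit in the current directory):

    ## Install all local rpm files in one transaction; yum resolves the order
    # yum -y localinstall ./*.rpm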

  3. Start the services

    ## Two services need to be started: glusterd and glusterfsd
    systemctl start glusterd glusterfsd
    
  4. The difference between the glusterd and glusterfsd services, and the gluster and glusterfs commands

    glusterd is the management daemon. It processes commands sent by the gluster CLI and handles cluster management, storage pool management, brick management, rebalancing, snapshot management, and so on. Its default port is 24007:

    # ss -tnlp | grep 24007
    LISTEN     0      128          *:24007                    *:*                   users:(("glusterd",pid=7873,fd=10))
    
    ## The process
    # ps -ef | grep gluster
    root       7873      1  0 Mar28 ?        00:00:41 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
    

    glusterfsd is the server-side brick process: one glusterfsd is started for every brick in the storage pool. It handles client read and write requests. Ports start at 49152, and each additional brick takes the next port. I have two bricks running here:

    # ss -tnlp | grep gluster
    LISTEN     0      128          *:49152                    *:*                   users:(("glusterfsd",pid=7898,fd=11))
    LISTEN     0      128          *:49153                    *:*                   users:(("glusterfsd",pid=44275,fd=11))
    
    ## The processes
    # ps -ef | grep gluster
    root       7898      1  0 Mar28 ?        00:04:46 /usr/sbin/glusterfsd -s 192.168.72.100 --volfile-id xtest-fs.192.168.72.100.data-text-mind -p /var/run/gluster/vols/xtest-fs/192.168.72.100-data-text-mind.pid -S /var/run/gluster/3b5c03678c749dfd.socket --brick-name /data/text-mind -l /var/log/glusterfs/bricks/data-text-mind.log --xlator-option *-posix.glusterd-uuid=46bfd5ab-07be-4a97-993f-1ddab13e0ee9 --process-name brick --brick-port 49152 --xlator-option xtest-fs-server.listen-port=49152
    

    glusterfs is the client-side component. It mounts a volume from one of the servers in the cluster and presents it to the user as a directory. One glusterfs process appears for every volume the user mounts, as shown below:

    # ps -ef | grep -w glusterfs | grep -v glusterfsd
    root      31496      1  0 Mar25 ?        00:02:46 /usr/sbin/glusterfs --process-name fuse --volfile-server=yq01-aip-aikefu10.yq01.baidu.com --volfile-id=xtest-fs /data/mnt
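
    ## A volume can be mounted on a client through the FUSE mount helper, e.g.
    ## (xtest-fs is the volume shown above; /data/mnt must already exist):
    # mount -t glusterfs 192.168.72.100:/xtest-fs /data/mnt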
    

    The gluster command is the client CLI used to issue requests, for example to create or list volumes:

    # gluster volume create xtest-fs 192.168.72.100:/data/xtest-fs/brick force
    # gluster volume list 
    
  5. If several gfs hosts form a cluster, then after the service has been installed on all of them, the peers need to be added to the trusted pool

    ## Add the peers
    # gluster peer probe node1
    # gluster peer probe node2
    
    ## View the current trusted pool
    # gluster pool list
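
    ## Peer state can also be checked on each node to confirm the probes succeeded
    # gluster peer status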
    

Setting up heketi to manage GFS

heketi provides a RESTful management interface that can be used to manage the lifecycle of GlusterFS volumes. With heketi, platforms such as OpenStack Manila, Kubernetes, and OpenShift can request dynamically provisioned GlusterFS volumes. Heketi automatically picks bricks across the cluster to build the requested volumes and ensures that the data replicas are spread across different failure domains.

  1. Install the heketi service. If a yum repo is available you can install it directly with yum; otherwise download the rpm packages separately, or run it via docker or k8s

    # yum -y install heketi heketi-client
    
    # heketi --version 
    Heketi 9.0.0
    
    # systemctl start heketi.service
    
  2. Set up an SSH key so that the heketi node can log in to the glusterfs nodes without a password

    # ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
    # ssh-copy-id -i /etc/heketi/heketi_key.pub master
    # chown -R heketi:heketi /etc/heketi/    ## -R so the heketi user can also read the key files
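    ## Quick check that key-based login works before heketi relies on it
    ## (assumes root SSH login is permitted on the gfs node)
    # ssh -i /etc/heketi/heketi_key root@master hostname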
    
  3. Edit heketi's configuration file /etc/heketi/heketi.json

    heketi supports three executors: mock, ssh, and kubernetes. The official recommendation is to use mock for testing and development and ssh in production; if glusterfs itself runs on k8s, use the kubernetes executor.

    The relevant parts of the configuration are explained below:

    {
      ## Default port of the heketi service; it can be changed.
      "_port_comment": "Heketi Server Port Number",
      "port": "8080",
      
      ## Whether to enable authentication; normally yes.
      "_use_auth": "Enable JWT authorization. Please enable for deployment",
      "use_auth": true,
    
      ## If authentication is enabled, set the users and their keys here.
      "_jwt": "Private keys for access",
      "jwt": {
        "_admin": "Admin has access to all APIs",
        "admin": {
          "key": "My Secret"
        },
        "_user": "User only has access to /volumes endpoint",
        "user": {
          "key": "My Secret"
        }
      },
    
      "_glusterfs_comment": "GlusterFS Configuration",
      "glusterfs": {
        "_executor_comment": [
          "Execute plugin. Possible choices: mock, ssh",
          "mock: This setting is used for testing and development.",
          "      It will not send commands to any node.",
          "ssh:  This setting will notify Heketi to ssh to the nodes.",
          "      It will need the values in sshexec to be configured.",
          "kubernetes: Communicate with GlusterFS containers over",
          "            Kubernetes exec api."
        ],
        ## How heketi talks to gfs; ssh is chosen here.
        "executor": "ssh",
    
        ## SSH key configuration
        "_sshexec_comment": "SSH username and private key file information",
        "sshexec": {
          "keyfile": "/etc/heketi/heketi_key",
          "user": "root",
          "port": "22",
          "fstab": "/etc/fstab"
        },
    
        ## Kubernetes credentials (only used with the kubernetes executor)
        "_kubeexec_comment": "Kubernetes configuration",
        "kubeexec": {
          "host" :"https://kubernetes.host:8443",
          "cert" : "/path/to/crt.file",
          "insecure": false,
          "user": "kubernetes username",
          "password": "password for kubernetes user",
          "namespace": "OpenShift project or Kubernetes namespace",
          "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
        },
    
        "_db_comment": "Database file name",
        ## Where heketi stores its database
        "db": "/var/lib/heketi/heketi.db",
    
        "_loglevel_comment": [
          "Set log level. Choices are:",
          "  none, critical, error, warning, info, debug",
          "Default is warning"
        ],
        ## heketi log level
        "loglevel" : "debug"
      }
    }
    
  4. Start the heketi service, enable it at boot, and test access

    # systemctl start heketi
    # systemctl enable heketi
    # curl http://master:8080/hello
    Hello from Heketi
    
  5. Configure heketi to connect to the gfs service

    (1) On the cluster's main node, set the environment variable; adjust the port to match your setup

    # export HEKETI_CLI_SERVER=http://master:8080
    

    (2) On the cluster's main node, edit the /usr/share/heketi/topology-sample.json configuration file to define the nodes and devices to be added:

    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "master"
                                ],
                                "storage": [
                                    "192.168.72.100"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            {
                                "name": "/dev/sdb",
                                "destroydata": false
                            }
                        ]
                    }
                ]
            }
        ]
    }
    

    manage specifies the gfs host's domain name and storage its IP address; devices lists the data disks that will hold the gfs data. Each disk should be a clean, unpartitioned disk. A partition would probably work as well, but I have not tried it.
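
    Before loading the topology it is worth checking that the device really is empty (a small sketch; /dev/sdb is the device from the topology above, and wipefs is destructive, so run it only on a disk you intend to hand over to heketi):

    ## Show the disk and any existing partitions or signatures
    # lsblk /dev/sdb
    # blkid /dev/sdb
    ## If old signatures remain, wipe them so heketi can create a fresh LVM PV
    # wipefs -a /dev/sdb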

    Several hosts and several disks can be added; the format looks like this:

    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "192.168.10.100"
                                ],
                                "storage": [
                                    "192.168.10.100"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            {
                                "name": "/dev/sdb",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdc",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdd",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sde",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdf",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdg",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdh",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdi",
                                "destroydata": false
                            }
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "192.168.10.101"
                                ],
                                "storage": [
                                    "192.168.10.101"
                                ]
                            },
                            "zone": 2
                        },
                        "devices": [
                            {
                                "name": "/dev/sdb",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdc",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdd",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sde",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdf",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdg",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdh",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdi",
                                "destroydata": false
                            }
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "192.168.10.102"
                                ],
                                "storage": [
                                    "192.168.10.102"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            {
                                "name": "/dev/sdb",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdc",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdd",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sde",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdf",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdg",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdh",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdi",
                                "destroydata": false
                            }
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "192.168.10.103"
                                ],
                                "storage": [
                                    "192.168.10.103"
                                ]
                            },
                            "zone": 2
                        },
                        "devices": [
                            {
                                "name": "/dev/sdb",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdc",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdd",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sde",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdf",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdg",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdh",
                                "destroydata": false
                            },
                            {
                                "name": "/dev/sdi",
                                "destroydata": false
                            }
                        ]
                    }
                ]
            }
        ]
    }
    

    (3) Load the topology file into heketi so that heketi can manage gfs

    # heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology load --json=/usr/share/heketi/topology-sample.json
    Creating cluster ... ID: 6b7b8affdb10add55ea367e7e2f8e091
    	Allowing file volumes on cluster.
    	Allowing block volumes on cluster.
    	Creating node master ... ID: 96bbfbf66bc8ad551fe5a8c8ef289218
    		Adding device /dev/sdb ... OK
    

    If you get the error "Error: Unable to get topology information: Invalid JWT token: Token missing iss claim", it is because the username and password were not supplied (--user admin --secret 'My Secret').
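
    To avoid repeating the credentials on every call, heketi-cli can also read them from environment variables (a convenience sketch; support for these variables may vary slightly between heketi-cli versions):

    # export HEKETI_CLI_SERVER=http://master:8080
    # export HEKETI_CLI_USER=admin
    # export HEKETI_CLI_KEY='My Secret'
    # heketi-cli topology info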

    (4) Create a test volume to check that everything works

    # heketi-cli volume create --size=10 --durability=none --user "admin" --secret "My Secret"
    Name: vol_38412746b79cf0d7ffa69a3969c1cfed
    Size: 10
    Volume Id: 38412746b79cf0d7ffa69a3969c1cfed
    Cluster Id: 6b7b8affdb10add55ea367e7e2f8e091
    Mount: 192.168.72.100:vol_38412746b79cf0d7ffa69a3969c1cfed
    Mount Options: backup-volfile-servers=
    Block: false
    Free Size: 0
    Reserved Size: 0
    Block Hosting Restriction: (none)
    Block Volumes: []
    Durability Type: none
    Distribute Count: 1
    

    The volume was created successfully, so heketi can now manage the gfs service.
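
    Since this volume was only a connectivity test, it can be removed again using the Volume Id from the output above:

    # heketi-cli volume delete 38412746b79cf0d7ffa69a3969c1cfed --user "admin" --secret "My Secret"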

Creating a StorageClass in the k8s cluster for dynamic PV/PVC management

  1. Create the StorageClass

    vi storageclass-gfs-heketi-distributed.yaml

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gluster-distributed
    provisioner: kubernetes.io/glusterfs
    reclaimPolicy: Retain
    parameters:
      resturl: "http://192.168.72.100:8080"
      restauthenabled: "true"
      restuser: "admin"
      restuserkey: "My Secret"
      gidMin: "40000"
      gidMax: "50000"
      volumetype: "none"
    allowVolumeExpansion: true
    volumeBindingMode: Immediate
    

    provisioner: the storage provisioner to use; it depends on the storage backend.

    reclaimPolicy: defaults to "Delete", which removes the PV and the backing volume and bricks (LVM) when the PVC is deleted; "Retain" keeps the data and requires manual cleanup.

    resturl: the URL of the heketi REST API.

    restauthenabled: optional, defaults to "false"; must be set to "true" when the heketi service has authentication enabled.

    restuser: optional; the username to use when authentication is enabled.

    restuserkey: optional; the password of that user when authentication is enabled.

    secretNamespace: optional; when authentication is enabled, the namespace of the Secret that holds the heketi password.

    secretName: optional; when authentication is enabled, the heketi password can be stored in a Secret instead of restuserkey (see the sketch after these parameters).

    clusterid: optional; the ID of the cluster to provision from, or a comma-separated list such as "id1,id2".

    volumetype: optional; the volume type and its parameters. If it is not set, the provisioner decides. For example, "volumetype: replicate:3" is a 3-way replicated volume, "volumetype: disperse:4:2" is a dispersed volume with 4 data bricks and 2 redundancy bricks, and "volumetype: none" is a distributed volume.

    allowVolumeExpansion: whether PVCs using this class may be expanded; the field defaults to false if omitted, so set it explicitly when expansion is wanted.

    volumeBindingMode: when the PV is bound; valid values are WaitForFirstConsumer and Immediate.
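
    When secretName/secretNamespace are used instead of restuserkey, the heketi admin key is stored in a Secret of type kubernetes.io/glusterfs (a minimal sketch; the name heketi-secret is illustrative):

    # kubectl create secret generic heketi-secret \
        --type="kubernetes.io/glusterfs" \
        --from-literal=key='My Secret' \
        --namespace=default

    The StorageClass would then carry secretName: "heketi-secret" and secretNamespace: "default" in place of restuserkey.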

    Create the StorageClass:

    # kubectl create -f storageclass-gfs-heketi-distributed.yaml
    
    # kubectl get sc 
    NAME                   PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
    gluster-distributed    kubernetes.io/glusterfs                Retain          Immediate              true                
    
  2. Create a PVC

    vi glusterfs-pvc.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: glusterfs-test
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      storageClassName: gluster-distributed
    

    Create the PVC:

    # kubectl create -f glusterfs-pvc.yaml
    # kubectl get pvc 
    NAME                                                                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
    glusterfs-test                                                                                 Bound    pvc-0a3cffea-36b0-4609-a983-88df995b99e8   5Gi        RWX            gluster-distributed   5s
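
    ## The dynamically provisioned PV can be inspected as well
    # kubectl get pv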
    

    At this point, df -h on the gfs node shows that a new brick file system has been mounted:

    # df -h | tail -n 1 
    /dev/mapper/vg_5b4ef948902b85845afd387ca71c6858-brick_0fd0afa344ca798d7a9fccab5bb60a9e  5.0G   33M  5.0G    1% /var/lib/heketi/mounts/vg_5b4ef948902b85845afd387ca71c6858/brick_0fd0afa344ca798d7a9fccab5bb60a9e
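
    ## The new volume also shows up on the heketi side
    # heketi-cli volume list --user "admin" --secret "My Secret"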
    
  3. Create a deployment and test reads and writes

    vi glusterfs-deploy.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: glusterfs-deploy
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: registry:5000/nginx:1.13
            imagePullPolicy: IfNotPresent
            ports:
            - name: web
              containerPort: 80
            volumeMounts:
            - name: pvc-glusterfs
              mountPath: /data
          volumes:
          - name: pvc-glusterfs
            persistentVolumeClaim:
              claimName: glusterfs-test
    

    Create the deployment:

    # kubectl create -f glusterfs-deploy.yaml 
    deployment.apps/glusterfs-deploy created
    
    # kubectl get pod | grep glus
    glusterfs-deploy-cfd8c8578-45r9z                            1/1     Running   0          72s
    glusterfs-deploy-cfd8c8578-vjrzn                            1/1     Running   0          104s
    
    # kubectl exec -it glusterfs-deploy-cfd8c8578-45r9z -- bash
    # cd /data/
    # for i in `seq 1 10`; do echo $i > $i ; done 
    # ls
    1  10  2  3  4	5  6  7  8  9
    # cat 1 
    1
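
    ## The PVC is ReadWriteMany and the Deployment runs two replicas,
    ## so the second pod should list the same ten files
    # kubectl exec glusterfs-deploy-cfd8c8578-vjrzn -- ls /data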
    

At this point, gfs can be used as a k8s StorageClass to dynamically manage PVs and PVCs.

