
Kubernetes Ceph RBD volumes (3): using dynamic volumes (Ceph RBD)

2017-05-19 14:34
I experimented with Kubernetes dynamic volumes backed by Ceph RBD. The steps are as follows:

1. Prepare a StorageClass configured with the Ceph RBD information: the Ceph monitor IP, user, secret, and Ceph pool. The secret must be created beforehand (a sketch of creating it is shown below).
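The StorageClass below references a ceph-secret in the default namespace, so that secret has to exist first. A minimal sketch of creating it from the client.kube key, assuming the key can be fetched with ceph auth get-key (adapt to your cluster):

[root@testnew kube]# kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
    --from-literal=key="$(ceph auth get-key client.kube)" --namespace=default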

storageclass:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: kubepool
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: 'true'
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.200.11:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: kube
  userId: kube
  userSecretName: ceph-secret

2. Prepare the PersistentVolumeClaim.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim-sc
  annotations:
    volume.beta.kubernetes.io/storage-class: 'kubepool'
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
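With both manifests written out, they can be applied with kubectl. The file names here (storageclass.yaml, pvc-sc.yaml) are just assumed for this sketch:

[root@testnew kube]# kubectl create -f storageclass.yaml
[root@testnew kube]# kubectl create -f pvc-sc.yaml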

3. Check the Kubernetes resources and the RBD images. An image has been created in the pool and formatted; the default filesystem is ext4.

[root@testnew kube]# kubectl get storageclass
NAME                 TYPE
kubepool (default)   kubernetes.io/rbd
[root@testnew kube]# kubectl get pvc
NAME            STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
ceph-claim      Bound     ceph-pv                                    50Gi       RWO           3h
ceph-claim-sc   Bound     pvc-ac668f99-3b8b-11e7-8af9-fa163e01317b   20Gi       RWO           7m
[root@testnew kube]# kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                   REASON    AGE
ceph-pv                                    50Gi       RWO           Recycle         Bound     default/ceph-claim                3h
pvc-ac668f99-3b8b-11e7-8af9-fa163e01317b   20Gi       RWO           Delete          Bound     default/ceph-claim-sc             7m

Note: the Delete reclaim policy is the default for dynamically provisioned volumes. It means that when the PVC is deleted, the corresponding Ceph RBD image is deleted as well (see the sketch after the image listing below).

[root@testnew kube]# rbd -p kube -n client.kube ls
kubernetes-dynamic-pvc-ac6b857a-3b8b-11e7-bdfc-fa163e01317b
vol1
vol2
vol50
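To confirm the Delete reclaim policy mentioned above, you could delete the claim and list the pool again. This is a sketch of the check, not output from the original run:

[root@testnew kube]# kubectl delete pvc ceph-claim-sc
[root@testnew kube]# rbd -p kube -n client.kube ls   # the kubernetes-dynamic-pvc-... image should be gone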

4. Create an RC and verify that the volume can be used.

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontendpvcsc
  labels:
    name: frontendpvcsc
spec:
  replicas: 1
  selector:
    name: frontendpvcsc
  template:
    metadata:
      labels:
        name: frontendpvcsc
    spec:
      containers:
      - name: frontendpvcsc
        image: kubeguide/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /mnt/rbd
          name: ceph-vol
      volumes:
      - name: ceph-vol
        persistentVolumeClaim:
          claimName: ceph-claim-sc
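Before exec'ing into the container below, the RC has to be created and the pod name looked up. The file name rc-pvcsc.yaml is assumed for this sketch:

[root@testnew kube]# kubectl create -f rc-pvcsc.yaml
[root@testnew kube]# kubectl get pods -l name=frontendpvcsc   # gives the pod name used below, e.g. frontendpvcsc-xzz15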

[root@testnew kube]# kubectl exec frontendpvcsc-xzz15 -it bash
root@frontendpvcsc-xzz15:/var/www/html# df -h
Filesystem                                                                                        Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:1-528252-1915f387c1f17925e19bbcaa4324e401cc7c1abb5e86a11ee6bddda38f0db1da   10G  609M  9.4G   6% /
tmpfs                                                                                             3.9G     0  3.9G   0% /dev
tmpfs                                                                                             3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1                                                                                          19G  4.9G   14G  27% /etc/hosts
/dev/rbd0                                                                                          20G   45M   19G   1% /mnt/rbd
shm                                                                                                64M     0   64M   0% /dev/shm
You can see that the volume has been mounted at /mnt/rbd.
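As a further check (a sketch, not from the original run), you could write a file under /mnt/rbd, delete the pod so the RC recreates it, and confirm the file survives:

root@frontendpvcsc-xzz15:/var/www/html# touch /mnt/rbd/hello.txt
root@frontendpvcsc-xzz15:/var/www/html# exit
[root@testnew kube]# kubectl delete pod frontendpvcsc-xzz15
[root@testnew kube]# kubectl exec <new-pod-name> -it -- ls /mnt/rbd   # hello.txt should still be there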

Summary:

Dynamic volumes are very convenient for users: the Ceph pool acts as a storage resource pool from which storage is allocated on demand.