Kubernetes Ceph RBD volumes (3): using dynamic volumes (Ceph RBD)
2017-05-19 14:34
I experimented with Kubernetes dynamic volume provisioning backed by Ceph RBD. The steps are as follows:
1. Prepare a StorageClass configured with the Ceph RBD details: the Ceph monitor address, the admin/user IDs, the secrets, and the Ceph pool. The secret must be created beforehand.
StorageClass:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: kubepool
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: 'true'
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.200.11:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: kube
  userId: kube
  userSecretName: ceph-secret
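The `ceph-secret` referenced above must exist before the provisioner can create images. A minimal sketch of such a Secret (the `key` value is a placeholder, not a real Ceph key; in practice it is the base64-encoded output of `ceph auth get-key` for the corresponding Ceph user):

```yaml
# Sketch only: replace the key placeholder with your own base64-encoded Ceph key.
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
type: kubernetes.io/rbd
data:
  # base64 of the output of: ceph auth get-key client.kube
  key: <base64-encoded-ceph-key>
```

This post uses the same secret name for both `adminSecretName` and `userSecretName`; in a multi-tenant setup they would typically be separate secrets with different capabilities.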
2. Prepare a PersistentVolumeClaim that references the StorageClass:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim-sc
  annotations:
    volume.beta.kubernetes.io/storage-class: 'kubepool'
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
3. Check the Kubernetes resources and the RBD images. An image has been created in the pool and formatted; the default filesystem is ext4.
[root@testnew kube]# kubectl get storageclass
NAME TYPE
kubepool (default) kubernetes.io/rbd
[root@testnew kube]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
ceph-claim Bound ceph-pv 50Gi RWO 3h
ceph-claim-sc Bound pvc-ac668f99-3b8b-11e7-8af9-fa163e01317b 20Gi RWO 7m
[root@testnew kube]# kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                   REASON   AGE
ceph-pv                                    50Gi       RWO           Recycle         Bound    default/ceph-claim               3h
pvc-ac668f99-3b8b-11e7-8af9-fa163e01317b   20Gi       RWO           Delete          Bound    default/ceph-claim-sc            7m
Note: the Delete reclaim policy is the default for dynamically provisioned volumes. It means that when the PVC is deleted, the corresponding Ceph RBD image is deleted as well.
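The reclaim policy lives in the spec of the dynamically provisioned PV object. A fragment of such a PV might look roughly like the following (illustrative, not a complete object; values mirror the listing above):

```yaml
# Fragment of a dynamically provisioned PV (sketch, not complete)
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete   # RBD image removed when the PVC is deleted
  rbd:
    monitors:
      - 10.0.200.11:6789
    pool: kube
    image: kubernetes-dynamic-pvc-ac6b857a-3b8b-11e7-bdfc-fa163e01317b
    user: kube
    secretRef:
      name: ceph-secret
```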
[root@testnew kube]# rbd -p kube -n client.kube ls
kubernetes-dynamic-pvc-ac6b857a-3b8b-11e7-bdfc-fa163e01317b
vol1
vol2
vol50
4. Create a ReplicationController to verify that the volume can be used:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontendpvcsc
  labels:
    name: frontendpvcsc
spec:
  replicas: 1
  selector:
    name: frontendpvcsc
  template:
    metadata:
      labels:
        name: frontendpvcsc
    spec:
      containers:
      - name: frontendpvcsc
        image: kubeguide/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /mnt/rbd
          name: ceph-vol
      volumes:
      - name: ceph-vol
        persistentVolumeClaim:
          claimName: ceph-claim-sc
[root@testnew kube]# kubectl exec frontendpvcsc-xzz15 -it bash
root@frontendpvcsc-xzz15:/var/www/html# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:1-528252-1915f387c1f17925e19bbcaa4324e401cc7c1abb5e86a11ee6bddda38f0db1da 10G 609M 9.4G 6% /
tmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vda1 19G 4.9G 14G 27% /etc/hosts
/dev/rbd0 20G 45M 19G 1% /mnt/rbd
shm 64M 0 64M 0% /dev/shm
The volume has been mounted into the container at /mnt/rbd.
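A quick way to confirm that the RBD-backed data survives a pod restart is to write a file, delete the pod, and check the mount in the replacement pod the RC creates (a sketch; the pod name is from the listing above and will differ on another cluster):

```
kubectl exec frontendpvcsc-xzz15 -- touch /mnt/rbd/persist-test
kubectl delete pod frontendpvcsc-xzz15        # the RC schedules a new pod
kubectl exec <new-pod-name> -- ls /mnt/rbd    # persist-test should still be there
```

Because the RBD image belongs to the PVC, not the pod, the data persists as long as the claim exists.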
Summary:
Dynamic volumes are convenient for users to consume: they act like a storage resource pool from which capacity is allocated on demand.