Kubernetes Ceph RBD volume (2): Using Ceph RBD as a persistent volume
2017-05-22 15:51
Below is an example of using a Ceph RBD image as a persistent volume:
A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
1. First, create an RBD image in the pool (a minimal sketch follows).
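A minimal sketch of this step, assuming the pool kube and the Ceph client client.kube already exist (both names come from the PV spec below); the image name and size must match the PV's image: and storage: fields:
rbd create kube/vol2 --size 51200   # size is in MB by default: 51200 MB = 50 GiB
# On older node kernels the krbd module may reject newer image features;
# if mapping fails, recreate the image with only layering enabled:
rbd create kube/vol2 --size 51200 --image-feature layering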
2. Create the PV
[root@testnew kube]# cat pv_ceph.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 10.0.200.11:6789
      - 10.0.200.13:6789
      - 10.0.200.14:6789
    pool: kube
    image: vol2
    user: kube
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
kubectl create -f pv_ceph.yaml
[root@testnew kube]# kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                   REASON    AGE
ceph-pv                                    50Gi       RWO           Recycle         Bound     default/ceph-claim                4d
pvc-ac668f99-3b8b-11e7-8af9-fa163e01317b   20Gi       RWO           Delete          Bound     default/ceph-claim-sc             4d
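The PV above references a secret named ceph-secret that this post never shows being created. A minimal sketch, assuming the keyring for client.kube is readable on this host; the secret data key must be named key, which is what the kubelet rbd plugin expects:
kubectl create secret generic ceph-secret --type=kubernetes.io/rbd \
    --from-literal=key="$(ceph auth get-key client.kube)"
kubectl create secret base64-encodes the value itself, so the raw key from ceph auth get-key can be passed in directly.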
3. Create the PVC
[root@testnew kube]# cat pvc_ceph.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
Note: a claim binds one-to-one to a whole PV, so the 20Gi requested here is only a lower bound; the claim binds the entire 50Gi PV, as the output below shows. (A sketch for pinning a claim to a specific PV follows this step.)
kubectl create -f pvc_ceph.yaml
[root@testnew kube]# kubectl get pvc
NAME         STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   Bound     ceph-pv   50Gi       RWO           4d
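As the note above says, a claim gets a whole PV, but the control plane is free to bind it to any PV that satisfies the request. If the claim must land on this particular PV, one option is the standard spec.volumeName field in the PVC; a sketch, reusing the names from this example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  volumeName: ceph-pv        # bind to this PV only
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi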
4. Create the pod
[root@testnew kube]# cat frontend-pvc-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontendpvc
  labels:
    name: frontendpvc
spec:
  replicas: 1
  selector:
    name: frontendpvc
  template:
    metadata:
      labels:
        name: frontendpvc
    spec:
      containers:
      - name: frontendpvc
        image: kubeguide/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /mnt/rbd
          name: ceph-vol
      volumes:
      - name: ceph-vol
        persistentVolumeClaim:
          claimName: ceph-claim
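Create the controller and check that the RBD image is mounted inside the pod. The pod name below is a placeholder; use whatever name kubectl get pods reports for the RC's replica:
kubectl create -f frontend-pvc-controller.yaml
kubectl get pods -l name=frontendpvc
kubectl exec <pod-name> -- df -h /mnt/rbd   # should show an rbd device mounted at /mnt/rbd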