
OpenStack Management 39 - Connecting cinder to Multiple ceph Storage Backends

2017-04-17 16:28

Environment

The current OpenStack deployment is working normally.
The backend ceph storage is already more than 60% full.
We do not want to expand the existing cluster directly, since that would trigger large-scale data migration.
Instead, a second ceph cluster has been built and will become a new ceph backend for OpenStack.
The old ceph cluster is called ceph-A; the pool in use is volumes.
The new ceph cluster is called ceph-B; the pool in use is develop-ceph.


Goal

Connect OpenStack to two different ceph backends at the same time.


Configuration on the cinder server

1. ceph connection configuration
2. cinder service configuration
3. Managing the cinder services from the command line
4. Verification


ceph connection configuration

1. Copy the configuration files from both ceph clusters into /etc/ceph on the cinder server, giving each cluster's files distinct names:

[root@hh-yun-db-129041 ceph]# tree `pwd`
/etc/ceph
├── ceph.client.admin-develop.keyring      <- admin keyring from the ceph-B cluster
├── ceph.client.admin-volumes.keyring      <- admin keyring from the ceph-A cluster
├── ceph.client.developcinder.keyring      <- keyring for the developcinder user in the ceph-B cluster
├── ceph.client.cinder.keyring             <- keyring for the cinder user in the ceph-A cluster
├── ceph.client.mon-develop.keyring        <- mon keyring from the ceph-B cluster
├── ceph.client.mon-volumes.keyring        <- mon keyring from the ceph-A cluster
├── ceph-develop.conf                      <- ceph-B cluster configuration file (mon addresses and other cluster info)
└── ceph-volumes.conf                      <- ceph-A cluster configuration file (mon addresses and other cluster info)


Note that each ceph.client.(username).keyring file must be named after the valid ceph user that will connect to ceph; otherwise the cinder server cannot obtain the correct permissions.
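For reference, a user such as developcinder is normally created on the ceph-B side with the usual rbd capabilities before its keyring is copied over; a minimal sketch (the exact capability string is an assumption and may differ per deployment):

# run on a ceph-B admin node: create the cinder user for the develop-ceph pool
# and write out the keyring that gets copied to the cinder server
ceph auth get-or-create client.developcinder \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=develop-ceph' \
    -o ceph.client.developcinder.keyring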

2. From the command line, test the connection to each ceph backend.

ceph-A connection test

[root@hh-yun-db-129041 ceph]# ceph -c ceph-volumes.conf -k ceph.client.admin-volumes.keyring -s
cluster xxx-xxx-xxxx-xxxx-xxxx
health HEALTH_OK
monmap e3: 5 mons at {hh-yun-ceph-cinder015-128055=240.30.128.55:6789/0,hh-yun-ceph-cinder017-128057=240.30.128.57:6789/0,hh-yun-ceph-cinder024-128074=240.30.128.74:6789/0,hh-yun-ceph-cinder025-128075=240.30.128.75:6789/0,hh-yun-ceph-cinder026-128076=240.30.128.76:6789/0}, election epoch 452, quorum 0,1,2,3,4 hh-yun-ceph-cinder015-128055,hh-yun-ceph-cinder017-128057,hh-yun-ceph-cinder024-128074,hh-yun-ceph-cinder025-128075,hh-yun-ceph-cinder026-128076
osdmap e170088: 226 osds: 226 up, 226 in
pgmap v50751302: 20544 pgs, 2 pools, 157 TB data, 40687 kobjects
474 TB used, 376 TB / 850 TB avail
20537 active+clean
7 active+clean+scrubbing+deep
client io 19972 kB/s rd, 73591 kB/s wr, 3250 op/s


ceph-B connection test

[root@hh-yun-db-129041 ceph]# ceph -c ceph-develop.conf -k  ceph.client.admin-develop.keyring -s
cluster 4bf07d3e-a289-456d-9bd9-5a89832b413b
health HEALTH_OK
monmap e1: 5 mons at {240.30.128.214=240.30.128.214:6789/0,240.30.128.215=240.30.128.215:6789/0,240.30.128.39=240.30.128.39:6789/0,240.30.128.40=240.30.128.40:6789/0,240.30.128.58=240.30.128.58:6789/0}
election epoch 6, quorum 0,1,2,3,4 240.30.128.39,240.30.128.40,240.30.128.58,240.30.128.214,240.30.128.215
osdmap e559: 264 osds: 264 up, 264 in
flags sortbitwise
pgmap v116751: 12400 pgs, 9 pools, 1636 bytes data, 171 objects
25091 MB used, 1440 TB / 1440 TB avail
12400 active+clean
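The same check can be repeated with the non-admin users that cinder will actually connect as, which also exercises the keyring naming rule above; a sketch, assuming the developcinder user already exists on ceph-B:

ceph --id developcinder -c /etc/ceph/ceph-develop.conf \
     -k /etc/ceph/ceph.client.developcinder.keyring -s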


cinder service configuration

Configure the cinder server (enabled_backends belongs in the [DEFAULT] section):

/etc/cinder/cinder.conf

enabled_backends=CEPH_SATA,CEPH_DEVELOP

...

[CEPH_SATA]
glance_api_version=2
volume_backend_name=ceph_sata
rbd_ceph_conf=/etc/ceph/ceph-volumes.conf
rbd_user=cinder
rbd_flatten_volume_from_snapshot=False
rados_connect_timeout=-1
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_store_chunk_size=4
rbd_secret_uuid=dc4f91c1-8792-4948-b68f-2fcea75f53b9
rbd_pool=volumes
host=hh-yun-cinder.vclound.com

[CEPH_DEVELOP]
glance_api_version=2
volume_backend_name=ceph_develop
rbd_ceph_conf=/etc/ceph/ceph-develop.conf
rbd_user=developcinder
rbd_flatten_volume_from_snapshot=False
rados_connect_timeout=-1
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_store_chunk_size=4
rbd_secret_uuid=4bf07d3e-a289-456d-9bd9-5a89832b413b
rbd_pool=develop-ceph
host=hh-yun-cinder.vclound.com


Managing the cinder services from the command line

After restarting the cinder services, a new service entry, hh-yun-cinder.vclound.com@CEPH_DEVELOP, appears, as the listing below shows.
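The restart itself is distribution specific; on a CentOS 7 / RDO style installation it would be roughly the following (the systemd unit names are an assumption):

# restart the volume and scheduler services so the new backend is registered
systemctl restart openstack-cinder-volume openstack-cinder-scheduler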

[root@hh-yun-puppet-129021 ~(keystone_admin)]# cinder service-list
+------------------+----------------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |                  Host                  | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+----------------------------------------+------+---------+-------+----------------------------+-----------------+
|  cinder-backup   |       hh-yun-cinder.vclound.com        | nova | enabled |   up  | 2017-04-18T06:14:57.000000 |       None      |
| cinder-scheduler |       hh-yun-cinder.vclound.com        | nova | enabled |   up  | 2017-04-18T06:14:49.000000 |       None      |
|  cinder-volume   | hh-yun-cinder.vclound.com@CEPH_DEVELOP | nova | enabled |   up  | 2017-04-18T06:14:53.000000 |       None      |
|  cinder-volume   |  hh-yun-cinder.vclound.com@CEPH_SATA   | nova | enabled |   up  | 2017-04-18T06:14:53.000000 |       None      |
+------------------+----------------------------------------+------+---------+-------+----------------------------+-----------------+


Create a new volume type for cinder:

[root@hh-yun-puppet-129021 ~(keystone_admin)]# cinder type-create DEVELOP-CEPH
+--------------------------------------+--------------+
|                  ID                  |     Name     |
+--------------------------------------+--------------+
| 14b43bcb-0085-401d-8e2f-504587cf3589 | DEVELOP-CEPH |
+--------------------------------------+--------------+


List the types:

[root@hh-yun-puppet-129021 ~(keystone_admin)]# cinder type-list
+--------------------------------------+----------------+
|                  ID                  |      Name      |
+--------------------------------------+----------------+
| 14b43bcb-0085-401d-8e2f-504587cf3589 |  DEVELOP-CEPH  |
| 45fdd68a-ca0f-453c-bd10-17e826a1105e |   CEPH-SATA    |
+--------------------------------------+----------------+


Add the extra spec that binds the type to a backend:

[root@hh-yun-db-129041 ~(keystone_admin)]# cinder type-key DEVELOP-CEPH set volume_backend_name=ceph_develop


Verify the extra specs:

[root@hh-yun-db-129041 ~(keystone_admin)]# cinder extra-specs-list
+--------------------------------------+----------------+----------------------------------------------------+
|                  ID                  |      Name      |                    extra_specs                     |
+--------------------------------------+----------------+----------------------------------------------------+
| 14b43bcb-0085-401d-8e2f-504587cf3589 |  DEVELOP-CEPH  |     {u'volume_backend_name': u'ceph_develop'}      |
| 45fdd68a-ca0f-453c-bd10-17e826a1105e |   CEPH-SATA    |       {u'volume_backend_name': u'ceph_sata'}       |
+--------------------------------------+----------------+----------------------------------------------------+


Verification

Create cinder volumes from the command line to verify the setup.

[root@hh-yun-db-129041 ceph(keystone_admin)]# cinder create --display-name tt-test --volume-type DEVELOP-CEPH 20
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2017-04-18T07:02:27.783977      |
| display_description |                 None                 |
|     display_name    |               tt-test                |
|      encrypted      |                False                 |
|          id         | 4fd11447-fd34-4dd6-8da3-634cf1c67a1e |
|       metadata      |                  {}                  |
|         size        |                  20                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|       user_id       |   226e71f1c1aa4bae85485d1d17b6f0ae   |
|     volume_type     |             DEVELOP-CEPH             |  <- targets the ceph-B cluster
+---------------------+--------------------------------------+

[root@hh-yun-db-129041 ceph(keystone_admin)]# cinder create --display-name tt-test02 --volume-type CEPH-SATA 20
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2017-04-18T07:03:20.880786      |
| display_description |                 None                 |
|     display_name    |              tt-test02               |
|      encrypted      |                False                 |
|          id         | f7f11c03-e2dc-44a4-bc5b-6718fc4c064d |
|       metadata      |                  {}                  |
|         size        |                  20                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|       user_id       |   226e71f1c1aa4bae85485d1d17b6f0ae   |
|     volume_type     |              CEPH-SATA               |  <- targets the ceph-A cluster
+---------------------+--------------------------------------+


List the volumes:

[root@hh-yun-db-129041 ceph(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+--------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type  | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+--------------+----------+--------------------------------------+
| 422a43f7-79b3-4fa1-a300-2ad1f3d63018 |   in-use  |    dd250     | 250  |  CEPH-SATA   |  false   | 208a713d-ae71-4243-94a8-5a3ab22126d7 |
| 4fd11447-fd34-4dd6-8da3-634cf1c67a1e | available |   tt-test    |  20  | DEVELOP-CEPH |  false   |                                      |
| f7f11c03-e2dc-44a4-bc5b-6718fc4c064d | available |  tt-test02   |  20  |  CEPH-SATA   |  false   |                                      |
+--------------------------------------+-----------+--------------+------+--------------+----------+--------------------------------------+


The volume types confirm that the two newly created volumes landed on different cinder volume backends.
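Placement can also be confirmed on the ceph side, where cinder names each image volume-<id>; a sketch using the admin keyrings from the setup above:

# tt-test should appear as an rbd image in ceph-B's develop-ceph pool
rbd -c /etc/ceph/ceph-develop.conf -k /etc/ceph/ceph.client.admin-develop.keyring ls develop-ceph
# tt-test02 should appear in ceph-A's volumes pool
rbd -c /etc/ceph/ceph-volumes.conf -k /etc/ceph/ceph.client.admin-volumes.keyring ls volumes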

nova compute and cinder

Note: OpenStack nova compute does not support using the method above to connect simultaneously to two ceph clusters with different ceph mons.
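The references below cover the libvirt secret mechanics in detail; as a quick illustration, the secret that libvirt needs on a compute node to match rbd_secret_uuid from the [CEPH_DEVELOP] section would be defined roughly like this (a sketch of the standard virsh procedure, not a full attach walkthrough; the secret name is illustrative):

# on each compute node: define a libvirt secret whose UUID matches rbd_secret_uuid
cat > secret-develop.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>4bf07d3e-a289-456d-9bd9-5a89832b413b</uuid>
  <usage type='ceph'>
    <name>client.developcinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret-develop.xml
# load the cephx key for client.developcinder into the secret
virsh secret-set-value --secret 4bf07d3e-a289-456d-9bd9-5a89832b413b \
    --base64 $(ceph auth get-key client.developcinder \
        -c /etc/ceph/ceph-develop.conf \
        -k /etc/ceph/ceph.client.admin-develop.keyring)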


How nova compute connects to ceph storage

References:

OpenStack Management 23 - Connecting nova compute to a ceph cluster

Integrating Ceph Storage with OpenStack – A Step by Step Guide

How nova compute connects to two different pools within one ceph cluster

Reference: OpenStack Nova: configure multiple Ceph backends on one hypervisor

libvirt secret key resources

References:

Running VM with libvirt on Ceph RBDs

Secret XML format

Ceph official libvirt documentation
Tags: ceph, openstack