
Using Ceph as the backend for OpenStack


Integrating OpenStack with Ceph

Create the pools that OpenStack needs on the Ceph cluster:

sudo ceph osd pool create volumes 128
sudo ceph osd pool create images 128
sudo ceph osd pool create backups 128
sudo ceph osd pool create vms 128
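
To confirm the pools were created, and optionally tag them for RBD use, something like the following should work (rbd pool init only exists on Luminous and newer, so treat that step as optional):

sudo ceph osd lspools
for pool in volumes images backups vms; do sudo rbd pool init $pool; done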


Copy /etc/ceph/ceph.conf from the Ceph server to the OpenStack compute and glance nodes.
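
A minimal way to push the file out, assuming root SSH access and placeholder host names compute1 and glance1:

scp /etc/ceph/ceph.conf root@compute1:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf root@glance1:/etc/ceph/ceph.conf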

Install the Ceph dependencies:

sudo yum install python-rbd ceph-common
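
A quick sanity check that the packages are usable (python-rbd provides the rbd module and pulls in python-rados for the rados module):

ceph --version
python -c "import rados, rbd; print('ceph python bindings OK')"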


Create the required client users on the Ceph admin node:

sudo ceph auth get-or-create client.glance mon 'allow *' osd 'allow * pool=images' -o client.glance.keyring
sudo ceph auth get-or-create client.cinder mon 'allow *' osd 'allow * pool=volumes, allow * pool=vms, allow * pool=images' -o client.cinder.keyring
sudo ceph auth get-or-create client.cinder-backup mon 'allow *' osd 'allow * pool=backups' -o client.cinder-backup.keyring
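
The keyrings then have to be distributed: the glance keyring to the glance node, the cinder and cinder-backup keyrings to the cinder nodes. A sketch, using placeholder host names and the default /etc/ceph location (note the ceph.client.*.keyring naming, which the note further below also relies on):

scp client.glance.keyring root@glance1:/etc/ceph/ceph.client.glance.keyring
scp client.cinder.keyring root@cinder1:/etc/ceph/ceph.client.cinder.keyring
scp client.cinder-backup.keyring root@cinder1:/etc/ceph/ceph.client.cinder-backup.keyring
# on the receiving node, let the service user read its keyring, e.g.:
# chown glance:glance /etc/ceph/ceph.client.glance.keyring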


If you set the wrong capabilities, you can change them later with: sudo ceph auth caps client.glance mon 'allow *' osd 'allow * pool=images'
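
To inspect the capabilities an entity currently holds before or after changing them:

sudo ceph auth get client.glance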



Grab the cinder key:

ceph auth get-key client.cinder >> client.cinder.key
sz client.cinder.key
# then send this file to every compute node
uuidgen # aff9070f-b853-4d19-b77c-b2aa7baca432
# d2b06849-6a8c-40b7-bfea-0d2a729ac70d
# generate a UUID and write it into secret.xml

<secret ephemeral='no' private='no'>
  <uuid>{your UUID}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>

Then run:

sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret {your UUID} --base64 $(cat client.cinder.key)
rm  -rf client.cinder.key secret.xml
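
To confirm libvirt actually stored the secret on each compute node:

sudo virsh secret-list
sudo virsh secret-get-value --secret {your UUID}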

Progress note: got to this step on compute2; see http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder

Edit /etc/glance/glance-api.conf:

[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
show_image_direct_url = True
show_multiple_locations = True
[paste_deploy]
flavor = keystone


If Glance fails to connect, check whether the keyring files under /etc/ceph follow the ceph.client.*.keyring naming format; the leading "ceph." prefix is required.
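
After restarting Glance, a simple end-to-end check is to upload a small raw image and confirm it lands in the images pool. The service name assumes CentOS-style packaging, and the image file name is just a placeholder:

sudo systemctl restart openstack-glance-api
openstack image create cirros-rbd --disk-format raw --container-format bare --file cirros-0.3.5-x86_64-disk.raw
sudo rbd ls images   # the new image's UUID should show up here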



Edit /etc/cinder/cinder.conf:


[DEFAULT]
...
enabled_backends = ceph
glance_api_version = 2
### add the following section
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
host_ip = 10.0.5.10 ## replace this with the IP of the local machine
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
# * backup *
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
[libvirt]
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337


If Cinder fails, check whether the public network setting in /etc/ceph/ceph.conf has been written with an underscore (public_network vs. public network).
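
Once the [ceph] backend is in place, restart the Cinder services and create a test volume; it should appear as an RBD image in the volumes pool (service names again assume CentOS packaging):

sudo systemctl restart openstack-cinder-volume openstack-cinder-backup
openstack volume create --size 1 test-rbd-volume
sudo rbd ls volumes   # expect something like volume-<uuid>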



Configure Nova: on every compute node, add the following [client] section to /etc/ceph/ceph.conf:


[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20

mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/
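
Since /var/run is usually tmpfs and is cleared on reboot, one way to keep the socket directory around is a tmpfiles.d entry; this is only a sketch, assuming systemd and the qemu:libvirt ownership used above. Afterwards restart libvirt and nova-compute:

echo "d /var/run/ceph/guests 0770 qemu libvirt -" | sudo tee /etc/tmpfiles.d/ceph-guests.conf
sudo systemctl restart libvirtd openstack-nova-compute
ls /var/run/ceph/guests/ /var/log/qemu/   # admin sockets and guest logs appear once instances are running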