Deploying a Ceph Cluster on CentOS 8 and Integrating It with OpenStack (Ussuri)
2020-10-22 11:15
Introduction
Linux keeps pushing into scalable computing, and scalable storage in particular. Ceph recently joined the impressive lineup of Linux file system options: it is a distributed file system that adds replication and fault tolerance while maintaining POSIX compatibility.
The Ceph ecosystem can be divided into four parts:
1. Clients: the data users
2. cmds: the metadata server cluster, which caches and synchronizes distributed metadata
3. cosd: the object storage cluster, which stores data and metadata as objects and performs other key functions
4. cmon: the cluster monitors, which perform monitoring functions
Prerequisites
Prepare two CentOS 8 VMs: configure IP addresses and hostnames, synchronize the system time, disable the firewall and SELinux, add the IP/hostname mappings, and attach an extra disk to each VM.
IP | hostname
---|---
192.168.29.148 | controller
192.168.29.149 | computer
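The hostname-mapping step above can be sketched as a small idempotent helper. The `hosts.demo` file name is purely illustrative so the snippet runs anywhere; on the real nodes you would target `/etc/hosts` with the IPs from the table.

```shell
# Append "IP hostname" to a hosts file only if the hostname is not
# already present, so re-running the setup never duplicates entries.
add_host_entry() {
  ip=$1; name=$2; file=$3
  grep -q "[[:space:]]$name\$" "$file" 2>/dev/null || echo "$ip $name" >> "$file"
}
rm -f ./hosts.demo   # start from a clean demo file
add_host_entry 192.168.29.148 controller ./hosts.demo
add_host_entry 192.168.29.149 computer   ./hosts.demo
add_host_entry 192.168.29.148 controller ./hosts.demo   # duplicate, skipped
cat ./hosts.demo
```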
For the OpenStack setup itself, see: https://blog.51cto.com/14832653/2542863
Note: if an OpenStack cluster already exists, delete its instances, images, and volumes first.
Install the Ceph repository
[root@controller ~]# yum install centos-release-ceph-octopus.noarch -y
[root@computer ~]# yum install centos-release-ceph-octopus.noarch -y
Install the Ceph components
[root@controller ~]# yum install cephadm -y
[root@computer ~]# yum install ceph -y
Install libvirt on the computer node
[root@computer ~]# yum install libvirt -y
Deploy the Ceph cluster
Create the cluster
[root@controller ~]# mkdir -p /etc/ceph
[root@controller ~]# cd /etc/ceph/
[root@controller ceph]# cephadm bootstrap --mon-ip 192.168.29.148
[root@controller ceph]# ceph status
[root@controller ceph]# cephadm install ceph-common
[root@controller ceph]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@computer
Adjust the configuration
[root@controller ceph]# ceph config set mon public_network 192.168.29.0/24
Add the second host
[root@controller ceph]# ceph orch host add computer
[root@controller ceph]# ceph orch host ls
Set up the cluster monitors
[root@controller ceph]# ceph orch host label add controller mon
[root@controller ceph]# ceph orch host label add computer mon
[root@controller ceph]# ceph orch apply mon label:mon
[root@controller ceph]# ceph orch daemon add mon computer:192.168.29.149
Create the OSDs
[root@controller ceph]# ceph orch daemon add osd controller:/dev/nvme0n2
[root@controller ceph]# ceph orch daemon add osd computer:/dev/nvme0n3
Check the cluster status
[root@controller ceph]# ceph -s
Check the cluster capacity
[root@controller ceph]# ceph df
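Once both OSDs are in, `ceph -s` should report `HEALTH_OK`. For scripting that check, a hypothetical helper can pull the status out of `ceph -s --format json` output; it is shown here against a canned JSON sample so it runs without a live cluster.

```shell
# Extract health.status from `ceph -s --format json`-style output.
# Pure text processing; the sample below stands in for a live cluster.
ceph_health() {
  grep -o '"status":"HEALTH_[A-Z]*"' | head -1 | cut -d'"' -f4
}
sample='{"health":{"status":"HEALTH_OK","checks":{}},"osdmap":{"num_osds":2}}'
echo "$sample" | ceph_health   # HEALTH_OK
```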
Create the pools
[root@controller ceph]# ceph osd pool create volumes 64
[root@controller ceph]# ceph osd pool create vms 64
# tag each pool with the application that will use it (rbd)
[root@controller ceph]# ceph osd pool application enable vms rbd
[root@controller ceph]# ceph osd pool application enable volumes rbd
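The PG count of 64 used above suits this two-OSD lab cluster. The common rule of thumb is the nearest power of two at or below (OSD count × 100) / replica count; a throwaway calculator for that rule (my own sketch, not a Ceph tool):

```shell
# Nearest power of two at or below (osds * 100 / size).
pg_count() {
  osds=$1; size=$2
  target=$(( osds * 100 / size ))
  pg=1
  while [ $(( pg * 2 )) -le "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}
pg_count 2 3   # 64, matching the pool size used above (2 OSDs, default size 3)
```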
Check mon, OSD, and pool status
[root@controller ceph]# ceph mon stat
[root@controller ceph]# ceph osd status
[root@controller ceph]# ceph osd lspools
List the RBD images in the pools
[root@controller ~]# rbd ls vms
[root@controller ~]# rbd ls volumes
Integrate the Ceph cluster with OpenStack
Create the cinder user and set its permissions
[root@controller ceph]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms'
Distribute the key
[root@controller ceph]# ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
# send the key to computer
[root@controller ~]# ceph auth get-key client.cinder > client.cinder.key
[root@controller ~]# scp client.cinder.key computer:/root/
# fix the keyring ownership
[root@controller ceph]# chown cinder.cinder /etc/ceph/ceph.client.cinder.keyring
Create the libvirt secret
# generate a uuid on computer
[root@computer ~]# uuidgen
1fad1f90-63fb-4c15-bfc3-366c6559c1fe
# create the secret definition file
[root@computer ~]# vi secret.xml
<secret ephemeral='no' private='no'>
  <uuid>1fad1f90-63fb-4c15-bfc3-366c6559c1fe</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
# define the secret
[root@computer ~]# virsh secret-define --file secret.xml
# set the secret value
[root@computer ~]# virsh secret-set-value --secret 1fad1f90-63fb-4c15-bfc3-366c6559c1fe --base64 $(cat client.cinder.key) && rm -rf client.cinder.key secret.xml
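The same UUID must appear verbatim in secret.xml, cinder.conf, and nova.conf, and a mistyped copy is a common failure at this step. A quick format check (a hypothetical helper, not part of libvirt):

```shell
# Return success only for a well-formed lowercase UUID.
is_uuid() {
  echo "$1" | grep -Eq '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
}
is_uuid "1fad1f90-63fb-4c15-bfc3-366c6559c1fe" && echo valid
```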
Integrate the cinder module
Edit the configuration file
[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.29.148
enabled_backends = ceph
[ceph]
default_volume_type = ceph
glance_api_version = 2
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
# must match the uuid generated on computer
rbd_secret_uuid = 1fad1f90-63fb-4c15-bfc3-366c6559c1fe
Sync the database
# if the cinder database already exists, drop it, recreate it, and re-sync
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Restart the services
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
Create the ceph volume type and bind it to the backend
[root@controller ~]# source admin-openrc
[root@controller ~]# cinder type-create ceph
[root@controller ~]# cinder type-key ceph set volume_backend_name=ceph
Integrate the nova-compute module
Edit the configuration file on the computer node
[root@computer ~]# vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu
inject_password = true
inject_partition = -1
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 1fad1f90-63fb-4c15-bfc3-366c6559c1fe
disk_cachemodes = "network=writeback"
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap
[root@computer ~]# vi /etc/ceph/ceph.conf
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
Create the socket and log directories
[root@computer ~]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@computer ~]# chmod 777 -R /var/run/ceph/guests/ /var/log/qemu/
Distribute the keyring from controller
[root@controller ~]# cd /etc/ceph
[root@controller ceph]# scp ceph.client.cinder.keyring root@computer:/etc/ceph
Restart the services
[root@computer ~]# systemctl stop libvirtd openstack-nova-compute
[root@computer ~]# systemctl start libvirtd openstack-nova-compute
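If new instances still land on local disk after the restart, the usual culprit is a missed edit in nova.conf. A grep-based sanity check for the two critical rbd settings, run here against a sample file (the `nova.sample` name is illustrative) so it works without a live node:

```shell
# Report whether a nova.conf carries the rbd ephemeral-disk settings.
check_rbd_conf() {
  if grep -q '^images_type *= *rbd' "$1" && grep -q '^images_rbd_pool *= *vms' "$1"; then
    echo rbd-backed
  else
    echo local-disk
  fi
}
printf 'images_type = rbd\nimages_rbd_pool = vms\n' > ./nova.sample
check_rbd_conf ./nova.sample   # rbd-backed
```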