
Configuring Single-Node Ceph over IPv6

2017-07-14

Introduction

Why set up Ceph over IPv6? Pure coincidence: a project required a file system running over IPv6, and unfortunately Hadoop does not support it (I had always considered Hadoop quite powerful). After some trial and error, Ceph turned out to be the answer. Enough chatter; let's get to work.

Environment

Linux distribution: CentOS Linux release 7.2.1511 (Core)

Minimal image: about 603 MB

Everything image: about 7.2 GB

Ceph version: 0.94.9 (Hammer)

I originally picked the latest Jewel release. After the cluster came up, the object storage gateway could not be reached over IPv6 and produced an error like the one below. It turned out to be a bug in the Jewel release that was still being fixed. A general piece of advice: avoid the newest release in production.

```shell
set_ports_option: [::]8888: invalid port spec
```

Preflight

Network configuration

Complete the network configuration following my earlier post on setting static IPv6/IPv4 addresses on CentOS 7.
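For reference, the static IPv6 part of that configuration boils down to a few lines in the interface file under /etc/sysconfig/network-scripts/. The sketch below writes them to a temp file for illustration only; the gateway address is a made-up example, and the interface file name on your machine will differ:

```shell
# Sketch of static IPv6 settings for an ifcfg-<iface> file (RHEL/CentOS 7 keys).
# IPV6_DEFAULTGW here is hypothetical; substitute your network's gateway.
IFCFG=$(mktemp)
cat > "$IFCFG" <<'EOF'
IPV6INIT=yes
IPV6ADDR=2001:250:4402:2001:20c:29ff:fe25:8888/64
IPV6_DEFAULTGW=2001:250:4402:2001::1
EOF
grep -c '^IPV6' "$IFCFG"   # counts the three IPv6-related keys
```

On a real node you would edit the actual ifcfg file and restart the network service for the address to take effect.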

Set the hostname

```shell
[root@localhost ~]# hostnamectl set-hostname ceph001   # ceph001 is the desired hostname
[root@localhost ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6
2001:250:4402:2001:20c:29ff:fe25:8888 ceph001   # new entry: the static IPv6 address of host ceph001
```
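After editing /etc/hosts it is worth checking that the name really maps to the IPv6 address. A minimal sketch of such a check, run here against a temp copy so it works anywhere (on the node you would point it at /etc/hosts):

```shell
# Parse a hosts file and confirm ceph001 resolves to the expected IPv6 address.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6
2001:250:4402:2001:20c:29ff:fe25:8888 ceph001
EOF
addr=$(awk '$2 == "ceph001" {print $1}' "$HOSTS")
echo "ceph001 -> $addr"
```

On the node itself, `ping6 -c1 ceph001` gives the same confirmation end to end.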

Switch the yum repositories

Because the official yum mirrors can be slow, switch to the Aliyun mirrors:

```shell
[root@localhost ~]# yum clean all   # clear the yum cache
[root@localhost ~]# rm -rf /etc/yum.repos.d/*.repo
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo   # Aliyun base repo
[root@localhost ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo   # Aliyun EPEL repo
[root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
[root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
[root@localhost ~]# sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
```
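The last sed pins `$releasever` to 7.2.1511 so the repo paths match the installed point release. A quick demonstration of the substitution on a sample baseurl line (the line itself is illustrative):

```shell
# Mid-pattern '$' is not an end-of-line anchor in sed's BRE, so the pattern
# '$releasever' matches the literal text, and single quotes stop the shell
# from expanding it.
line='baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/'
pinned=$(echo "$line" | sed 's/$releasever/7.2.1511/g')
echo "$pinned"   # -> baseurl=http://mirrors.aliyun.com/centos/7.2.1511/os/$basearch/
```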
Add the Ceph repository

```shell
[root@localhost ~]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/x86_64/   # pick the release you want to install
gpgcheck=0
[ceph-noarch]
name=ceph noarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/noarch/   # pick the release you want to install
gpgcheck=0
[root@localhost ~]# yum makecache
```
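The same repo file can be written non-interactively with a heredoc instead of vim; a sketch that targets a temp directory here (use /etc/yum.repos.d/ on a real node):

```shell
# Write ceph.repo without an editor; content matches the file shown above.
REPO_DIR=$(mktemp -d)
cat > "$REPO_DIR/ceph.repo" <<'EOF'
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=ceph noarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/noarch/
gpgcheck=0
EOF
grep -c '^baseurl=' "$REPO_DIR/ceph.repo"   # one baseurl per section
```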

Install ceph and ceph-deploy

```shell
[root@localhost ~]# yum install ceph ceph-deploy
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ceph.x86_64 1:0.94.9-0.el7 will be installed
--> Processing Dependency: librbd1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-rbd = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-cephfs = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: libcephfs1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: librados2 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-rados = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: ceph-common = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-requests for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-flask for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: redhat-lsb-core for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: hdparm for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: libcephfs.so.1()(64bit) for package: 1:ceph-0.94.9-0.el7.x86_64
.......
Dependencies Resolved
=======================================================================================
 Package                  Arch      Version             Repository      Size
=======================================================================================
Installing:
 ceph                     x86_64    1:0.94.9-0.el7      ceph            20 M
 ceph-deploy              noarch    1.5.36-0            ceph-noarch     283 k
Installing for dependencies:
 boost-program-options    x86_64    1.53.0-25.el7       base            155 k
 ceph-common              x86_64    1:0.94.9-0.el7      ceph            7.2 M
...
Transaction Summary
=======================================================================================
Install  2 Packages (+24 Dependent packages)
Upgrade             ( 2 Dependent packages)
Total download size: 37 M
Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for ceph
warning: /var/cache/yum/x86_64/7/base/packages/boost-program-options-1.53.0-25.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for boost-program-options-1.53.0-25.el7.x86_64.rpm is not installed
(1/28): boost-program-options-1.53.0-25.el7.x86_64.rpm    | 155 kB  00:00:00
(2/28): hdparm-9.43-5.el7.x86_64.rpm                      |  83 kB  00:00:00
(3/28): ceph-deploy-1.5.36-0.noarch.rpm                   | 283 kB  00:00:00
(4/28): leveldb-1.12.0-11.el7.x86_64.rpm                  | 161 kB  00:00:00
...
---------------------------------------------------------------------------------------
Total                                          718 kB/s |  37 MB  00:53
Retrieving key from http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 From       : http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
Is this ok [y/N]: y
...
Complete!
```
Verify the installed versions

```shell
[root@localhost ~]# ceph-deploy --version
1.5.36
[root@localhost ~]# ceph -v
ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90)
```

Install NTP (a multi-node setup would also need NTP server/client configuration), and disable SELinux and firewalld:

```shell
[root@localhost ~]# yum install ntp
[root@localhost ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
```

Create the Ceph Cluster

On the admin node (ceph001):

```shell
[root@ceph001 ~]# mkdir cluster
[root@ceph001 ~]# cd cluster/
```

Create the cluster

```shell
[root@ceph001 cluster]# ceph-deploy new ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy new ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username : None
[ceph_deploy.cli][INFO  ]  func : <function new at 0xfe0668>
[ceph_deploy.cli][INFO  ]  verbose : False
[ceph_deploy.cli][INFO  ]  overwrite_conf : False
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x104c680>
[ceph_deploy.cli][INFO  ]  cluster : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey : True
[ceph_deploy.cli][INFO  ]  mon : ['ceph001']
[ceph_deploy.cli][INFO  ]  public_network : None
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  cluster_network : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /usr/sbin/ip link show
[ceph001][INFO  ] Running command: /usr/sbin/ip addr show
[ceph001][DEBUG ] IP addresses found: [u'192.168.122.1', u'49.123.105.124']
[ceph_deploy.new][DEBUG ] Resolving host ceph001
[ceph_deploy.new][DEBUG ] Monitor ceph001 at 2001:250:4402:2001:20c:29ff:fe25:8888
[ceph_deploy.new][INFO  ] Monitors are IPv6, binding Messenger traffic on IPv6
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph001']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['[2001:250:4402:2001:20c:29ff:fe25:8888]']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@ceph001 cluster]# ll
total 12
-rw-r--r--. 1 root root  244 Nov  6 21:54 ceph.conf
-rw-r--r--. 1 root root 3106 Nov  6 21:54 ceph-deploy-ceph.log
-rw-------. 1 root root   73 Nov  6 21:54 ceph.mon.keyring
[root@ceph001 cluster]# cat ceph.conf
[global]
fsid = 865e6b01-b0ea-44da-87a5-26a4980aa7a8
ms_bind_ipv6 = true
mon_initial_members = ceph001
mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```
Since this is a single-node deployment, lower the default replica count from 3 to 1 and push the updated config:

```shell
[root@ceph001 cluster]# echo "osd_pool_default_size = 1" >> ceph.conf
[root@ceph001 cluster]# ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username : None
[ceph_deploy.cli][INFO  ]  verbose : False
[ceph_deploy.cli][INFO  ]  overwrite_conf : True
[ceph_deploy.cli][INFO  ]  subcommand : push
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x14f9710>
[ceph_deploy.cli][INFO  ]  cluster : ceph
[ceph_deploy.cli][INFO  ]  client : ['ceph001']
[ceph_deploy.cli][INFO  ]  func : <function config at 0x14d42a8>
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.config][DEBUG ] Pushing config to ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
```
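A quick way to confirm the pushed file carries the new setting is to read it back with awk. Sketched here against a temp file; on the node you would run the awk line against /etc/ceph/ceph.conf instead:

```shell
# Append the single-node replica setting and read it back, as a sanity check.
CONF=$(mktemp)
echo "osd_pool_default_size = 1" >> "$CONF"
size=$(awk -F' *= *' '$1 == "osd_pool_default_size" {print $2}' "$CONF")
echo "osd_pool_default_size is $size"
```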

Create the monitor

Use ceph001 as the monitor node:

```shell
[root@ceph001 cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username : None
[ceph_deploy.cli][INFO  ]  verbose : False
[ceph_deploy.cli][INFO  ]  overwrite_conf : False
[ceph_deploy.cli][INFO  ]  subcommand : create-initial
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x23865a8>
[ceph_deploy.cli][INFO  ]  cluster : ceph
[ceph_deploy.cli][INFO  ]  func : <function mon at 0x237e578>
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph001
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph001 ...
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.2.1511 Core
[ceph001][DEBUG ] determining if provided host has same hostname in remote
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] deploying mon to ceph001
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] remote hostname: ceph001
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph001][DEBUG ] create the mon path if it does not exist
[ceph001][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph001/done
[ceph001][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph001/done
[ceph001][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] create the monitor keyring file
[ceph001][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph001 --keyring /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] ceph-mon: mon.noname-a [2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0 is local, renaming to mon.ceph001
[ceph001][DEBUG ] ceph-mon: set fsid to 865e6b01-b0ea-44da-87a5-26a4980aa7a8
[ceph001][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph001 for mon.ceph001
[ceph001][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph001][DEBUG ] create the init path if it does not exist
[ceph001][DEBUG ] locating the `service` executable...
[ceph001][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph001
[ceph001][DEBUG ] === mon.ceph001 ===
[ceph001][DEBUG ] Starting Ceph mon.ceph001 on ceph001...
[ceph001][WARNIN] Running as unit ceph-mon.ceph001.1478441156.735105300.service.
[ceph001][DEBUG ] Starting ceph-create-keys on ceph001...
[ceph001][INFO  ] Running command: systemctl enable ceph
[ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph001][WARNIN] Executing /sbin/chkconfig ceph on
[ceph001][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph001][DEBUG ] ********************************************************************************
[ceph001][DEBUG ] status for monitor: mon.ceph001
[ceph001][DEBUG ] {
[ceph001][DEBUG ]   "election_epoch": 2,
[ceph001][DEBUG ]   "extra_probe_peers": [],
[ceph001][DEBUG ]   "monmap": {
[ceph001][DEBUG ]     "created": "0.000000",
[ceph001][DEBUG ]     "epoch": 1,
[ceph001][DEBUG ]     "fsid": "865e6b01-b0ea-44da-87a5-26a4980aa7a8",
[ceph001][DEBUG ]     "modified": "0.000000",
[ceph001][DEBUG ]     "mons": [
[ceph001][DEBUG ]       {
[ceph001][DEBUG ]         "addr": "[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0",
[ceph001][DEBUG ]         "name": "ceph001",
[ceph001][DEBUG ]         "rank": 0
[ceph001][DEBUG ]       }
[ceph001][DEBUG ]     ]
[ceph001][DEBUG ]   },
[ceph001][DEBUG ]   "name": "ceph001",
[ceph001][DEBUG ]   "outside_quorum": [],
[ceph001][DEBUG ]   "quorum": [
[ceph001][DEBUG ]     0
[ceph001][DEBUG ]   ],
[ceph001][DEBUG ]   "rank": 0,
[ceph001][DEBUG ]   "state": "leader",
[ceph001][DEBUG ]   "sync_provider": []
[ceph001][DEBUG ] }
[ceph001][DEBUG ] ********************************************************************************
[ceph001][INFO  ] monitor: mon.ceph001 is running
[ceph001][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph001 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpgY2IT7
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] fetch remote file
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.admin
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-mds
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-osd
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpgY2IT7
```
Check the cluster status

```shell
[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 2, quorum 0 ceph001
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
```

Add OSDs

List the disks

```shell
[root@ceph001 cluster]# ceph-deploy disk list ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy disk list ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username : None
[ceph_deploy.cli][INFO  ]  verbose : False
[ceph_deploy.cli][INFO  ]  overwrite_conf : False
[ceph_deploy.cli][INFO  ]  subcommand : list
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1c79bd8>
[ceph_deploy.cli][INFO  ]  cluster : ceph
[ceph_deploy.cli][INFO  ]  func : <function disk at 0x1c70e60>
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  disk : [('ceph001', None, None)]
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph001...
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /usr/sbin/ceph-disk list
[ceph001][DEBUG ] /dev/sda :
[ceph001][DEBUG ]  /dev/sda1 other, xfs, mounted on /boot
[ceph001][DEBUG ]  /dev/sda2 other, LVM2_member
[ceph001][DEBUG ] /dev/sdb other, unknown
[ceph001][DEBUG ] /dev/sdc other, unknown
[ceph001][DEBUG ] /dev/sdd other, unknown
[ceph001][DEBUG ] /dev/sr0 other, iso9660
```
Add the first OSD (/dev/sdb)

```shell
[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy disk zap ceph001:/dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username : None
[ceph_deploy.cli][INFO  ]  verbose : False
[ceph_deploy.cli][INFO  ]  overwrite_conf : False
[ceph_deploy.cli][INFO  ]  subcommand : zap
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1b14bd8>
[ceph_deploy.cli][INFO  ]  cluster : ceph
[ceph_deploy.cli][INFO  ]  func : <function disk at 0x1b0be60>
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  disk : [('ceph001', '/dev/sdb', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph001][DEBUG ] zeroing last few blocks of device
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /usr/sbin/ceph-disk zap /dev/sdb
[ceph001][DEBUG ] Creating new GPT entries.
[ceph001][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph001][DEBUG ] other utilities.
[ceph001][DEBUG ] Creating new GPT entries.
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] partx: specified range <1:0> does not make sense
```
```shell
[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy osd create ceph001:/dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username : None
[ceph_deploy.cli][INFO  ]  disk : [('ceph001', '/dev/sdb', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt : False
[ceph_deploy.cli][INFO  ]  verbose : False
[ceph_deploy.cli][INFO  ]  bluestore : None
[ceph_deploy.cli][INFO  ]  overwrite_conf : False
[ceph_deploy.cli][INFO  ]  subcommand : create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x19b6680>
[ceph_deploy.cli][INFO  ]  cluster : ceph
[ceph_deploy.cli][INFO  ]  fs_type : xfs
[ceph_deploy.cli][INFO  ]  func : <function osd at 0x19aade8>
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdb:
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdb journal None activate True
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph001][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph001][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:ae307314-3a81-4da2-974b-b21c24d9bba1 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partition 2
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:16a6298d-59bb-4190-867a-10a5b519e7c0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on created device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph001][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[ceph001][DEBUG ] meta-data=/dev/sdb1    isize=2048   agcount=4, agsize=6225855 blks
[ceph001][DEBUG ]          =             sectsz=512   attr=2, projid32bit=1
[ceph001][DEBUG ]          =             crc=0        finobt=0
[ceph001][DEBUG ] data     =             bsize=4096   blocks=24903419, imaxpct=25
[ceph001][DEBUG ]          =             sunit=0      swidth=0 blks
[ceph001][DEBUG ] naming   =version 2    bsize=4096   ascii-ci=0 ftype=0
[ceph001][DEBUG ] log      =internal log bsize=4096   blocks=12159, version=2
[ceph001][DEBUG ]          =             sectsz=512   sunit=0 blks, lazy-count=1
[ceph001][DEBUG ] realtime =none         extsz=4096   blocks=0, rtextents=0
[ceph001][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.2SMGIk with options noatime,inode64
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.2SMGIk/journal -> /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[ceph001][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph001][DEBUG ] The new table will be used at the next reboot.
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2
[ceph001][INFO  ] Running command: systemctl enable ceph
[ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph001][WARNIN] Executing /sbin/chkconfig ceph on
[ceph001][INFO  ] checking OSD status...
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph001][WARNIN] there is 1 OSD down
[ceph001][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
```
Check the cluster status

```shell
[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e3: 1 osds: 0 up, 0 in
      pgmap v4: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
```
Add the remaining OSDs

```shell
[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdc
[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdd
[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdc
[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdd
[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e7: 3 osds: 0 up, 0 in
      pgmap v8: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
```
Reboot the machine, then check the cluster status:

```shell
[root@ceph001 ~]# ceph -s
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e9: 3 osds: 3 up, 3 in
      pgmap v11: 64 pgs, 1 pools, 0 bytes data, 0 objects
            102196 kB used, 284 GB / 284 GB avail
                  64 active+clean
```

Troubleshooting

As shown above, the cluster is in HEALTH_WARN with the following warning:

```shell
too few PGs per OSD (21 < min 30)
```
Increase the PG count of the rbd pool to clear it:

```shell
[root@ceph001 cluster]# ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
[root@ceph001 cluster]# ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
```
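The arithmetic behind the warning and the fix, assuming the 3 OSDs and size=1 used in this post: the monitor computes PGs per OSD as pg_num × replica size ÷ OSD count and warns when it falls below 30.

```shell
# 64 PGs across 3 OSDs gives 21 per OSD (< 30, warning); 128 gives 42 (OK).
osds=3; size=1
for pg_num in 64 128; do
  per_osd=$(( pg_num * size / osds ))
  echo "pg_num=$pg_num -> $per_osd PGs per OSD"
done
```

Note that pg_num can only be increased, never decreased, so it pays to do this calculation before creating pools.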
Check the cluster status again

```shell
[root@ceph001 ~]# ceph -s
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e13: 3 osds: 3 up, 3 in
      pgmap v17: 128 pgs, 1 pools, 0 bytes data, 0 objects
            101544 kB used, 284 GB / 284 GB avail
                  128 active+clean
```

Summary

This tutorial builds a simple single-node Ceph environment. Extending it to multiple nodes is straightforward; the steps are largely the same.

Configuring Ceph over IPv6 differs little from IPv4. Only two points need attention:

- Configure static IPv6 addresses.

- Set the hostname and add a hosts entry that maps it to the static IPv6 address configured earlier.




Author: lemon

Original link: https://lemon2013.github.io/2016/11/06/配置基于IPv6的Ceph/

License: unless otherwise noted, all posts on this blog are released under the CC BY-NC-SA 3.0 license. Please credit the source when reposting!