Setting Up a Single-Node Ceph Cluster over IPv6
2017-07-14 00:00
Introduction
Why suddenly set up a Ceph cluster over IPv6? Pure coincidence: a project called for a file system running on an IPv6 network, and unfortunately Hadoop does not support that (I had always thought of Hadoop as fairly powerful). After a fair amount of struggling, Ceph gave me a way forward. Enough chatter; let's get to it.

Experiment environment
OS: CentOS Linux release 7.2.1511 (Core)
Ceph: 0.94.9 (hammer release)
I originally picked the latest jewel release. The cluster came up fine, but when I tried Ceph's object storage (rgw) it could not be reached over IPv6 and failed with an error like the one below. Some digging showed this to be a known bug in the jewel release that was still being fixed, so I fell back to hammer. A word of advice: in production, avoid jumping straight to the newest release.
```
set_ports_option: [::]8888: invalid port spec
```
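For context, the setting that jewel's civetweb frontend failed to parse is an rgw listener bound to an IPv6 wildcard address. A hedged sketch of the relevant ceph.conf fragment follows; the section name `client.rgw.ceph001` is my own illustration, and only the port number 8888 comes from the error message above:

```ini
# ceph.conf fragment (sketch, not from the original post):
# bind radosgw's civetweb frontend on an IPv6 wildcard address.
# jewel's civetweb rejected this kind of IPv6 port spec, which is
# one reason this walkthrough uses hammer instead.
[client.rgw.ceph001]
rgw frontends = civetweb port=[::]:8888
```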
Preflight
Network configuration
Change the hostname as described in an earlier article, then map it to the static IPv6 address in /etc/hosts:

```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
2001:250:4402:2001:20c:29ff:fe25:8888   ceph001    # new entry: the static IPv6 address of host ceph001
```
Switching the yum repositories
For various reasons the official yum mirrors can be slow, so we switch to the Aliyun mirrors:

```
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo ...
[root@localhost ~]# wget -O /etc/yum.repos.d/epel.repo ...
[root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
[root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
[root@localhost ~]# sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
```
Also add a repo file for Ceph itself, then refresh the cache:

```
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/x86_64/    # pick the Ceph release you need
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/noarch/    # pick the Ceph release you need
gpgcheck=0
[root@localhost ~]# yum makecache
```
Installing ceph and ceph-deploy
```
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ceph.x86_64 1:0.94.9-0.el7 will be installed
--> Processing Dependency: librbd1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-rbd = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-cephfs = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: libcephfs1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: librados2 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-rados = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: ceph-common = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-requests for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-flask for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: redhat-lsb-core for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: hdparm for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: libcephfs.so.1()(64bit) for package: 1:ceph-0.94.9-0.el7.x86_64
.......
Dependencies Resolved
=======================================================================================
 Package                  Arch      Version            Repository    Size
=======================================================================================
Installing:
 ceph                     x86_64    1:0.94.9-0.el7     ceph          20 M
 ceph-deploy              noarch    1.5.36-0           ceph-noarch   283 k
Installing for dependencies:
 boost-program-options    x86_64    1.53.0-25.el7      base          155 k
 ceph-common              x86_64    1:0.94.9-0.el7     ceph          7.2 M
...

Transaction Summary
=======================================================================================
Install  2 Packages (+24 Dependent packages)
Upgrade             ( 2 Dependent packages)
Total download size: 37 M
Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for ceph
warning: /var/cache/yum/x86_64/7/base/packages/boost-program-options-1.53.0-25.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for boost-program-options-1.53.0-25.el7.x86_64.rpm is not installed
(1/28): boost-program-options-1.53.0-25.el7.x86_64.rpm   | 155 kB  00:00:00
(2/28): hdparm-9.43-5.el7.x86_64.rpm                     |  83 kB  00:00:00
(3/28): ceph-deploy-1.5.36-0.noarch.rpm                  | 283 kB  00:00:00
(4/28): leveldb-1.12.0-11.el7.x86_64.rpm                 | 161 kB  00:00:00
...
---------------------------------------------------------------------------------------
Total                                         718 kB/s |  37 MB  00:53
Retrieving key from ...
Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
From       : ...
...
Complete!
```
Check the installed versions:

```
[root@localhost ~]# ceph-deploy --version
1.5.36
[root@localhost ~]# ceph -v
ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90)
```
Install NTP (on a multi-node cluster you would also configure the NTP server and clients), then disable SELinux and firewalld:
```
[root@localhost ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
```
Creating the Ceph cluster
On the admin node (ceph001), create a working directory:

```
[root@ceph001 ~]# mkdir cluster
[root@ceph001 ~]# cd cluster/
```
Create the cluster
```
[root@ceph001 cluster]# ceph-deploy new ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy new ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  func            : <function new at 0xfe0668>
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf  : False
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         : <ceph_deploy.conf.cephdeploy.Conf instance at 0x104c680>
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey     : True
[ceph_deploy.cli][INFO  ]  mon             : ['ceph001']
[ceph_deploy.cli][INFO  ]  public_network  : None
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  cluster_network : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  fsid            : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /usr/sbin/ip link show
[ceph001][INFO  ] Running command: /usr/sbin/ip addr show
[ceph001][DEBUG ] IP addresses found: [u'192.168.122.1', u'49.123.105.124']
[ceph_deploy.new][DEBUG ] Resolving host ceph001
[ceph_deploy.new][DEBUG ] Monitor ceph001 at 2001:250:4402:2001:20c:29ff:fe25:8888
[ceph_deploy.new][INFO  ] Monitors are IPv6, binding Messenger traffic on IPv6
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph001']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['[2001:250:4402:2001:20c:29ff:fe25:8888]']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
```
ceph-deploy generated the cluster files; note that `ms_bind_ipv6 = true` is set and that `mon_host` is a bracketed IPv6 address:

```
[root@ceph001 cluster]# ll
total 12
-rw-r--r--. 1 root root  244 Nov  6 21:54 ceph.conf
-rw-r--r--. 1 root root 3106 Nov  6 21:54 ceph-deploy-ceph.log
-rw-------. 1 root root   73 Nov  6 21:54 ceph.mon.keyring
[root@ceph001 cluster]# cat ceph.conf
[global]
fsid = 865e6b01-b0ea-44da-87a5-26a4980aa7a8
ms_bind_ipv6 = true
mon_initial_members = ceph001
mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```
Push the configuration to the node:

```
[root@ceph001 cluster]# ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf  : True
[ceph_deploy.cli][INFO  ]  subcommand      : push
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         : <ceph_deploy.conf.cephdeploy.Conf instance at 0x14f9710>
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  client          : ['ceph001']
[ceph_deploy.cli][INFO  ]  func            : <function config at 0x14d42a8>
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.config][DEBUG ] Pushing config to ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
```
Create the monitor
Use ceph001 as the monitor node:

```
[root@ceph001 cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf  : False
[ceph_deploy.cli][INFO  ]  subcommand      : create-initial
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         : <ceph_deploy.conf.cephdeploy.Conf instance at 0x23865a8>
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  func            : <function mon at 0x237e578>
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  keyrings        : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph001
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph001 ...
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.2.1511 Core
[ceph001][DEBUG ] determining if provided host has same hostname in remote
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] deploying mon to ceph001
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] remote hostname: ceph001
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph001][DEBUG ] create the mon path if it does not exist
[ceph001][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph001/done
[ceph001][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph001/done
[ceph001][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] create the monitor keyring file
[ceph001][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph001 --keyring /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] ceph-mon: mon.noname-a [2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0 is local, renaming to mon.ceph001
[ceph001][DEBUG ] ceph-mon: set fsid to 865e6b01-b0ea-44da-87a5-26a4980aa7a8
[ceph001][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph001 for mon.ceph001
[ceph001][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph001][DEBUG ] create the init path if it does not exist
[ceph001][DEBUG ] locating the `service` executable...
[ceph001][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph001
[ceph001][DEBUG ] === mon.ceph001 ===
[ceph001][DEBUG ] Starting Ceph mon.ceph001 on ceph001...
[ceph001][WARNIN] Running as unit ceph-mon.ceph001.1478441156.735105300.service.
[ceph001][DEBUG ] Starting ceph-create-keys on ceph001...
[ceph001][INFO  ] Running command: systemctl enable ceph
[ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph001][WARNIN] Executing /sbin/chkconfig ceph on
[ceph001][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph001][DEBUG ] ********************************************************************************
[ceph001][DEBUG ] status for monitor: mon.ceph001
[ceph001][DEBUG ] {
[ceph001][DEBUG ]   "election_epoch": 2,
[ceph001][DEBUG ]   "extra_probe_peers": [],
[ceph001][DEBUG ]   "monmap": {
[ceph001][DEBUG ]     "created": "0.000000",
[ceph001][DEBUG ]     "epoch": 1,
[ceph001][DEBUG ]     "fsid": "865e6b01-b0ea-44da-87a5-26a4980aa7a8",
[ceph001][DEBUG ]     "modified": "0.000000",
[ceph001][DEBUG ]     "mons": [
[ceph001][DEBUG ]       {
[ceph001][DEBUG ]         "addr": "[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0",
[ceph001][DEBUG ]         "name": "ceph001",
[ceph001][DEBUG ]         "rank": 0
[ceph001][DEBUG ]       }
[ceph001][DEBUG ]     ]
[ceph001][DEBUG ]   },
[ceph001][DEBUG ]   "name": "ceph001",
[ceph001][DEBUG ]   "outside_quorum": [],
[ceph001][DEBUG ]   "quorum": [
[ceph001][DEBUG ]     0
[ceph001][DEBUG ]   ],
[ceph001][DEBUG ]   "rank": 0,
[ceph001][DEBUG ]   "state": "leader",
[ceph001][DEBUG ]   "sync_provider": []
[ceph001][DEBUG ] }
[ceph001][DEBUG ] ********************************************************************************
[ceph001][INFO  ] monitor: mon.ceph001 is running
[ceph001][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph001 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpgY2IT7
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] fetch remote file
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.admin
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-mds
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-osd
[ceph001][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpgY2IT7
```
With a monitor but no OSDs yet, the cluster status is HEALTH_ERR:

```
[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 2, quorum 0 ceph001
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
```
Adding OSDs
List the disks on ceph001:

```
[root@ceph001 cluster]# ceph-deploy disk list ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy disk list ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf  : False
[ceph_deploy.cli][INFO  ]  subcommand      : list
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1c79bd8>
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  func            : <function disk at 0x1c70e60>
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  disk            : [('ceph001', None, None)]
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph001...
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /usr/sbin/ceph-disk list
[ceph001][DEBUG ] /dev/sda :
[ceph001][DEBUG ]  /dev/sda1 other, xfs, mounted on /boot
[ceph001][DEBUG ]  /dev/sda2 other, LVM2_member
[ceph001][DEBUG ] /dev/sdb other, unknown
[ceph001][DEBUG ] /dev/sdc other, unknown
[ceph001][DEBUG ] /dev/sdd other, unknown
[ceph001][DEBUG ] /dev/sr0 other, iso9660
```
Zap /dev/sdb to wipe its partition table:

```
[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy disk zap ceph001:/dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf  : False
[ceph_deploy.cli][INFO  ]  subcommand      : zap
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1b14bd8>
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  func            : <function disk at 0x1b0be60>
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  disk            : [('ceph001', '/dev/sdb', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph001][DEBUG ] zeroing last few blocks of device
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /usr/sbin/ceph-disk zap /dev/sdb
[ceph001][DEBUG ] Creating new GPT entries.
[ceph001][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph001][DEBUG ] other utilities.
[ceph001][DEBUG ] Creating new GPT entries.
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] partx: specified range <1:0> does not make sense
```
Create an OSD on /dev/sdb:

```
[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy osd create ceph001:/dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  disk            : [('ceph001', '/dev/sdb', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt         : False
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  bluestore       : None
[ceph_deploy.cli][INFO  ]  overwrite_conf  : False
[ceph_deploy.cli][INFO  ]  subcommand      : create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         : <ceph_deploy.conf.cephdeploy.Conf instance at 0x19b6680>
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  fs_type         : xfs
[ceph_deploy.cli][INFO  ]  func            : <function osd at 0x19aade8>
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  zap_disk        : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdb:
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdb journal None activate True
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph001][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph001][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:ae307314-3a81-4da2-974b-b21c24d9bba1 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partition 2
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:16a6298d-59bb-4190-867a-10a5b519e7c0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on created device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph001][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[ceph001][DEBUG ] meta-data=/dev/sdb1   isize=2048   agcount=4, agsize=6225855 blks
[ceph001][DEBUG ]          =            sectsz=512   attr=2, projid32bit=1
[ceph001][DEBUG ]          =            crc=0        finobt=0
[ceph001][DEBUG ] data     =            bsize=4096   blocks=24903419, imaxpct=25
[ceph001][DEBUG ]          =            sunit=0      swidth=0 blks
[ceph001][DEBUG ] naming   =version 2   bsize=4096   ascii-ci=0 ftype=0
[ceph001][DEBUG ] log      =internal log bsize=4096  blocks=12159, version=2
[ceph001][DEBUG ]          =            sectsz=512   sunit=0 blks, lazy-count=1
[ceph001][DEBUG ] realtime =none        extsz=4096   blocks=0, rtextents=0
[ceph001][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.2SMGIk with options noatime,inode64
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.2SMGIk/journal -> /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[ceph001][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph001][DEBUG ] The new table will be used at the next reboot.
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2
[ceph001][INFO  ] Running command: systemctl enable ceph
[ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph001][WARNIN] Executing /sbin/chkconfig ceph on
[ceph001][INFO  ] checking OSD status...
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph001][WARNIN] there is 1 OSD down
[ceph001][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
```
The new OSD now appears in the osdmap, though it is not yet up and in:

```
[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e3: 1 osds: 0 up, 0 in
      pgmap v4: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
```
Repeat for the remaining disks:

```
[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdc
[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdd
[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdc
[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdd
[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e7: 3 osds: 0 up, 0 in
      pgmap v8: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
```
Once the OSDs come up and in, only a PG-count warning remains:

```
[root@ceph001 cluster]# ceph -s
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e9: 3 osds: 3 up, 3 in
      pgmap v11: 64 pgs, 1 pools, 0 bytes data, 0 objects
            102196 kB used, 284 GB / 284 GB avail
                  64 active+clean
```
Fixing the warning
As shown above, the cluster is in HEALTH_WARN: the default pool has only 64 PGs spread over 3 OSDs, about 21 per OSD, below the minimum of 30. Raise pg_num and pgp_num for the rbd pool:

```
[root@ceph001 cluster]# ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
[root@ceph001 cluster]# ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
```
The cluster is now healthy:

```
[root@ceph001 cluster]# ceph -s
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e13: 3 osds: 3 up, 3 in
      pgmap v17: 128 pgs, 1 pools, 0 bytes data, 0 objects
            101544 kB used, 284 GB / 284 GB avail
                  128 active+clean
```
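The numbers behind the warning and the fix can be sanity-checked: 64 PGs over 3 OSDs is about 21 PGs per OSD, under the minimum of 30, while 128 gives about 42. A common rule of thumb (my addition, not from the original post) is to target roughly 100 PGs per OSD, divided by the replica count and rounded up to the next power of two; a minimal sketch:

```python
import math

def recommended_pg_num(num_osds, target_pgs_per_osd=100, pool_count=1, replicas=3):
    """Rule-of-thumb pg_num for a pool: aim for ~100 PG replicas per OSD,
    shared across pools, rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / (replicas * pool_count)
    return 2 ** math.ceil(math.log2(raw))

# 3 OSDs, 1 pool, 3 replicas -> raw value 100 -> rounded up to 128,
# which matches the pg_num chosen above.
print(recommended_pg_num(3))  # 128
```

The power-of-two rounding matters because Ceph splits objects across PGs most evenly when pg_num is a power of two.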
Summary
This walkthrough only builds a simple single-node Ceph cluster; extending it to multiple nodes is straightforward, and the steps are much the same. As for running Ceph over IPv6, in my experience it differs very little from IPv4; only two things need attention:
Configure a static IPv6 address.
Change the hostname and add a name-resolution entry mapping it to that static IPv6 address.
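The bracket notation is the one IPv6-specific detail that shows up everywhere above: ceph-deploy wrote `mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]`, and the monitor map lists the address as `[...]:6789/0`. A small sketch (the helper name is my own) of why the brackets are needed when a port is attached:

```python
import ipaddress

def format_mon_host(addr: str, port: int = 6789) -> str:
    """Format a monitor address the way ceph.conf expects: IPv6 literals
    are wrapped in square brackets so the colons of the address cannot be
    confused with the port separator."""
    ip = ipaddress.ip_address(addr)  # raises ValueError on a bad literal
    if ip.version == 6:
        return f"[{addr}]:{port}"
    return f"{addr}:{port}"

print(format_mon_host("2001:250:4402:2001:20c:29ff:fe25:8888"))
# [2001:250:4402:2001:20c:29ff:fe25:8888]:6789
```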
Author: lemon