Configuring Ceph Object Storage (RGW) for IPv6
Introduction
After finally getting a Ceph cluster up, I honestly had no idea what to do with it next. The nerve of the thing, offering three storage interfaces at once (object, file, and block)! After weighing my project's requirements, I chose the object storage interface. Which raised the next question: how do you configure object storage over IPv6?
Environment
- OS: CentOS Linux release 7.2.1511 (Core); the Minimal ISO is about 603 MB, the Everything ISO about 7.2 GB
- Ceph version: 0.94.9 (Hammer)
- A Ceph cluster that is already deployed and healthy:
```
[root@ceph001 ~]# ceph -s
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e17: 3 osds: 3 up, 3 in
      pgmap v26: 128 pgs, 1 pools, 0 bytes data, 0 objects
            101676 kB used, 284 GB / 284 GB avail
                 128 active+clean
```
Ceph Object Storage
Since Firefly (v0.80), the gateway process has had Civetweb embedded, so there is no need to install a web server or configure FastCGI; this greatly simplifies installing and configuring the Ceph Object Gateway. This tutorial uses Civetweb as well.
Installing the Object Gateway
Install the ceph-radosgw package:

```
[root@ceph001 cluster]# yum install ceph-radosgw
Loaded plugins: fastestmirror, langpacks
base                                         | 3.6 kB  00:00:00
ceph                                         | 2.9 kB  00:00:00
ceph-noarch                                  | 2.9 kB  00:00:00
epel                                         | 4.3 kB  00:00:00
extras                                       | 3.4 kB  00:00:00
updates                                      | 3.4 kB  00:00:00
(1/9): ceph-noarch/primary_db                | 5.4 kB  00:00:00
(2/9): base/x86_64/group_gz                  | 155 kB  00:00:00
(3/9): epel/x86_64/group_gz                  | 170 kB  00:00:01
(4/9): ceph/primary_db                       | 160 kB  00:00:01
(5/9): extras/x86_64/primary_db              | 166 kB  00:00:00
(6/9): epel/x86_64/updateinfo                | 673 kB  00:00:01
(7/9): epel/x86_64/primary_db                | 4.3 MB  00:00:17
(8/9): base/x86_64/primary_db                | 5.3 MB  00:00:20
(9/9): updates/x86_64/primary_db             | 9.1 MB  00:00:27
Determining fastest mirrors
Resolving Dependencies
--> Running transaction check
---> Package ceph-radosgw.x86_64 1:0.94.9-0.el7 will be installed
--> Processing Dependency: mailcap for package: 1:ceph-radosgw-0.94.9-0.el7.x86_64
--> Processing Dependency: libfcgi.so.0()(64bit) for package: 1:ceph-radosgw-0.94.9-0.el7.x86_64
--> Running transaction check
---> Package fcgi.x86_64 0:2.4.0-25.el7 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================
 Package           Arch         Version              Repository     Size
=======================================================================================
Installing:
 ceph-radosgw      x86_64       1:0.94.9-0.el7       ceph          2.3 M
Installing for dependencies:
 fcgi              x86_64       2.4.0-25.el7         epel           47 k
 mailcap           noarch       2.1.41-2.el7         base           31 k

Transaction Summary
=======================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 2.4 M
Installed size: 8.6 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): mailcap-2.1.41-2.el7.noarch.rpm       |  31 kB  00:00:00
(2/3): fcgi-2.4.0-25.el7.x86_64.rpm          |  47 kB  00:00:00
(3/3): ceph-radosgw-0.94.9-0.el7.x86_64.rpm  | 2.3 MB  00:00:02
---------------------------------------------------------------------------------------
Total                             867 kB/s | 2.4 MB  00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : fcgi-2.4.0-25.el7.x86_64                1/3
  Installing : mailcap-2.1.41-2.el7.noarch             2/3
  Installing : 1:ceph-radosgw-0.94.9-0.el7.x86_64      3/3
  Verifying  : mailcap-2.1.41-2.el7.noarch             1/3
  Verifying  : 1:ceph-radosgw-0.94.9-0.el7.x86_64      2/3
  Verifying  : fcgi-2.4.0-25.el7.x86_64                3/3

Installed:
  ceph-radosgw.x86_64 1:0.94.9-0.el7

Dependency Installed:
  fcgi.x86_64 0:2.4.0-25.el7        mailcap.noarch 0:2.1.41-2.el7

Complete!
```
Push the admin keyring and cluster configuration to the node:

```
[root@ceph001 cluster]# ceph-deploy admin ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy admin ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe09a5280e0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph001']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7fe09b38f410>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph001
[ceph001][DEBUG ] connection detected need for sudo
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
```
Creating a Gateway Instance
```
[root@ceph001 cluster]# ceph-deploy rgw create ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy rgw create ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ceph001', 'rgw.ceph001')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2a7ee60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x29e7230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph001:rgw.ceph001
[ceph001][DEBUG ] connection detected need for sudo
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.rgw][DEBUG ] remote host will use sysvinit
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph001
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph001][DEBUG ] create path recursively if it doesn't exist
[ceph001][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph001 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph001/keyring
[ceph001][INFO  ] Running command: sudo service ceph-radosgw start
[ceph001][DEBUG ] Reloading systemd:                            [  OK  ]
[ceph001][DEBUG ] Starting ceph-radosgw (via systemctl):        [  OK  ]
[ceph001][INFO  ] Running command: sudo systemctl enable ceph-radosgw
[ceph001][WARNIN] ceph-radosgw.service is not a native service, redirecting to /sbin/chkconfig.
[ceph001][WARNIN] Executing /sbin/chkconfig ceph-radosgw on
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph001 and default port 7480
```
Confirm it is listening on the default port 7480 (IPv4 only for now):

```
[root@ceph001 ceph]# netstat -nlp | grep 7480
tcp        0      0 0.0.0.0:7480        0.0.0.0:*        LISTEN      3537/radosgw
```
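As an extra end-to-end check (not part of the original steps), you can issue an unauthenticated GET against the gateway; RGW should answer the anonymous user with an empty bucket listing. A minimal Python 3 sketch, assuming you run it on the gateway host itself:

```python
# Sketch: confirm the freshly started gateway answers HTTP.
# An anonymous GET / should return HTTP 200 with an (empty)
# ListAllMyBucketsResult XML document.
import urllib.request

resp = urllib.request.urlopen("http://127.0.0.1:7480/", timeout=5)
print(resp.status)            # expect: 200
print(resp.read().decode())   # expect: <ListAllMyBucketsResult ...>
```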
Changing the Default Port and Configuring the IPv6 Gateway
Edit ceph.conf in the deployment directory, enable IPv6 messaging, and point the RGW frontend at port 80 on the IPv6 wildcard address:

```
[root@ceph001 cluster]# vim ceph.conf
[global]
fsid = 2818c750-8724-4a70-bb26-f01af7f6067f
ms_bind_ipv6 = true
mon_initial_members = ceph001
mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 1

# IPv6 gateway configuration
[client.rgw.ceph001]
rgw_frontends = "civetweb port=[::]:80"
```

Push the updated configuration to the node:

```
[root@ceph001 cluster]# ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : push
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fdc0728a7a0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph001']
[ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7fdc072652a8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.config][DEBUG ] Pushing config to ceph001
[ceph001][DEBUG ] connection detected need for sudo
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
```

Restart the gateway so the new frontend setting takes effect (on this sysvinit-wrapped setup, `service ceph-radosgw restart`), then check the listener again:
```
[root@ceph001 ceph]# netstat -nlp | grep 80
tcp6       0      0 :::80        :::*        LISTEN      3540/ra
```
You should get a result similar to the above: the tcp6 listener on port 80 shows the gateway is now bound to the IPv6 wildcard address.
Congratulations, the IPv6 gateway configuration is complete! For more detail, see the official Ceph Object Gateway documentation.
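The same quick check from earlier works over IPv6 too; the only wrinkle is that an IPv6 literal in a URL must be wrapped in square brackets. A sketch, assuming the machine you run it on has IPv6 connectivity to the gateway:

```python
# Confirm the gateway is reachable over IPv6 on port 80.
# Note: IPv6 literals in URLs must be enclosed in square brackets.
import urllib.request

url = "http://[2001:250:4402:2001:20c:29ff:fe25:8888]/"
resp = urllib.request.urlopen(url, timeout=5)
print(resp.status)  # expect: 200, plus the anonymous bucket-listing XML
```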
Basic Usage
Create a user (S3 interface):
```
[root@ceph001 ~]# radosgw-admin user create --uid=lemon --display-name="柠檬" --email=lemon@qq.com
{
    "user_id": "lemon",
    "display_name": "柠檬",
    "email": "lemon@qq.com",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "lemon",
            "access_key": "29YAB6D3BVRBQQDFVLHI",
            "secret_key": "QVPTxEvZHxQJhNdR58tZCfsgyP37jOKBKiPg1TaU"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
```
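Before reaching for a GUI client, you can sanity-check the new credentials with a few lines of Python. This is a sketch, not part of the original walkthrough: it assumes `boto3` is installed (`pip install boto3`) on a machine with IPv6 connectivity to the gateway, and it forces v2 signatures and path-style addressing, since a Hammer-era RGW may not accept boto3's v4 signing default.

```python
import boto3
from botocore.client import Config

# Sketch: connect to the RGW S3 endpoint over IPv6 with the keys created above.
s3 = boto3.client(
    "s3",
    endpoint_url="http://[2001:250:4402:2001:20c:29ff:fe25:8888]",  # bracketed IPv6 literal
    aws_access_key_id="29YAB6D3BVRBQQDFVLHI",
    aws_secret_access_key="QVPTxEvZHxQJhNdR58tZCfsgyP37jOKBKiPg1TaU",
    # Older RGW releases only speak AWS v2 signatures; path-style addressing
    # avoids virtual-host bucket names, which do not work with an IP literal.
    config=Config(signature_version="s3", s3={"addressing_style": "path"}),
)

# A freshly created user owns no buckets yet, so this prints an empty list.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```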
Verify the IPv6 deployment with the CloudBerry Explorer for Amazon S3 client.
Download the test client from the official CloudBerry Explorer for Amazon S3 for Windows site, or from the Baidu Cloud mirror (extraction code: bxu5).
Once installed, launch the client.
Add the Ceph object storage you just deployed as an account:
1. In the menu bar, click File -> Edit Accounts.
2. Click Add -> S3 Compatible.
3. Fill in the parameters as described below, then click Test Connection.
- Display name: anything you like.
- Service point: the address of our object gateway; here, http://[2001:250:4402:2001:20c:29ff:fe25:8888]/
- Access key: the access_key of the user created above; here, 29YAB6D3BVRBQQDFVLHI
- Secret key: the secret_key of that user; here, QVPTxEvZHxQJhNdR58tZCfsgyP37jOKBKiPg1TaU
If the test succeeds, a "connection success" dialog pops up.
Usage
Select the account you just created, ceph001.
From here you can create buckets, upload and download files, and so on through the client; I won't walk through every operation (a scripted equivalent is sketched below).
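For reference, here is roughly what those operations look like when scripted against the same endpoint. This continues the earlier hedged boto3 sketch with the same assumptions; the bucket and file names are made up:

```python
import boto3
from botocore.client import Config

# Same hypothetical client settings as the earlier sketch.
s3 = boto3.client(
    "s3",
    endpoint_url="http://[2001:250:4402:2001:20c:29ff:fe25:8888]",
    aws_access_key_id="29YAB6D3BVRBQQDFVLHI",
    aws_secret_access_key="QVPTxEvZHxQJhNdR58tZCfsgyP37jOKBKiPg1TaU",
    config=Config(signature_version="s3", s3={"addressing_style": "path"}),
)

bucket = "ipv6-demo-bucket"  # hypothetical bucket name
s3.create_bucket(Bucket=bucket)

s3.upload_file("hello.txt", bucket, "hello.txt")         # upload a local file
s3.download_file(bucket, "hello.txt", "hello-copy.txt")  # fetch it back

# List what the bucket now contains.
for obj in s3.list_objects(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])
```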
On the Ceph side, you can watch client activity in real time with `ceph -w`; here, for example, the client is performing writes:
```
[root@ceph001 ~]# ceph -w
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e54: 3 osds: 3 up, 3 in
      pgmap v312: 200 pgs, 10 pools, 425 MB data, 164 objects
            826 MB used, 284 GB / 284 GB avail
                 200 active+clean
  client io 11630 kB/s wr, 25 op/s

2016-11-10 10:30:23.606053 mon.0 [INF] pgmap v312: 200 pgs: 200 active+clean; 425 MB data, 826 MB used, 284 GB / 284 GB avail; 11630 kB/s wr, 25 op/s
2016-11-10 10:30:27.197373 mon.0 [INF] pgmap v313: 200 pgs: 200 active+clean; 443 MB data, 866 MB used, 284 GB / 284 GB avail; 7826 kB/s wr, 17 op/s
2016-11-10 10:30:28.810358 mon.0 [INF] pgmap v314: 200 pgs: 200 active+clean; 457 MB data, 910 MB used, 283 GB / 284 GB avail; 5914 kB/s wr, 12 op/s
2016-11-10 10:30:31.830126 mon.0 [INF] pgmap v315: 200 pgs: 200 active+clean; 466 MB data, 927 MB used, 283 GB / 284 GB avail; 5015 kB/s wr, 11 op/s
2016-11-10 10:30:32.918332 mon.0 [INF] pgmap v316: 200 pgs: 200 active+clean; 502 MB data, 1011 MB used, 283 GB / 284 GB avail; 10214 kB/s wr, 22 op/s
2016-11-10 10:30:37.113515 mon.0 [INF] pgmap v317: 200 pgs: 200 active+clean; 521 MB data, 1037 MB used, 283 GB / 284 GB avail; 11093 kB/s wr, 24 op/s
2016-11-10 10:30:38.256587 mon.0 [INF] pgmap v318: 200 pgs: 200 active+clean; 563 MB data, 1082 MB used, 283 GB / 284 GB avail; 11669 kB/s wr, 25 op/s
2016-11-10 10:30:42.089761 mon.0 [INF] pgmap v319: 200 pgs: 200 active+clean; 580 MB data, 1106 MB used, 283 GB / 284 GB avail; 11572 kB/s wr, 25 op/s
2016-11-10 10:30:43.099061 mon.0 [INF] pgmap v320: 200 pgs: 200 active+clean; 609 MB data, 1162 MB used, 283 GB / 284 GB avail; 9575 kB/s wr, 21 op/s
2016-11-10 10:30:47.423680 mon.0 [INF] pgmap v321: 200 pgs: 200 active+clean; 628 MB data, 1184 MB used, 283 GB / 284 GB avail; 9104 kB/s wr, 19 op/s
2016-11-10 10:30:48.938458 mon.0 [INF] pgmap v322: 200 pgs: 200 active+clean; 652 MB data, 1212 MB used, 283 GB / 284 GB avail; 7914 kB/s wr, 17 op/s
2016-11-10 10:30:49.948222 mon.0 [INF] pgmap v323: 200 pgs: 200 active+clean; 660 MB data, 1216 MB used, 283 GB / 284 GB avail; 13007 kB/s wr, 28 op/s
2016-11-10 10:30:52.843301 mon.0 [INF] pgmap v324: 200 pgs: 200 active+clean; 668 MB data, 1238 MB used, 283 GB / 284 GB avail; 3975 kB/s wr, 8 op/s
2016-11-10 10:30:54.968022 mon.0 [INF] pgmap v325: 200 pgs: 200 active+clean; 714 MB data, 1278 MB used, 283 GB / 284 GB avail; 12919 kB/s wr, 28 op/s
2016-11-10 10:30:58.521788 mon.0 [INF] pgmap v326: 200 pgs: 200 active+clean; 730 MB data, 1290 MB used, 283 GB / 284 GB avail; 12325 kB/s wr, 27 op/s
2016-11-10 10:30:59.558175 mon.0 [INF] pgmap v327: 200 pgs: 200 active+clean; 764 MB data, 1334 MB used, 283 GB / 284 GB avail; 9621 kB/s wr, 20 op/s
2016-11-10 10:31:03.218629 mon.0 [INF] pgmap v328: 200 pgs: 200 active+clean; 776 MB data, 1354 MB used, 283 GB / 284 GB avail; 8880 kB/s wr, 19 op/s
2016-11-10 10:31:04.234516 mon.0 [INF] pgmap v329: 200 pgs: 200 active+clean; 816 MB data, 1382 MB used, 283 GB / 284 GB avail; 11345 kB/s wr, 24 op/s
2016-11-10 10:31:08.422236 mon.0 [INF] pgmap v330: 200 pgs: 200 active+clean; 820 MB data, 1398 MB used, 283 GB / 284 GB avail; 8706 kB/s wr, 19 op/s
2016-11-10 10:31:09.870466 mon.0 [INF] pgmap v331: 200 pgs: 200 active+clean; 856 MB data, 1463 MB used, 283 GB / 284 GB avail; 7887 kB/s wr, 17 op/s
2016-11-10 10:31:14.003109 mon.0 [INF] pgmap v332: 200 pgs: 200 active+clean; 884 MB data, 1491 MB used, 283 GB / 284 GB avail; 12958 kB/s wr, 28 op/s
...
```
APIs
Beyond GUI clients, the gateway also speaks S3- and Swift-compatible REST APIs for programmatic access; see the official Ceph Object Gateway API documentation.
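As one example of what the S3 API offers beyond basic uploads, here is a sketch of generating a presigned URL: a time-limited download link you can hand to someone with no credentials. Same hypothetical client settings and bucket name as the earlier sketches:

```python
import boto3
from botocore.client import Config

# Same hypothetical client settings as the earlier sketches.
s3 = boto3.client(
    "s3",
    endpoint_url="http://[2001:250:4402:2001:20c:29ff:fe25:8888]",
    aws_access_key_id="29YAB6D3BVRBQQDFVLHI",
    aws_secret_access_key="QVPTxEvZHxQJhNdR58tZCfsgyP37jOKBKiPg1TaU",
    config=Config(signature_version="s3", s3={"addressing_style": "path"}),
)

# The signature is embedded in the query string, so the link works for
# anyone who can reach the gateway, until it expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "ipv6-demo-bucket", "Key": "hello.txt"},
    ExpiresIn=3600,  # seconds; valid for one hour
)
print(url)
```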
I hope this helps!
Author: lemon
Original link: https://lemon2013.github.io/2016/11/09/Ceph对象存储-rgw-IPv6环境配置/
License: Unless otherwise noted, all articles on this blog are licensed under CC BY-NC-SA 3.0. Please credit the source when reposting!