
CentOS 7.3 Distributed Storage: GlusterFS Deployment and Usage (Part 1)

2017-03-03 20:42
Glusterfs_Server:

Four servers:

192.168.101.5 glusterfs1

192.168.101.6 glusterfs2

192.168.101.7 glusterfs3

192.168.101.12 glusterfs4

I. Initialize the Servers

See: Building a Private Cloud for Small and Medium-Sized Internet Companies with OpenStack (Part 3)

II. Configure /etc/hosts

cat <<EOF> /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.101.5 glusterfs1
192.168.101.6 glusterfs2
192.168.101.7 glusterfs3
192.168.101.12 glusterfs4
EOF
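Before moving on, a quick loop can confirm that every node name actually resolves (a minimal sketch, using the four hostnames from the /etc/hosts file above; run it on any node):

```shell
# Check that each cluster hostname resolves locally via /etc/hosts or DNS.
for h in glusterfs1 glusterfs2 glusterfs3 glusterfs4; do
  if getent hosts "$h" >/dev/null; then
    echo "$h: resolves"
  else
    echo "$h: NOT FOUND in /etc/hosts or DNS"
  fi
done
```

The loop prints exactly one status line per host; any "NOT FOUND" line means peer probing in step V will fail for that name.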


III. Configure the YUM Repositories

rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
wget -P /etc/yum.repos.d https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.19/CentOS/glusterfs-epel.repo

IV. Install the GlusterFS Packages and Start the Service

yum -y install glusterfs glusterfs-fuse glusterfs-server
systemctl start glusterd.service
systemctl enable glusterd.service

V. Form the Cluster

Run the following on glusterfs1 only:

gluster peer probe glusterfs2
gluster peer probe glusterfs3
gluster peer probe glusterfs4
Verify:

gluster peer status
gluster pool list
gluster volume status
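On glusterfs1, `gluster peer status` should report three peers, since the node issuing the command is not counted. A small helper can pull that number out of the status output (`peer_count` is a hypothetical name, and the `Number of Peers:` header line is assumed from the 3.7 CLI output):

```shell
# peer_count -- reads "gluster peer status" output on stdin, prints the peer count.
peer_count() {
  awk '/^Number of Peers:/ { print $4 }'
}

# Usage on glusterfs1 (expects 3 for the four-node cluster above):
#   gluster peer status | peer_count
```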

VI. Create a Volume

Create the brick directory on all four servers, then run the volume commands on glusterfs1 only:

mkdir -p /data/brick1/gv0   # run this on every server
gluster volume create gv0 replica 2 glusterfs1:/data/brick1/gv0 glusterfs2:/data/brick1/gv0 glusterfs3:/data/brick1/gv0 glusterfs4:/data/brick1/gv0
gluster volume start gv0

With four bricks and replica 2 this creates a 2 x 2 distributed-replicated volume: glusterfs1/glusterfs2 form one replica pair and glusterfs3/glusterfs4 the other.

Verify:

gluster volume info

Test (the volume must first be mounted on a client, a step worth making explicit):

mount -t glusterfs glusterfs1:/gv0 /mnt
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
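To see where the 100 test files actually landed, count the copies sitting on each brick. With the 2 x 2 layout, each file is written to exactly one replica pair, so the counts on paired bricks match and the two pairs together sum to 100. (`count_test_files` is a hypothetical helper, not part of the gluster CLI.)

```shell
# count_test_files BRICK_DIR -- print how many copy-test-* files a brick directory holds.
count_test_files() {
  ls "$1" 2>/dev/null | grep -c '^copy-test-'
}

# Run on each server:
#   count_test_files /data/brick1/gv0
```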


VII. Appendix: Common Commands

1. Delete a volume

gluster volume stop gv0
gluster volume delete gv0


2. Remove nodes from the cluster (peer detach takes one host per invocation)

gluster peer detach glusterfs3
gluster peer detach glusterfs4


3. Expand a volume (with the replica count at 2, bricks must be added in multiples of 2, i.e. 2, 4, 6, ... servers)

gluster peer probe glusterfs3 # add the node
gluster peer probe glusterfs4 # add the node
gluster volume add-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs4:/data/brick1/gv0 force # add the bricks to the volume

After adding bricks, run a rebalance (item 4 below) so existing data spreads onto the new bricks.


4. Rebalance a volume

gluster volume rebalance gv0 start
gluster volume rebalance gv0 status
gluster volume rebalance gv0 stop


5. Shrink a volume (Gluster first migrates the data off the bricks being removed)

gluster volume remove-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs4:/data/brick1/gv0 start # start the migration
gluster volume remove-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs4:/data/brick1/gv0 status # check migration progress
gluster volume remove-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs4:/data/brick1/gv0 commit # commit once the migration is complete


6. Replace a brick (migrate a brick's data to another node)

gluster peer probe glusterfs5 # to move glusterfs3's data to glusterfs5, first add glusterfs5 to the cluster
gluster volume replace-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs5:/data/brick1/gv0 start # start the migration
gluster volume replace-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs5:/data/brick1/gv0 status # check migration progress
gluster volume replace-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs5:/data/brick1/gv0 commit # commit once the data migration is complete
gluster volume replace-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs5:/data/brick1/gv0 commit force # if glusterfs3 has failed and is no longer running, force the commit
gluster volume heal gv0 full # heal (re-sync) the entire volume


7. Restrict client access

gluster volume set gv0 auth.allow 10.30.*