
Redis 5.0 Cluster Mode: Installation, Deployment, and Node Expansion


For Redis 5.0 installation and deployment, refer to the one-click installation script.

1. The existing production cluster has 3 masters and 3 slaves. We now add two servers, each running one master and one slave, to end up with 5 masters and 5 slaves.

Add the newly installed Redis nodes to the cluster. The new nodes are planned as follows:

172.16.153.32:7002 (master) and 172.16.153.32:7003 (slave)
172.16.153.33:7004 (master) and 172.16.153.33:7005 (slave)

Note: the 172.16.153.32 pair (7002/7003) had already been joined to the cluster and appears in the output below with slots assigned; the steps that follow demonstrate the procedure for the 172.16.153.33 pair (7004/7005).
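The one-click installation script referenced above is not reproduced here. As a minimal sketch, each new instance must be started in cluster mode before it can be joined; the file path, data directory, and persistence settings below are assumptions, and requirepass/masterauth must match the password already used by the cluster:

# /data/redis/7004/redis.conf  (hypothetical path -- adjust to your layout)
port 7004
cluster-enabled yes                  # run this instance in cluster mode
cluster-config-file nodes-7004.conf  # cluster state file, managed by Redis itself
cluster-node-timeout 15000           # ms before a node is considered failing
appendonly yes                       # AOF persistence (assumption)
requirepass smcaiot_redis_pass       # client auth, same password cluster-wide
masterauth smcaiot_redis_pass        # lets this node authenticate to its master

Start the instance with:

redis-server /data/redis/7004/redis.conf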

2. Log in to the Redis cluster:

./redis-cli  -a redis_pass

Then run the following:

127.0.0.1:6379> cluster meet 172.16.153.33 7004
OK
127.0.0.1:6379> cluster meet 172.16.153.33 7005
OK

Check the cluster state at this point:

127.0.0.1:6379> cluster nodes
a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380@16380 master - 0 1606980344037 2 connected 6462-10922
ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002@17002 master - 0 1606980344037 7 connected 0-998 5461-6461 10923-11921
1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384@16384 slave a9deeb976ba0efa14190cf382bfe61aea65697ad 0 1606980344137 6 connected
ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382@16382 slave 54fed77edf250f947f7d27959c2a317a082d0d3b 0 1606980344537 4 connected
922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379@16379 myself,master - 0 1606980344000 1 connected 999-5460
7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383@16383 slave 922c14b9935f3fd6f457701c41f991c883ec9ca4 0 1606980344037 5 connected
3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005@17005 master - 0 1606980344137 8 connected
54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381@16381 master - 0 1606980344137 3 connected 11922-16383
ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003@17003 slave ad845f2b5c47d981577abeece629fc14550c7e8b 0 1606980344137 7 connected
e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004@17004 master - 0 1606980344137 0 connected
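For reference, Redis 5's redis-cli can also join a new master through any existing node in a single step, instead of issuing cluster meet from inside the cluster; a sketch using the addresses above (the -a password is the one used later in this article):

redis-cli --cluster add-node 172.16.153.33:7004 172.16.153.5:6379 -a smcaiot_redis_pass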

3. Log in to 172.16.153.33:7005 and make 7005 the slave of 7004, as follows:

redis-cli  -c -h 172.16.153.33 -p 7005 -a smcaiot_redis_pass
172.16.153.33:7005> cluster nodes
ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382@16382 slave 54fed77edf250f947f7d27959c2a317a082d0d3b 0 1606980637442 3 connected
1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384@16384 slave a9deeb976ba0efa14190cf382bfe61aea65697ad 0 1606980637000 2 connected
922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379@16379 master - 0 1606980637000 1 connected 999-5460
54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381@16381 master - 0 1606980637000 3 connected 11922-16383
e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004@17004 master - 0 1606980637000 0 connected
a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380@16380 master - 0 1606980636440 2 connected 6462-10922
ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003@17003 slave ad845f2b5c47d981577abeece629fc14550c7e8b 0 1606980637000 7 connected
3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005@17005 myself,master - 0 1606980637000 8 connected
7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383@16383 slave 922c14b9935f3fd6f457701c41f991c883ec9ca4 0 1606980637000 1 connected
ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002@17002 master - 0 1606980636000 7 connected 0-998 5461-6461 10923-11921
172.16.153.33:7005> cluster replicate  e1ece83ddf64f3d23567ccecc8354e717c19a961
OK

172.16.153.33:7005> cluster nodes
ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382@16382 slave 54fed77edf250f947f7d27959c2a317a082d0d3b 0 1606980686498 3 connected
1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384@16384 slave a9deeb976ba0efa14190cf382bfe61aea65697ad 0 1606980685000 2 connected
922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379@16379 master - 0 1606980686000 1 connected 999-5460
54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381@16381 master - 0 1606980685000 3 connected 11922-16383
e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004@17004 master - 0 1606980686000 0 connected
a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380@16380 master - 0 1606980685497 2 connected 6462-10922
ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003@17003 slave ad845f2b5c47d981577abeece629fc14550c7e8b 0 1606980685000 7 connected
3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005@17005 myself,slave e1ece83ddf64f3d23567ccecc8354e717c19a961 0 1606980686000 8 connected
7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383@16383 slave 922c14b9935f3fd6f457701c41f991c883ec9ca4 0 1606980685000 1 connected
ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002@17002 master - 0 1606980685000 7 connected 0-998 5461-6461 10923-11921
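Optionally, the new replication link can be verified with INFO replication; and for reference, the replica could also have been attached in one step with redis-cli --cluster add-node (node ID taken from the output above). Both commands below are sketches:

# verify the replication link from the new slave
redis-cli -h 172.16.153.33 -p 7005 -a smcaiot_redis_pass info replication

# alternative to cluster meet + cluster replicate: add 7005 directly as a
# slave of 7004 through any existing cluster node
redis-cli --cluster add-node 172.16.153.33:7005 172.16.153.5:6379 \
  --cluster-slave --cluster-master-id e1ece83ddf64f3d23567ccecc8354e717c19a961 \
  -a smcaiot_redis_pass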

At this point the cluster has 5 masters and 5 slaves. However, a newly added master holds no hash slots by default and therefore cannot store any data, so the next step is to assign slots to it.

4. Reshard (reassign slots) as follows:

redis-cli --cluster reshard  172.16.153.33:7004 -a smcaiot_redis_pass
>>> Performing Cluster Check (using node 172.16.153.33:7004)
M: e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004
slots: (0 slots) master
1 additional replica(s)
S: ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003
slots: (0 slots) slave
replicates ad845f2b5c47d981577abeece629fc14550c7e8b
S: 3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005
slots: (0 slots) slave
replicates e1ece83ddf64f3d23567ccecc8354e717c19a961
M: 54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381
slots:[11922-16383] (4462 slots) master
1 additional replica(s)
S: ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382
slots: (0 slots) slave
replicates 54fed77edf250f947f7d27959c2a317a082d0d3b
M: a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380
slots:[6462-10922] (4461 slots) master
1 additional replica(s)
M: 922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379
slots:[999-5460] (4462 slots) master
1 additional replica(s)
S: 1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384
slots: (0 slots) slave
replicates a9deeb976ba0efa14190cf382bfe61aea65697ad
S: 7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383
slots: (0 slots) slave
replicates 922c14b9935f3fd6f457701c41f991c883ec9ca4
M: ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002
slots:[0-998],[5461-6461],[10923-11921] (2999 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000
What is the receiving node ID? e1ece83ddf64f3d23567ccecc8354e717c19a961
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all
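The answers above move 3000 slots to 7004, taking them from all existing masters. The same reshard can be scripted non-interactively; a sketch using the node IDs and password from this article (flag names are the standard redis-cli --cluster reshard options):

redis-cli --cluster reshard 172.16.153.33:7004 -a smcaiot_redis_pass \
  --cluster-to e1ece83ddf64f3d23567ccecc8354e717c19a961 \
  --cluster-from 922c14b9935f3fd6f457701c41f991c883ec9ca4,a9deeb976ba0efa14190cf382bfe61aea65697ad,54fed77edf250f947f7d27959c2a317a082d0d3b,ad845f2b5c47d981577abeece629fc14550c7e8b \
  --cluster-slots 3000 \
  --cluster-yes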

5. Check the Redis cluster state

127.0.0.1:6379> cluster info
127.0.0.1:6379> cluster nodes
a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380@16380 master - 0 1606980981086 2 connected 7278-10922
ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002@17002 master - 0 1606980981086 7 connected 549-998 5461-6461 10923-11921
1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384@16384 slave a9deeb976ba0efa14190cf382bfe61aea65697ad 0 1606980981287 6 connected
ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382@16382 slave 54fed77edf250f947f7d27959c2a317a082d0d3b 0 1606980981086 4 connected
922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379@16379 myself,master - 0 1606980980000 1 connected 1816-5460
7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383@16383 slave 922c14b9935f3fd6f457701c41f991c883ec9ca4 0 1606980981086 5 connected
3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005@17005 slave e1ece83ddf64f3d23567ccecc8354e717c19a961 0 1606980981086 9 connected
54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381@16381 master - 0 1606980981086 3 connected 12740-16383
ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003@17003 slave ad845f2b5c47d981577abeece629fc14550c7e8b 0 1606980981086 7 connected
e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004@17004 master - 0 1606980981086 9 connected 0-548 999-1815 6462-7277 11922-12739
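Note that the resulting slot distribution is not perfectly even (172.16.153.32:7002 ends up with fewer slots than the other masters). As an optional follow-up, redis-cli can verify the cluster and, if desired, even out the slot counts; a sketch using the standard --cluster subcommands:

# full consistency check (slot coverage, open slots, node agreement)
redis-cli --cluster check 172.16.153.5:6379 -a smcaiot_redis_pass

# preview a rebalance without moving anything; drop --cluster-simulate to apply it
redis-cli --cluster rebalance 172.16.153.5:6379 --cluster-simulate -a smcaiot_redis_pass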