
Redis 3.2 cluster high availability and data migration testing

2017-09-22 17:19
Deployment architecture:

   192.168.65.31  M1(6379)  S2(6380)

   192.168.65.32  M2(6379)  S3(6380)

   192.168.65.33  M3(6379)  S1(6380)

I. Data storage test

Connect to 6379 on .31 and run:

redis-cli -a "abc" -h 192.168.65.31 -p 6379

192.168.65.31:6379> set b 3

OK

192.168.65.31:6379> get b

"3"

Then connect to 192.168.65.33:6379:

192.168.65.33:6379> get b

(error) MOVED 3300 192.168.65.31:6379

This shows that keys are distributed across the nodes by a hash algorithm: each key maps deterministically to a hash slot, and each master owns a range of slots. When you query a key on a node that does not own its slot, the server replies with a MOVED error naming the node that does.
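
The mapping is not random: the slot is CRC16(key) mod 16384. A minimal sketch of that calculation (reimplementing the CRC16/XMODEM variant Redis Cluster uses, plus its hash-tag rule):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster hashes keys with."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def keyslot(key: bytes) -> int:
    """Slot for a key: hash only the {hash tag} if a non-empty one is present, like the server."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384

print(keyslot(b"b"))  # -> 3300, matching the MOVED reply above
```

Hash tags are why `{user1}.name` and `{user1}.age` land in the same slot, which is what makes multi-key operations on them possible in a cluster.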

II. High availability test

1. Kill one master

Connect to a replica and check: redis-cli -a "abc" -h 192.168.65.32 -p 6380

192.168.65.32:6380> randomkey

"b"

192.168.65.32:6380> get b

(error) MOVED 3300 192.168.65.31:6379

The replica holds a copy of the data, so RANDOMKEY can see the key, but GET returns a MOVED redirect: in cluster mode a replica redirects reads to its master unless the client first sends READONLY.

Cluster state before shutting down 6379 on .31:

redis-trib.py list --password abc --addr  192.168.65.32:6379 

Total 6 nodes, 3 masters, 0 fail

M  192.168.65.31:6379 master 5462

 S 192.168.65.32:6380 slave 192.168.65.31:6379

M  192.168.65.32:6379 myself,master 5461

 S 192.168.65.33:6380 slave 192.168.65.32:6379

M  192.168.65.33:6379 master 5461

 S 192.168.65.31:6380 slave 192.168.65.33:6379

redis-trib.rb info 192.168.65.32:6379 

This produced the following error:

[apps@0782 bin]$ ./redis-trib.rb info 192.168.65.31:6379

/usr/local/ruby/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require': no such file to load -- redis (LoadError)

        from /usr/local/ruby/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require'

        from ./redis-trib.rb:25:in `<main>'

Fix: install the redis gem:

gem install -l /apps/software/redis-3.2.2.gem

[apps@0782 bin]$ ./redis-trib.rb info 192.168.65.31:6379

[ERR] Sorry, can't connect to node 192.168.65.31:6379

./redis-trib.rb help shows that none of its commands or parameters accept a password yet, so it is less convenient and practical than redis-trib.py.

Kill 6379 on .31:

[apps@ bin]$ ps -ef|grep redis

apps     12302     1  0 16:05 ?        00:00:22 /apps/svr/redis3.0/bin/redis-server 0.0.0.0:6379 [cluster]                     

apps     12398     1  0 16:28 ?        00:00:19 /apps/svr/redis3.0/bin/redis-server 0.0.0.0:6380 [cluster]                     

apps     15166 14995  0 22:14 pts/0    00:00:00 grep redis

[apps@bin]$ kill -9 12302

[apps@bin]$ ps -ef|grep redis

apps     12398     1  0 16:28 ?        00:00:19 /apps/svr/redis3.0/bin/redis-server 0.0.0.0:6380 [cluster]                     

apps     15201 14995  0 22:15 pts/0    00:00:00 grep redis

Cluster state after killing 6379 on .31:

redis-trib.py list --password abc --addr  192.168.65.32:6379 

Total 6 nodes, 4 masters, 1 fail

M  192.168.65.31:6379 master,fail 0

M  192.168.65.32:6379 myself,master 5461

 S 192.168.65.33:6380 slave 192.168.65.32:6379

M  192.168.65.32:6380 master 5462

M  192.168.65.33:6379 master 5461

 S 192.168.65.31:6380 slave 192.168.65.33:6379

This shows the replica (192.168.65.32:6380) was automatically promoted to master.

Key b can now be read successfully:

192.168.65.32:6380> get b

"3"

After restarting 6379 on .31, check the cluster state again:

Total 6 nodes, 3 masters, 0 fail

M  192.168.65.32:6379 myself,master 5461

 S 192.168.65.33:6380 slave 192.168.65.32:6379

M  192.168.65.32:6380 master 5462

 S 192.168.65.31:6379 slave 192.168.65.32:6380

M  192.168.65.33:6379 master 5461

 S 192.168.65.31:6380 slave 192.168.65.33:6379

The restarted 6379 on .31 has rejoined as a replica of the new master.

This shows that, as long as its replica is healthy, killing one master causes that replica to be elected as the new master.

2. Kill two masters

Shut down 6379 on both .32 and .33.

After about 5 minutes the state was:

redis-trib.py list --password abc --addr  192.168.65.31:6379 

Total 6 nodes, 5 masters, 2 fail

M  192.168.65.31:6380 master 5461

M  192.168.65.32:6379 master,fail 0

M  192.168.65.32:6380 myself,master 5462

 S 192.168.65.31:6379 slave 192.168.65.32:6380

M  192.168.65.33:6379 master,fail 0

M  192.168.65.33:6380 master 5461

After restarting the two masters:

redis-trib.py list --password abc --addr  192.168.65.31:6379 

Total 6 nodes, 3 masters, 0 fail

M  192.168.65.31:6380 master 5461

 S 192.168.65.33:6379 slave 192.168.65.31:6380

M  192.168.65.32:6380 master 5462

 S 192.168.65.31:6379 myself,slave 192.168.65.32:6380

M  192.168.65.33:6380 master 5461

 S 192.168.65.32:6379 slave 192.168.65.33:6380

This shows that, with healthy replicas, killing two masters still results in the replicas being elected as new masters.

3. Kill one host (one master and one replica)

On host .31, kill all Redis processes:

ps -ef|grep redis|grep -v grep|awk '{print $2}'|xargs kill -9

Resulting state:

redis-trib.py list --password abc --addr  192.168.65.32:6379 

Total 6 nodes, 4 masters, 2 fail

M  192.168.65.31:6380 master,fail 0

M  192.168.65.32:6380 master 5462

 S 192.168.65.31:6379 slave,fail 192.168.65.32:6380

M  192.168.65.33:6379 master 5461

M  192.168.65.33:6380 master 5461

 S 192.168.65.32:6379 myself,slave 192.168.65.33:6380

This shows that, with a healthy replica on another host, killing a whole host (one master plus one replica) still promotes the surviving replica to master.

cluster info

cluster_state:ok

cluster_slots_assigned:16384

cluster_slots_ok:16384

cluster_slots_pfail:0

cluster_slots_fail:0

cluster_known_nodes:6

cluster_size:3

cluster_current_epoch:10

cluster_my_epoch:7

cluster_stats_messages_sent:464420

cluster_stats_messages_received:47526

info cluster

# Cluster

cluster_enabled:1

4. Kill two hosts (two masters and two replicas of the 6 nodes)

On hosts .31 and .32, kill all Redis processes:

ps -ef|grep redis|grep -v grep|awk '{print $2}'|xargs kill -9

Total 6 nodes, 3 masters, 4 fail

M  192.168.65.31:6380 master,fail? 5461

 S 192.168.65.33:6379 slave 192.168.65.31:6380

M  192.168.65.32:6379 master,fail? 5461

 S 192.168.65.33:6380 myself,slave 192.168.65.32:6379

M  192.168.65.32:6380 master,fail? 5462

 S 192.168.65.31:6379 slave,fail? 192.168.65.32:6380

192.168.65.33:6380> get a

(error) CLUSTERDOWN The cluster is down

With two masters and their corresponding replicas all down, part of the slot space is uncovered and the surviving nodes cannot elect replacements, so the whole cluster goes down.

5. Kill two hosts (two masters and two replicas of 9 nodes)

First, add more instances to the cluster.

Original deployment architecture:

   192.168.65.31  M1(6379)  S2(6380)

   192.168.65.32  M2(6379)  S3(6380)

   192.168.65.33  M3(6379)  S1(6380)

changed to the following:

   192.168.65.31  M1(6379)  S2(6380) S3(6381)

   192.168.65.32  M2(6379)  S3(6380) S1(6381)

   192.168.65.33  M3(6379)  S1(6380) S2(6381)

This turns the original three-master/three-replica layout into three masters with six replicas. Cluster state after adding the three new replicas:

redis-trib.py list --password abc --addr  192.168.65.32:6379 

Total 9 nodes, 3 masters, 0 fail

M  192.168.65.31:6380 master 5461

 S 192.168.65.32:6381 slave 192.168.65.31:6380

 S 192.168.65.33:6379 slave 192.168.65.31:6380

M  192.168.65.32:6380 master 5462

 S 192.168.65.31:6379 slave 192.168.65.32:6380

 S 192.168.65.33:6381 slave 192.168.65.32:6380

M  192.168.65.33:6380 master 5461

 S 192.168.65.31:6381 slave 192.168.65.33:6380

 S 192.168.65.32:6379 myself,slave 192.168.65.33:6380

Each master now clearly has two replicas, and the replicas are cross-distributed over the three hosts.

After killing two hosts, the state was:

Total 9 nodes, 3 masters, 6 fail

M  192.168.65.31:6380 master,fail? 5461

 S 192.168.65.32:6381 slave,fail? 192.168.65.31:6380

 S 192.168.65.33:6379 myself,slave 192.168.65.31:6380

M  192.168.65.32:6380 master,fail? 5462

 S 192.168.65.31:6379 slave,fail? 192.168.65.32:6380

 S 192.168.65.33:6381 slave 192.168.65.32:6380

M  192.168.65.33:6380 master 5461

 S 192.168.65.31:6381 slave,fail? 192.168.65.33:6380

 S 192.168.65.32:6379 slave,fail? 192.168.65.33:6380

192.168.65.33:6380> get a
(error) CLUSTERDOWN The cluster is down

Even with two replicas per master, killing two of the three hosts leaves only a minority of master nodes reachable; no failover can be authorized, and the cluster goes down.

III. Data migration test

192.168.65.32:6380> info cluster

# Cluster

cluster_enabled:1

192.168.65.32:6380> cluster node

(error) ERR Wrong CLUSTER subcommand or number of arguments

192.168.65.32:6380> cluster nodes

b06da1f508686c326b8c65856c680ee47cdd7582 192.168.65.31:6380 master - 0 1505900948523 9 connected 10923-16383

c3af50d219c8bfbe05cd5d20cfcb78234c90faa7 192.168.65.31:6379 slave cd798125b14aafe82d7d5e3d68a2b5014a9e7dfc 0 1505900954550 6 connected

edc8139c2dff1628e94ec0a4a93d3536b3ca4440 192.168.65.32:6379 master,fail - 1505899972878 1505899967967 0 disconnected

cd798125b14aafe82d7d5e3d68a2b5014a9e7dfc 192.168.65.32:6380 myself,master - 0 0 6 connected 0-5461

4868d82eaed0fea4983609c82933e5d6a883e782 192.168.65.33:6379 master,fail - 1505900006588 1505900005081 1 disconnected

6172b5c24acc213ba6b0a983f3f497cc0658b6cd 192.168.65.33:6380 master - 0 1505900953545 7 connected 5462-10922
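
The CLUSTER NODES output above is line-oriented with a fixed field order (id, address, flags, master id, ping-sent, pong-recv, epoch, link state, slots), so the per-node summary that redis-trib.py prints can be approximated with a few lines of parsing. A rough sketch:

```python
def summarize_cluster_nodes(raw: str) -> dict:
    """Tally masters, replicas and failed nodes from CLUSTER NODES output."""
    stats = {"masters": 0, "slaves": 0, "fail": 0}
    for line in raw.strip().splitlines():
        flags = line.split()[2].split(",")  # third field is the comma-separated flags list
        if "master" in flags:
            stats["masters"] += 1
        if "slave" in flags:
            stats["slaves"] += 1
        if "fail" in flags or "fail?" in flags:  # fail? = suspected (PFAIL), fail = confirmed
            stats["fail"] += 1
    return stats

nodes = """\
b06da1f508686c326b8c65856c680ee47cdd7582 192.168.65.31:6380 master - 0 1505900948523 9 connected 10923-16383
edc8139c2dff1628e94ec0a4a93d3536b3ca4440 192.168.65.32:6379 master,fail - 1505899972878 1505899967967 0 disconnected
c3af50d219c8bfbe05cd5d20cfcb78234c90faa7 192.168.65.31:6379 slave cd798125b14aafe82d7d5e3d68a2b5014a9e7dfc 0 1505900954550 6 connected
"""
print(summarize_cluster_nodes(nodes))  # -> {'masters': 2, 'slaves': 1, 'fail': 1}
```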

192.168.65.32:6380> cluster info

cluster_state:ok

cluster_slots_assigned:16384

cluster_slots_ok:16384

cluster_slots_pfail:0

cluster_slots_fail:0

cluster_known_nodes:6

cluster_size:3

cluster_current_epoch:9

cluster_my_epoch:6

cluster_stats_messages_sent:403235

cluster_stats_messages_received:39290

 cluster slots

1) 1) (integer) 10923

   2) (integer) 16383

   3) 1) "192.168.65.31"

      2) (integer) 6380

      3) "b06da1f508686c326b8c65856c680ee47cdd7582"

2) 1) (integer) 0

   2) (integer) 5461

   3) 1) "192.168.65.32"

      2) (integer) 6380

      3) "cd798125b14aafe82d7d5e3d68a2b5014a9e7dfc"

   4) 1) "192.168.65.31"

      2) (integer) 6379

      3) "c3af50d219c8bfbe05cd5d20cfcb78234c90faa7"

3) 1) (integer) 5462

   2) (integer) 10922

   3) 1) "192.168.65.33"

      2) (integer) 6380

      3) "6172b5c24acc213ba6b0a983f3f497cc0658b6cd"

How to see which slot a key is stored in:

192.168.65.32:6380> keys *

1) "aaaa"

2) "b"

192.168.65.32:6380> cluster keyslot b

(integer) 3300

Migrate slot 3300 from 192.168.65.32:6380 to 192.168.65.33:6380:

a. On 192.168.65.33:6380 (the target), run: cluster setslot 3300 importing cd798125b14aafe82d7d5e3d68a2b5014a9e7dfc (the source node's ID)

b. On 192.168.65.32:6380 (the source), run: cluster setslot 3300 migrating 6172b5c24acc213ba6b0a983f3f497cc0658b6cd (the target node's ID)

c. On 192.168.65.32:6380, run cluster getkeysinslot 3300 3 (the second argument is the maximum number of keys to return):

192.168.65.32:6380> cluster getkeysinslot 3300 3

1) "b"

d. For each key returned in step c, run MIGRATE:

migrate 192.168.65.33 6380 b 0 1599 replace

Note: if the nodes require authentication, MIGRATE fails with:

(error) ERR Target instance replied with error: NOAUTH Authentication required.

If you are determined to run the migration, comment out masterauth and requirepass on every node first, perform the migration, and re-enable authentication afterwards (the MIGRATE command in Redis 3.2 has no password option).

e. Run: cluster setslot 3300 node 6172b5c24acc213ba6b0a983f3f497cc0658b6cd (the target node's ID)
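
Steps a through e can be scripted. The sketch below only assembles the command strings for the sequence (slot_migration_plan is a hypothetical helper; it does not talk to Redis, and it issues the final SETSLOT ... NODE to both ends of the migration):

```python
def slot_migration_plan(slot, src, src_id, dst, dst_id, keys, timeout_ms=1000):
    """Build the (target address, command) sequence for moving one slot.

    src/dst are 'host:port' strings; keys is what CLUSTER GETKEYSINSLOT returned.
    """
    dst_host, dst_port = dst.rsplit(":", 1)
    plan = [
        (dst, f"CLUSTER SETSLOT {slot} IMPORTING {src_id}"),  # step a, on the target
        (src, f"CLUSTER SETSLOT {slot} MIGRATING {dst_id}"),  # step b, on the source
    ]
    # step d: move every key that step c reported (db 0 in cluster mode)
    plan += [(src, f"MIGRATE {dst_host} {dst_port} {k} 0 {timeout_ms} REPLACE") for k in keys]
    # step e: finalize slot ownership on both ends
    plan += [(src, f"CLUSTER SETSLOT {slot} NODE {dst_id}"),
             (dst, f"CLUSTER SETSLOT {slot} NODE {dst_id}")]
    return plan

for target, cmd in slot_migration_plan(
        3300,
        "192.168.65.32:6380", "cd798125b14aafe82d7d5e3d68a2b5014a9e7dfc",
        "192.168.65.33:6380", "6172b5c24acc213ba6b0a983f3f497cc0658b6cd",
        keys=["b"]):
    print(f"{target}> {cmd}")
```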

How to confirm which node a key is on:

192.168.65.33:6380> get b

(error) MOVED 3300 192.168.65.32:6380

The reply above means key b is in slot 3300 on 192.168.65.32:6380.

The client supports automatic redirection with the -c option:

redis-cli -h 192.168.65.33 -p 6380 -c -a abc

192.168.65.33:6380> get b

-> Redirected to slot [3300] located at 192.168.65.32:6380

"3"
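
Without -c, a client has to follow the redirect itself. A sketch of parsing the reply (parse_moved is a hypothetical helper; the same format covers ASK redirects, which appear while a slot is mid-migration):

```python
def parse_moved(reply: str):
    """Parse 'MOVED <slot> <host>:<port>' (or ASK) error replies.

    Returns (kind, slot, host, port), or None for any other error.
    """
    parts = reply.removeprefix("(error)").split()
    if len(parts) != 3 or parts[0] not in ("MOVED", "ASK"):
        return None
    host, port = parts[2].rsplit(":", 1)
    return parts[0], int(parts[1]), host, int(port)

print(parse_moved("MOVED 3300 192.168.65.31:6379"))
# -> ('MOVED', 3300, '192.168.65.31', 6379)
```

On MOVED the client should also update its slot map, since the ownership change is permanent; on ASK it should retry just the one command on the named node.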

If a master serving at least one slot goes down and no replica is available to take over, the whole cluster goes down and stops serving requests. To let the cluster keep serving the slots that are still covered, change cluster-require-full-coverage to no (the default is yes).
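
In redis.conf this is a single directive (shown here alongside the cluster-mode switch already in use; no other settings need to change):

```
cluster-enabled yes
cluster-require-full-coverage no
```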