
MySQL High Availability with MMM

2015-11-26 10:51
1. MMM overview

MMM (Master-Master replication Manager for MySQL) is a set of scripts that supports failover and day-to-day management of a MySQL master-master setup. Written in Perl, it is mainly used to monitor and manage MySQL master-master (dual-master) replication. Although it is called master-master replication, the application is only allowed to write to one master at any given time; the standby master serves part of the read traffic, which keeps it warm and speeds up the switchover when the active master changes. In short, MMM implements failover, and the helper scripts it ships with can also balance read load across multiple slaves.

MMM can remove the virtual IP from a server in the group whose replication lag is too high, either automatically or manually; it can also help with tasks such as backing up data and resynchronizing two nodes. Because MMM cannot fully guarantee data consistency, it suits scenarios where consistency requirements are modest but business availability must be maximized. For workloads with strict consistency requirements, an MMM-based high-availability architecture is strongly discouraged.

2. Architecture diagram



3. MySQL-MMM environment

--The hosts are configured as follows:
Role            IP address        Hostname      server-id
monitoring      192.168.0.20      monitor       -
master1         192.168.0.21      db1           21
master2         192.168.0.22      db2           22
slave1          192.168.0.23      db3           23

--The service (virtual) IPs used by the application are as follows:
IP address          Role          Description
192.168.0.30        write         Applications connect to this IP to write to the active master
192.168.0.31        read          Applications connect to this IP for reads
192.168.0.32        read          Applications connect to this IP for reads
192.168.0.33        read          Applications connect to this IP for reads
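
To illustrate how an application would use these service IPs (a minimal sketch; the account app_user and its password are hypothetical and are not created anywhere in this article):

mysql -h 192.168.0.30 -u app_user -p    # writes: app_user is a hypothetical application account; the writer VIP follows the active master
mysql -h 192.168.0.31 -u app_user -p    # reads: any of the three reader VIPs can be used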


4. Setting up MySQL master-master and master-slave replication

Install MySQL with yum:

--Install MySQL on the database servers (192.168.0.21-23); this test environment uses a simple yum install
1. Install the MySQL server:
yum install mysql-server
yum install mysql-devel
2. Install the MySQL client:
yum install mysql
3. Start the MySQL service:
service mysqld start    (or /etc/init.d/mysqld start)
Stop:
service mysqld stop
Restart:
service mysqld restart
4. Set the root password:
mysqladmin -u root password 123456
5. Log in:
mysql -uroot -p123456

Stop MySQL with: service mysqld stop
Default data directory: /var/lib/mysql
Default my.cnf path: /etc/my.cnf
Default mysqld init script: /etc/init.d/mysqld


Edit the my.cnf parameters:

[root@mysqlm1 mysql]# vi /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
default-storage-engine = innodb
replicate-ignore-db = mysql
binlog-ignore-db = mysql
## server-id must be unique on each of the three MySQL servers: 21, 22 and 23 respectively
server-id = 21
log-bin = /var/lib/mysql/mysql-bin.log
log_bin_index = /var/lib/mysql/mysql-bin.log.index
relay_log = /var/lib/mysql/mysql-bin.relay
relay_log_index = /var/lib/mysql/mysql-bin.relay.index
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
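
After editing my.cnf, restart MySQL and confirm the settings took effect; a minimal check using standard MySQL system variables:

service mysqld restart
mysql -uroot -p123456 -e "show variables like 'server_id'; show variables like 'log_bin';"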


Create the replication and MMM users and grant privileges:

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%'   IDENTIFIED BY 'agent';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.0.%' IDENTIFIED BY 'replication';
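
A quick sanity check that the accounts exist with the intended privileges (a minimal sketch):

SHOW GRANTS FOR 'mmm_monitor'@'192.168.0.%';
SHOW GRANTS FOR 'mmm_agent'@'192.168.0.%';
SHOW GRANTS FOR 'replication'@'192.168.0.%';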


Setting up master-master replication:

--Back up the first master (192.168.0.21)
mysqldump  -uroot -p123456 --events --master-data=2 -A -B --single-transaction|gzip >/opt/rep.sql.gz

--Check the master log file and position recorded in the dump (192.168.0.21)
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=658;
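
The file and position above come from the commented CHANGE MASTER statement that --master-data=2 writes into the dump; one way to pull that line out of the compressed backup (a minimal sketch):

zcat /opt/rep.sql.gz | grep -m 1 'CHANGE MASTER'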
scp /opt/rep.sql.gz 192.168.0.22:/opt/

--Restore the dump on the second master (192.168.0.22)
gunzip /opt/rep.sql.gz
mysql -uroot -p123456 </opt/rep.sql

CHANGE MASTER TO
MASTER_HOST='192.168.0.21',
MASTER_PORT=3306,
MASTER_USER='replication',
MASTER_PASSWORD='replication',
MASTER_LOG_FILE='mysql-bin.000003',
MASTER_LOG_POS=658;

start slave;
show slave status\G

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

--Check the second master's binlog position (with tables locked) (192.168.0.22)
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000003
Position: 1270
Binlog_Do_DB:
Binlog_Ignore_DB: mysql
1 row in set (0.00 sec)

--Set up the master-master link in the other direction; run on the first master (192.168.0.21)
CHANGE MASTER TO
MASTER_HOST='192.168.0.22',
MASTER_PORT=3306,
MASTER_USER='replication',
MASTER_PASSWORD='replication',
MASTER_LOG_FILE='mysql-bin.000003',
MASTER_LOG_POS=1270;

start slave;
show slave status\G

Slave_IO_Running: Yes
Slave_SQL_Running: Yes


Setting up master-slave replication:

--For the master-slave link, back up the second master (192.168.0.22)
mysqldump  -uroot -p123456 --events --master-data=1 -A -B --single-transaction|gzip >/opt/rep_master2.sql.gz
scp /opt/rep_master2.sql.gz 192.168.0.23:/opt/

--Restore the dump on the slave (192.168.0.23)
gunzip /opt/rep_master2.sql.gz
mysql -uroot -p123456 </opt/rep_master2.sql

--Point the slave at the second master (--master-data=1 already wrote an active CHANGE MASTER statement with the binlog file and position into the dump, so they are not repeated here)
CHANGE MASTER TO
MASTER_HOST='192.168.0.22',
MASTER_PORT=3306,
MASTER_USER='replication',
MASTER_PASSWORD='replication';

start slave;
show slave status\G

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
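
To confirm that changes flow through the whole chain (db1 -> db2 -> db3), a simple end-to-end check; the database name mmm_test is hypothetical:

--on 192.168.0.21 (mmm_test is a throwaway test database)
mysql -uroot -p123456 -e "create database mmm_test;"
--on 192.168.0.23, a moment later
mysql -uroot -p123456 -e "show databases like 'mmm_test';"
--clean up, again on 192.168.0.21
mysql -uroot -p123456 -e "drop database mmm_test;"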


5. Setting up MySQL-MMM

Configure the Aliyun yum repository

1. Back up the existing repo file

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

2. Download the new CentOS-Base.repo into /etc/yum.repos.d/

CentOS 5:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-5.repo

CentOS 6:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo

CentOS 7:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
3. Run yum makecache to rebuild the cache


Install mysql-mmm with yum

--Install mysql-mmm-agent on the MySQL servers (192.168.0.21-23)
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum -y install mysql-mmm-agent

--Install mysql-mmm-monitor on the monitor server (192.168.0.20)
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum -y install mysql-mmm-monitor



Configure mysql-mmm on the MySQL servers

--192.168.0.21-23
# cluster_interface must match the server's actual NIC; the replication and agent_user passwords must match the accounts created earlier
[root@mysqlm1 mysql-mmm]# vim /etc/mysql-mmm/mmm_common.conf
active_master_role      writer

<host default>
cluster_interface       eth1
pid_path                /var/run/mysql-mmm/mmm_agentd.pid
bin_path                /usr/libexec/mysql-mmm/
replication_user        replication
replication_password    replication
agent_user              mmm_agent
agent_password          agent
</host>

<host db1>
ip      192.168.0.21
mode    master
peer    db2
</host>

<host db2>
ip      192.168.0.22
mode    master
peer    db1
</host>

<host db3>
ip      192.168.0.23
mode    slave
</host>

<role writer>
hosts   db1, db2
ips     192.168.0.30
mode    exclusive
</role>

<role reader>
hosts   db1, db2, db3
ips     192.168.0.31, 192.168.0.32, 192.168.0.33
mode    balanced
</role>

[root@mysqlm1 init.d]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf

# The 'this' variable refers to this server.  Proper operation requires
# that 'this' server (db1 by default), as well as all other servers, have the
# proper IP addresses set in mmm_common.conf.
# On each server, set 'this' to that server's own db name (db1, db2, or db3)
this db1


Configure mysql-mmm on the monitor server

--192.168.0.20
# The monitor_user password must match the account created earlier
[root@mysqlm0 mysql-mmm]# vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
ip                  127.0.0.1
pid_path            /var/run/mysql-mmm/mmm_mond.pid
bin_path            /usr/libexec/mysql-mmm
status_path         /var/lib/mysql-mmm/mmm_mond.status
ping_ips            192.168.0.21,192.168.0.22,192.168.0.23
auto_set_online     60

# The kill_host_bin does not exist by default, though the monitor will
# throw a warning about it missing.  See the section 5.10 "Kill Host
# Functionality" in the PDF documentation.
#
# kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host
#
</monitor>

<host default>
monitor_user        mmm_monitor
monitor_password    monitor
</host>

debug 0


Start the agent and monitor

--192.168.0.21-23
chkconfig mysql-mmm-agent on
service mysql-mmm-agent start

--192.168.0.20
vi /etc/default/mysql-mmm-monitor
ENABLED=1

chkconfig mysql-mmm-monitor on
service mysql-mmm-monitor start


MySQL-MMM administration

--On the monitor node, 192.168.0.20
[root@mysqlm0 sbin]# mmm_control show
db1(192.168.0.21) master/ONLINE. Roles: reader(192.168.0.32), writer(192.168.0.30)
db2(192.168.0.22) master/ONLINE. Roles: reader(192.168.0.33)
db3(192.168.0.23) slave/ONLINE. Roles: reader(192.168.0.31)

[root@mysqlm0 sbin]# mmm_control --help
Invalid command '--help'

Valid commands are:
help                              - show this message
ping                              - ping monitor
show                              - show status
checks [<host>|all [<check>|all]] - show checks status
set_online <host>                 - set host <host> online
set_offline <host>                - set host <host> offline
mode                              - print current mode.
set_active                        - switch into active mode.
set_manual                        - switch into manual mode.
set_passive                       - switch into passive mode.
move_role [--force] <role> <host> - move exclusive role <role> to host <host>
(Only use --force if you know what you are doing!)
set_ip <ip> <host>                - set role with ip <ip> to host <host>

[root@mysqlm0 sbin]# mmm_control checks
db2  ping         [last change: 2015/11/26 11:24:55]  OK
db2  mysql        [last change: 2015/11/26 11:24:55]  OK
db2  rep_threads  [last change: 2015/11/26 11:24:55]  OK
db2  rep_backlog  [last change: 2015/11/26 11:24:55]  OK: Backlog is null
db3  ping         [last change: 2015/11/26 11:24:55]  OK
db3  mysql        [last change: 2015/11/26 11:24:55]  OK
db3  rep_threads  [last change: 2015/11/26 11:24:55]  OK
db3  rep_backlog  [last change: 2015/11/26 11:24:55]  OK: Backlog is null
db1  ping         [last change: 2015/11/26 11:24:55]  OK
db1  mysql        [last change: 2015/11/26 11:24:55]  OK
db1  rep_threads  [last change: 2015/11/26 11:24:55]  OK
db1  rep_backlog  [last change: 2015/11/26 11:24:55]  OK: Backlog is null

--Check the MySQL-MMM log
[root@mysqlm0 mysql-mmm]# tail -100 /var/log/mysql-mmm/mmm_mond.log
2015/11/26 12:03:58 FATAL State of host 'db3' changed from REPLICATION_FAIL to ONLINE
2015/11/26 12:03:58 FATAL State of host 'db1' changed from REPLICATION_FAIL to ONLINE
2015/11/26 12:04:43 FATAL State of host 'db2' changed from HARD_OFFLINE to AWAITING_RECOVERY
2015/11/26 12:04:46 FATAL State of host 'db2' changed from AWAITING_RECOVERY to ONLINE because it was down for only 48 seconds
2015/11/26 12:05:01 FATAL State of host 'db3' changed from ONLINE to REPLICATION_FAIL
2015/11/26 12:11:33 FATAL State of host 'db3' changed from REPLICATION_FAIL to ONLINE


Verify the VIPs

--192.168.0.21
[root@mysqlm1 mysql-mmm]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:66:22:74 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.221/24 brd 10.0.0.255 scope global eth0
inet6 fe80::a00:27ff:fe66:2274/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:c0:23:69 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.21/24 brd 192.168.0.255 scope global eth1
inet 192.168.0.31/32 scope global eth1
inet 192.168.0.30/32 scope global eth1
inet6 fe80::a00:27ff:fec0:2369/64 scope link
valid_lft forever preferred_lft forever
--Check the other nodes in the same way with ip addr


Notes

During a switchover, if the second master node is down, the slave will automatically be re-pointed at the first master, and the replication I/O thread may then fail because the binlog coordinates do not match. You can fix this by adjusting the CHANGE MASTER position to reconfigure the slave (or simply rebuild the master-slave replication from scratch).
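
A minimal sketch of manually re-pointing the slave (192.168.0.23) at the surviving first master; the binlog file name and position below are placeholders and must be taken from SHOW MASTER STATUS on 192.168.0.21:

stop slave;
CHANGE MASTER TO
MASTER_HOST='192.168.0.21',
MASTER_PORT=3306,
MASTER_USER='replication',
MASTER_PASSWORD='replication',
MASTER_LOG_FILE='mysql-bin.000004',   -- placeholder: use the File value from SHOW MASTER STATUS on db1
MASTER_LOG_POS=107;                   -- placeholder: use the Position value from SHOW MASTER STATUS on db1
start slave;
show slave status\G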