[Cluster] Hands-on: Building a MySQL Cluster with MMM
2009-03-26 11:52
(Reposted from http://linux.chinaunix.net/bbs/thread-919036-1-2.html)
MMM stands for mysql-master-master. MMM replicates data between multiple MySQL servers in a master-master topology, allowing concurrent access and improving performance.
The MMM project is hosted on Google Code: http://code.google.com/p/mysql-master-master/
My installation followed the walkthrough at http://blog.kovyrin.net/2007/04/ ... -example-using-mmm/
Here is how I set it up.
I used three RHEL5U1 servers: one as the Monitoring Server and the other two as MySQL servers handling reads and writes.
192.168.20.5 is the Monitoring Server,
192.168.20.9 is db1,
192.168.20.10 is db2.
First install MySQL on all three servers; I installed the MySQL 5.1.22 community edition.
First install three Perl packages:
Algorithm-Diff-1.1902.tar.gz
Proc-Daemon-0.03.tar.gz
DBD-mysql-4.006.tar.gz (depends on the mysql-devel package)
Each Perl package installs the usual way:
perl Makefile.PL
make
make test
make install
Installing the DBD-mysql package (the local MySQL server must be running):
perl Makefile.PL --testuser=root --testpassword=abcdefg (the arguments are the username and password for logging in to the local MySQL server)
make
make test
make install
If installing DBD-mysql fails with "mysql_config not found", install the mysql-devel package.
Install MMM:
./install.pl
First configure master-master replication.
Add to /etc/my.cnf on db1:
server-id = 1
log-bin = mysql-bin
Add to /etc/my.cnf on db2:
server-id = 2
log-bin = mysql-bin
To make sure replication starts cleanly, before starting the MySQL service I removed all files related to the binary log under /var/lib/mysql, including the log-bin files, relay-bin files, mysql-bin.index, mysql_ndb-1-relay-bin.index, relay-log.info and so on, and also dropped the tables in the test database.
After starting MySQL, run these commands in mysql on db1:
grant replication slave on *.* to 'replication'@'%' identified by 'slave';
change master to master_host='192.168.20.10', master_port=3306, master_user='replication', master_password='slave';
start slave;
Run these commands in mysql on db2:
grant replication slave on *.* to 'replication'@'%' identified by 'slave';
change master to master_host='192.168.20.9', master_port=3306, master_user='replication', master_password='slave';
start slave;
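The two `CHANGE MASTER TO` statements are mirror images of each other: each host points at its peer using the same replication account. As a quick illustration (a throwaway helper of my own, not part of MMM), the pair can be generated like this:

```python
# Throwaway helper (not part of MMM): build the mirrored CHANGE MASTER
# statements for a master-master pair using the replication account above.
def change_master_sql(peer_host, user="replication", password="slave", port=3306):
    return (f"CHANGE MASTER TO master_host='{peer_host}', "
            f"master_port={port}, master_user='{user}', "
            f"master_password='{password}';")

for_db1 = change_master_sql("192.168.20.10")  # run on db1, points at db2
for_db2 = change_master_sql("192.168.20.9")   # run on db2, points at db1
print(for_db1)
print(for_db2)
```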
The output of `show slave status\G`:
On db1:
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.20.10
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 106
Relay_Log_File: mysql_ndb-1-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 106
Relay_Log_Space: 412
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)
On db2:
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.20.9
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 106
Relay_Log_File: mysql_ndb-2-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 106
Relay_Log_Space: 412
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)
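What actually matters in that wall of output is a handful of fields: both replication threads running and no errors. A small sketch (my own helper, not an MMM tool) that scrapes the `SHOW SLAVE STATUS\G` text and applies that health test:

```python
# Sketch (not an MMM tool): scrape `SHOW SLAVE STATUS\G` text into a
# dict and check the fields that indicate healthy replication.
def parse_slave_status(text):
    status = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines without a "key: value" shape
            status[key.strip()] = value.strip()
    return status

def replication_healthy(status):
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes"
            and status.get("Last_Errno") == "0")

sample = """\
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Errno: 0
Seconds_Behind_Master: 0
"""
print(replication_healthy(parse_slave_status(sample)))  # True
```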
All MMM configuration files live under /usr/local/mmm/etc.
The configuration file on the monitor node is mmm_mon.conf; it defines the virtual IPs for the reader and writer roles:
#
# Master-Master Manager config (monitor)
#
# Debug mode
debug no
# Paths
pid_path /usr/local/mmm/var/mmmd.pid
status_path /usr/local/mmm/var/mmmd.status
bin_path /usr/local/mmm/bin
# Logging setup
log mydebug
file /usr/local/mmm/var/mmm-debug.log
level debug
log mytraps
file /usr/local/mmm/var/mmm-traps.log
level trap
email root@localhost
# MMMD command socket tcp-port
bind_port 9988
agent_port 9989
monitor_ip 127.0.0.1
# Cluster interface
cluster_interface eth0
# Cluster hosts addresses and access params
host db1
ip 192.168.20.9
port 3306
user rep_monitor
password RepMonitor
mode master
peer db2
host db2
ip 192.168.20.10
port 3306
user rep_monitor
password RepMonitor
mode master
peer db1
#
# Define roles
#
active_master_role writer
# Mysql Reader role
role reader
mode balanced
servers db1, db2
ip 192.168.20.27, 192.168.20.28
# Mysql Writer role
role writer
mode exclusive
servers db1
ip 192.168.20.29
#
# Checks parameters
#
# Ping checker
check ping
check_period 1
trap_period 5
timeout 2
# Mysql checker
# (restarts after 10000 checks to prevent memory leaks)
check mysql
check_period 1
trap_period 2
timeout 2
restart_after 10000
# Mysql replication backlog checker
# (restarts after 10000 checks to prevent memory leaks)
check rep_backlog
check_period 5
trap_period 10
max_backlog 60
timeout 2
restart_after 10000
# Mysql replication threads checker
# (restarts after 10000 checks to prevent memory leaks)
check rep_threads
check_period 1
trap_period 5
timeout 2
restart_after 10000
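The check blocks are the monitor's failure detectors. My reading of `max_backlog 60` (an assumption based on the parameter name, not verified against MMM's source) is that the rep_backlog check fails once the slave lags its master by more than 60 seconds:

```python
# Assumed semantics of the rep_backlog check: fail when
# Seconds_Behind_Master exceeds max_backlog. This is my reading of the
# config above, not code taken from MMM.
MAX_BACKLOG = 60  # seconds, from `max_backlog 60` in mmm_mon.conf

def rep_backlog_ok(seconds_behind_master, max_backlog=MAX_BACKLOG):
    if seconds_behind_master is None:  # NULL: slave threads not running
        return False
    return seconds_behind_master <= max_backlog

print(rep_backlog_ok(0))    # True
print(rep_backlog_ok(300))  # False
```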
mmm_agent.conf on db1:
#
# Master-Master Manager config (agent)
#
# Debug mode
debug no
# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid
bin_path /usr/local/mmm/bin
# Logging setup
log mydebug
file /usr/local/mmm/var/mmm-debug.log
level debug
log mytraps
file /usr/local/mmm/var/mmm-traps.log
level trap
# MMMD command socket tcp-port and ip
bind_port 9989
# Cluster interface
cluster_interface eth1
# Define current server id
this db1
mode slave
# For masters
peer db2
# Cluster hosts addresses and access params
host db1
ip 192.168.20.9
port 3306
user rep_agent
password RepAgent
host db2
ip 192.168.20.10
port 3306
user rep_agent
password RepAgent
mmm_agent.conf on db2:
#
# Master-Master Manager config (agent)
#
# Debug mode
debug no
# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid
bin_path /usr/local/mmm/bin
# Logging setup
log mydebug
file /usr/local/mmm/var/mmm-debug.log
level debug
log mytraps
file /usr/local/mmm/var/mmm-traps.log
level trap
# MMMD command socket tcp-port and ip
bind_port 9989
# Cluster interface
cluster_interface eth1
# Define current server id
this db2
mode slave
# For masters
peer db1
# Cluster hosts addresses and access params
host db1
ip 192.168.20.9
port 3306
user rep_agent
password RepAgent
host db2
ip 192.168.20.10
port 3306
user rep_agent
password RepAgent
Create a new user on db1 and db2 so the monitor node can access the db nodes:
GRANT ALL PRIVILEGES on *.* to 'rep_monitor'@'192.168.20.5' identified by 'RepMonitor';
Start the agent on db1 and db2:
mmmd_agent
Start the monitor process on the monitor node:
mmmd_mon
Reading config file: 'mmm_mon.conf'
$VAR1 = {
'db2' => {
'roles' => [],
'version' => '0',
'state' => 'AWAITING_RECOVERY'
},
'db1' => {
'roles' => [
'reader(192.168.20.27;)',
'reader(192.168.20.28;)',
'writer(192.168.20.29;)'
],
'version' => '0',
'state' => 'ONLINE'
}
};
Role: 'reader(192.168.20.27;)'
Adding role: 'reader' with ip '192.168.20.27'
Role: 'reader(192.168.20.28;)'
Adding role: 'reader' with ip '192.168.20.28'
Role: 'writer(192.168.20.29;)'
Adding role: 'writer' with ip '192.168.20.29'
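Notice that db1, as the first host online, is holding all three virtual IPs while db2 sits in AWAITING_RECOVERY. A toy model of the placement behavior observed in this run (MMM's real balancing algorithm may well differ): each reader VIP goes to the ONLINE host carrying the fewest roles, with ties broken away from the writer.

```python
# Toy model of the role placement observed in this run; MMM's actual
# balancing algorithm may differ. Each reader VIP goes to the ONLINE
# host carrying the fewest roles, ties broken away from the writer.
def place_readers(reader_ips, online_hosts, writer_host):
    load = {h: (1 if h == writer_host else 0) for h in online_hosts}
    placement = {h: [] for h in online_hosts}
    for ip in reader_ips:
        target = min(online_hosts, key=lambda h: (load[h], h == writer_host))
        placement[target].append(ip)
        load[target] += 1
    return placement

READERS = ["192.168.20.27", "192.168.20.28"]
# Only db1 online: it carries both readers as well as the writer.
print(place_readers(READERS, ["db1"], "db1"))
# Once db2 is online, both readers end up on db2 (matching the
# `mmm_control show` output at the end of this article).
print(place_readers(READERS, ["db1", "db2"], "db1"))
```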
Bring the db nodes online from the monitor node:
mmm_control set_online db1
Config file: mmm_mon.conf
Daemon is running!
Command sent to monitoring host. Result: OK: State of 'db1' changed to
ONLINE. Now you can wait some time and check its new roles!
mmm_control set_online db2
Config file: mmm_mon.conf
Daemon is running!
Command sent to monitoring host. Result: OK: State of 'db2' changed to
ONLINE. Now you can wait some time and check its new roles!
Status of the relevant processes on the monitor node:
root 8653 0.6 14.9 256856 39192 ? Sl 15:51 0:01 perl /usr/local/sbin/mmmd_mon
root 8656 0.1 3.1 99868 8160 ? S 15:51 0:00 \_ perl /usr/local/mmm/bin/check/checker rep_backlog
root 8658 0.1 3.1 99856 8144 ? S 15:51 0:00 \_ perl /usr/local/mmm/bin/check/checker mysql
root 8661 0.1 1.8 87004 4932 ? S 15:51 0:00 \_ perl /usr/local/mmm/bin/check/checker ping
root 8926 0.0 0.1 1612 508 ? S 15:55 0:00 |   \_ /usr/local/mmm/bin/sys/fping -q -u -t 500 -C 1 192.168.20.10
root 8662 0.1 3.1 99868 8168 ? S 15:51 0:00 \_ perl /usr/local/mmm/bin/check/checker rep_threads
The relevant processes on db1:
root 8769 0.3 3.0 100520 7988 ? S 15:38 0:04 perl /usr/local/sbin/mmmd_agent
root 11824 15.0 2.5 94764 6588 ? S 15:56 0:00 \_ perl /usr/local/mmm/bin/agent/check_role writer(192.168.20.29;)
root 11825 17.0 2.9 101824 7776 ? S 15:56 0:00 \_ perl /usr/local/mmm/bin/mysql_allow_write
The relevant processes on db2:
root 8731 0.0 3.0 100524 7980 ? S 15:38 0:01 perl /usr/local/sbin/mmmd_agent
Check node status from the monitor node:
mmm_control show
Config file: mmm_mon.conf
Daemon is running!
Servers status:
db1(192.168.20.9): master/ONLINE. Roles: writer(192.168.20.29;)
db2(192.168.20.10): master/ONLINE. Roles: reader(192.168.20.27;), reader(192.168.20.28;)
At this point the cluster is up. Per the status above, db1 holds the writer role and db2 serves the reads. Read/write throughput should be higher than a single MySQL server, and for a workload like 90% reads + 10% writes, adding read nodes should improve overall performance considerably.
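The application then splits its own traffic over the virtual IPs. A hypothetical client-side router (the VIPs come from mmm_mon.conf above; the SELECT-based routing policy is my own, not something MMM provides):

```python
import random

# Hypothetical client-side routing over the MMM virtual IPs. The VIPs
# come from mmm_mon.conf above; the SELECT-based policy is my own.
WRITER_VIP = "192.168.20.29"
READER_VIPS = ["192.168.20.27", "192.168.20.28"]

def pick_host(sql, rng=random):
    first_word = sql.lstrip().split(None, 1)[0].upper()
    if first_word == "SELECT":
        return rng.choice(READER_VIPS)  # spread reads over both readers
    return WRITER_VIP                   # everything else goes to the writer

print(pick_host("INSERT INTO t VALUES (1)"))  # 192.168.20.29
print(pick_host("SELECT * FROM t"))           # one of the reader VIPs
```

A real client would also need to handle lag: a row just written via the writer VIP may not yet be visible through a reader VIP.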
Next, I will benchmark it with sysbench and super-smack.