
MySQL Write High Availability and Read Load Balancing

2015-05-30
DRBD + MySQL + Heartbeat + Pacemaker + LVS + Keepalived
Notes:
1. This is a MySQL high-availability cluster.
2. Read/write splitting is achieved through MySQL master-slave replication.
3. Cluster resources are managed by Pacemaker, whose configuration file is cib.xml rather than the legacy haresources (although haresources is much simpler than cib.xml).
4. Heartbeat provides high availability for the MySQL master; Keepalived provides high availability for the slaves.
###########Architecture overview############
##MySQL master + DRBD primary node
IP: 192.168.1.104――>drbd1
##MySQL master (standby) + DRBD secondary node
IP: 192.168.1.105――>drbd2
##MySQL slaves (real servers)
IP: 192.168.1.106――>RS1
192.168.1.107――>RS2
192.168.1.108――>RS3
##LVS director (DR mode) + Keepalived master node
IP: 192.168.1.109――>lvs1
##LVS director (DR mode) + Keepalived backup node
IP: 192.168.1.110――>lvs2
##VIP used by Heartbeat:
IP: 192.168.1.111――>the VIP used when writing to the database
##VIP used by LVS + Keepalived:
IP: 192.168.1.112――>the VIP used when reading from the database
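Schematically, the traffic paths look like this (writes take the Heartbeat VIP, reads the LVS VIP):

writes ――> 192.168.1.111 (Heartbeat VIP) ――> drbd1 (MySQL master, DRBD primary)
                                             | DRBD replication / failover
                                             drbd2 (standby master, DRBD secondary)
reads  ――> 192.168.1.112 (LVS-DR VIP, Keepalived) ――> RS1 / RS2 / RS3 (MySQL slaves)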

###########Required software#############
1. drbd-8.4.3.tar.gz
2. mysql-5.5.28-linux2.6-x86_64.tar.gz (binary tarball)
3. Reusable-Cluster-Components-glue--glue-1.0.9.tar.bz2
4. ClusterLabs-resource-agents-v3.9.2-0-ge261943.tar.gz
5. pacemaker_1.1.7.orig.tar.gz
6. keepalived-1.2.7-3.el6.x86_64.rpm
Note: the yum repositories are configured as follows:
[local]
baseurl=file:///mnt
gpgcheck=0
[ha]
baseurl=file:///mnt/HighAvailability
gpgcheck=0
[LB]
baseurl=file:///mnt/LoadBalancer
gpgcheck=0
[server]
baseurl=file:///mnt/Server
gpgcheck=0
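These file:// URLs assume the RHEL 6 installation media is mounted at /mnt (an assumption inferred from the paths above); for example:
mount -o loop /path/to/rhel6.iso /mnt   # hypothetical ISO location
yum repolist                            # the local, ha, LB and server repos should all appear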
############Installing and configuring DRBD#############
##Installing DRBD
1. tar xf drbd-8.4.3.tar.gz -C /usr/local/src
2. cd /usr/local/src/drbd-8.4.3
3. ./configure \
--prefix=/usr/local/drbd \
--with-km \
--with-distro=redhat
Error 1:
configure: error: Cannot build utils without flex, either install flex or pass the --without-utils option.
Solution:
yum -y install flex
4. make && make install
5. cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/init.d/drbd
6. chkconfig --add drbd
7. ln -sv /usr/local/drbd/etc/drbd.conf /etc/
8. ln -sv /usr/local/drbd/etc/drbd.d /etc/
9. modprobe drbd
Note: all of the above steps must be performed on both the primary and secondary DRBD servers.
##Configuring DRBD
1. Use fdisk -c /dev/sdb to create a 10 GB partition, /dev/sdb1
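fdisk is interactive; if a scriptable alternative is preferred, parted can create an equivalent partition (a sketch, assuming /dev/sdb is empty):
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 10GiB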
2. Configure /etc/drbd.d/global_common.conf
global {
usage-count yes;
}
common {
handlers {
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
split-brain "/usr/lib/drbd/notify-split-brain.sh root";
out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
}
startup {
wfc-timeout 120;
degr-wfc-timeout 120;
}

disk {
resync-rate 40M;
on-io-error detach;
fencing resource-only;
}
net {
protocol C;
cram-hmac-alg sha1;
shared-secret "mysql-ha";
csums-alg sha1;
verify-alg crc32c;
}
}
3. Configure /etc/drbd.d/r0.res
resource r0 {
device /dev/drbd0;
disk /dev/sdb1;
meta-disk internal;
on drbd1 {
address 192.168.1.104:7789;
}
on drbd2 {
address 192.168.1.105:7789;
}
}
4. drbdadm create-md r0
5. service drbd start
Note: all of the above operations must be performed on both nodes.
6. drbdadm primary r0
Error 1:
0: State change failed: (-2) Need access to UpToDate data
Command 'drbdsetup primary 0' terminated with exit code 17
Solution:
drbdadm -- --overwrite-data-of-peer primary all
7. mkfs -t ext4 /dev/drbd0
8. mkdir /data
9. mount /dev/drbd0 /data
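The resource should now be Primary/Secondary and fully synced, which can be confirmed on the primary (output abridged; the exact banner varies by version):
cat /proc/drbd
# expect a line like: 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C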
#################Installing and configuring MySQL#################
##Installing MySQL
1. tar xf mysql-5.5.28-linux2.6-x86_64.tar.gz -C /usr/local
2. ln -sv mysql-5.5.28-linux2.6-x86_64 mysql #link the extracted directory, not the tarball
3. cd mysql
4. groupadd mysql
5. useradd mysql -g mysql -s /sbin/nologin -M -r
6. chown -R mysql.mysql .
7. chown -R mysql.mysql /data
8. scripts/mysql_install_db --user=mysql --datadir=/data
9. chown -R root .
10. cp support-files/mysql.server /etc/init.d/mysqld
11. cp support-files/my-large.cnf /data/my.cnf
12. ln -sv /data/my.cnf /etc/
13. ./bin/mysqld_safe --user=mysql &
14. Edit /etc/my.cnf
[mysqld]
datadir=/data
15. echo "PATH=$PATH:/usr/local/mysql/bin" > /etc/profile.d/mysql.sh
16. . /etc/profile
17. Edit /etc/init.d/mysqld
datadir=/data
Note: the same steps must also be performed on the three MySQL slaves. On the standby master node, everything except steps 8, 11 and 13 must be done as well.
##Configuring MySQL
Master node:
1. Edit my.cnf
[mysqld]
server-id=11
log-bin=mysql-bin #already enabled in the my-large.cnf sample
sync-binlog=1
innodb-file-per-table=1
2. Create the replication account for the slaves; run the grant once for each slave address (192.168.1.106, .107 and .108), e.g.:
grant replication client, replication slave on *.* to 'repl'@'192.168.1.106' identified by '123';
Slave nodes:
1. Edit my.cnf
[mysqld]
server-id=12 #the other two slaves use 13 and 14
read-only=1
relay-log=relay-bin
innodb-file-per-table=1
2. Point the slave at the master, and at the binlog file and position to replicate from:
mysql> change master to
-> master_host='192.168.1.111',
-> master_user='repl',
-> master_password='123',
-> master_port=3306;
(add master_log_file and master_log_pos, taken from SHOW MASTER STATUS on the master, when not replicating from the very beginning of the binlogs)
3. start slave;
4. show slave status\G #check whether replication is configured correctly
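On a healthy slave the output should include (abridged):
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0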
##################Installing Heartbeat-HA################
>>>>>>>>>>>>Define environment variables<<<<<<<<<<<<<<<<
export PREFIX=/usr/local/heartbeat
export LCRSODIR=$PREFIX/libexec/lcrso
export CLUSTER_USER=hacluster
export CLUSTER_GROUP=haclient
export CFLAGS="$CFLAGS -I$PREFIX/include -L$PREFIX/lib64 -L$PREFIX/lib"
Run the exports above directly in the shell, or add them to /root/.bash_profile.
>>>>>>>>>>Installing Reusable-Cluster-Components-glue--glue-1.0.9.tar.bz2<<<<<<
1. groupadd -r haclient
2. useradd hacluster -g haclient -r -M -s /sbin/nologin
3. tar xf Reusable-Cluster-Components-glue--glue-1.0.9.tar.bz2
4. ./autogen.sh #Note: this script aborts if autoconf, automake and libtool are missing, so install them first: yum -y install autoconf automake libtool
It also stops with libtoolize: `COPYING.LIB' not found in `/usr/share/libtool/libltdl' unless libtool-ltdl-devel is installed:
yum -y install libtool-ltdl-devel
Without libtool-ltdl-devel the script may still appear to succeed, but make then fails with:
gmake[1]: Entering directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/libltdl'
gmake[1]: *** No rule to make target `all'. Stop.
gmake[1]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/libltdl'
make: *** [all-recursive] Error 1
Summary:
In short, before running autogen.sh, install these five packages:
yum -y install autoconf automake libtool libtool-ltdl-devel gettext
5. ./configure --prefix=$PREFIX --enable-fatal-warnings=no --with-daemon-user=$CLUSTER_USER --with-daemon-group=$CLUSTER_GROUP --with-ocf-root=$PREFIX
#Note: configure complains here about missing glib2-devel and libxml2 headers, so install:
yum -y install glib2-devel libxml2-devel
It also reports configure: error: BZ2 libraries not found, which requires bzip2-devel:
yum -y install bzip2-devel
6. make
#Note: make fails here with error 1:
./.libs/libplumb.so: undefined reference to `uuid_parse'
./.libs/libplumb.so: undefined reference to `uuid_generate'
./.libs/libplumb.so: undefined reference to `uuid_copy'
./.libs/libplumb.so: undefined reference to `uuid_is_null'
./.libs/libplumb.so: undefined reference to `uuid_unparse'
./.libs/libplumb.so: undefined reference to `uuid_clear'
./.libs/libplumb.so: undefined reference to `uuid_compare'
collect2: ld returned 1 exit status
gmake[2]: *** [ipctest] Error 1
gmake[2]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/lib/clplumbing'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/lib'
make: *** [all-recursive] Error 1
Solution:
Install libuuid-devel, then re-run step 5 (configure):
yum -y install libuuid-devel
Error 2:
gmake[2]: *** [hb_report.8] Error 4
gmake[2]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/doc'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/doc'
make: *** [all-recursive] Error 1
Solution: yum -y install docbook-style-xsl
7. make install
8. echo /usr/local/heartbeat/lib >> /etc/ld.so.conf.d/heartbeat.conf
9. echo /usr/local/heartbeat/lib64 >> /etc/ld.so.conf.d/heartbeat.conf
10. ldconfig
Summary:
Before building cluster-glue, install the following packages:
yum -y install autoconf automake libtool libtool-ltdl-devel gettext glib2-devel libxml2-devel bzip2-devel libuuid-devel docbook-style-xsl

>>>>>>>>>>>>>>Installing heartbeat-3-0-7e3a82377fa8.tar.bz2<<<<<<<<<<<<<<<
1. tar xf Heartbeat-3-0-7e3a82377fa8.tar.bz2
2. ./bootstrap
3. ./configure --prefix=$PREFIX --enable-fatal-warnings=no
#Error 1:
configure: error: Core development headers were not found
This happens when the header files cannot be found; point the build at them explicitly:
CFLAGS=-I/usr/local/heartbeat/include
Error 2:
gmake[2]: *** [api_test] Error 1
gmake[2]: Leaving directory `/usr/local/src/Heartbeat-3-0-7e3a82377fa8/lib/hbclient'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/usr/local/src/Heartbeat-3-0-7e3a82377fa8/lib'
make: *** [all-recursive] Error 1
This happens when the library files cannot be found; point the build at them explicitly:
LDFLAGS=-L/usr/local/heartbeat/lib
Error 3:
In file included from ../include/lha_internal.h:41,
from strlcpy.c:1:
/usr/local/heartbeat/include/heartbeat/glue_config.h:105:1: error: "HA_HBCONF_DIR" redefined
In file included from ../include/lha_internal.h:38,
from strlcpy.c:1:
../include/config.h:390:1: error: this is the location of the previous definition
gmake[1]: *** [strlcpy.lo] Error 1
gmake[1]: Leaving directory `/usr/local/src/Heartbeat-3-0-7e3a82377fa8/replace'
make: *** [all-recursive] Error 1
Solution:
Delete or comment out line 105 of /usr/local/heartbeat/include/heartbeat/glue_config.h.

4. make && make install
Summary: when building from source into a custom install prefix, it is best to point the build explicitly at the header and library locations, as in this example:
CFLAGS=-I/usr/local/heartbeat/include
LDFLAGS=-L/usr/local/heartbeat/lib
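Both can also be passed inline on the configure invocation (a sketch of the pattern, using this install's paths):
CFLAGS=-I/usr/local/heartbeat/include LDFLAGS=-L/usr/local/heartbeat/lib ./configure --prefix=$PREFIX --enable-fatal-warnings=no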
>>>>>>>>>>>>Installing ClusterLabs-resource-agents-v3.9.2-0-ge261943.tar.gz<<<<
1. tar xf ClusterLabs-resource-agents-v3.9.2-0-ge261943.tar.gz
2. Edit configure.ac:
Replace OCF_RA_DIR_PREFIX="${prefix}/$OCF_RA_DIR" with OCF_RA_DIR_PREFIX="$OCF_RA_DIR"
Replace OCF_LIB_DIR_PREFIX="${prefix}/$OCF_LIB_DIR" with OCF_LIB_DIR_PREFIX="$OCF_LIB_DIR"
3. ./autogen.sh
4. ./configure \
--prefix=$PREFIX \
--enable-fatal-warnings=no
5. make && make install
Error 1:
/heartbeat/IPv6addr: error while loading shared libraries: libplumb.so.2: cannot open shared object file: No such file or directory
gmake[2]: *** [metadata-IPv6addr.xml] Error 127
Solution:
1. echo /usr/local/heartbeat/lib >> /etc/ld.so.conf.d/heartbeat.conf
2. ldconfig
3. then rebuild
>>>>>>>>>>>>>>>Installing Pacemaker<<<<<<<<<<<<<<<<<<
1. tar xf pacemaker_1.1.7.orig.tar.gz
2. ./autogen.sh
3. ./configure --prefix=$PREFIX --enable-fatal-warnings=no
Note: all of the above must be performed on both DRBD nodes.
Error 1:
configure: error: The libxslt developement headers were not found
Solution:
yum -y install libxslt-devel
Error 2:
checking for cpg... configure: error: Package requirements (libcpg) were not met: No package 'libcpg' found
Solution:
yum -y install corosynclib-devel
4. make && make install
5. echo "PATH=$PATH:/usr/local/heartbeat/sbin:/usr/local/heartbeat/bin" >>/etc/profile.d/heartbeat.sh
6. . /etc/profile.d/heartbeat.sh
Error 1: after completing the steps above, every command except crm (crm_node, crm_report and so on) works normally, but running crm reports the following error:
abort: couldn't find crm libraries in [/usr/local/heartbeat/sbin /usr/local/heartbeat/lib64/python2.6 /root /usr/lib64/python26.zip /usr/lib64/python2.6 /usr/lib64/python2.6/plat-linux2 /usr/lib64/python2.6/lib-tk /usr/lib64/python2.6/lib-old /usr/lib64/python2.6/lib-dynload /usr/lib64/python2.6/site-packages /usr/lib64/python2.6/site-packages/PIL /usr/lib64/python2.6/site-packages/gst-0.10 /usr/lib64/python2.6/site-packages/gtk-2.0 /usr/lib64/python2.6/site-packages/webkit-1.0 /usr/lib/python2.6/site-packages /usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info]
(check your install and PYTHONPATH)
Solution:
1. echo "export PYTHONPATH=/usr/local/heartbeat/lib64/python2.6/site-packages" >>/etc/profile.d/heartbeat.sh
2. . /etc/profile.d/heartbeat.sh
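crm should now start correctly; a quick smoke test:
crm status        # prints node and resource status (equivalent to crm_mon -1)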
########################Configuring Heartbeat######################
1. cd /usr/local/heartbeat/share/doc/heartbeat
2. cp ha.cf haresources authkeys /usr/local/heartbeat/etc/ha.d
3. cd /usr/local/heartbeat/etc/ha.d
4. chmod 600 authkeys
5. vim /etc/hosts
192.168.1.104 drbd1
192.168.1.105 drbd2
6. vim ha.cf
autojoin none
bcast eth0
warntime 15
deadtime 60
initdead 120
keepalive 2
compression bz2
compression_threshold 2
debug 0
node drbd1
node drbd2
pacemaker respawn
7. vim authkeys
auth 1
1 crc
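Note: crc provides integrity checking only, with no authentication; on an untrusted network the usual choice is sha1 with a shared secret (a sketch):
auth 1
1 sha1 SomeSharedSecret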
8. service heartbeat start
Error 1:
/usr/local/heartbeat/etc/ha.d/shellfuncs: line 96: /usr/lib/ocf/lib//heartbeat/ocf-shellfuncs: No such file or directory
Solution:
Edit /usr/local/heartbeat/etc/ha.d/shellfuncs so that it sources:
. /usr/local/heartbeat/usr/lib/ocf/lib//heartbeat/ocf-shellfuncs

Error 2:
Starting High-Availability services: Heartbeat failure [rc=6]. Failed.
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Illegal directive [bcast] in /usr/local/heartbeat/etc/ha.d//ha.cf
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Compression module(bz2) not found
heartbeat[5175]: 2013/06/14_00:24:39 info: Pacemaker support: respawn
heartbeat[5175]: 2013/06/14_00:24:39 WARN: File /usr/local/heartbeat/etc/ha.d//haresources exists.
heartbeat[5175]: 2013/06/14_00:24:39 WARN: This file is not used because pacemaker is enabled
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Client child command [/usr/local/heartbeat/lib/heartbeat/ccm] is not executable
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Directive respawn hacluster /usr/local/heartbeat/lib/heartbeat/ccm failed
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Heartbeat not started: configuration error.
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Configuration error, heartbeat not started.
Solution:

1. ln -svf /usr/local/heartbeat/lib64/heartbeat/ccm /usr/local/heartbeat/lib/heartbeat/
2. ln -svf /usr/local/heartbeat/lib64/heartbeat/plugins/RAExec/* /usr/local/heartbeat/lib/heartbeat/plugins/RAExec/
3. ln -svf /usr/local/heartbeat/lib64/heartbeat/plugins/* /usr/local/heartbeat/lib/heartbeat/plugins/

9. chkconfig heartbeat on ; chkconfig logd on
######################Configuring Pacemaker#############################
1. The configuration, as displayed by crm configure show, is as follows:
node $id="97ae394b-5f7c-472c-85a7-8e22de0c656b" drbd2 \
attributes standby="off"
node $id="e0c675cd-57aa-4975-b36c-8564c13c714a" drbd1 \
attributes standby="off"
primitive drbd_r0 ocf:heartbeat:drbd \
params drbd_resource="r0" \
op monitor interval="30s" role="Master" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s"
primitive fs ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/data" fstype="ext4" \
op start interval="0" timeout="60s" \
op stop interval="0" timeout="60s" \
meta target-role="Started"
primitive myip ocf:heartbeat:IPaddr \
params ip="192.168.1.111"
primitive mysql lsb:mysqld
group mysqlservice fs myip mysql
ms ms_drbd_mysql drbd_r0 \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation fs_with_drbd_r0 inf: mysqlservice ms_drbd_mysql:Master
colocation mysql_on_drbd_master inf: mysql ms_drbd_mysql:Master
order fs_after_drbd inf: ms_drbd_mysql:promote fs:start
order mysql_after_fs inf: fs:start mysql:start
property $id="cib-bootstrap-options" \
dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
cluster-infrastructure="Heartbeat" \
no-quorum-policy="ignore" \
stonith-enabled="false" \
last-lrm-refresh="1371372103" \
expected-quorum-votes="2"
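With the configuration loaded, placement can be checked and a failover rehearsed (a sketch; node names as defined above):
crm_mon -1               # mysqlservice and the DRBD Master role should be on drbd1
crm node standby drbd1   # all resources should migrate to drbd2
crm node online drbd1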
###################Installing and configuring Keepalived+LVS###########################
>>>>>>>>>>>>>>>>>>>>>Installation<<<<<<<<<<<<<<<<<<<<<<
1. yum -y localinstall keepalived-1.2.7-3.el6.x86_64.rpm && yum -y install ipvsadm #localinstall only takes local rpm files; ipvsadm comes from the LB repo
Note: install on both lvs1 and lvs2.
>>>>>>>>>>>>>>>>>>>>>Configuration<<<<<<<<<<<<<<<<<<<<<<<<<<<
1. cd /etc/keepalived
2. vim keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_master #any value works, but master and backup should differ
}
vrrp_instance VI_1 {
state MASTER #use BACKUP on lvs2
interface eth0
virtual_router_id 51
priority 110 #lvs2 must use a value lower than 110
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.112
}
}
virtual_server 192.168.1.112 3306 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.1.106 3306 {
MISC_CHECK {
misc_path "/etc/keepalived/check_slave.sh 192.168.1.106"
misc_dynamic
}
}
real_server 192.168.1.107 3306 {
MISC_CHECK {
misc_path "/etc/keepalived/check_slave.sh 192.168.1.107"
misc_dynamic
}
}
real_server 192.168.1.108 3306 {
MISC_CHECK {
misc_path "/etc/keepalived/check_slave.sh 192.168.1.108"
misc_dynamic
}
}
}
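Once keepalived is running on the active director, the virtual service table can be inspected (output sketch):
ipvsadm -Ln
# expect: TCP 192.168.1.112:3306 rr persistent 50
#   -> 192.168.1.106:3306 Route
#   -> 192.168.1.107:3306 Route
#   -> 192.168.1.108:3306 Route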
3. Write the check_slave.sh health-check script (a Perl script, despite the .sh name):
#!/usr/bin/perl -w
# Connects to the slave with DBI and checks replication health for keepalived's MISC_CHECK:
# exit 0 = healthy, exit 1 = remove the real server from the pool.
use DBI;
use DBD::mysql;
$host=$ARGV[0];   # slave address, passed as the script argument by keepalived
$user="root";
$pw="123";
$port=3306;
$db="test";
$SBM=120;         # maximum tolerated Seconds_Behind_Master
$dbh = DBI->connect("DBI:mysql:$db:$host:$port", $user, $pw, {RaiseError => 0, PrintError => 0});
if (!defined($dbh)) {
exit 1;
}
$slaveStatus = $dbh->prepare("show slave status");
$slaveStatus->execute;
$io = "";
$sql = "";
$sbm = "";
while (my $ref = $slaveStatus->fetchrow_hashref()){
$io = $ref->{'Slave_IO_Running'};
$sql = $ref->{'Slave_SQL_Running'};
$sbm = $ref->{'Seconds_Behind_Master'};
}
$slaveStatus->finish;
$dbh->disconnect();
if ( $io ne "Yes" || $sql ne "Yes") { # also catches an empty SHOW SLAVE STATUS result
exit 1;
}
else{
if ( $sbm > $SBM ) {
exit 1;
}
else {
exit 0;
}
}
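The script can be exercised by hand before wiring it into keepalived:
perl /etc/keepalived/check_slave.sh 192.168.1.106; echo $?   # 0 = healthy, 1 = remove from the pool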
4. On RS1, RS2 and RS3, create the account used by the health check. Since keepalived runs the check on the directors, the grant must allow connections from the director addresses (192.168.1.109 and 192.168.1.110); run it once per DIRECTOR_IP:
grant replication client on *.* to 'root'@'DIRECTOR_IP' identified by '123';
5. On RS1, RS2 and RS3, write the LVS real-server control script:

vim /etc/init.d/lvsrs
#!/bin/bash
#
#chkconfig: 35 70 50
#
vip=192.168.1.112 # the read VIP served by LVS
lo=lo:0
retval=0
start() {
ifconfig $lo $vip netmask 255.255.255.255 up
route add -host $vip dev $lo
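# suppress ARP for the VIP on this host so that only the director answers ARP requests (required for LVS-DR)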
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
}

stop() {
ifconfig $lo down
route del -host $vip
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
}
case $1 in
start)
start
retval=$?
[ $retval = 0 ] && echo "Starting lvs OK"
;;
stop)
stop
retval=$?
[ $retval = 0 ] && echo "Starting lvs Failed"
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
;;
esac
exit 0
6. chmod +x /etc/keepalived/check_slave.sh
7. chmod +x /etc/init.d/lvsrs
8. chkconfig --add lvsrs
9. /etc/init.d/lvsrs start
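Finally, the whole stack can be exercised from a client machine (a sketch; assumes an account that is allowed to connect from that client):
# writes go through the Heartbeat VIP
mysql -h 192.168.1.111 -uroot -p -e 'create database if not exists ha_test;'
# reads go through the LVS VIP; repeated queries should return the different server-ids of RS1/RS2/RS3
mysql -h 192.168.1.112 -uroot -p -e 'select @@server_id;'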

This article originally appeared on the “一切皆有可能” blog; please retain the source: http://noican.blog.51cto.com/4081966/1656579