
How to Quickly Build a High-Availability Cluster (Keepalived + Haproxy + Nginx)

Reposted from 次元立方网: http://www.it165.net/admin/html/201405/2957.html

Related post: HA-Proxy + Nginx web load balancing (the haproxy part): http://www.it165.net/admin/html/201405/3171.html

 

Components and their roles
Keepalived: provides high availability for the Haproxy service, configured in a dual-master model;
Haproxy: load balancing for the Nginx servers and separation of static and dynamic requests;
Nginx: high-speed handling of HTTP requests;
 
Architecture design diagram



 
Key concepts
How vrrp_script changes a node's weight (priority)
A return value of 0 from the script in vrrp_script is treated as a successful check; any other value is treated as a failure.
When weight is positive, the weight is added to priority on a successful check and nothing is added on a failed check:
Master check fails: failover happens when master priority < backup priority + weight.
Master check succeeds: the master stays master while master priority + weight > backup priority + weight.
When weight is negative, a successful check leaves priority unchanged, while a failed check lowers it to priority - abs(weight):
Master check fails: failover happens when master priority - abs(weight) < backup priority.
Master check succeeds: the master stays master while master priority > backup priority.
For a detailed explanation see the post "Keepalived双主模型中vrrp_script中权重改变故障排查".
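As a worked example using the values in the configuration below: the MASTER instance has priority 100, the BACKUP has priority 99, and chk_haproxy carries weight 2. While haproxy is alive on both nodes the effective priorities are 100 + 2 = 102 versus 99 + 2 = 101, so the master keeps the VIP; if haproxy dies on the master, its effective priority falls back to 100, which is below 101, and the VIP fails over to the other node.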
 
Deployment and configuration
Keepalived deployment
Configuration

yum -y install keepalived    # install on both nodes

# 172.16.25.109
# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
         root@localhost
   }
   notification_email_from admin@lnmmp.com
   smtp_connect_timeout 3
   smtp_server 127.0.0.1
   router_id LVS_DEVEL
}
vrrp_script chk_maintaince_down {
   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
   interval 1
   weight 2
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight 2
}
vrrp_instance VI_1 {
    interface eth0
    state MASTER
    priority 100
    virtual_router_id 125
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass 1e3459f77aba4ded
    }
    track_interface {
       eth0
    }
    virtual_ipaddress {
        172.16.25.10/16 dev eth0 label eth0:0
    }
    track_script {
        chk_haproxy
    }
    notify_master "/etc/keepalived/notify.sh master 172.16.25.10"
    notify_fault "/etc/keepalived/notify.sh fault 172.16.25.10"
}
vrrp_instance VI_2 {
    interface eth0
    state BACKUP
    priority 99
    virtual_router_id 126
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass 7615c4b7f518cede
    }
    track_interface {
       eth0
    }
    virtual_ipaddress {
        172.16.25.11/16 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy
        chk_maintaince_down
    }
    notify_master "/etc/keepalived/notify.sh master 172.16.25.11"
    notify_backup "/etc/keepalived/notify.sh backup 172.16.25.11"
    notify_fault "/etc/keepalived/notify.sh fault 172.16.25.11"
}

# 172.16.25.110
# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
         root@localhost
   }
   notification_email_from admin@lnmmp.com
   smtp_connect_timeout 3
   smtp_server 127.0.0.1
   router_id LVS_DEVEL
}
vrrp_script chk_maintaince_down {
   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
   interval 1
   weight 2
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight 2
}
vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    priority 99
    virtual_router_id 125
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass 1e3459f77aba4ded
    }
    track_interface {
       eth0
    }
    virtual_ipaddress {
        172.16.25.10/16 dev eth0 label eth0:0
    }
    track_script {
        chk_haproxy
        chk_maintaince_down
    }
    notify_master "/etc/keepalived/notify.sh master 172.16.25.10"
    notify_backup "/etc/keepalived/notify.sh backup 172.16.25.10"
    notify_fault "/etc/keepalived/notify.sh fault 172.16.25.10"
}
vrrp_instance VI_2 {
    interface eth0
    state MASTER
    priority 100
    virtual_router_id 126
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass 7615c4b7f518cede
    }
    track_interface {
       eth0
    }
    virtual_ipaddress {
        172.16.25.11/16 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy
    }
    notify_master "/etc/keepalived/notify.sh master 172.16.25.11"
    notify_backup "/etc/keepalived/notify.sh backup 172.16.25.11"
    notify_fault "/etc/keepalived/notify.sh fault 172.16.25.11"
}

# vi /etc/keepalived/notify.sh
#!/bin/bash
# Author: Jason.Yu <admin@lnmmp.com>
# description: An example of notify script
#
contact='root@localhost'
notify() {
    mailsubject="`hostname` to be $1: $2 floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
    master)
        notify master $2
        /etc/rc.d/init.d/haproxy restart
        exit 0
    ;;
    backup)
        notify backup $2    # when the node switches to BACKUP there is no need to deliberately stop haproxy; this prevents chk_maintaince and chk_haproxy from acting on the haproxy service repeatedly
        exit 0
    ;;
    fault)
        notify fault $2     # same as above
        exit 0
    ;;
    *)
        echo "Usage: `basename $0` {master|backup|fault}"
        exit 1
    ;;
esac
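Before relying on the notifications, the script can be run by hand to confirm that local mail delivery works (a quick manual check, assuming the script has been saved to /etc/keepalived/notify.sh and made executable):

chmod +x /etc/keepalived/notify.sh
/etc/keepalived/notify.sh master 172.16.25.10   # sends a notification mail to root@localhost and restarts haproxy
mail                                            # read root's local mailbox to confirm delivery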


Start the service

service keepalived start    # start on both nodes

Keepalived dual-master model startup (screenshot)
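With keepalived started on both nodes, each VIP should come up on its own master node (a minimal check, using the interfaces and addresses from the configuration above):

# on 172.16.25.109: eth0:0 should carry 172.16.25.10
ip addr show eth0 | grep 172.16.25.10
# on 172.16.25.110: eth0:1 should carry 172.16.25.11
ip addr show eth0 | grep 172.16.25.11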



Haproxy deployment
Installation and configuration

yum -y install haproxy    # install on both nodes
vi /etc/haproxy/haproxy.cfg    # identical configuration on both nodes
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon                               # run as a background daemon
defaults
    mode                    http         # HTTP mode, enabling layer-7 processing
    log                     global
    option                  httplog      # richer log output
    option                  dontlognull
    option http-server-close             # allow closing the HTTP connection on the server side
    option forwardfor       except 127.0.0.0/8   # pass the client IP to the servers in the "X-Forwarded-For" header
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 30000
listen stats
    mode http
    bind 0.0.0.0:1080                    # bind the statistics page to port 1080
    stats enable                         # enable the statistics page
    stats hide-version                   # hide the Haproxy version number
    stats uri     /haproxyadmin?stats    # custom URI for the statistics page
    stats realm   Haproxy\ Statistics    # prompt shown when the statistics page asks for credentials
    stats auth    admin:admin            # require login for the statistics page
    stats admin   if TRUE                # grant admin functions to authenticated users
frontend http-in
    bind *:80
    mode http
    log global
    option httpclose
    option logasap
    option dontlognull
    capture request  header Host len 20
    capture request  header Referer len 60
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .jpeg .gif .png .css .js .html
    use_backend static_servers if url_static    # requests matching the ACL go to the static backend
    default_backend dynamic_servers             # all other requests go to the dynamic backend
backend static_servers
    balance roundrobin
    server imgsrv1 192.168.0.25:80 check maxconn 6000    # static server; more servers (and weights) can be added
backend dynamic_servers
    balance source    # the source algorithm gives some session persistence for dynamic requests; cookie binding is the better way to keep sessions
    server websrv1 192.168.0.35:80 check maxconn 1000    # dynamic server; more servers (and weights) can be added


Start the service

service haproxy start    # start on both nodes
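The configuration file can be checked for syntax errors before starting, and once haproxy is up the statistics page can be fetched through a VIP (a minimal sketch, assuming the bind port, URI and admin:admin credentials from the configuration above):

haproxy -c -f /etc/haproxy/haproxy.cfg                               # validate the configuration file
curl -u admin:admin 'http://172.16.25.10:1080/haproxyadmin?stats'    # fetch the statistics page through the VIP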


Nginx deployment
See the post "如何测试Nginx的高性能" (How to test Nginx's high performance): http://www.it165.net/admin/html/201405/2928.html
 
Access verification
Haproxy statistics page test




Statistics page display (screenshot)
Static/dynamic request separation test
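One way to spot-check the routing is to request one static and one dynamic URL through a VIP and see which backend answers (a sketch only; the test paths /images/test.jpg and /index.php are assumed to exist on the static and dynamic Nginx servers respectively):

curl -I http://172.16.25.10/images/test.jpg    # matches the url_static ACL, served by static_servers (192.168.0.25)
curl -I http://172.16.25.10/index.php          # no ACL match, served by dynamic_servers (192.168.0.35)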
 





High-availability test
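A simple failover check against this setup (a sketch, assuming the addresses and services configured above): stop haproxy on the node that currently holds a VIP, watch the VIP move to the other node, and confirm it comes back once haproxy is restored:

# on 172.16.25.109, which holds 172.16.25.10
service haproxy stop
# on 172.16.25.110, the VIP should appear within a few seconds
ip addr show eth0 | grep 172.16.25.10
# restarting haproxy on 172.16.25.109 lets it reclaim the VIP
service haproxy start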
