LVS DR-Mode Setup and Configuration

I. A Brief Introduction to LVS

LVS stands for Linux Virtual Server.
(1) LVS terminology

- LB (Load Balancer): the load balancer, i.e. the server running LVS (ipvsadm)
- VIP (Virtual IP): the virtual IP that serves remote clients; for example, for a port-80 service on the domain www.a.com, the A record of www.a.com points to the VIP
- LD (Load Balancer Director): same as LB; the load-balancing director
- real server: a back-end server that actually provides the service; for a port-80 service this machine typically runs a web server such as Apache
- DIP (Director IP): in NAT mode it is the gateway of the back-end real servers; in DR and TUN mode it is used for health probing when heartbeat or keepalived is deployed
- RIP (Real Server IP): the IP of a back-end real server
(2) The three LVS forwarding modes

DR (direct routing), NAT (network address translation), and TUN (tunneling).
(3) How LVS DR mode works

The flow is as follows:

1) The client sends a request to the VIP; the LB receives it.
2) The director selects an active real server according to the scheduling algorithm, rewrites the destination MAC address of the frame to the MAC of the NIC holding that RIP, and sends the frame onto the LAN; the IP header is left untouched.
3) The real server receives the frame, unpacks it, and finds that the destination IP (the VIP) matches a local address, so it processes the packet; it then builds the reply and sends it onto the LAN directly, bypassing the director.
4) If the client and the director are on the same network segment, the client receives this reply.
II. Building LVS in DR Mode

1. Lab environment

All hosts in this lab must have SELinux set to disabled and iptables stopped.

| Hostname | Role |
|---|---|
| server1 (172.25.254.1) | LB (director) |
| server2 (172.25.254.2) | RS (real server) |
| server3 (172.25.254.3) | RS (real server) |
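The SELinux and iptables prerequisite is usually satisfied as follows on RHEL 6-style hosts (a sketch; apply on every machine):

```
# /etc/selinux/config -- on every host; takes effect after a reboot
SELINUX=disabled
# for the running system:  setenforce 0
# stop the firewall:       /etc/init.d/iptables stop && chkconfig iptables off
```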
2. Step-by-step procedure

(1) Extend the yum repository

[root@server1 ~]# vim /etc/yum.repos.d/rhel-source.repo
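`ipvsadm` ships in the LoadBalancer channel of the RHEL 6 installation media, which the stock Server repo does not cover. A sketch of the stanzas to append (the baseurl host and path are assumptions for this lab; adjust them to your mirror):

```
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.254.250/rhel6.5/LoadBalancer
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.254.250/rhel6.5/HighAvailability
gpgcheck=0
```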
(2) Put the director and the real servers on the same VLAN

server1:

[root@server1 ~]# ip addr add 172.25.254.100/24 dev eth0

server2:

[root@server2 ~]# ip addr add 172.25.254.100/32 dev eth0
[root@server2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:63:7c:d6 brd ff:ff:ff:ff:ff:ff
inet 172.25.254.2/24 brd 172.25.254.255 scope global eth0
inet 172.25.254.100/32 scope global secondary eth0
inet6 fe80::5054:ff:fe63:7cd6/64 scope link
valid_lft forever preferred_lft forever
[root@server2 ~]# vim /var/www/html/index.html
www.westos.org - server2
[root@server2 ~]# /etc/init.d/httpd start
server3:

[root@server3 ~]# ip addr add 172.25.254.100/32 dev eth0
[root@server3 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:bb:be:50 brd ff:ff:ff:ff:ff:ff
inet 172.25.254.3/24 brd 172.25.254.255 scope global eth0
inet 172.25.254.100/32 scope global eth0
inet6 fe80::5054:ff:febb:be50/64 scope link
valid_lft forever preferred_lft forever
[root@server3 ~]# vim /var/www/html/index.html
www.westos.org - server3
[root@server3 ~]# /etc/init.d/httpd start
(3) Install ipvsadm and add the forwarding rules

[root@server1 ~]# yum install ipvsadm -y    # requires the extended yum repository from step (1)
[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s rr    # round-robin scheduling
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.2:80 -g    # -g: DR (gateway) mode
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.3:80 -g
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.2:80 Route 1 0 0
-> 172.25.254.3:80 Route 1 0 0
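The `rr` scheduler shown above hands requests to the real servers in strict rotation. A toy sketch of the idea (plain shell, not LVS code; the addresses are the two real servers from this lab):

```shell
#!/bin/sh
# Toy illustration of round-robin scheduling, as selected by `ipvsadm -s rr`.
# Each call to pick_rr returns the next real server in rotation.
servers="172.25.254.2 172.25.254.3"
i=0
pick_rr() {
    set -- $servers          # load the pool into positional parameters
    idx=$(( i % $# + 1 ))    # 1-based index of the next server
    eval "echo \$$idx"
    i=$(( i + 1 ))
}
pick_rr   # 172.25.254.2
pick_rr   # 172.25.254.3
pick_rr   # 172.25.254.2 again -- the rotation wraps
```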
(4) Testing from the physical host

The test shows that requests to 172.25.254.100 do not go exclusively through the director server1: since server2 and server3 also hold the VIP, any of the three machines may answer the ARP query for it. To make the VIP resolve only to server1, ARP for the VIP must be suppressed on server2 and server3.

What is ARP? ARP is the Address Resolution Protocol, which maps an IP address to a MAC address: to communicate with other devices, a host answers ARP queries with a packet announcing its MAC address.
server2:

[root@server2 html]# arptables -A IN -d 172.25.254.100 -j DROP    # drop incoming ARP queries for the VIP
[root@server2 html]# arptables -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.2    # rewrite outgoing ARP source from the VIP to server2's own IP
[root@server2 html]# /etc/init.d/arptables_jf save    # save the rules
[root@server2 ~]# arptables -nL    # list the rules
server3:

[root@server3 html]# arptables -A IN -d 172.25.254.100 -j DROP    # drop incoming ARP queries for the VIP
[root@server3 html]# arptables -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.3    # rewrite outgoing ARP source from the VIP to server3's own IP
[root@server3 html]# /etc/init.d/arptables_jf save    # save the rules
[root@server3 ~]# arptables -nL    # list the rules
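An alternative to arptables on the real servers is to suppress ARP for the VIP with kernel parameters; this approach is common when the VIP is bound to `lo` instead of `eth0` (a sketch, not what this lab uses):

```
# /etc/sysctl.conf on each real server (apply with `sysctl -p`)
net.ipv4.conf.all.arp_ignore = 1     # answer ARP only for addresses on the incoming interface
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2   # use the best local address, never the VIP, as ARP source
net.ipv4.conf.lo.arp_announce = 2
```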
Test again from the physical host and from a browser. The physical host needs a local name-resolution entry:

[root@foundation49 ~]# vim /etc/hosts
172.25.254.100 www.westos.org

Requests to 172.25.254.100 now always resolve to server1's hardware address, and as the browser page is refreshed, the default page served keeps alternating between server2 and server3.
(5) Health checking with ldirectord

[root@server1 ~]# yum install -y ldirectord-3.9.5-3.1.x86_64.rpm    # install ldirectord
[root@server1 ~]# rpm -ql ldirectord    # list the installed files
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz
[root@server1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d
[root@server1 ~]# cd /etc/ha.d
[root@server1 ha.d]# ls
ldirectord.cf resource.d shellfuncs
[root@server1 ha.d]# vim ldirectord.cf
virtual=172.25.254.100:80        # the virtual service (VIP:port)
        real=172.25.254.2:80 gate        # real servers, DR ("gate") mode
        real=172.25.254.3:80 gate
        fallback=127.0.0.1:80 gate       # serve locally when all real servers fail
        service=http
        scheduler=rr
        #persistent=600                  # commented out
        #netmask=255.255.255.255         # commented out
        protocol=tcp
        checktype=negotiate
        checkport=80
        request="index.html"
        #receive="Test Page"             # commented out
        #virtualhost=www.x.y.z           # commented out
Edit server1's own default page, which is what the fallback serves, and start the service:

[root@server1 ~]# vim /var/www/html/index.html
[root@server1 ~]# /etc/init.d/ldirectord start

Test on server1 itself first, then test from the physical host: only after stopping httpd on both server2 and server3 does a request to 172.25.254.100 return server1's default page.
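ldirectord's behavior boils down to: keep serving from whatever real servers pass the `request`/`checkport` probe, and only when the pool is empty direct traffic at the `fallback` address. A toy sketch of that selection logic (plain shell, not ldirectord code):

```shell
#!/bin/sh
# Toy sketch of ldirectord's fallback behavior: serve from the healthy
# pool, and only when every real server fails its check use the fallback.
pick_backend() {
    alive="$1"               # space-separated healthy real servers
    fallback="127.0.0.1:80"  # matches the fallback= line in ldirectord.cf
    if [ -n "$alive" ]; then
        set -- $alive
        echo "$1"            # (a real scheduler would rotate; first is enough here)
    else
        echo "$fallback"
    fi
}
pick_backend "172.25.254.2:80 172.25.254.3:80"   # healthy pool -> 172.25.254.2:80
pick_backend ""                                  # all checks failed -> 127.0.0.1:80
```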
III. A Highly Available Cluster with keepalived

Lab environment:

- server1: LB (master)
- server2: RS (real server)
- server3: RS (real server)
- server4: standby master; it sits idle while the master is healthy and takes over as the new master as soon as the original master fails

Extend the yum repository as in section II, then fetch the keepalived-2.0.6.tar.gz source tarball.
[root@server1 ~]# tar zxf keepalived-2.0.6.tar.gz    # unpack the source
[root@server1 ~]# cd keepalived-2.0.6
[root@server1 keepalived-2.0.6]# yum install openssl-devel -y    # build dependency
[root@server1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV
[root@server1 keepalived-2.0.6]# make    # compile
[root@server1 keepalived-2.0.6]# make install
[root@server1 keepalived-2.0.6]# cd /usr/local/keepalived/etc/rc.d/init.d
[root@server1 init.d]# chmod +x keepalived
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/keepalived /etc/
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 init.d]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 init.d]# /etc/init.d/keepalived start    # verify the service starts cleanly
[root@server1 init.d]# cd /usr/local
[root@server1 local]# scp -r keepalived/ server4:/usr/local/
[root@server1 local]# cd /etc/keepalived
[root@server1 keepalived]# vim keepalived.conf
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict                 # commented out so keepalived does not install restrictive firewall rules
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER                 # this node starts as the master
    interface eth0
    virtual_router_id 35         # must be identical on master and backup
    priority 100                 # the higher value wins the election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}
virtual_server 172.25.254.100 80 {    # the VIP; added automatically when the service starts
    delay_loop 3                      # health-check interval for the back ends
    lb_algo rr                        # scheduling algorithm
    lb_kind DR                        # DR mode
    #persistence_timeout 50           # persistence commented out
    protocol TCP

    real_server 172.25.254.2 80 {     # RS IP
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
[root@server1 keepalived]# ip addr del 172.25.254.100/24 dev eth0    # remove the manually added VIP; keepalived manages it now
[root@server1 keepalived]# /etc/init.d/keepalived restart    # restart the service
[root@server1 keepalived]# scp keepalived.conf server4:/etc/keepalived/

Check the log (cat /var/log/messages):

server1 Keepalived_vrrp[1875]: (VI_1) Entering MASTER STATE

[root@server1 keepalived]# yum install mailx    # for the notification mails
On server4, create the same four symlinks as on server1, then:

[root@server4 keepalived]# /etc/init.d/keepalived start
[root@server4 keepalived]# yum install mailx -y
[root@server4 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state BACKUP                 # this node is the standby master
    interface eth0
    virtual_router_id 35         # must match the master's virtual_router_id
    priority 50                  # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}
[root@server4 keepalived]# /etc/init.d/keepalived restart

Check the log (cat /var/log/messages):

server4 Keepalived_vrrp[1875]: (VI_1) Entering BACKUP STATE

Test from the physical host: stop keepalived on server1 and access 172.25.254.100 again; server4 now becomes the master.
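The takeover follows ordinary VRRP master election: among the routers still advertising, the highest priority wins (keepalived breaks ties by IP address, which this sketch ignores). A toy sketch of the election, using the backup priority of 50 from the config above:

```shell
#!/bin/sh
# Toy sketch of VRRP master election: the live node with the highest
# priority becomes MASTER. Tie-breaking by IP address is omitted.
elect() {
    best_name="" ; best_prio=-1
    for pair in "$@"; do         # each argument is "name:priority"
        name=${pair%%:*}
        prio=${pair##*:}
        if [ "$prio" -gt "$best_prio" ]; then
            best_prio=$prio
            best_name=$name
        fi
    done
    echo "$best_name"
}
elect server1:100 server4:50   # both alive -> server1 is MASTER
elect server4:50               # server1 down -> server4 takes over
```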
The test results confirm the failover.

Done!