Deploying a Highly Available Cluster with Keepalived + LVS
1. Environment preparation
VIP:  192.168.171.15
dr1:  192.168.171.11
dr2:  192.168.171.12
web1: 192.168.171.13
web2: 192.168.171.14
2. Topology
3. Deployment
3.1 dr1 (MASTER) deployment
Install Keepalived and ipvsadm on the master:
#yum install keepalived ipvsadm -y
Edit the Keepalived configuration file on dr1 as follows:
#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id Director1                 # differs on each node
}
vrrp_instance VI_1 {
    state MASTER                        # the other node is BACKUP
    interface ens33                     # heartbeat interface
    virtual_router_id 51                # virtual router ID; must match on master and backup
    priority 150                        # priority; the higher value wins the master election
    advert_int 1                        # advertisement interval, in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.171.15/24 dev ens33     # VIP and the interface it binds to
    }
}
virtual_server 192.168.171.15 80 {      # LVS configuration: VIP and port
    delay_loop 3                        # check real_server state every 3 seconds
    lb_algo rr                          # LVS scheduling algorithm
    lb_kind DR                          # LVS forwarding mode
    protocol TCP
    real_server 192.168.171.13 80 {
        weight 1
        TCP_CHECK {                     # health check method
            connect_timeout 3           # connection timeout
        }
    }
    real_server 192.168.171.14 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
3.2 dr2 (BACKUP) deployment
Install Keepalived and ipvsadm:
#yum install keepalived ipvsadm -y
Copy keepalived.conf from the master to the backup:
#scp 192.168.171.11:/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf
Edit the configuration file (only router_id, state, and priority change):
#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id Director2                 # differs on each node
}
vrrp_instance VI_1 {
    state BACKUP                        # the other node (dr1) is MASTER
    interface ens33                     # heartbeat interface
    virtual_router_id 51                # virtual router ID; must match on master and backup
    priority 149                        # priority; lower than the master's 150
    advert_int 1                        # advertisement interval, in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.171.15/24 dev ens33     # VIP and the interface it binds to
    }
}
virtual_server 192.168.171.15 80 {      # LVS configuration: VIP and port
    delay_loop 3                        # check real_server state every 3 seconds
    lb_algo rr                          # LVS scheduling algorithm
    lb_kind DR                          # LVS forwarding mode
    protocol TCP
    real_server 192.168.171.13 80 {
        weight 1
        TCP_CHECK {                     # health check method
            connect_timeout 3           # connection timeout
        }
    }
    real_server 192.168.171.14 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
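On the two directors, virtual_router_id and auth_pass must match, while router_id, state, and priority are the fields that should differ. A small illustrative helper (not part of keepalived) to print those fields from a config so the master and backup copies can be compared side by side:

```shell
# Illustrative helper: print the VRRP identity fields from a
# keepalived.conf. Compare the output on dr1 and dr2:
# virtual_router_id must be identical; state, priority, and
# router_id are expected to differ.
show_vrrp_identity() {
  grep -E 'router_id|state|priority' "$1"
}
```

Run `show_vrrp_identity /etc/keepalived/keepalived.conf` on each node and compare.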
3.3 Start keepalived on dr1 and dr2
#systemctl enable keepalived
#systemctl start keepalived
Reboot the host:
#reboot
3.4 Web environment deployment (web1 and web2 are configured identically)
Install the test web site:
#yum install -y httpd && systemctl start httpd && systemctl enable httpd
Check that the httpd service is listening:
#netstat -antp | grep httpd
Customize the web home page so the load-balancing result is visible:
#vim /usr/share/httpd/noindex/index.html
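To tell the two backends apart during the test, put something host-specific in each page. A sketch; it writes to the current directory for illustration, while on web1/web2 the target is the index.html path above:

```shell
# Write a page that identifies the serving host. OUT=. is for
# illustration only; on web1/web2 write to
# /usr/share/httpd/noindex/index.html instead.
OUT=${OUT:-.}
echo "served by $(hostname)" > "$OUT/index.html"
```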
Configure the virtual address (VIP) on the loopback interface:
#cp /etc/sysconfig/network-scripts/{ifcfg-lo,ifcfg-lo:0}
#vim /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.171.15
NETMASK=255.255.255.255
ONBOOT=yes
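The same lo:0 file can be generated non-interactively with a heredoc. Written to the current directory here for illustration; on the real servers the target is /etc/sysconfig/network-scripts:

```shell
# Generate ifcfg-lo:0 instead of editing it by hand. OUT=. is
# illustrative; use /etc/sysconfig/network-scripts on web1/web2.
# The /32 netmask confines the VIP to a host address so it is not
# advertised as a network route.
OUT=${OUT:-.}
cat > "$OUT/ifcfg-lo:0" <<'EOF'
DEVICE=lo:0
IPADDR=192.168.171.15
NETMASK=255.255.255.255
ONBOOT=yes
EOF
```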
Add a host route for the VIP:
#vim /etc/rc.local
/sbin/route add -host 192.168.171.15 dev lo:0
On CentOS 7, /etc/rc.d/rc.local must also be executable for this to run at boot (chmod +x /etc/rc.d/rc.local).
Configure ARP so the web servers neither answer ARP requests for the VIP nor announce it (required for DR mode, since the VIP also lives on the directors):
#vim /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
Reboot both web hosts so the lo:0 and sysctl settings take effect:
#reboot
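Once the web servers come back up, the ARP settings can be confirmed directly from /proc; on web1/web2 these should read 1 and 2 respectively:

```shell
# Read the live kernel values for the ARP tuning applied above
# (on a correctly configured web server: 1 and 2).
cat /proc/sys/net/ipv4/conf/all/arp_ignore
cat /proc/sys/net/ipv4/conf/all/arp_announce
```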
4. Testing and verification
Check the LVS virtual server table (on the master node; -n prints numeric addresses):
#ipvsadm -Ln
Check that the VIP is on the master machine:
#ip a
Test access from a client:
#curl http://192.168.171.15
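With lb_algo rr, consecutive requests should alternate between web1's and web2's pages. A small probe loop; the fetch wrapper around curl is illustrative, not required:

```shell
# Fetch the VIP several times in a row. With rr scheduling the
# responses should alternate between the two backends (assuming
# each server's index.html identifies its host).
fetch() { curl -s --max-time 2 "$1"; }
probe() {
  for i in 1 2 3 4; do
    fetch "$1"
  done
}
```

Run `probe http://192.168.171.15` from a client outside the cluster.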
Stop the keepalived service on the master and access the VIP again: the VIP fails over to the BACKUP node (verify with ip a on dr2). Then start keepalived on the master again and the VIP preempts back to the master node.
Stop the web service on web1 and access the VIP again: only web2's content is served, because the health check removes web1 from the pool.
This completes the deployment of the keepalived + LVS high-availability cluster.