LVS + keepalived for a highly available Samba cluster
2017-12-12 14:05
Samba deployments often need to be clustered for reliability and performance; this post records and organizes the setup for future reference.
The environment is Ubuntu 16.04, installed offline. For convenience, configure a local apt source and passwordless sudo for the working user.
Install LVS to schedule requests across the actual Samba servers.
Install keepalived to provide master/backup failover for the LVS director.
Caveat
This configuration has the following limitation:
1. When writing files to the Samba cluster, all traffic passes through the master director node; when reading files from the cluster, traffic bypasses the director entirely (a property of DR mode).
Configure a local apt source
First empty sources.list under /etc/apt/, then upload the ISO to the server and run:
sudo mount -o loop xxx.iso /media/cdrom
sudo apt-cdrom -m -d /media/cdrom add
sudo apt-get update
Configure passwordless sudo
sudo adduser shenpp
echo "shenpp ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/shenpp
sudo chmod 0440 /etc/sudoers.d/shenpp
Disable the firewall
sudo ufw disable
Time synchronization
Pick one server to act as the NTP server and have the rest synchronize against it:
sudo apt-get install ntp
The NTP server's ntp.conf is as follows:
driftfile /var/lib/ntp/ntp.drift
restrict 127.0.0.1
restrict ::1
restrict source notrap nomodify noquery
server 127.127.1.0
fudge 127.127.1.0 stratum 0
sudo /etc/init.d/ntp start
Install ntpdate on the remaining servers, synchronize against the NTP server, and add a crontab entry:
sudo apt-get install ntpdate
sudo ntpdate ntpserverIP
sudo crontab -e
*/1 * * * * ntpdate ntpserverIP >/dev/null 2>&1
Once time is synchronized, it is advisable to write it to the hardware clock:
sudo hwclock -w
Install LVS and keepalived
The cluster has 12 servers in total. Two of them (node0, node1) run keepalived as directors, and the remaining ten (node2-node11) act as realservers providing the Samba service. Install ipvsadm and keepalived on node0 and node1:
sudo apt-get install keepalived ipvsadm   # Ubuntu 16.04 ships with ipvsadm already installed
After installation, edit keepalived.conf on the master as follows:
global_defs {
    notification_email {
    }
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the standby
    interface bond0
    virtual_router_id 51
    priority 100                 # 90 on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.254.245.10
    }
}

virtual_server 10.254.245.10 445 {   # the floating IP for DR mode
    delay_loop 6                 # poll realserver state every 6 seconds
    lb_algo rr                   # round-robin; other LVS algorithms can be configured
    lb_kind DR                   # Direct Routing
    persistence_timeout 10       # connections from the same IP go to the same realserver within 10 seconds
    protocol TCP
    real_server 10.254.245.13 445 {
        weight 1                 # weight
        TCP_CHECK {
            connect_timeout 10   # time out after 10 seconds with no response
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.14 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.15 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.16 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.17 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.18 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.19 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.20 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.21 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
    real_server 10.254.245.22 445 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 445
        }
    }
}

virtual_server 10.254.245.10 139 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 10
    protocol TCP
    real_server 10.254.245.13 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.14 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.15 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.16 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.17 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.18 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.19 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.20 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.21 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
    real_server 10.254.245.22 139 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 139
        }
    }
}
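The twenty real_server blocks differ only in IP address and port, so rather than maintaining them by hand, they can be generated. A minimal sketch (the IP range 10.254.245.13-22 and ports 445/139 match the config above; redirecting the output into keepalived.conf is left to the reader):

```shell
#!/bin/sh
# Emit the repetitive real_server blocks for one service port.
# IPs 10.254.245.13-22 correspond to the ten realservers above.
gen_real_servers() {
    port=$1
    for last in $(seq 13 22); do
        cat <<EOF
    real_server 10.254.245.$last $port {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port $port
        }
    }
EOF
    done
}

gen_real_servers 445
gen_real_servers 139
```

Paste the generated blocks into the two virtual_server sections; this also makes it easy to regenerate the config when realservers are added or removed.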
Configure IP forwarding
Edit /etc/sysctl.conf, set net.ipv4.ip_forward = 1, and run sysctl -p for it to take effect immediately.
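It is worth verifying that the setting actually took effect before relying on it. A small check, assuming the standard procfs path (the file argument is parameterized only so the helper can be exercised against any file):

```shell
#!/bin/sh
# Report whether IPv4 forwarding is enabled.
# Reads /proc/sys/net/ipv4/ip_forward by default; an alternative
# file path can be passed as the first argument.
ip_forward_enabled() {
    f="${1:-/proc/sys/net/ipv4/ip_forward}"
    if [ -r "$f" ] && [ "$(cat "$f")" = "1" ]; then
        echo enabled
    else
        echo disabled
    fi
}

ip_forward_enabled
```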
sudo /etc/init.d/keepalived start
sudo update-rc.d keepalived defaults   # add to boot startup items
Install and configure the Samba service nodes
Install the Samba service:
sudo apt-get install samba
Edit the Samba configuration file:
[samba]
    comment = samba direct
    path = /mnt/test
    valid users = test
    public = no
    writable = yes
    printable = no
    create mask = 0777
    directory mask = 0777
Create the Samba user
sudo useradd test -u 2500
echo -e "testtest\ntesttest" | sudo smbpasswd -s -a test   # creates user test with password testtest; this form is convenient for batch creation across the 10 machines
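The comment above mentions batch creation on all ten realservers; one way is to loop over the node names via ssh. Here is a sketch that only prints the per-node commands rather than running them (the node2-node11 hostnames come from the cluster layout above, but working passwordless ssh between nodes is an assumption; swap the printf helper for a real ssh invocation once that is in place):

```shell
#!/bin/sh
# Print the user-creation commands that would be run on each realserver.
# run_cmd is a dry-run stand-in; replace its body with
#   ssh "$1" "$2"
# to actually execute the commands remotely.
run_cmd() {
    printf 'WOULD RUN on %s: %s\n' "$1" "$2"
}

create_samba_users() {
    for i in $(seq 2 11); do
        run_cmd "node$i" "sudo useradd test -u 2500"
        run_cmd "node$i" "printf 'testtest\ntesttest\n' | sudo smbpasswd -s -a test"
    done
}

create_samba_users
```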
Start Samba and add it to the boot startup items
sudo /etc/init.d/samba start
sudo update-rc.d samba defaults
Configure routing on the realservers
vim /usr/local/sbin/lvs_dr_rs.sh

#!/bin/bash
vip=10.254.245.10
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sh /usr/local/sbin/lvs_dr_rs.sh
echo 'sh /usr/local/sbin/lvs_dr_rs.sh' >> /etc/rc.local
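After the script has run, the ARP suppression settings can be verified on each realserver; the expected values are arp_ignore=1 and arp_announce=2 for both lo and all, matching the echoes in the script above. A small read-only check (the base directory is parameterized only for testability):

```shell
#!/bin/sh
# Print the ARP settings the DR script is expected to have applied.
# Expected output on a configured realserver:
#   lo/arp_ignore = 1, lo/arp_announce = 2, and the same under all/.
show_arp_settings() {
    base="${1:-/proc/sys/net/ipv4/conf}"
    for dev in lo all; do
        for key in arp_ignore arp_announce; do
            f="$base/$dev/$key"
            [ -r "$f" ] && printf '%s/%s = %s\n' "$dev" "$key" "$(cat "$f")"
        done
    done
    return 0
}

show_arp_settings
```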