LVS Load Balancing (7) -- High Availability with LVS + Keepalived

Date: 2021-07-14


1. High Availability with LVS + Keepalived

LVS provides load balancing, but it has no health-check mechanism of its own: if a real server (RS) fails, LVS will still schedule requests to the failed node. Keepalived solves this in two ways:

  • 1. Keepalived adds health checking to LVS: a failed RS is automatically removed from the cluster and automatically re-added once it recovers.

  • 2. Keepalived removes the LVS single point of failure, making the director itself highly available.

1.1 Lab Environment

The lab topology, using the LVS DR model, is as follows:

  • Client: hostname xuzhichao; address eth1: 192.168.20.17;
  • Router: hostname router; addresses eth1: 192.168.20.50 and eth2: 192.168.50.50;
  • LVS directors:
    • hostname lvs-01; address eth2: 192.168.50.31;
    • hostname lvs-02; address eth2: 192.168.50.32;
    • VIP addresses: 192.168.50.100 and 192.168.50.101 (only 192.168.50.100 is used in this walkthrough);
  • Web servers, running nginx 1.20.1:
    • hostname nginx02; address eth2: 192.168.50.22;
    • hostname nginx03; address eth2: 192.168.50.23;

1.2 Router Configuration

  • IP addresses and routing information on the router:

    [root@router ~]# ip add
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:4f:a9:ca brd ff:ff:ff:ff:ff:ff
        inet 192.168.20.50/24 brd 192.168.20.255 scope global noprefixroute eth1
           valid_lft forever preferred_lft forever
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:4f:a9:d4 brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
    
    #No static routes need to be configured in this scenario
    [root@router ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    192.168.20.0    0.0.0.0         255.255.255.0   U     101    0        0 eth1
    192.168.50.0    0.0.0.0         255.255.255.0   U     104    0        0 eth2
    
  • Enable ip_forward on the router:

    [root@router ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    [root@router ~]# sysctl -p
    net.ipv4.ip_forward = 1
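
  • To confirm forwarding is active, read the live value back from the kernel (a quick sanity check; it should print 1):

    [root@router ~]# cat /proc/sys/net/ipv4/ip_forward
    1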
    
  • DNAT ports 80 and 443 of the router's external address to ports 80 and 443 of the LVS VIP; a full one-to-one address mapping also works:

    #Port mapping:
    [root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -p tcp --dport 80 -j DNAT --to 192.168.50.100:80
    [root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -p tcp --dport 443 -j DNAT --to 192.168.50.100:443
    
    #Address mapping (alternative):
    [root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -j DNAT --to 192.168.50.100
    
    #Source NAT, so internal hosts can reach the outside
    [root@router ~]# iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -j SNAT --to 192.168.20.50
    
    #Inspect the NAT configuration:
    [root@router ~]# iptables -t nat -vnL
    Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 DNAT       tcp  --  *      *       0.0.0.0/0            192.168.20.50        tcp dpt:80 to:192.168.50.100:80
        0     0 DNAT       tcp  --  *      *       0.0.0.0/0            192.168.20.50        tcp dpt:443 to:192.168.50.100:443
    
    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
    
    Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
    
    Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 SNAT       all  --  *      *       192.168.50.0/24      0.0.0.0/0            to:192.168.20.50
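
  • Rules entered with iptables live only in kernel memory and are lost on reboot. A minimal way to persist them on CentOS 7, assuming the iptables-services package is installed:

    [root@router ~]# iptables-save > /etc/sysconfig/iptables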
    

1.3 Web Server (nginx) Configuration

  • Network configuration on the nginx02 host:

    #1. Configure the VIP address on the lo interface:
    [root@nginx02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
    DEVICE=lo:0
    BOOTPROTO=none
    IPADDR=192.168.50.100
    NETMASK=255.255.255.255   <==Note: this mask must not be the same as the RIP's mask, otherwise other hosts cannot learn the RIP's ARP entry, which breaks the RIP's connected route; it also must not be so short that the VIP and CIP compute to the same subnet. A 32-bit mask is recommended.
    ONBOOT=yes
    NAME=loopback
    
    #2. Restart the interface to take effect:
    [root@nginx02 ~]# ifdown lo:0 && ifup lo:0
    [root@nginx02 ~]# ifconfig lo:0
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.50.100  netmask 255.255.255.255
            loop  txqueuelen 1000  (Local Loopback)
    
    #3. The eth2 interface address:
    [root@nginx02 ~]# ip add
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:d9:f9:7d brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.22/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
    
    #4. Routing: point the default gateway at the router, 192.168.50.50
    [root@nginx02 ~]# ip route add default via 192.168.50.50 dev eth2   <==The default route must specify both the next-hop address and the egress interface; otherwise traffic may leave via lo:0 and connectivity fails.
    
    [root@nginx02 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.50.50   0.0.0.0         UG    0      0        0 eth2
    192.168.50.0    0.0.0.0         255.255.255.0   U     103    0        0 eth2
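
  • A route added with ip route does not survive a reboot. To persist it with CentOS 7 network-scripts, one option is a route-eth2 file (a sketch; the same applies to nginx03):

    [root@nginx02 ~]# echo "default via 192.168.50.50 dev eth2" > /etc/sysconfig/network-scripts/route-eth2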
    
  • Configure ARP kernel parameters so that this host neither announces its VIP nor answers ARP requests from other nodes for the VIP:

    [root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    [root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    [root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore
     
    [root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    [root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    [root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
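
  • These /proc writes are also lost on reboot. To make them persistent, the same values can be appended to /etc/sysctl.conf (a sketch; repeat on nginx03):

    [root@nginx02 ~]# cat >> /etc/sysctl.conf <<EOF
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.lo.arp_ignore = 1
    net.ipv4.conf.default.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.conf.lo.arp_announce = 2
    net.ipv4.conf.default.arp_announce = 2
    EOF
    [root@nginx02 ~]# sysctl -p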
    
  • Network configuration on the nginx03 host:

    #1. Configure the VIP address on the lo interface:
    [root@nginx03 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
    DEVICE=lo:0
    BOOTPROTO=none
    IPADDR=192.168.50.100
    NETMASK=255.255.255.255    <==Note: this mask must not be the same as the RIP's mask, otherwise other hosts cannot learn the RIP's ARP entry, which breaks the RIP's connected route; it also must not be so short that the VIP and CIP compute to the same subnet. A 32-bit mask is recommended.
    ONBOOT=yes
    NAME=loopback
    
    #2. Restart the interface to take effect:
    [root@nginx03 ~]# ifdown lo:0 && ifup lo:0
    [root@nginx03 ~]# ifconfig lo:0
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.50.100  netmask 255.255.255.255
            loop  txqueuelen 1000  (Local Loopback)
    
    #3. The eth2 interface address:
    [root@nginx03 ~]# ip add show eth2
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:0a:bf:63 brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.23/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
    
    
    #4. Routing: point the default gateway at the router, 192.168.50.50
    [root@nginx03 ~]# ip route add default via 192.168.50.50 dev eth2  <==The default route must specify both the next-hop address and the egress interface; otherwise traffic may leave via lo:0 and connectivity fails.
    
    [root@nginx03 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.50.50   0.0.0.0         UG    0      0        0 eth2
    192.168.50.0    0.0.0.0         255.255.255.0   U     103    0        0 eth2
    
  • Configure ARP kernel parameters so that this host neither announces its VIP nor answers ARP requests from other nodes for the VIP:

    [root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    [root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    [root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore
    
    [root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    [root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    [root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
    
  • The nginx configuration file is identical on both web servers:

    [root@nginx03 ~]# cat /etc/nginx/conf.d/xuzhichao.conf
    server {
    	listen 80 default_server;
    	listen 443 ssl;
    	server_name www.xuzhichao.com;
    	access_log /var/log/nginx/access_xuzhichao.log access_json;
    	charset utf-8,gbk;	
    	
    	#SSL settings
    	ssl_certificate_key /apps/nginx/certs/www.xuzhichao.com.key;
    	ssl_certificate /apps/nginx/certs/www.xuzhichao.com.crt;
    	ssl_session_cache shared:ssl_cache:20m;
    	ssl_session_timeout 10m;
    	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    	keepalive_timeout 65;
    	
    	#Hotlink protection
    	valid_referers none blocked server_names *.b.com  b.*  ~\.baidu\.  ~\.google\.;
    	
    	if ( $invalid_referer ) {
    		return 403;	
    	}
    
    	client_max_body_size 10m;
    
    	#Browser favicon
    	location = /favicon.ico {
    		root /data/nginx/xuzhichao;
    	}
    
    	location / {
    		root /data/nginx/xuzhichao;
    		index index.html index.php;
    		
    		#Redirect HTTP to HTTPS
    		if ($scheme = http) {
    			rewrite ^/(.*)$ https://www.xuzhichao.com/$1;
    		}
    	}
    }
    
    #Validate and reload the nginx service:
    [root@nginx03 ~]# nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    [root@nginx03 ~]# systemctl reload nginx.service 
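
  • The ssl_certificate paths above assume a certificate already exists. For a lab setup, a self-signed certificate can be generated with openssl (a sketch; the CN matches this lab's domain):

    [root@nginx03 ~]# mkdir -p /apps/nginx/certs
    [root@nginx03 ~]# openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
          -keyout /apps/nginx/certs/www.xuzhichao.com.key \
          -out /apps/nginx/certs/www.xuzhichao.com.crt \
          -subj "/CN=www.xuzhichao.com"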
    
  • Home page file on the nginx02 host:

    [root@nginx02 certs]# cat /data/nginx/xuzhichao/index.html
    node1.xuzhichao.com page
    
  • Home page file on the nginx03 host:

    [root@nginx03 ~]# cat /data/nginx/xuzhichao/index.html 
    node2.xuzhichao.com page
    
  • Test access from the director:

    [root@lvs-01 ~]# curl -Hhost:www.xuzhichao.com  -k https://192.168.50.23
    node2.xuzhichao.com page
    [root@lvs-01 ~]# curl -Hhost:www.xuzhichao.com  -k https://192.168.50.22
    node1.xuzhichao.com page
    

1.4 LVS + Keepalived Configuration

1.4.1 Keepalived health-check syntax for real servers

Virtual server:
Configuration syntax:
	virtual_server IP port |
	virtual_server fwmark int 
	{
		...
		real_server {
			...
		}
		...
	}
	
Common parameters:
	 delay_loop <INT>: interval between health-check rounds;
	 lb_algo rr|wrr|lc|wlc|lblc|sh|dh: scheduling algorithm;
	 lb_kind NAT|DR|TUN: cluster forwarding mode;
	 persistence_timeout <INT>: persistent-connection duration;
	 protocol TCP: service protocol;
	 sorry_server <IPADDR> <PORT>: fallback server used when all RS are down;
	 real_server <IPADDR> <PORT>
	{
		 weight <INT>   RS weight
		 notify_up <STRING>|<QUOTED-STRING>  script to run when the RS comes online
		 notify_down <STRING>|<QUOTED-STRING>  script to run when the RS fails or goes offline
		 HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }: health-check method for this RS;
	 }
			
HTTP_GET|SSL_GET: application-layer checks
HTTP_GET|SSL_GET {
	url {
		path <URL_PATH>: URL to monitor;
		status_code <INT>: response code that counts as healthy;
		digest <STRING>: checksum of a healthy response body;
	}
	nb_get_retry <INT>: number of retries;
	delay_before_retry <INT>: delay between retries;
	connect_ip <IP ADDRESS>: RS IP address to probe; defaults to the real_server address
	connect_port <PORT>: RS port to probe; defaults to the real_server port
	bindto <IP ADDRESS>: source address for the probe; defaults to the egress interface address
	bind_port <PORT>: source port for the probe;
	connect_timeout <INTEGER>: connection timeout;
}
	
Transport-layer check:
TCP_CHECK {
	connect_ip <IP ADDRESS>: RS IP address to probe
	connect_port <PORT>: RS port to probe
	bindto <IP ADDRESS>: source address for the probe;
	bind_port <PORT>: source port for the probe;
	connect_timeout <INTEGER>: connection timeout;
}
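
For digest-based checks, the expected checksum can be generated with the genhash utility that ships with keepalived (a sketch against this lab's real servers; -S switches the probe to SSL):

	genhash -s 192.168.50.22 -p 80 -u /index.html
	genhash -S -s 192.168.50.22 -p 443 -u /index.html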

1.4.2 Keepalived configuration example

  • Install the keepalived package:

    [root@lvs-01 ~]# yum install keepalived -y
    
  • Keepalived configuration on the lvs-01 node:

    #1. The keepalived configuration file:
    [root@lvs-01 ~]# cat /etc/keepalived/keepalived.conf 
    ! Configuration File for keepalived
    
    global_defs {
       notification_email {
    	   root@localhost
       }
       notification_email_from keepalived@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id LVS01
       script_user root
       enable_script_security
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface eth2
        virtual_router_id 51
        priority 120
        advert_int 3
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.50.100/32 dev eth2
        }
    
        track_interface {
        	eth2
        }
    
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    
    virtual_server 192.168.50.100 443 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP
    
        sorry_server 192.168.20.24 443
    
        real_server 192.168.50.22 443 {
            weight 1
            SSL_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
        
        real_server 192.168.50.23 443 {
            weight 1
            SSL_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }
    
    virtual_server 192.168.50.100 80 {
        delay_loop 6
        lb_algo rr 
        lb_kind DR
        protocol TCP
    
       real_server 192.168.50.22 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
       
       real_server 192.168.50.23 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
    		}
        }
    }
    
    #2. The keepalived notify.sh script
    [root@lvs-01 keepalived]# cat notify.sh 
    #!/bin/bash
    
    contact='root@localhost'
    notify() {
    	    local mailsubject="$(hostname) to be $1, vip floating"
    		local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    	    echo "$mailbody" | mail -s "$mailsubject" $contact
    }
    
    case $1 in
    master)
        notify master
    	;;
    backup)
    	notify backup
        ;;
    fault)
        notify fault
    	;;
    *)
    	echo "Usage: $(basename $0) {master|backup|fault}"
    	exit 1
    	;;
    esac
    
    #Make it executable
    [root@lvs-01 keepalived]# chmod +x notify.sh
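    
    #The script relies on the mail command; on a minimal install it may be missing
    #(an assumption), in which case install mailx first:
    [root@lvs-01 keepalived]# yum install mailx -y
    
    #Optionally invoke the script once by hand to confirm mail delivery works:
    [root@lvs-01 keepalived]# ./notify.sh master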
    
    #3. Add a default route pointing at the router's gateway address
    [root@lvs-01 ~]# ip route add default via 192.168.50.50 dev eth2
    
    [root@lvs-01 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.50.50   0.0.0.0         UG    0      0        0 eth2
    192.168.50.0    0.0.0.0         255.255.255.0   U     102    0        0 eth2
    
    #4. Start the keepalived service:
    [root@lvs-01 ~]# systemctl start keepalived.service
    
    #5. Inspect the automatically generated ipvs rules:
    [root@lvs-01 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.50.100:80 rr
      -> 192.168.50.22:80             Route   1      0          0         
      -> 192.168.50.23:80             Route   1      0          0         
    TCP  192.168.50.100:443 rr
      -> 192.168.50.22:443            Route   1      0          0         
      -> 192.168.50.23:443            Route   1      0          0  
      
    #6. Check which host holds the VIP:
    [root@lvs-01 ~]# ip add 
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
        inet 192.168.50.100/32 scope global eth2
           valid_lft forever preferred_lft forever
    
  • Keepalived configuration on the lvs-02 node:

    #1. The keepalived configuration file:
    [root@lvs-02 ~]# cat /etc/keepalived/keepalived.conf 
    ! Configuration File for keepalived
    
    global_defs {
       notification_email {
    	   root@localhost
       }
       notification_email_from keepalived@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id LVS02
       script_user root
       enable_script_security
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface eth2
        virtual_router_id 51
        priority 100
        advert_int 3
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.50.100/32 dev eth2
        }
    
        track_interface {
        	eth2
        }
    
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    
    virtual_server 192.168.50.100 443 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP
    
        sorry_server 192.168.20.24 443
    
        real_server 192.168.50.22 443 {
            weight 1
            SSL_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
        
        real_server 192.168.50.23 443 {
            weight 1
            SSL_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }
    
    virtual_server 192.168.50.100 80 {
        delay_loop 6
        lb_algo rr 
        lb_kind DR
        protocol TCP
    
       real_server 192.168.50.22 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
       
       real_server 192.168.50.23 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
    		}
        }
    }
    
    #2. The keepalived notify.sh script
    [root@lvs-02 keepalived]# cat notify.sh 
    #!/bin/bash
    
    contact='root@localhost'
    notify() {
    	    local mailsubject="$(hostname) to be $1, vip floating"
    		local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    	    echo "$mailbody" | mail -s "$mailsubject" $contact
    }
    
    case $1 in
    master)
        notify master
    	;;
    backup)
    	notify backup
        ;;
    fault)
        notify fault
    	;;
    *)
    	echo "Usage: $(basename $0) {master|backup|fault}"
    	exit 1
    	;;
    esac
    
    #Make it executable
    [root@lvs-02 keepalived]# chmod +x notify.sh
    
    #3. Add a default route pointing at the router's gateway address
    [root@lvs-02 ~]# ip route add default via 192.168.50.50 dev eth2
    
    [root@lvs-02 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.50.50   0.0.0.0         UG    0      0        0 eth2
    192.168.50.0    0.0.0.0         255.255.255.0   U     102    0        0 eth2
    
    #4. Start the keepalived service:
    [root@lvs-02 ~]# systemctl start keepalived.service
    
    #5. Inspect the automatically generated ipvs rules:
    [root@lvs-02 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.50.100:80 rr
      -> 192.168.50.22:80             Route   1      0          0         
      -> 192.168.50.23:80             Route   1      0          0         
    TCP  192.168.50.100:443 rr
      -> 192.168.50.22:443            Route   1      0          0         
      -> 192.168.50.23:443            Route   1      0          0 
      
    #6. Check the VIP; it is not on this host:
    [root@lvs-02 ~]# ip add
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
    
  • Test from the client

    • Client network configuration:

      [root@xuzhichao ~]# ip add
      3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 00:0c:29:2f:d0:da brd ff:ff:ff:ff:ff:ff
          inet 192.168.20.17/24 brd 192.168.20.255 scope global noprefixroute eth1
             valid_lft forever preferred_lft forever
      
      [root@xuzhichao ~]# route -n
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      192.168.20.0    0.0.0.0         255.255.255.0   U     101    0        0 eth1
      
    • Test access:

      #1. Test HTTP access; requests are redirected to HTTPS
      [root@xuzhichao ~]# for i in {1..10} ;do curl -k -L -Hhost:www.xuzhichao.com http://192.168.20.50; done
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      
      #2. Test direct HTTPS access
      [root@xuzhichao ~]# for i in {1..10} ;do curl -k -Hhost:www.xuzhichao.com https://192.168.20.50; done
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      node2.xuzhichao.com page
      node1.xuzhichao.com page
      node2.xuzhichao.com page
      node1.xuzhichao.com page
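
    • To see how the director distributed these requests, the ipvs connection table on the active node can be listed (entries linger briefly after each request):

      [root@lvs-01 ~]# ipvsadm -Lnc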
      

1.5 RS Failure Test

  • Stop the nginx service on the nginx02 node:

    [root@nginx02 ~]# systemctl stop nginx.service
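
  • Note: by default keepalived logs via syslog to /var/log/messages. The dedicated /var/log/keepalived.log tailed below requires extra setup; a minimal sketch on CentOS 7 with rsyslog:

    #1. Send keepalived's logs to the local0 facility:
    [root@lvs-01 ~]# vim /etc/sysconfig/keepalived
    KEEPALIVED_OPTIONS="-D -S 0"
    
    #2. Route that facility to a file, then restart both services:
    [root@lvs-01 ~]# echo "local0.* /var/log/keepalived.log" >> /etc/rsyslog.conf
    [root@lvs-01 ~]# systemctl restart rsyslog keepalived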
    
  • Watch the logs and ipvs rule changes on the two director nodes:

    #1. The log shows the health checks failing and the RS being removed from the cluster
    [root@lvs-01 ~]# tail -f  /var/log/keepalived.log
    Jul 13 20:00:57 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 failed.
    Jul 13 20:00:59 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
    Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 failed.
    Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Check on service [192.168.50.22]:80 failed after 1 retry.
    Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:80 from VS [192.168.50.100]:80
    Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
    Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
    Jul 13 20:01:02 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
    Jul 13 20:01:05 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
    Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
    Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Check on service [192.168.50.22]:443 failed after 3 retry.
    Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:443 from VS [192.168.50.100]:443
    Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
    Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
    
    #2. In the ipvs rules, host 192.168.50.22 has been removed from the cluster:
    [root@lvs-01 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.50.100:80 rr
      -> 192.168.50.23:80             Route   1      0          0         
    TCP  192.168.50.100:443 rr
      -> 192.168.50.23:443            Route   1      0          0         
    
  • From the client, all requests are now sent to the nginx03 node:

    [root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    node2.xuzhichao.com page
    
  • Recover the nginx02 node and check the logs and ipvs rules on the two lvs nodes:

    #1. Start the nginx service on the nginx02 node:
    [root@nginx02 ~]# systemctl start nginx.service
    
    #2. The keepalived log on lvs-01 shows the checks on nginx02 succeeding and the RS being added back:
    [root@lvs-01 ~]# tail -f  /var/log/keepalived.log
    Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: HTTP status code success to [192.168.50.22]:443 url(1).
    Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Remote Web server [192.168.50.22]:443 succeed on service.
    Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Adding service [192.168.50.22]:443 to VS [192.168.50.100]:443
    Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
    Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
    Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 success.
    Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: Adding service [192.168.50.22]:80 to VS [192.168.50.100]:80
    Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
    Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
    
    #3. Inspect the ipvs rules:
    [root@lvs-01 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.50.100:80 rr
      -> 192.168.50.22:80             Route   1      0          0         
      -> 192.168.50.23:80             Route   1      0          0         
    TCP  192.168.50.100:443 rr
      -> 192.168.50.22:443            Route   1      0          0         
      -> 192.168.50.23:443            Route   1      0          0         
    
  • Client tests now show both nginx nodes serving traffic again:

    [root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    

1.6 LVS Director Failure Test

  • Stop the keepalived service on the lvs-01 node to simulate a director failure, and observe the load-balancing cluster:

    #1. Stop the keepalived service on lvs-01:
    [root@lvs-01 ~]# systemctl stop keepalived.service
    
    #2. Check the keepalived logs:
    [root@lvs-01 ~]# tail -f  /var/log/keepalived.log
    Jul 13 20:11:08 lvs-01 Keepalived[13465]: Stopping
    Jul 13 20:11:08 lvs-01 Keepalived_vrrp[13467]: VRRP_Instance(VI_1) sent 0 priority
    Jul 13 20:11:08 lvs-01 Keepalived_vrrp[13467]: VRRP_Instance(VI_1) removing protocol VIPs.
    Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:80 from VS [192.168.50.100]:80
    Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.23]:80 from VS [192.168.50.100]:80
    Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Stopped
    Jul 13 20:11:09 lvs-01 Keepalived_vrrp[13467]: Stopped
    Jul 13 20:11:09 lvs-01 Keepalived[13465]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
    
    [root@lvs-02 ~]# tail -f  /var/log/keepalived.log
    Jul 13 20:11:09 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Transition to MASTER STATE
    Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Entering MASTER STATE
    Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) setting protocol VIPs.
    Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
    Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth2 for 192.168.50.100
    Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
    Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
    
    #3. The VIP has moved to the lvs-02 node:
    [root@lvs-02 ~]# ip add
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
        inet 192.168.50.100/32 scope global eth2
           valid_lft forever preferred_lft forever
    
    [root@lvs-01 ~]# ip add
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
    
    #4. Client access still works:
    [root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    
  • Recover the lvs-01 node and observe the cluster:

    #1. Start the keepalived service on lvs-01:
    [root@lvs-01 ~]# systemctl start keepalived.service
    
    #2. Check the keepalived logs:
    [root@lvs-01 ~]# tail -f  /var/log/keepalived.log
    Jul 13 20:15:36 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Transition to MASTER STATE
    Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Entering MASTER STATE
    Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) setting protocol VIPs.
    Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: Sending gratuitous ARP on eth2 for 192.168.50.100
    Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth2 for 192.168.50.100
    Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: Sending gratuitous ARP on eth2 for 192.168.50.100
    
    [root@lvs-02 ~]# tail -f  /var/log/keepalived.log
    Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Received advert with higher priority 120, ours 100
    Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Entering BACKUP STATE
    Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) removing protocol VIPs.
    Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: Opening script file /etc/keepalived/notify.sh
    
    #3. The VIP has returned to the lvs-01 node:
    [root@lvs-01 ~]# ip add
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
        inet 192.168.50.100/32 scope global eth2
           valid_lft forever preferred_lft forever
    
    [root@lvs-02 ~]# ip add
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:e4:cf:0d brd ff:ff:ff:ff:ff:ff
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
           valid_lft forever preferred_lft forever
           
    #4. Client access works normally:
    [root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
    node2.xuzhichao.com page
    node1.xuzhichao.com page
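
  • Because lvs-01 has the higher priority (120 vs 100), it preempts the VIP as soon as it recovers, causing a second brief switchover. If failback is not desired, VRRP preemption can be disabled; a sketch (set state BACKUP on both nodes, since nopreempt requires it):

    vrrp_instance VI_1 {
        state BACKUP
        nopreempt       #keep the VIP where it is even when a higher-priority node returns
        ...
    }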
    

Original article: https://www.cnblogs.com/xuwymm/p/15010109.html