LVS/DR: principle and characteristics
In DR mode the director rewrites the destination MAC address of the request packet to the MAC address of the selected RS.
Characteristics of the LVS-DR model:
The front-end router must deliver every packet whose destination IP is the VIP to the Director Server, never directly to an RS; the RSs and the Director Server must therefore be on the same physical network.
All request packets pass through the Director Server, but response packets must not; neither address translation nor port mapping is supported; the RSs can run most common operating systems; an RS's gateway must never point to the DIP (responses are not allowed back through the director); the VIP is configured on each RS's lo interface.
Drawback: the RSs and the Director Server must be in the same broadcast domain (in practice, the same machine room).
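The core of DR mode, rewriting only the destination MAC while leaving both IP addresses untouched, can be sketched in a few lines of Python (the MAC addresses here are made up for illustration; this is a toy model, not real packet handling):

```python
from dataclasses import dataclass, replace

@dataclass
class Frame:
    src_mac: str
    dst_mac: str
    src_ip: str   # CIP: stays untouched in DR mode
    dst_ip: str   # VIP: stays untouched in DR mode

# Hypothetical MAC addresses for the two real servers.
RS_MACS = {"172.25.254.2": "52:54:00:00:00:02",
           "172.25.254.3": "52:54:00:00:00:03"}

def dr_forward(frame: Frame, rs_ip: str) -> Frame:
    """DR mode rewrites only the destination MAC; both IPs survive.
    That is why the director and RSs must share a physical network,
    and why the RS can answer the client directly from the VIP."""
    return replace(frame, dst_mac=RS_MACS[rs_ip])

req = Frame("aa:bb:cc:dd:ee:ff", "52:54:00:00:00:01",
            "172.25.254.74", "172.25.254.100")
fwd = dr_forward(req, "172.25.254.2")
```

Because the IP header is untouched, the RS sees a packet for the VIP on its own lo interface and replies straight to the client.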
Prepare four virtual machines.
Install ipvsadm on server1:
yum install ipvsadm -y
Scheduling policy
ipvsadm -A -t 172.25.0.100:80 -s rr   # set the scheduling policy; rr = round robin (see ipvsadm --help)
-A --add-service   add a new virtual service
-t                 the virtual service uses TCP
-s                 scheduling algorithm
ipvsadm -a -t 172.25.0.100:80 -r 172.25.0.2:80
-g                 use DR (gatewaying) mode for the two back-end RSs
-a                 add a new real server to a virtual service
-g | -m | -i       LVS mode: DR | NAT | TUN
-t                 the virtual service is a TCP service
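The rr scheduler selected with -s rr simply cycles through the configured real servers; a minimal Python sketch of that behavior:

```python
from itertools import cycle

class RoundRobin:
    """Toy model of the 'rr' scheduler ipvsadm configures above:
    each new connection goes to the next server in the list."""
    def __init__(self, real_servers):
        self._it = cycle(real_servers)

    def pick(self):
        return next(self._it)

lb = RoundRobin(["172.25.254.2:80", "172.25.254.3:80"])
picks = [lb.pick() for _ in range(4)]
```

This is exactly the alternating server2/server3 pattern seen in the curl tests later.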
Run the following on server1:
[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.3:80 -g
[root@server1 ~]# ip addr add 172.25.254.100/32 dev eth0
server2
[root@server2 ~]# ip addr add 172.25.254.100/32 dev eth0
server3
[root@server3 ~]# ip addr add 172.25.254.100/32 dev eth0
Check that the rules were added:
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.2:80 Route 1 0 0
-> 172.25.254.3:80 Route 1 0 0
Open port 80 on server2 and server3:
[root@server2 ~]# yum install httpd -y
[root@server2 ~]# cd /var/www/html/
[root@server2 html]# vim index.html
[root@server2 html]# systemctl start httpd
[root@server2 html]#
Install httpd on server3:
[root@server3 ~]# yum install httpd -y
[root@server3 ~]# cd /var/www/html/
[root@server3 html]# ls
[root@server3 html]# vim index.html
server3
[root@server3 html]# systemctl start httpd
Test from the client host:
[kiosk@foundation74 ~]$ curl 172.25.254.100
server3
The ARP protocol explained
ARP stands for Address Resolution Protocol. On an Ethernet segment, data delivery depends on MAC addresses rather than IP addresses, and translating a known IP address into a MAC address is ARP's job. What actually travels on a LAN are frames, and a frame carries the destination host's MAC address. For one host to communicate directly with another on Ethernet, it must know the target host's MAC address; it obtains it through ARP. "Address resolution" is the process by which a host converts a destination IP address into a destination MAC address before sending a frame. ARP's basic function is to look up a target device's MAC address from its IP address so that communication can proceed.
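The resolution process above can be pictured as a cache-on-miss lookup (a toy model: real ARP broadcasts a who-has frame and waits for a reply, here the "LAN" is just a dictionary):

```python
class ArpCache:
    """Toy ARP resolver: on a cache miss, 'broadcast' a request to the
    segment (simulated by a dict lookup) and remember the answer."""
    def __init__(self, lan):
        self.lan = lan        # IP -> MAC of hosts on this segment
        self.cache = {}       # learned IP -> MAC mappings

    def resolve(self, ip):
        if ip not in self.cache:           # miss -> send ARP request
            self.cache[ip] = self.lan[ip]  # reply populates the cache
        return self.cache[ip]

lan = {"172.25.254.2": "52:54:00:00:00:02",
       "172.25.254.3": "52:54:00:00:00:03"}
arp = ArpCache(lan)
mac = arp.resolve("172.25.254.2")
```

Subsequent frames to the same IP reuse the cached entry instead of asking again.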
Configure arptables on server2 and server3:
[root@server2 html]# yum install arptables-0.0.4-8.el7.x86_64 -y
[root@server3 html]# yum install arptables-0.0.4-8.el7.x86_64 -y
Check whether any policies exist yet:
[root@server2 html]# arptables -nL
Chain INPUT (policy ACCEPT)
Chain OUTPUT (policy ACCEPT)
Chain FORWARD (policy ACCEPT)
Configuration:
[root@server2 html]# arptables -A INPUT -d 172.25.254.100 -j DROP   ## drop incoming ARP requests whose target IP is the VIP
[root@server2 html]# arptables -A OUTPUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.2   ## rewrite the source IP of outgoing ARP packets from the VIP to the real server's own IP
[root@server3 html]# arptables -A INPUT -d 172.25.254.100 -j DROP
[root@server3 html]# arptables -A OUTPUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.3
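The effect of the two arptables rules on each real server can be modelled in a few lines of Python (a sketch of the decision, not the kernel's actual ARP path):

```python
def rs_arp_reply(query_ip, vip, rip):
    """Model of the per-RS arptables rules:
    INPUT rule: ARP requests for the VIP are dropped, so the RS stays
    silent and only the director answers for the VIP.
    OUTPUT rule: any ARP the RS does send advertises its real IP (RIP),
    never the VIP."""
    if query_ip == vip:
        return None   # DROP: no reply for the VIP
    return rip        # mangle: source IP rewritten to the RIP

VIP, RIP = "172.25.254.100", "172.25.254.2"
silent = rs_arp_reply(VIP, VIP, RIP)
normal = rs_arp_reply(RIP, VIP, RIP)
```

Without these rules, whichever machine answered the client's ARP for the VIP first would receive all the traffic, bypassing the director.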
Verify the rules are in place:
[root@server2 html]# arptables -nL
Chain INPUT (policy ACCEPT)
-j DROP -d 172.25.254.100
Chain OUTPUT (policy ACCEPT)
-j mangle -s 172.25.254.100 --mangle-ip-s 172.25.254.2
Chain FORWARD (policy ACCEPT)
[root@server3 html]# arptables -nL
Chain INPUT (policy ACCEPT)
-j DROP -d 172.25.254.100
Chain OUTPUT (policy ACCEPT)
-j mangle -s 172.25.254.100 --mangle-ip-s 172.25.254.3
Chain FORWARD (policy ACCEPT)
Then test from the client host:
[kiosk@foundation74 ~]$ curl 172.25.254.100
server2
[kiosk@foundation74 ~]$ curl 172.25.254.100
server3
[kiosk@foundation74 ~]$ curl 172.25.254.100
server2
[kiosk@foundation74 ~]$ curl 172.25.254.100
server3
Check on server1 that the scheduler is taking effect:
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.2:80 Route 1 0 2
-> 172.25.254.3:80 Route 1 0 2
The arp_ignore and arp_announce parameters both relate to ARP: they control how the system answers ARP requests and how it sends them.
These two parameters matter a great deal, especially in the LVS-DR scenario, where their settings directly determine whether DR forwarding works correctly.
arp_ignore controls whether the system replies to ARP requests it receives:
1: reply only if the target IP of the ARP request is a local address configured on the receiving interface.
arp_announce controls how the system chooses the source IP of the ARP requests it sends:
2: ignore the IP packet's source address and always pick the most appropriate local address on the sending interface as the ARP request's source IP.
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -p   # reload settings from the configuration file
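A minimal Python model of what arp_ignore changes (simplified: real kernels have more levels and subnet-scope rules; only levels 0 and 1 are shown):

```python
def should_answer(arp_ignore, target_ip, receiving_iface_ip, local_ips):
    """arp_ignore=0: answer for any locally configured IP, on any NIC.
    arp_ignore=1: answer only if the target IP is configured on the
    NIC that actually received the request."""
    if arp_ignore == 0:
        return target_ip in local_ips
    if arp_ignore == 1:
        return target_ip == receiving_iface_ip
    return False

# On an RS the VIP lives on lo, but ARP requests arrive on eth0:
local = {"172.25.254.2", "172.25.254.100"}
leaks_vip = should_answer(0, "172.25.254.100", "172.25.254.2", local)
stays_silent = should_answer(1, "172.25.254.100", "172.25.254.2", local)
```

With the default arp_ignore=0, the RS would answer for the VIP and steal traffic from the director; arp_ignore=1 keeps it quiet, which is the same outcome the arptables rules achieve.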
How can the director detect back-end failures?
If httpd on server2 is stopped, the client tests show that server2 is still being scheduled:
[root@server2 html]# systemctl stop httpd
[kiosk@foundation74 ~]$ curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[kiosk@foundation74 ~]$ curl 172.25.254.100
server3
[kiosk@foundation74 ~]$ curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[kiosk@foundation74 ~]$ curl 172.25.254.100
server3
Configure the HighAvailability yum repository on server1:
[HighAvailabilithy]
name=HighAvailabilithy
baseurl=http://172.25.254.74/rhel7.3/addons/HighAvailability
gpgcheck=0
enabled=1
[root@server1 yum.repos.d]# yum clean all
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: HighAvailabilithy rhel7.3
Cleaning up everything
[root@server1 yum.repos.d]# yum repolist
Copy the installation packages from the host machine:
[root@server1 ~]# scp [email protected]:/home/kiosk/Desktop/keepalived-2.0.20.tar.gz /root
The authenticity of host '172.25.254.74 (172.25.254.74)' can't be established.
ECDSA key fingerprint is e8:af:d9:6e:84:4e:dd:a9:46:9d:5d:ad:15:be:3c:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.25.254.74' (ECDSA) to the list of known hosts.
[email protected]'s password:
keepalived-2.0.20.tar.gz 100% 1012KB 1.0MB/s 00:00
[root@server1 ~]# scp [email protected]:/home/kiosk/Desktop/ldirectord-3.9.5-3.1.x86_64.rpm /root
[email protected]'s password:
ldirectord-3.9.5-3.1.x86_64.rpm
Install the package:
[root@server1 ~]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y
List the files the package installs:
[root@server1 ~]# rpm -qpl ldirectord-3.9.5-3.1.x86_64.rpm
warning: ldirectord-3.9.5-3.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 7b709911: NOKEY
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz
Copy /usr/share/doc/ldirectord-3.9.5/ldirectord.cf into place:
[root@server1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/
[root@server1 ~]# cd /etc/ha.d/
[root@server1 ha.d]# ls
ldirectord.cf resource.d shellfuncs
Edit the file with vim ldirectord.cf so that failed real servers are detected automatically:
virtual=172.25.254.100:80
real=172.25.254.2:80 gate
real=172.25.254.3:80 gate
fallback=127.0.0.1:80 gate
service=http
scheduler=rr
#persistent=600
#netmask=255.255.255.255
protocol=tcp
checktype=negotiate
checkport=80
request="index.html"
#receive="Test Page"
#virtualhost=www.x.y.z
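ldirectord's job amounts to a loop that probes each real server every delay_loop seconds and rebuilds the ipvs table from the survivors; a minimal Python sketch of that decision (hypothetical helper names, not ldirectord's actual code):

```python
def refresh_pool(backends, probe, fallback="127.0.0.1:80"):
    """Sketch of an ldirectord-style health pass: keep only the real
    servers that answer the probe; if every backend is down, serve
    from the configured fallback instead."""
    alive = [b for b in backends if probe(b)]
    return alive if alive else [fallback]

# Pretend httpd on server2 is stopped, matching the scenario above.
up = {"172.25.254.3:80"}
pool = refresh_pool(["172.25.254.2:80", "172.25.254.3:80"],
                    lambda b: b in up)
```

With checktype=negotiate the real probe is an HTTP GET for request="index.html" rather than a bare TCP connect, but the pool-rebuilding logic is the same.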
[root@server1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.2:80 Route 1 0 0
-> 172.25.254.3:80 Route 1 0 0
Then test:
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server2
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server3
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server2
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server3
Stop httpd on server2, then start ldirectord on server1:
[root@server1 ha.d]# systemctl start ldirectord
[root@server1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.254.3:80 Route 1 0 0
Testing from the host now yields only server3:
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server3
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server3
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server3
What is Keepalived?
Keepalived was originally designed for LVS, specifically to monitor the state of each service node in the cluster. If a node misbehaves or fails, Keepalived detects it and removes the failed node from the cluster. All of this happens automatically, with no manual intervention; the only manual work left is repairing the failed node.
How Keepalived works
Keepalived is built on VRRP, the Virtual Router Redundancy Protocol.
VRRP can be thought of as a protocol for router high availability: N routers providing the same function form a group with one master and several backups. The master holds a VIP that serves clients (the other machines on the LAN use this VIP as their default route) and sends multicast advertisements. When the backups stop receiving VRRP packets, they conclude the master is down and elect a new master from among themselves according to VRRP priority. This guarantees the router remains highly available.
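The election described above can be sketched in a few lines of Python (a simplification: real VRRP also breaks priority ties by IP address and uses advertisement timers rather than a set of live advertisers):

```python
def elect_master(routers, advertising):
    """VRRP election in miniature: among routers still sending
    advertisements, the one with the highest priority owns the VIP."""
    alive = [r for r in routers if r["name"] in advertising]
    return max(alive, key=lambda r: r["priority"])["name"]

# Priorities matching the configuration used below: server1 is MASTER
# with priority 100, server4 is BACKUP with priority 50.
routers = [{"name": "server1", "priority": 100},
           {"name": "server4", "priority": 50}]
normal = elect_master(routers, {"server1", "server4"})
failover = elect_master(routers, {"server4"})   # server1 stopped
```

As soon as server1's advertisements stop, server4 wins the election and takes over the VIP.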
Add a server4 to act as backup for server1.
Install keepalived:
[root@server1 ~]# tar zxf keepalived-2.0.20.tar.gz
[root@server1 ~]# ls
keepalived-2.0.20 keepalived-2.0.20.tar.gz
[root@server1 ~]# cd keepalived-2.0.20/
[root@server1 keepalived-2.0.20]# ls
aclocal.m4 bin_install compile CONTRIBUTORS doc install-sh lib missing TODO
ar-lib build_setup configure COPYING genhash keepalived Makefile.am README.md
AUTHOR ChangeLog configure.ac depcomp INSTALL keepalived.spec.in Makefile.in snap
[root@server1 keepalived-2.0.20]#
yum install gcc openssl-devel -y
[root@server1 keepalived-2.0.20]# ./configure --prefix=/user/local/keepalived --with-init=systemd
The build reports success:
Use IPVS Framework : Yes
Repeat the same steps on server4, then create the following symlinks:
[root@server1 local]# ln -s /user/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 local]# ln -s /user/local/keepalived/etc/keepalived /etc/
[root@server1 local]# ln -s /user/local/keepalived/sbin/keepalived /sbin/
Create the same symlinks on server4 as well.
Stop and disable the ldirectord service:
[root@server1 local]# systemctl stop ldirectord.service
[root@server1 local]# systemctl disable ldirectord.service
ldirectord.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig ldirectord off
Delete the VIP:
[root@server1 local]# ip addr del 172.25.254.100/32 dev eth0
Edit keepalived.conf (the file below is for server1; server4's differences are shown afterwards):
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
#  vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}

virtual_server 172.25.254.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.254.2 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
    real_server 172.25.254.3 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}
Copy it to server4:
[root@server1 keepalived]# scp keepalived.conf [email protected]:/etc/keepalived/
On server4, change the instance to BACKUP with priority 50:
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 50
Install a mail client for the notification e-mails:
yum install mailx -y
The notifications can be read with mail:
[root@server1 keepalived]# mail
Heirloom Mail version 12.5 7/5/10. Type ? for help.
"/var/spool/mail/root": 2 messages 2 new
>N 1 keepalived@localhost Fri Feb 21 06:09 17/678 "[LVS_DEVEL] Realserver [172.25.254.2]:tcp:80 of virtual server [172.25.254.100]:tcp:80 - DO"
N 2 keepalived@localhost Fri Feb 21 06:09 17/677 "[LVS_DEVEL] Realserver [172.25.254.2]:tcp:80 of virtual server [172.25.254.100]:tcp:80 - UP"
&
Status: R
=> TCP CHECK failed on service <=
&
Stop keepalived on server1 and test again; server4 now takes over the service:
[root@server1 keepalived]# systemctl stop keepalived.service
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server2
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server2
How Keepalived failover works
Failover between a pair of Keepalived high-availability servers is implemented via VRRP (Virtual Router Redundancy Protocol).
While Keepalived is working normally, the Master node continuously multicasts heartbeat messages to tell the Backup node it is still alive. When the Master fails, it stops sending heartbeats; the Backup, no longer able to detect them, invokes its takeover routine and claims the Master's IP resources and services. When the Master recovers, the Backup releases the IP resources and services it took over and returns to its standby role.
LVS/Tun: principle and characteristics
An additional IP header is wrapped around the original IP packet: the inner header has source CIP and destination VIP, while the outer header has source DIP and destination RIP.
(a) When a user request reaches the Director Server, it first hits the PREROUTING chain in kernel space. At this point the source IP is the CIP and the destination IP is the VIP.
(b) PREROUTING finds that the packet's destination IP is local and passes it to the INPUT chain.
(c) IPVS checks whether the requested service is a cluster service. If so, it encapsulates the request packet in an additional IP header with source DIP and destination RIP, then passes it to the POSTROUTING chain. The outer source IP is now the DIP and the destination is the RIP.
(d) POSTROUTING sends the packet to the RS based on the newly added outer IP header (because of the extra header, this can be understood as transport through a tunnel). The source IP is the DIP and the destination is the RIP.
(e) The RS receives the packet, sees that the outer destination is its own address, and accepts it. After stripping the outer IP header it finds another IP header inside, whose destination is the VIP on its lo interface, so the RS processes the request. When finished, the reply goes from lo out through eth0 and onward. The source IP is now the VIP and the destination is the CIP.
(f) The response finally reaches the client.
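Steps (a) through (f) amount to one encapsulation on the director and one decapsulation on the RS; a toy Python model (the DIP value 172.25.254.1 is made up for illustration):

```python
def tun_encapsulate(inner, dip, rip):
    """LVS/TUN on the director: wrap the original packet (CIP -> VIP)
    in an outer IP header (DIP -> RIP) so it can cross routed
    networks to reach the real server."""
    return {"outer": {"src": dip, "dst": rip}, "inner": inner}

def rs_decapsulate(pkt, local_ips):
    """On the RS: strip the outer header, verify the inner destination
    (the VIP) is configured locally (on tunl0/lo), and reply to the
    client directly with the VIP as source."""
    inner = pkt["inner"]
    assert inner["dst"] in local_ips
    return {"src": inner["dst"], "dst": inner["src"]}  # response header

inner = {"src": "172.25.254.74", "dst": "172.25.254.100"}  # CIP -> VIP
pkt = tun_encapsulate(inner, "172.25.254.1", "172.25.254.2")
resp = rs_decapsulate(pkt, {"172.25.254.2", "172.25.254.100"})
```

Note that the response never passes back through the director, just as in DR mode; only the request travels through the tunnel.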
Characteristics of the LVS-Tun model
RIP, VIP and DIP are all public addresses
The RS's gateway neither can nor should point to the DIP
All request packets pass through the Director Server, but response packets must not
Port mapping is not supported
The RS's operating system must support tunneling
In practice, DR is the most commonly used mode in enterprises, while NAT is the simplest and most convenient to configure.
TUN mode implementation
[root@server1 keepalived]# ipvsadm -C
[root@server1 keepalived]# modprobe ipip
[root@server1 ~]# ip addr del 172.25.254.100/32 dev eth0
RTNETLINK answers: Cannot assign requested address
[root@server1 ~]# ip addr add 172.25.254.100/32 dev tunl0
Add the IP-tunnel rules:
[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.2:80 -i
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.3:80 -i
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP server1:http rr
-> server2:http Tunnel 1 0 0
-> server3:http Tunnel 1 0 0
Add the tunnel on server2 and server3 as well:
[root@server3 html]# modprobe ipip
[root@server3 html]# ip addr del 172.25.254.100/32 dev eth0
[root@server3 html]# ip addr add 172.25.254.100/32 dev tunl0
[root@server2 html]# modprobe ipip
[root@server2 html]# ip addr del 172.25.254.100/32 dev eth0
[root@server2 html]# ip addr add 172.25.254.100/32 dev tunl0
Bring up the tunnel interface:
ip link set up tunl0
Disable the rp_filter kernel parameters on server2 and server3 so reverse-path validation does not drop the decapsulated packets:
[root@server3 html]# sysctl -a | grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.tunl0.arp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 0
[root@server2 html]# sysctl -a | grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.tunl0.arp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 0
Test from the client host:
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server2
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server3
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server2
[kiosk@foundation74 Desktop]$ curl 172.25.254.100
server3