lvs + keepalived + glusterfs cluster
Preparation:
Hostname | OS | IP address | Role |
---|---|---|---|
master | CentOS7 | 192.168.1.1 | lvs primary director |
backup | CentOS7 | 192.168.1.2 | lvs backup director |
web1 | CentOS7 | 192.168.1.3 | web server 1, glusterfs client |
web2 | CentOS7 | 192.168.1.4 | web server 2, glusterfs client |
node1 | CentOS7 | 192.168.1.5 | distributed filesystem node 1 |
node2 | CentOS7 | 192.168.1.6 | distributed filesystem node 2 |
Download the glusterfs packages from the link below:
https://pan.baidu.com/s/19P8ReLY4fdVnrfYfVD9mnA
Extraction code: 7unm
Lab description:
- Build an lvs-DR web cluster load-balancing system, using keepalived to give the director a hot-standby twin
- Store the data on a glusterfs distributed filesystem cluster
- The cluster IP that serves the web cluster is 192.168.1.188
I. Deploy the lvs + keepalived servers
1. Configure the lvs primary director
192.168.1.1
[root@localhost ~]# hostnamectl set-hostname master #set the hostname
[root@localhost ~]# bash
[root@master ~]# mount /dev/cdrom /media/cdrom #mount the CD-ROM
mount: /dev/sr0 is write-protected, mounting read-only
[root@master ~]# yum -y install keepalived ipvsadm #install the supporting packages
[root@master ~]# systemctl enable keepalived #enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@master ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@master ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.188
    }
}
virtual_server 192.168.1.188 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.1.3 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.1.4 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}
[root@master ~]# ipvsadm -C #clear any existing rules
[root@master ~]# modprobe ip_vs #load the ip_vs kernel module
[root@master ~]# lsmod | grep ip_vs #verify the module is loaded
ip_vs 145497 0
nf_conntrack 139224 1 ip_vs
libcrc32c 12644 3 xfs,ip_vs,nf_conntrack
[root@master ~]# echo "modprobe ip_vs" >> /etc/rc.local #load ip_vs at boot
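Note: on CentOS7, /etc/rc.d/rc.local is not executable by default, so lines appended to it are silently skipped at boot unless the file is made executable (the same applies on the web servers later, where a route is added via rc.local):
[root@master ~]# chmod +x /etc/rc.d/rc.local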
[root@master ~]# systemctl restart keepalived #restart keepalived
[root@master ~]# ip a #check that the floating (virtual) IP is present
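If keepalived is healthy, the VIP appears as a secondary address on ens33 of whichever director currently holds MASTER state; a quick filter:
[root@master ~]# ip a show ens33 | grep 192.168.1.188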
2. Configure the lvs backup director
192.168.1.2
[root@localhost ~]# hostnamectl set-hostname backup #set the hostname
[root@localhost ~]# bash
[root@backup ~]# mount /dev/cdrom /media/cdrom #mount the CD-ROM
mount: /dev/sr0 is write-protected, mounting read-only
[root@backup ~]# yum -y install keepalived ipvsadm #install the supporting packages
[root@backup ~]# systemctl enable keepalived #enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@backup ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@backup ~]# vim /etc/keepalived/keepalived.conf
The backup director's configuration is identical to the primary's, except for three changes:
Change router_id 1 to router_id 2
Change state MASTER to state BACKUP
Change priority 100 to priority 99
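For reference, the affected part of the backup's keepalived.conf then reads as follows (everything else, including the virtual_server block, stays the same as on master):
global_defs {
    router_id 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 1
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.188
    }
}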
[root@backup ~]# modprobe ip_vs #load the ip_vs kernel module
[root@backup ~]# lsmod | grep ip_vs #verify the module is loaded
[root@backup ~]# echo "modprobe ip_vs" >> /etc/rc.local #load ip_vs at boot
[root@backup ~]# systemctl restart keepalived #restart keepalived
To verify that the lvs + keepalived cluster works, first serve test pages from httpd's local document root on each web server.
II. Deploy the node (web) servers
1. Install Apache
192.168.1.3:
[root@localhost ~]# hostnamectl set-hostname web1 #set the hostname
[root@localhost ~]# bash
[root@web1 ~]# mount /dev/cdrom /media/cdrom #mount the CD-ROM
mount: /dev/sr0 is write-protected, mounting read-only
[root@web1 ~]# yum -y install httpd #install httpd
[root@web1 ~]# echo "<h1>This is web1</h1>" > /var/www/html/index.html #write a test page
[root@web1 ~]# systemctl enable httpd #enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@web1 ~]# systemctl start httpd #start httpd
[root@web1 ~]# curl 192.168.1.3 #test locally
<h1>This is web1</h1>
192.168.1.4:
[root@localhost ~]# hostnamectl set-hostname web2 #set the hostname
[root@localhost ~]# bash
[root@web2 ~]# mount /dev/cdrom /media/cdrom #mount the CD-ROM
mount: /dev/sr0 is write-protected, mounting read-only
[root@web2 ~]# yum -y install httpd #install httpd
[root@web2 ~]# echo "<h1>This is web2</h1>" > /var/www/html/index.html #write a test page
[root@web2 ~]# systemctl enable httpd #enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@web2 ~]# systemctl start httpd #start httpd
[root@web2 ~]# curl 192.168.1.4 #test locally
<h1>This is web2</h1>
2. Configure the cluster (VIP) interface
web1 and web2 are configured identically:
[root@web1 ~]# cat <<END >> /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
ONBOOT=yes
IPADDR=192.168.1.188
NETMASK=255.255.255.255
END
[root@web1 ~]# systemctl restart network
[root@web1 ~]# ifconfig lo:0
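If lo:0 does not come up after the network restart, the VIP can also be added on the fly with iproute2 (a one-off command for the current boot only; the ifcfg-lo:0 file above remains the persistent configuration):
[root@web1 ~]# ip addr add 192.168.1.188/32 dev lo label lo:0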
3. Tune /proc kernel parameters to suppress ARP responses for the VIP
web1 and web2 are configured identically:
[root@web1 ~]# cat <<END >> /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
END
[root@web1 ~]# sysctl -p #apply the settings
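These parameters stop the real servers from answering or advertising ARP for the VIP held on lo:0: arp_ignore = 1 replies to an ARP request only if the target IP is configured on the interface the request arrived on, and arp_announce = 2 always sources ARP announcements from the best local address. A quick spot check that the values took effect:
[root@web1 ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce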
4. Add a local route record for the cluster IP address
[root@web1 ~]# echo "/sbin/route add -host 192.168.1.188 dev lo:0" >> /etc/rc.local
[root@web1 ~]# route add -host 192.168.1.188 dev lo:0
5. Verify from a client
1) In a client browser, visit http://192.168.1.188
(192.168.1.188 is the floating IP configured earlier)
2) On the lvs server, check the load-balancing state
[root@master ~]# ipvsadm -ln
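With the rr (round-robin) scheduler and no persistence configured, successive requests should alternate between web1 and web2; a short loop from any client with curl installed makes this visible:
for i in $(seq 1 4); do curl -s http://192.168.1.188; done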
Once verification succeeds, delete the home pages on web1 and web2 to make way for the glusterfs mount:
[root@web1 ~]# rm -rf /var/www/html/*
[root@web1 ~]# ll /var/www/html/
total 0
III. Deploy the glusterfs servers
Add a 20 GB disk to each of node1 and node2 and split it into two 10 GB partitions.
1. Partition the disk
The steps are identical on node1 and node2:
[root@localhost ~]# fdisk /dev/sdb
-------
Command (m for help): n #create a partition
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p #primary partition
Partition number (1-4, default 1): #press Enter
First sector (2048-41943039, default 2048): #press Enter
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +10G #set the size to 10G
Partition 1 of type Linux and of size 10 GiB is set
---------------
Command (m for help): n #create a partition
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): p #primary partition
Partition number (2-4, default 2): #press Enter
First sector (20973568-41943039, default 20973568): #press Enter
Using default value 20973568
Last sector, +sectors or +size{K,M,G} (20973568-41943039, default 41943039): +10G #set the size to 10G
Partition 2 of type Linux and of size 10 GiB is set
Command (m for help): w #write the table to disk and exit
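The same layout can also be produced non-interactively with parted (a sketch, assuming /dev/sdb really is the new, empty 20 GB disk):
[root@localhost ~]# parted -s /dev/sdb mklabel msdos
[root@localhost ~]# parted -s /dev/sdb mkpart primary xfs 1MiB 10GiB
[root@localhost ~]# parted -s /dev/sdb mkpart primary xfs 10GiB 100%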
2. Format and mount
On node1:
[root@localhost ~]# mkfs.xfs /dev/sdb1 #format sdb1
[root@localhost ~]# mkfs.xfs /dev/sdb2 #format sdb2
[root@localhost ~]# mkdir -p /brick1/sdb{1..2} #create the two mount points
[root@localhost ~]# mount /dev/sdb1 /brick1/sdb1 #mount the sdb1 partition
[root@localhost ~]# mount /dev/sdb2 /brick1/sdb2 #mount the sdb2 partition
[root@localhost ~]# cat <<END >> /etc/fstab #mount at boot
/dev/sdb1 /brick1/sdb1 xfs defaults 0 0
/dev/sdb2 /brick1/sdb2 xfs defaults 0 0
END
[root@localhost ~]# df -hT | grep brick1 #verify the mounts
/dev/sdb1 xfs 10G 33M 10G 1% /brick1/sdb1
/dev/sdb2 xfs 10G 33M 10G 1% /brick1/sdb2
On node2:
[root@localhost ~]# mkfs.xfs /dev/sdb1 #format sdb1
[root@localhost ~]# mkfs.xfs /dev/sdb2 #format sdb2
[root@localhost ~]# mkdir -p /brick2/sdb{1..2} #create the two mount points
[root@localhost ~]# mount /dev/sdb1 /brick2/sdb1 #mount the sdb1 partition
[root@localhost ~]# mount /dev/sdb2 /brick2/sdb2 #mount the sdb2 partition
[root@localhost ~]# cat <<END >> /etc/fstab #mount at boot
/dev/sdb1 /brick2/sdb1 xfs defaults 0 0
/dev/sdb2 /brick2/sdb2 xfs defaults 0 0
END
[root@localhost ~]# df -hT | grep brick2 #verify the mounts
/dev/sdb1 xfs 10G 33M 10G 1% /brick2/sdb1
/dev/sdb2 xfs 10G 33M 10G 1% /brick2/sdb2
3. Configure the hosts file
On node1:
[root@localhost ~]# cat <<END >> /etc/hosts
192.168.1.5 node1
192.168.1.6 node2
END
[root@localhost ~]# hostnamectl set-hostname node1 #set the hostname
[root@localhost ~]# bash
On node2:
[root@localhost ~]# cat <<END >> /etc/hosts
192.168.1.5 node1
192.168.1.6 node2
END
[root@localhost ~]# hostnamectl set-hostname node2 #set the hostname
[root@localhost ~]# bash
4. Configure a local yum repository and install the packages
The steps are identical on node1 and node2:
[root@node1 ~]# cd /mnt/
Upload the packages from the gfsrepo folder to this directory.
[root@node1 mnt]# mount /dev/cdrom /media/cdrom #mount the CD-ROM
mount: /dev/sr0 is write-protected, mounting read-only
[root@node1 mnt]# yum -y install attr psmisc #install dependencies
[root@node1 mnt]# vi /etc/yum.repos.d/GLFS.repo
[GLFS]
name=GLFS
baseurl=file:///mnt
enabled=1
gpgcheck=0
[root@node1 mnt]# yum clean all #clear the yum cache
[root@node1 mnt]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
[root@node1 mnt]# systemctl start glusterd #start glusterd
[root@node1 mnt]# systemctl enable glusterd #enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
[root@node1 mnt]# netstat -anpt | grep glusterd #check that the port is listening
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 16040/glusterd
5. Create the cluster and add the nodes
Run this on either node1 or node2:
[root@node1 ~]# gluster peer probe node1
peer probe: success. Probe on localhost not needed
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer status #check peer status
Number of Peers: 1
Hostname: node2
Uuid: 42e9950f-e49d-44f2-80c9-502a31bf6b8b
State: Peer in Cluster (Connected)
6. Create the replicated volumes
Create the first replicated volume:
[root@node1 ~]# gluster volume create rep-web1 replica 2 node1:/brick1/sdb1 node2:/brick2/sdb1 force
volume create: rep-web1: success: please start the volume to access data
[root@node1 ~]# gluster volume start rep-web1 #start the volume
volume start: rep-web1: success
[root@node1 ~]# gluster volume info rep-web1 #check the volume info
Volume Name: rep-web1
Type: Replicate
Volume ID: 8e9ad069-cd65-4db8-8be8-2a5d8bafecce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/brick1/sdb1
Brick2: node2:/brick2/sdb1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Create the second replicated volume:
[root@node1 ~]# gluster volume create rep-web2 replica 2 node1:/brick1/sdb2 node2:/brick2/sdb2 force
volume create: rep-web2: success: please start the volume to access data
[root@node1 ~]# gluster volume start rep-web2 #start the volume
volume start: rep-web2: success
[root@node1 ~]# gluster volume info rep-web2 #check the volume info
Volume Name: rep-web2
Type: Replicate
Volume ID: 2732d944-415b-4625-b192-7274201474ed
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/brick1/sdb2
Brick2: node2:/brick2/sdb2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
IV. Configure the web cluster as glusterfs clients
1. Install the client software on web1 and web2
Mount the CD-ROM and configure the yum repository:
[root@web1 ~]# yum -y install attr psmisc #install dependencies
[root@web1 ~]# cd /mnt/
Upload the packages from the gfsrepo folder to this directory.
[root@web1 mnt]# vi /etc/yum.repos.d/GLFS.repo
[GLFS]
name=GLFS
baseurl=file:///mnt
enabled=1
gpgcheck=0
[root@web1 mnt]# yum clean all #clear the yum cache
[root@web1 mnt]# yum -y install glusterfs glusterfs-fuse
2. Edit the hosts file
[root@web1 mnt]# cat <<END >> /etc/hosts
192.168.1.5 node1
192.168.1.6 node2
END
3. Verify: test connectivity with ping -c 2 node1
4. Mount the glusterfs filesystem
1) On web1:
[root@web1 ~]# mount.glusterfs node1:rep-web1 /var/www/html/
[root@web1 ~]# echo "<h1>This is web1</h1>" > /var/www/html/index.html
2) On web2:
[root@web2 ~]# mount.glusterfs node1:rep-web2 /var/www/html/
[root@web2 ~]# echo "<h1>This is web2</h1>" > /var/www/html/index.html
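These mount.glusterfs mounts do not survive a reboot; to make them persistent, each client can carry an fstab entry mirroring its mount command above (_netdev defers the mount until the network is up):
[root@web1 ~]# echo "node1:rep-web1 /var/www/html glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@web2 ~]# echo "node1:rep-web2 /var/www/html glusterfs defaults,_netdev 0 0" >> /etc/fstab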
Verify: on node1 and node2, inspect the brick (mount) directories to confirm the replicated files exist.
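For example, the index.html written on web1 should land on both bricks of rep-web1, per the brick paths used when the volume was created:
[root@node1 ~]# ls /brick1/sdb1/
[root@node2 ~]# ls /brick2/sdb1/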
V. Project verification
Test 1: shut down the primary director and check whether the whole cluster still works.
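A minimal version of this check: stop keepalived on master, confirm the VIP and the forwarding rules have moved to backup, then confirm the site still answers from a client:
[root@master ~]# systemctl stop keepalived
[root@backup ~]# ip a show ens33 | grep 192.168.1.188 #the VIP should now be here
[root@backup ~]# ipvsadm -ln #backup should now be forwarding to web1 and web2
Then revisit http://192.168.1.188 from a client; responses should keep alternating between web1 and web2.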