Password naming conventions
Password name | Description
Database password (no variable can be used) | Root password for the database
ADMIN_PASS | Password of user admin
CINDER_DBPASS | Database password for the Block Storage service
CINDER_PASS | Password of Block Storage service user cinder
DASH_DBPASS | Database password for the Dashboard
DEMO_PASS | Password of user demo
GLANCE_DBPASS | Database password for the Image service
GLANCE_PASS | Password of Image service user glance
KEYSTONE_DBPASS | Database password for the Identity service
METADATA_SECRET | Secret for the metadata proxy
NEUTRON_DBPASS | Database password for the Networking service
NEUTRON_PASS | Password of Networking service user neutron
NOVA_DBPASS | Database password for the Compute service
NOVA_PASS | Password of Compute service user nova
PLACEMENT_PASS | Password of the Placement service user placement
RABBIT_PASS | Password of RabbitMQ user guest
一、OpenStack HA background
1.1 Why deploy HA?
1. To protect against system downtime
2. To protect against data loss
3. To keep the OpenStack platform itself from going down
4. In short: to provide high availability
1.2 HA details
Service classification
Stateless services:
nova-api, nova-conductor, glance-api, keystone-api, neutron-api, and nova-scheduler. These keep no client state between requests, so they can run active/active on several controllers behind a load balancer.
Stateful services:
Stateful OpenStack services include the OpenStack database and the message queue; their data has to stay consistent across nodes, which is what the Galera and RabbitMQ clusters below provide.
SSH mutual-trust configuration (set up in the environment section below)
二、MySQL cluster
1. Introduction to MariaDB Galera Cluster
Galera Cluster is a free, open-source high-availability clustering solution developed by Codership that achieves zero data loss; the official site is http://galeracluster.com/. It adds wsrep (virtually synchronous replication) on top of the MySQL InnoDB storage engine, and Percona/MariaDB bundle it in their own distributions.
MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB. It only supports the XtraDB/InnoDB storage engines (there is experimental MyISAM support; see the wsrep_replicate_myisam system variable).
Main features of MariaDB Galera Cluster:
- Synchronous replication
- True multi-master: all nodes can read and write the database at the same time
- Automatic membership control: failed nodes are removed from the cluster automatically
- Data is copied automatically to newly joined nodes
- True parallel replication, at row level
- Clients connect to the cluster directly; the experience is exactly the same as with MySQL
Advantages:
- Multi-master, so there is no slave lag
- No lost transactions
- Scales both reads and writes
- Lower client latency
- Data between nodes is synchronous, whereas in Master/Slave mode replication is asynchronous and the binlogs on different slaves can diverge
Drawbacks:
- Adding a new node is expensive because the complete data set has to be copied
- It does not really solve write scaling, since every write is applied on every node
- There are as many full copies of the data as there are nodes
- Commits require cross-node communication (a distributed transaction), so writes are much slower than with master/slave replication; the more nodes, the slower the writes, and deadlocks and rollbacks become more frequent
- It is demanding on the network: if the network is unstable, nodes can lose contact with each other, the Galera Cluster can split-brain, and the service becomes unavailable
There are also some limitations:
- Only the InnoDB/XtraDB storage engine is supported. Writes to tables using any other engine, including the mysql.* tables, are not replicated. DDL statements are replicated, but data inserts such as INSERT INTO mysql.user (a MyISAM table) are not.
- DELETE is not supported on tables without a primary key, because rows in such tables are ordered differently on different nodes; a SELECT ... LIMIT ... can return different result sets.
- LOCK/UNLOCK TABLES and the lock functions GET_LOCK() / RELEASE_LOCK() are not supported for single-table locking; FLUSH TABLES WITH READ LOCK is supported only as a global lock.
- The General Query Log cannot be written to a table; if the query log is enabled it can only go to a file.
- Large transactions are not allowed: a write set may not exceed wsrep_max_ws_rows=131072 rows or wsrep_max_ws_size=1073741824 bytes (1 GB), otherwise the client receives an error.
- The cluster uses optimistic concurrency control, so conflicts can occur at commit time. If two transactions write the same row on different nodes and commit, the losing node rolls back and its client receives a deadlock error.
- XA distributed transactions are not supported by Codership Galera Cluster and may be rolled back at commit time.
- The write throughput of the whole cluster is limited by its weakest node, so all nodes should use the same configuration.
How it works:
Galera replication is certification-based replication; the flow is as follows:
When a client issues COMMIT, all changes the transaction made to the database are collected into a write-set before the transaction is committed, and the write-set is sent to the other nodes.
On every node the write-set goes through a certification test using the primary keys it touches; the result of that test decides whether the node applies the write-set. If certification fails, the node discards the write-set; if it succeeds, the transaction is committed.
When a new node joins, the flow is as follows:
The new node is called the Joiner, and the node that feeds it data is called the Donor. First, the seqno recorded in the Joiner's local grastate.dat file is checked against the Donor's galera.cache; if it is still present there, an Incremental State Transfer (IST) is performed and only the missing transactions are sent. If it is not, a State Snapshot Transfer (SST), i.e. a full copy, is performed. SST supports three copy methods: mysqldump, rsync and xtrabackup; the method is selected with the wsrep_sst_method parameter.
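As a minimal illustration (wsrep_sst_method is the standard Galera parameter; the rsync value here is only an example, not part of the configuration used later), the SST method could be set in the [galera] section of server.cnf and the value in effect checked from the client:
# example addition to /etc/my.cnf.d/server.cnf, [galera] section:
# wsrep_sst_method = rsync
# check the value currently in effect:
mysql -uroot -p -e "SHOW VARIABLES LIKE 'wsrep_sst_method';"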
2. Environment planning
Run on all nodes
Controller1 192.168.16.11
Controller2 192.168.16.12
Controller3 192.168.16.13
The steps below are performed on all three nodes.
Hosts (DNS) configuration
[root@controller01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.16.10 controller
192.168.16.11 controller1
192.168.16.12 controller2
192.168.16.13 controller3
192.168.0.31 computer
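The same /etc/hosts content has to be present on all three controllers; one way to push it out from controller1 is a plain scp (assuming root SSH access, either with a password or after the key exchange below):
[root@controller1 ~]# scp /etc/hosts root@controller2:/etc/hosts
[root@controller1 ~]# scp /etc/hosts root@controller3:/etc/hosts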
SSH mutual-trust configuration
[root@controller1 ~]# ssh-keygen -t rsa
[root@controller2 ~]# ssh-keygen -t rsa
[root@controller3 ~]# ssh-keygen -t rsa
On controller1:
[root@controller1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@controller2
[root@controller1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@controller3
On controller2:
[root@controller2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@controller1
[root@controller2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@controller3
On controller3:
[root@controller3 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@controller1
[root@controller3 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@controller2
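To confirm the mutual trust works, a remote command should run without a password prompt (the output shown assumes each node's hostname matches its name in /etc/hosts):
[root@controller1 ~]# ssh controller2 hostname
controller2
[root@controller1 ~]# ssh controller3 hostname
controller3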
3. Installation
Run on all nodes
Configure the yum repository; a local (domestic) mirror is preferable:
[mariadb]
name = MariaDB
baseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.1/centos7-amd64
gpgkey=https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck=1
scp /etc/yum.repos.d/mariadb.repo root@controller2:/etc/yum.repos.d/mariadb.repo
scp /etc/yum.repos.d/mariadb.repo root@controller3:/etc/yum.repos.d/mariadb.repo
yum install MariaDB-server MariaDB-client galera
4. Configuration
cp server.cnf server.cnf.bak
Configuration on controller1
Note: the database character set must be changed to utf8 before any databases are created; the default is latin1.
[root@controller1 ~]# cat /etc/my.cnf.d/server.cnf | grep -v "^#" | grep -v "^$"
[server]
[mysqld]
collation-server = utf8_general_ci
character-set-server = utf8
[galera]
wsrep_provider= "/usr/lib64/galera/libgalera_smm.so" # Galera provider plugin
wsrep_cluster_address="gcomm://controller2,controller3"
wsrep_cluster_name= MyCluster
wsrep_node_address=192.168.16.11 # node address (may be omitted)
wsrep_node_name=controller1 # node name (may be omitted)
binlog_format=row # binary log format
default_storage_engine=InnoDB
innodb_file_per_table = on
innodb_autoinc_lock_mode=2 # auto-increment lock mode
bind-address=192.168.16.11 # address to listen on
wsrep_on=ON
tmpdir = /tmp
skip-external-locking
skip-name-resolve
max_connections=3600
innodb_log_file_size=100m
event_scheduler = ON
max_allowed_packet = 20M
max_connections = 4096
[embedded]
[mariadb]
[mariadb-10.1]
[root@controller1 ~]#
Configuration on controller2
[root@controller2 ~]# cat /etc/my.cnf.d/server.cnf | grep -v "^#" | grep -v "^$"
[server]
[mysqld]
collation-server = utf8_general_ci
character-set-server = utf8
[galera]
wsrep_provider= "/usr/lib64/galera/libgalera_smm.so" # Galera provider plugin
wsrep_cluster_address="gcomm://controller1,controller3"
wsrep_cluster_name= MyCluster
wsrep_node_address=192.168.16.12 # node address (may be omitted)
wsrep_node_name=controller2 # node name (may be omitted)
binlog_format=row # binary log format
default_storage_engine=InnoDB
innodb_file_per_table = on
innodb_autoinc_lock_mode=2 # auto-increment lock mode
bind-address=192.168.16.12 # address to listen on
wsrep_on=ON
tmpdir = /tmp
skip-external-locking
skip-name-resolve
max_connections=3600
innodb_log_file_size=100m
event_scheduler = ON
max_allowed_packet = 20M
max_connections = 4096
[embedded]
[mariadb]
[mariadb-10.1]
Configuration on controller3
[root@controller3 ~]# cat /etc/my.cnf.d/server.cnf | grep -v "^#" | grep -v "^$"
[server]
[mysqld]
collation-server = utf8_general_ci
character-set-server = utf8
[galera]
wsrep_provider= "/usr/lib64/galera/libgalera_smm.so" # Galera provider plugin
wsrep_cluster_address="gcomm://controller1,controller2"
wsrep_cluster_name= MyCluster
wsrep_node_address=192.168.16.13 # node address (may be omitted)
wsrep_node_name=controller3 # node name (may be omitted)
binlog_format=row # binary log format
default_storage_engine=InnoDB
innodb_file_per_table = on
innodb_autoinc_lock_mode=2 # auto-increment lock mode
bind-address=192.168.16.13 # address to listen on
wsrep_on=ON
tmpdir = /tmp
skip-external-locking
skip-name-resolve
max_connections=3600
innodb_log_file_size=100m
event_scheduler = ON
max_allowed_packet = 20M
max_connections = 4096
[embedded]
[mariadb]
[mariadb-10.1]
5. Initialize the database password
Run on every controller:
mysql_secure_installation
Set the password to Admin123!
6. Bootstrap the first node
Only run this on the first node, controller1:
mysqld --wsrep-new-cluster --user root
Start the database normally on the controller2 and controller3 nodes:
systemctl start mariadb
On controller1, check the number of nodes in the cluster:
MariaDB [(none)]> show status like 'wsrep%';
Confirm that the default character set is now utf8:
MariaDB [(none)]> show variables like '%char%';
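The full wsrep% listing is long; a quicker sanity check uses the standard Galera status variables below (the expected values assume the three-node cluster above is healthy):
MariaDB [(none)]> show status like 'wsrep_cluster_size';        -- expect Value = 3
MariaDB [(none)]> show status like 'wsrep_cluster_status';      -- expect Primary
MariaDB [(none)]> show status like 'wsrep_local_state_comment'; -- expect Synced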
7. How to restart a MariaDB Galera Cluster after all nodes have been stopped
[Scenario]
1. This basically never happens in a production environment.
2. It can happen in a test environment, e.g. a few VMs on your own machine are all powered off; when you try to start the cluster again, it fails.
[Environment]
CentOS 7 + MariaDB 10.1.22 + Galera Cluster
[Solution]
1. For a normal first-time bootstrap of the cluster, use the command galera_new_cluster (other versions may differ).
2. After the whole cluster has been shut down, to start it again, open the node you want to bootstrap from and check its Galera state file:
vim /var/lib/mysql/grastate.dat
Set safe_to_bootstrap to 1 there, or simply use the node that was shut down last: on that node the value is already 1 by default.
3. Re-bootstrap the cluster with:
mysqld --wsrep-new-cluster --user root
4. On the other nodes: systemctl start mariadb
   or /etc/init.d/mysql start
Note: under normal circumstances, stop the nodes one by one, then go to the node that was stopped last and bootstrap the cluster there:
mysqld --wsrep-new-cluster --user root
then start the remaining nodes one after another.
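For reference, grastate.dat on a cleanly stopped node looks roughly like the sketch below (uuid and seqno here are made-up example values); the node whose safe_to_bootstrap is 1 is the one the cluster should be bootstrapped from:
# GALERA saved state
version: 2.1
uuid:    5ee99582-bb8d-11e7-8c4e-a7d4b0d1b4f5
seqno:   1234
safe_to_bootstrap: 1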
8. Memcached cluster setup
Install memcached
[root@controller1 ~]# yum -y install memcached
[root@controller2 ~]# yum -y install memcached
[root@controller3 ~]# yum -y install memcached
Enable memcached at boot
[root@controller1 ~]# systemctl enable memcached
[root@controller2 ~]# systemctl enable memcached
[root@controller3 ~]# systemctl enable memcached
Start memcached
[root@controller1 ~]# systemctl start memcached
[root@controller2 ~]# systemctl start memcached
[root@controller3 ~]# systemctl start memcached
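For the HAProxy frontend configured later to reach memcached on each controller, memcached should listen on the node's own management address rather than only on localhost. A minimal sketch for controller1 (the OPTIONS line follows the CentOS /etc/sysconfig/memcached format; each node uses its own IP):
[root@controller1 ~]# vim /etc/sysconfig/memcached
OPTIONS="-l 192.168.16.11"
[root@controller1 ~]# systemctl restart memcached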
三、RabbitMQ cluster installation
1. Base installation
Install on all controller nodes:
# yum install rabbitmq-server -y
2. Cluster configuration
Configure the listen address (every controller node must be configured with its own local listen address):
[root@controller1 ~]# vim /etc/rabbitmq/rabbitmq-env.conf # this file does not exist by default on RHEL 7.2
RABBITMQ_NODE_IP_ADDRESS=192.168.16.11
RABBITMQ_NODE_PORT=5672
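The same file on the other two controllers points at their own addresses; spelled out here only to make the pattern explicit:
[root@controller2 ~]# cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=192.168.16.12
RABBITMQ_NODE_PORT=5672
[root@controller3 ~]# cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=192.168.16.13
RABBITMQ_NODE_PORT=5672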
Start only the first node:
[root@controller1 ~]# systemctl start rabbitmq-server
Copy the .erlang.cookie file to controller2 and controller3; note that the file's permissions are 400:
[root@controller1 ~]# scp /var/lib/rabbitmq/.erlang.cookie controller2:/var/lib/rabbitmq/
.erlang.cookie 100% 20 0.0KB/s 00:00
[root@controller1 ~]# scp /var/lib/rabbitmq/.erlang.cookie controller3:/var/lib/rabbitmq/
.erlang.cookie 100% 20 0.0KB/s 00:00
[root@controller1 ~]# ll /var/lib/rabbitmq/.erlang.cookie
-r-------- 1 rabbitmq rabbitmq 20 Nov 30 00:00 /var/lib/rabbitmq/.erlang.cookie
[root@controller2 ~]# chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
[root@controller3 ~]# chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
Start the rabbitmq-server service on controller2 and controller3:
# systemctl start rabbitmq-server
Join controller2 and controller3 to controller1 to form the cluster.
Controller2:
[root@controller2 ~]# rabbitmqctl stop_app
Stopping node rabbit@controller2 ...
[root@controller2 ~]# rabbitmqctl join_cluster rabbit@controller1
Clustering node rabbit@controller2 with rabbit@controller1 ...
[root@controller2 ~]# rabbitmqctl start_app
Starting node rabbit@controller2 ...
Controller3:
[root@controller3 ~]# rabbitmqctl stop_app
Stopping node rabbit@controller3 ...
[root@controller3 ~]# rabbitmqctl join_cluster rabbit@controller1
Clustering node rabbit@controller3 with rabbit@controller1 ...
[root@controller3 ~]# rabbitmqctl start_app
Starting node rabbit@controller3 ...
Run rabbitmqctl cluster_status on any node to check the cluster:
[root@controller1 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@controller1 ...
[{nodes,[{disc,[rabbit@controller1,rabbit@controller2,rabbit@controller3]}]},
{running_nodes,[rabbit@controller3,rabbit@controller2,rabbit@controller1]},
{cluster_name,<<"rabbit@controller1">>},
{partitions,[]},
{alarms,[{rabbit@controller3,[]},
{rabbit@controller2,[]},
{rabbit@controller1,[]}]}]
Configure mirrored queues
Run on any one node:
[root@controller1 rabbitmq]# rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'
Setting policy "ha-all" for pattern "^(?!amq\\.).*" to "{\"ha-mode\": \"all\"}" with priority "0" ...
...done.
This makes every queue (except the built-in amq.* queues excluded by the pattern) a mirrored queue: each queue is replicated to all nodes and the nodes stay consistent with each other.
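The policy can be verified from any node with the standard listing subcommand; the ha-all policy created above should appear for the / vhost:
[root@controller1 rabbitmq]# rabbitmqctl list_policies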
Create the openstack user in RabbitMQ:
[root@controller1 rabbitmq]# rabbitmqctl add_user openstack openstack
Creating user "openstack" ...
...done.
[root@controller1 rabbitmq]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
At this point the RabbitMQ high-availability setup is complete.
When configuring the listen addresses for MariaDB Galera and RabbitMQ, the RabbitMQ listen address needs particular care; configure it as follows:
[root@controller1 ~]# chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq-env.conf
[root@controller1 ~]# systemctl restart rabbitmq-server
[root@controller1 ~]# netstat -ntplu | egrep 5672
tcp 0 0 192.168.16.11:5672 0.0.0.0:* LISTEN 29644/beam.smp
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 29644/beam.smp
Notes on rabbitmq-env.conf:
RABBITMQ_NODE_IP_ADDRESS=   // IP address; an empty string binds all addresses, a specific address binds only that interface
RABBITMQ_NODE_PORT=         // TCP port, default 5672
RABBITMQ_NODENAME=          // node name, default rabbit
RABBITMQ_CONFIG_FILE=       // path to the configuration file, i.e. rabbitmq.config
RABBITMQ_MNESIA_BASE=       // path where mnesia data is stored
RABBITMQ_LOG_BASE=          // path where log files are stored
RABBITMQ_PLUGINS_DIR=       // path where plugins are stored
3. Web UI access
[root@controller1 ~]# rabbitmq-plugins enable rabbitmq_management
[root@controller2 ~]# rabbitmq-plugins enable rabbitmq_management
[root@controller3 ~]# rabbitmq-plugins enable rabbitmq_management
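Once the plugin is enabled, the management UI listens on port 15672 of each node (for example http://192.168.16.11:15672). Logging in there with the openstack user created above requires a management tag; a common way to grant it (standard rabbitmqctl subcommand) is shown below. The built-in guest user can normally only log in from localhost.
[root@controller1 ~]# rabbitmqctl set_user_tags openstack administrator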
四、HAProxy + Pacemaker
1. Base installation of pcs and haproxy
Run the following on all three nodes:
# yum install pcs -y
# systemctl start pcsd ; systemctl enable pcsd
# echo 'hacluster' | passwd --stdin hacluster
# yum install haproxy rsyslog -y
# echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf # allow services to bind to the VIP even when it is not configured locally
# echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf # enable kernel IP forwarding
# sysctl -p
On any one node, create the user that HAProxy will use to health-check MariaDB:
MariaDB [(none)]> CREATE USER 'haproxy'@'%' ;
2. Configure HAProxy
Configure HAProxy as the load balancer:
[root@controller1 ~]# egrep -v "^#|^$" /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
frontend mariadb_front
bind 192.168.16.10:3306 # listen on the VIP
mode tcp
default_backend mariadb
backend mariadb
mode tcp
balance roundrobin
server mariadb1 controller1:3306 check
server mariadb2 controller2:3306 check
server mariadb3 controller3:3306 check
frontend memcached_front
bind 192.168.16.10:11211 # listen on the VIP
mode tcp
default_backend memcached
backend memcached
mode tcp
balance roundrobin
server memcached1 controller1:11211 check
server memcached2 controller2:11211 check
server memcached3 controller3:11211 check
[root@controller1 haproxy]#
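The haproxy database user created earlier is what HAProxy's MySQL health check would log in as. If plain TCP checks turn out to be insufficient, the mariadb backend could be extended as sketched below (option mysql-check is standard HAProxy syntax; treat this as an optional refinement rather than part of the configuration shown above):
backend mariadb
mode tcp
balance roundrobin
option mysql-check user haproxy
server mariadb1 controller1:3306 check
server mariadb2 controller2:3306 check
server mariadb3 controller3:3306 check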
The other two nodes use the same configuration; just scp it over.
Notes:
(1) Make sure the HAProxy configuration is correct; it is best to adjust the IPs and ports first and test that HAProxy starts successfully.
(2) MariaDB Galera and RabbitMQ listen on 0.0.0.0 by default; change them to listen on each node's own address (192.168.16.x) so they do not clash with the VIP.
(3) Copy the verified HAProxy configuration to the other nodes; there is no need to start the haproxy service manually (Pacemaker will manage it later).
3. Logging configuration
Configure logging for HAProxy (run on all controller nodes):
# vim /etc/rsyslog.conf
…
$ModLoad imudp
$UDPServerRun 514
…
local2.* /var/log/haproxy/haproxy.log
…
# mkdir -pv /var/log/haproxy/
mkdir: created directory ‘/var/log/haproxy/’
# systemctl restart rsyslog
4. Start HAProxy to verify:
# systemctl start haproxy
# systemctl enable haproxy
[root@controller1 ~]# netstat -ntplu | grep ha
tcp 0 0 192.168.16.10:3306 0.0.0.0:* LISTEN 15467/haproxy
tcp 0 0 192.168.16.10:11211 0.0.0.0:* LISTEN 15467/haproxy
udp 0 0 0.0.0.0:43268 0.0.0.0:* 15466/haproxy
Verification succeeded; stop HAProxy again:
# systemctl stop haproxy
5. Create the controller cluster
Run on the controller1 node:
[root@controller1 ~]# pcs cluster auth controller1 controller2 controller3 -u hacluster -p hacluster --force
controller3: Authorized
controller2: Authorized
controller: Authorized
Create the controller cluster:
[root@controller1 ~]# pcs cluster setup --name openstack-cluster controller1 controller2 controller3 --force
Destroying cluster on nodes: controller1, controller2, controller3...
controller3: Stopping Cluster (pacemaker)...
controller2: Stopping Cluster (pacemaker)...
controller: Stopping Cluster (pacemaker)...
controller3: Successfully destroyed cluster
controller: Successfully destroyed cluster
controller2: Successfully destroyed cluster
Sending 'pacemaker_remote authkey' to 'controller1', 'controller2', 'controller3'
controller3: successful distribution of the file 'pacemaker_remote authkey'
controller: successful distribution of the file 'pacemaker_remote authkey'
controller2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
controller: Succeeded
controller2: Succeeded
controller3: Succeeded
Synchronizing pcsd certificates on nodes controller1, controller2, controller3...
controller3: Success
controller2: Success
controller: Success
Restarting pcsd on the nodes in order to reload the certificates...
controller3: Success
controller2: Success
controller: Success
Start all nodes of the cluster:
[root@controller1 ~]# pcs cluster start --all
controller2: Starting Cluster...
controller: Starting Cluster...
controller3: Starting Cluster...
[root@controller1 ~]# pcs cluster enable --all
controller: Cluster Enabled
controller2: Cluster Enabled
controller3: Cluster Enabled
Check the cluster status:
[root@controller1 ~]# pcs status
Cluster name: openstack-cluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: controller3 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Thu Nov 30 19:30:43 2017
Last change: Thu Nov 30 19:30:17 2017 by hacluster via crmd on controller3
3 nodes configured
0 resources configured
Online: [ controller1 controller2 controller3 ]
No resources
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@controller1 ~]# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: controller3 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Thu Nov 30 19:30:52 2017
Last change: Thu Nov 30 19:30:17 2017 by hacluster via crmd on controller3
3 nodes configured
0 resources configured
PCSD Status:
controller2: Online
controller3: Online
controller: Online
All three nodes are online.
The default quorum rule expects an odd number of nodes, no fewer than three. With only two nodes, if one of them fails the cluster loses quorum, resources do not fail over, and the cluster as a whole becomes unusable. Setting no-quorum-policy="ignore" works around this two-node case, but it should not be used in production; in other words, production clusters should have at least three nodes.
pe-warn-series-max, pe-input-series-max and pe-error-series-max control how much Policy Engine history is kept.
cluster-recheck-interval is how often the cluster state is re-checked.
[root@controller1 ~]# pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5min
Disable STONITH:
STONITH refers to fencing devices that can power a node off on command. This environment has no such device, and if the option is not disabled every pcs command keeps printing errors about it.
[root@controller1 ~]# pcs property set stonith-enabled=false
When there are only two nodes, ignore quorum:
[root@controller1 ~]# pcs property set no-quorum-policy=ignore
Validate the cluster configuration:
[root@controller1 ~]# crm_verify -L -V
Configure the virtual IP for the cluster:
[root@controller1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip="192.168.16.10" cidr_netmask=32 nic=eno16777736 op monitor interval=30s
6. HAProxy cluster resources
Pacemaker + Corosync are here to serve HAProxy; add the haproxy resource to the Pacemaker cluster:
[root@controller1 ~]# pcs resource create lb-haproxy systemd:haproxy --clone
Note: this creates a clone resource; a cloned resource is started on every node, so haproxy is started automatically on all three nodes.
List all OCF resource agents:
[root@controller1 ~]# pcs resource list ocf
List all systemd resources:
[root@controller1 ~]# pcs resource list systemd
Check the Pacemaker resources:
[root@controller1 ~]# pcs resource
ClusterIP (ocf::heartbeat:IPaddr2): Started controller1 # the virtual IP resource
Clone Set: lb-haproxy-clone [lb-haproxy] # the haproxy clone resource
Started: [ controller1 controller2 controller3 ]
Note: the resources must be colocated here, otherwise haproxy keeps running on every node and requests get scattered.
Colocate the two resources on the same node:
[root@controller1 ~]# pcs constraint colocation add lb-haproxy-clone ClusterIP INFINITY
Colocation succeeded:
[root@controller1 ~]# pcs resource
ClusterIP (ocf::heartbeat:IPaddr2): Started controller3
Clone Set: lb-haproxy-clone [lb-haproxy]
Started: [ controller1]
Stopped: [ controller2 controller3 ]
Configure the resource start order: the VIP starts first and haproxy starts after it, because haproxy listens on the VIP:
[root@controller1 ~]# pcs constraint order ClusterIP then lb-haproxy-clone
Manually pin the resource to a preferred node; since the two resources are colocated, moving one automatically moves the other:
[root@controller1 ~]# pcs constraint location ClusterIP prefers controller1
[root@controller1 ~]# pcs resource
ClusterIP (ocf::heartbeat:IPaddr2): Started controller1
Clone Set: lb-haproxy-clone [lb-haproxy]
Started: [ controller1 ]
Stopped: [ controller2 controller3 ]
[root@controller1 ~]# pcs resource defaults resource-stickiness=100 # set resource stickiness so automatic fail-back does not destabilize the cluster
The VIP is now bound to the controller1 node:
[root@controller1 ~]# ip a | grep global
inet 192.168.16.11/24 brd 192.168.0.255 scope global eno16777736
inet 192.168.16.10/32 brd 192.168.0.255 scope global eno16777736
inet 192.168.118.11/24 brd 192.168.118.255 scope global eno33554992
7. Cluster test
mysql -uroot -pAdmin123!
Create a user and grant it all privileges, including remote login:
GRANT ALL PRIVILEGES ON *.* TO 'galera'@'%' IDENTIFIED BY 'galera' WITH GRANT OPTION;
Try connecting to the database remotely through the VIP:
[root@controller1 haproxy]# mysql -ugalera -pgalera -h 192.168.16.10
Check on the controller2 node:
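A simple way to see which Galera node actually served the connection (wsrep_node_name is a standard wsrep variable; the value returned depends on which backend HAProxy picked) is:
[root@controller1 haproxy]# mysql -ugalera -pgalera -h 192.168.16.10 -e "show variables like 'wsrep_node_name';"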