MHA is a fairly mature MySQL high-availability solution. During a MySQL failover, MHA performs the database failover automatically and, in the process, preserves data consistency as far as possible, achieving high availability in the true sense. The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). During an automatic failover, MHA tries to save the binary logs from the crashed master to minimize data loss, but this is not always possible. For example, if the master suffers a hardware failure or is unreachable via SSH, MHA cannot save the binary logs; it performs the failover anyway and the most recent data is lost. Using the semi-synchronous replication introduced in MySQL 5.5 greatly reduces this risk, and MHA can be combined with it: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping every node consistent.
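The semi-sync setup mentioned above can be sketched with MySQL's stock semisync plugins. A minimal sketch only; the plugin names are the defaults shipped with MySQL, and the credentials below are illustrative:

```shell
# On the master: load the semisync master plugin and enable it.
mysql -uroot -p123456 -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'; \
  SET GLOBAL rpl_semi_sync_master_enabled = 1;"

# On each slave: load the semisync slave plugin and enable it.
mysql -uroot -p123456 -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so'; \
  SET GLOBAL rpl_semi_sync_slave_enabled = 1;"
```

To make the settings survive a restart, the corresponding `rpl_semi_sync_*_enabled` options would also go into my.cnf.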
MHA currently supports mainly a one-master, multi-slave topology. Building an MHA cluster therefore requires at least three database servers: one master, one candidate (backup) master, and one additional slave.
Now let's start configuring MHA. Since I only have two virtual machines, I had to make do with two hosts, and I hit a few pitfalls along the way. Here is the basic environment:

MySQL1 (master): 172.16.16.34:3306 + MHA Manager + MHA Node
MySQL2 (slave1): 172.16.16.35:3306 + MHA Node
MySQL3 (slave2): 172.16.16.35:3307 + MHA Node

We assume the one-master, two-slave replication setup is already in place. With only two machines available, we'll just make them work.
1: First, install the MHA packages. Before installing MHA itself, install the dependencies.
Node hosts:
yum install -y perl-DBD-MySQL
Manager host:
yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y
These packages are not in the base system repositories, so we first need to add the EPEL third-party repository. Download the release package from https://fedoraproject.org/wiki/EPEL and install it:
[root@localhost yum.repos.d]# rpm -ivh epel-release-6-8.noarch.rpm
After installing it, list the repositories to verify the source is available:
[root@localhost yum.repos.d]# yum repolist
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * epel: mirror.lzu.edu.cn
repo id     repo name                                          status
base        CentOS-6 - Base - 163.com                           6,706
*epel       Extra Packages for Enterprise Linux 6 - x86_64     12,305
extras      CentOS-6 - Extras - 163.com                            45
updates     CentOS-6 - Updates - 163.com                          318
yum         yum                                                 6,367
repolist: 25,741
The EPEL repository is now listed, so we can run the yum commands above to install MHA's dependencies.
With the dependencies installed, install the Node package on both machines, and the Manager package on the master:
[root@localhost sa]# rpm -ivh mha4mysql-node-0.57-0.el7.noarch.rpm
[root@localhost sa]# rpm -ivh mha4mysql-manager-0.57-0.el7.noarch.rpm
I had already downloaded the packages, so I installed them with rpm directly. That completes the installation.
A quick overview of MHA's Manager and Node tool sets.
The Manager package mainly provides the following tools:
masterha_check_ssh checks MHA's SSH configuration
masterha_check_repl checks the MySQL replication status
masterha_manager starts the MHA Manager
masterha_check_status checks the current MHA running status
masterha_master_monitor monitors whether the master is down
masterha_master_switch controls failover (automatic or manual)
masterha_conf_host adds or removes configured server entries
The Node package (these tools are normally invoked by MHA Manager scripts and require no manual operation) mainly provides the following tools:
save_binary_logs saves and copies the master's binary logs
apply_diff_relay_logs identifies differential relay-log events and applies them to the other slaves
filter_mysqlbinlog removes unnecessary ROLLBACK events (no longer used by MHA)
purge_relay_logs purges relay logs (without blocking the SQL thread)
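Of these, purge_relay_logs is worth scheduling on each slave, since MHA relies on relay logs for recovery and expects MySQL's automatic relay-log purging to stay off. A hypothetical crontab entry; the paths, credentials, and timing here are illustrative, not from this setup:

```shell
# Run nightly on each slave; --disable_relay_log_purge keeps relay_log_purge=0
# so MySQL itself never deletes relay logs between runs.
0 4 * * * /usr/bin/purge_relay_logs --user=root --password=123456 \
  --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1
```

Stagger the schedule across slaves so they don't all purge at the same moment.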
2: Configure passwordless SSH between the hosts
Since these two test machines were borrowed from the ops team, getting SSH configured cost me quite a bit of time; and because two servers are standing in for the four machines of a standard MHA layout (one master, two slaves, one manager), a few problems came up along the way.
Generate a key pair on each machine: ssh-keygen -t rsa
Taking one machine as the example, copy .34's public key to the other host:
scp ~/.ssh/id_rsa.pub root@172.16.16.35:/root/.ssh/authorized_keys
Then set the permissions:
chmod 600 /root/.ssh/authorized_keys
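Two details matter in the steps above: scp overwrites authorized_keys rather than appending, and sshd refuses key files with loose permissions. A local sketch of just the file handling; the key string is a stand-in, and a temp directory replaces ~/.ssh:

```shell
set -e
tmpdir=$(mktemp -d)
# stand-in for a real public key generated by ssh-keygen -t rsa
echo "ssh-rsa AAAAB3Nza... root@172.16.16.34" > "$tmpdir/id_rsa.pub"
# append (>>) rather than overwrite, so several hosts' keys can coexist
cat "$tmpdir/id_rsa.pub" >> "$tmpdir/authorized_keys"
chmod 600 "$tmpdir/authorized_keys"   # sshd ignores group/world-accessible key files
stat -c '%a' "$tmpdir/authorized_keys"
```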
In theory that should be enough; let's verify:
[root@localhost .ssh]# masterha_check_ssh --conf=/etc/mha/app1.cnf
Sat May 27 10:11:15 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat May 27 10:11:15 2017 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Sat May 27 10:11:15 2017 - [info] Reading server configuration from /etc/mha/app1.cnf..
Sat May 27 10:11:15 2017 - [info] Starting SSH connection tests..
Sat May 27 10:11:16 2017 - [error][/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm, ln63]
Sat May 27 10:11:15 2017 - [debug]  Connecting via SSH from root@172.16.16.34(172.16.16.34:22) to root@172.16.16.35(172.16.16.35:22)..
ssh: connect to host 172.16.16.34 port 22: Connection refused
Sat May 27 10:11:15 2017 - [error][/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm, ln111] SSH connection from root@172.16.16.34(172.16.16.34:22) to root@172.16.16.35(172.16.16.35:22) failed!
Sat May 27 10:11:16 2017 - [error][/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm, ln63]
Sat May 27 10:11:16 2017 - [debug]  Connecting via SSH from root@172.16.16.35(172.16.16.35:22) to root@172.16.16.34(172.16.16.34:22)..
ssh: connect to host 172.16.16.35 port 22: Connection refused
Sat May 27 10:11:16 2017 - [error][/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm, ln111] SSH connection from root@172.16.16.35(172.16.16.35:22) to root@172.16.16.34(172.16.16.34:22) failed!
Sat May 27 10:11:17 2017 - [error][/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm, ln63]
Sat May 27 10:11:16 2017 - [debug]  Connecting via SSH from root@172.16.16.35(172.16.16.35:22) to root@172.16.16.34(172.16.16.34:22)..
ssh: connect to host 172.16.16.35 port 22: Connection refused
Sat May 27 10:11:16 2017 - [error][/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm, ln111] SSH connection from root@172.16.16.35(172.16.16.35:22) to root@172.16.16.34(172.16.16.34:22) failed!
SSH Configuration Check Failed!
 at /usr/bin/masterha_check_ssh line 44
The check failed. We also need to append each machine's own public key to its authorized_keys (run on both machines):
[root@localhost .ssh]# cat id_rsa.pub >>authorized_keys
Run the check again and it passes:
[root@localhost .ssh]# masterha_check_ssh --conf=/etc/mha/app1.cnf
This check uses MHA's configuration file; here it is:
[root@localhost .ssh]# cat /etc/mha/app1.cnf
[server default]
manager_log=/var/log/mha/app1/manager.log
manager_workdir=/var/log/mha/app1.log
master_binlog_dir=/home/mysql/db3306/log/
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
password=123456
ping_interval=1
remote_workdir=/tmp
repl_password=123456
repl_user=root
report_script=/usr/local/bin/send_report
shutdown_script=""
ssh_user=root
user=root

[server1]
hostname=172.16.16.34
port=3306

[server2]
hostname=172.16.16.35
port=3306
candidate_master=1
check_repl_delay=0

[server3]
hostname=172.16.16.35
port=3307
I created a root@'%' account with full privileges for MHA to use. Since we assume the one-master, two-slave replication is already in place, I won't go over the grants in detail; if you've gotten as far as configuring MHA, those details are routine.
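For reference, the kind of grants involved look roughly like this. A sketch only, since the post simply uses a blanket root@'%' account; the host pattern and password are illustrative:

```shell
# Account MHA uses for management (matches user/password in app1.cnf):
mysql -uroot -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456';"
# Account used for replication (matches repl_user/repl_password):
mysql -uroot -p -e "GRANT REPLICATION SLAVE ON *.* TO 'root'@'%' IDENTIFIED BY '123456';"
```

In production you would use separate, narrower accounts rather than root@'%'.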
3: We can also check replication.
Before that, set read_only=1 on the slaves:
mysql -h172.16.16.35 -P3306 -uroot -p123456 -e'set global read_only=1'
mysql -h172.16.16.35 -P3307 -uroot -p123456 -e'set global read_only=1'
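You can confirm the setting took effect with the same connection parameters:

```shell
mysql -h172.16.16.35 -P3306 -uroot -p123456 -e "SHOW GLOBAL VARIABLES LIKE 'read_only'"
mysql -h172.16.16.35 -P3307 -uroot -p123456 -e "SHOW GLOBAL VARIABLES LIKE 'read_only'"
```

Both should report read_only = ON.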
Then run the check:
[root@localhost .ssh]# masterha_check_repl --conf=/etc/mha/app1.cnf
Sat May 27 15:01:57 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat May 27 15:01:57 2017 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Sat May 27 15:01:57 2017 - [info] Reading server configuration from /etc/mha/app1.cnf..
Sat May 27 15:01:57 2017 - [info] MHA::MasterMonitor version 0.57.
Sat May 27 15:01:57 2017 - [info] GTID failover mode = 1
Sat May 27 15:01:57 2017 - [info] Dead Servers:
Sat May 27 15:01:57 2017 - [info] Alive Servers:
Sat May 27 15:01:57 2017 - [info]   172.16.16.34(172.16.16.34:3306)
Sat May 27 15:01:57 2017 - [info]   172.16.16.35(172.16.16.35:3306)
Sat May 27 15:01:57 2017 - [info]   172.16.16.35(172.16.16.35:3307)
Sat May 27 15:01:57 2017 - [info] Alive Slaves:
Sat May 27 15:01:57 2017 - [info]   172.16.16.35(172.16.16.35:3306)  Version=5.7.14-log (oldest major version between slaves) log-bin:enabled
Sat May 27 15:01:57 2017 - [info]     GTID ON
Sat May 27 15:01:57 2017 - [info]     Replicating from 172.16.16.34(172.16.16.34:3306)
Sat May 27 15:01:57 2017 - [info]     Primary candidate for the new Master (candidate_master is set)
Sat May 27 15:01:57 2017 - [info]   172.16.16.35(172.16.16.35:3307)  Version=5.7.14-log (oldest major version between slaves) log-bin:enabled
Sat May 27 15:01:57 2017 - [info]     GTID ON
Sat May 27 15:01:57 2017 - [info]     Replicating from 172.16.16.34(172.16.16.34:3306)
Sat May 27 15:01:57 2017 - [info] Current Alive Master: 172.16.16.34(172.16.16.34:3306)
Sat May 27 15:01:57 2017 - [info] Checking slave configurations..
Sat May 27 15:01:57 2017 - [info] Checking replication filtering settings..
Sat May 27 15:01:57 2017 - [info]  binlog_do_db= , binlog_ignore_db=
Sat May 27 15:01:57 2017 - [info]  Replication filtering check ok.
Sat May 27 15:01:57 2017 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Sat May 27 15:01:57 2017 - [info] Checking SSH publickey authentication settings on the current master..
Sat May 27 15:01:57 2017 - [info] HealthCheck: SSH to 172.16.16.34 is reachable.
Sat May 27 15:01:57 2017 - [info]
172.16.16.34(172.16.16.34:3306) (current master)
 +--172.16.16.35(172.16.16.35:3306)
 +--172.16.16.35(172.16.16.35:3307)
Sat May 27 15:01:57 2017 - [info] Checking replication health on 172.16.16.35..
Sat May 27 15:01:57 2017 - [info]  ok.
Sat May 27 15:01:57 2017 - [info] Checking replication health on 172.16.16.35..
Sat May 27 15:01:57 2017 - [info]  ok.
Sat May 27 15:01:57 2017 - [warning] master_ip_failover_script is not defined.
Sat May 27 15:01:57 2017 - [warning] shutdown_script is not defined.
Sat May 27 15:01:57 2017 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
Replication is OK. Note that I commented out master_ip_failover_script here. According to 大师兄's blog, MHA supports two failover approaches: a virtual IP address, or a global configuration file. MHA doesn't mandate either one; the virtual-IP approach pulls in additional software such as keepalived and requires modifying the master_ip_failover script, so we leave it commented out for now.
The check succeeds, but with two warnings, because those two scripts are not defined yet; we'll add them later and ignore the warnings for now.
4: Start MHA
[root@localhost .ssh]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[1] 8195
Check MHA's running status:
[root@localhost .ssh]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:8469) is running(0:PING_OK), master:172.16.16.34
It's running, so startup succeeded. Let's look at the log:
[root@localhost masterha]# cat /var/log/mha/app1/manager.log
Sat May 27 15:50:47 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat May 27 15:50:47 2017 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sat May 27 15:50:47 2017 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sat May 27 15:50:47 2017 - [info] MHA::MasterMonitor version 0.57.
Sat May 27 15:50:47 2017 - [warning] /var/log/mha/app1.log/app1.master_status.health already exists. You might have killed manager with SIGKILL(-9), may run two or more monitoring process for the same application, or use the same working directory. Check for details, and consider setting --workdir separately.
Sat May 27 15:50:48 2017 - [info] GTID failover mode = 1
Sat May 27 15:50:48 2017 - [info] Dead Servers:
Sat May 27 15:50:48 2017 - [info] Alive Servers:
Sat May 27 15:50:48 2017 - [info]   172.16.16.34(172.16.16.34:3306)
Sat May 27 15:50:48 2017 - [info]   172.16.16.35(172.16.16.35:3306)
Sat May 27 15:50:48 2017 - [info]   172.16.16.35(172.16.16.35:3307)
Sat May 27 15:50:48 2017 - [info] Alive Slaves:
Sat May 27 15:50:48 2017 - [info]   172.16.16.35(172.16.16.35:3306)  Version=5.7.14-log (oldest major version between slaves) log-bin:enabled
Sat May 27 15:50:48 2017 - [info]     GTID ON
Sat May 27 15:50:48 2017 - [info]     Replicating from 172.16.16.34(172.16.16.34:3306)
Sat May 27 15:50:48 2017 - [info]     Primary candidate for the new Master (candidate_master is set)
Sat May 27 15:50:48 2017 - [info]   172.16.16.35(172.16.16.35:3307)  Version=5.7.14-log (oldest major version between slaves) log-bin:enabled
Sat May 27 15:50:48 2017 - [info]     GTID ON
Sat May 27 15:50:48 2017 - [info]     Replicating from 172.16.16.34(172.16.16.34:3306)
Sat May 27 15:50:48 2017 - [info] Current Alive Master: 172.16.16.34(172.16.16.34:3306)
Sat May 27 15:50:48 2017 - [info] Checking slave configurations..
Sat May 27 15:50:48 2017 - [info] Checking replication filtering settings..
Sat May 27 15:50:48 2017 - [info]  binlog_do_db= , binlog_ignore_db=
Sat May 27 15:50:48 2017 - [info]  Replication filtering check ok.
Sat May 27 15:50:48 2017 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Sat May 27 15:50:48 2017 - [info] Checking SSH publickey authentication settings on the current master..
Sat May 27 15:50:48 2017 - [info] HealthCheck: SSH to 172.16.16.34 is reachable.
Sat May 27 15:50:48 2017 - [info]
172.16.16.34(172.16.16.34:3306) (current master)
 +--172.16.16.35(172.16.16.35:3306)
 +--172.16.16.35(172.16.16.35:3307)
Sat May 27 15:50:48 2017 - [warning] master_ip_failover_script is not defined.
Sat May 27 15:50:48 2017 - [warning] shutdown_script is not defined.
Sat May 27 15:50:48 2017 - [info] Set master ping interval 1 seconds.
Sat May 27 15:50:48 2017 - [info] Set secondary check script: /usr/bin/masterha_secondary_check -s server03 -s server02
Sat May 27 15:50:48 2017 - [info] Starting ping health check on 172.16.16.34(172.16.16.34:3306)..
Sat May 27 15:50:48 2017 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
Shutting it down is just as simple:
[root@localhost .ssh]# masterha_stop --conf=/etc/mha/app1.cnf
5: Managing the VIP:
As mentioned above, there are two ways to manage the VIP: with keepalived, or with a script. The keepalived approach is straightforward: just a master node and a backup node, with keepalived monitoring the MySQL process; configuration-wise it isn't much different from a keepalived + MySQL dual-master setup. For that configuration, see my previous post:
keepalived + MySQL dual-master setup
Below we'll manage the VIP with a script instead, by defining master_ip_failover. Here I use the script from 大师兄's blog directly:
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '172.16.16.20/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
Then manually add the virtual IP on server1:
/sbin/ifconfig eth0:1 172.16.16.20/24
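A quick way to confirm the VIP is actually up, assuming the same eth0 interface name used above:

```shell
/sbin/ifconfig eth0:1      # the alias should show inet addr 172.16.16.20
ping -c 1 172.16.16.20     # should get a reply from the current master
```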
Restart the MHA Manager:
[root@localhost masterha]# masterha_stop --conf=/etc/masterha/app1.cnf
[root@localhost masterha]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[root@localhost masterha]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:3953) is running(0:PING_OK), master:172.16.16.34