MHA + MySQL 5.7 one-master, two-slave installation steps
1. Install MySQL 5.7
2. Configure master-slave replication with GTID and semi-synchronous replication
3. Set up SSH trust among the three machines
4. Install the MHA node package
5. Install the MHA manager package
6. Check SSH connectivity with the MHA tool
7. Check the replication topology with the MHA tool
8. Add the VIP and start the MHA service
9. Test MHA failover
10. Errors encountered and their fixes
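Steps 6-8 drive MHA from a manager config file. A minimal sketch of one follows; every path and credential is an assumption patterned on this log, and the node2/node3 IPs are hypothetical (only 192.168.88.20 appears in the log):

```ini
# /etc/mha/app1.cnf (hypothetical path)
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
user=root
password=root001
ssh_user=root
repl_user=repl
repl_password=repl
ping_interval=3

[server1]
hostname=192.168.88.20
port=3380
candidate_master=1

[server2]
hostname=192.168.88.21      # hypothetical IP for node2
port=3380
candidate_master=1

[server3]
hostname=192.168.88.22      # hypothetical IP for node3
port=3380
```

The checks in steps 6 and 7 would then be `masterha_check_ssh --conf=/etc/mha/app1.cnf` and `masterha_check_repl --conf=/etc/mha/app1.cnf`.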
The installation log follows:
Install MySQL 5.7
Disable the firewall on CentOS 7
The old command (from CentOS 6) no longer works:
[root@m1 ~]# service iptables stop
Redirecting to /bin/systemctl stop iptables.service
Failed to stop iptables.service: Unit iptables.service not loaded.
Check the firewall status on CentOS 7
[root@m1 ~]# firewall-cmd --state
running
[root@m1 ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since 二 2018-08-14 11:56:12 CST; 1h 17min ago
Docs: man:firewalld(1)
Main PID: 735 (firewalld)
Tasks: 2
CGroup: /system.slice/firewalld.service
└─735 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
8月 14 11:56:03 m1 systemd[1]: Starting firewalld - dynamic firewall daemon...
8月 14 11:56:12 m1 systemd[1]: Started firewalld - dynamic firewall daemon.
Temporarily stop the firewall
[root@m1 ~]# systemctl stop firewalld.service
[root@m1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: inactive (dead) since 二 2018-08-14 13:14:31 CST; 1s ago
Docs: man:firewalld(1)
Process: 735 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
Main PID: 735 (code=exited, status=0/SUCCESS)
8月 14 11:56:03 m1 systemd[1]: Starting firewalld - dynamic firewall daemon...
8月 14 11:56:12 m1 systemd[1]: Started firewalld - dynamic firewall daemon.
8月 14 13:14:30 m1 systemd[1]: Stopping firewalld - dynamic firewall daemon...
8月 14 13:14:31 m1 systemd[1]: Stopped firewalld - dynamic firewall daemon.
Disable the firewall at boot
[root@m1 ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@m1 ~]#
Disable SELinux on CentOS 7
Check the SELinux status
[root@m1 ~]# getenforce
Enforcing
Temporarily disable SELinux
[root@m1 ~]# setenforce 0
[root@m1 ~]# getenforce
Permissive
Permanently disable it: change SELINUX=enforcing to SELINUX=disabled
[root@m1 ~]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
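The vi edit above can be scripted. A sketch using sed, demonstrated on a temporary copy so it is safe to run anywhere (on a real host the target would be /etc/selinux/config, and the change takes effect at the next reboot):

```shell
# Work on a temp copy of the config; substitute /etc/selinux/config on a real host.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Flip enforcing -> disabled.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"

result=$(grep '^SELINUX=' "$cfg")
echo "$result"
rm -f "$cfg"
```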
Set vm.swappiness to a value between 1 and 5
Check the current value
[root@m1 ~]# cat /sys/fs/cgroup/memory/memory.swappiness
30
Set it temporarily
[root@m1 ~]# sysctl -w vm.swappiness=5
vm.swappiness = 5
[root@m1 ~]# cat /sys/fs/cgroup/memory/memory.swappiness
5
Persist it by writing to /etc/sysctl.conf (the file is only read at boot; run sysctl -p to apply it immediately)
[root@m1 ~]# echo vm.swappiness = 5 >> /etc/sysctl.conf
[root@m1 ~]# cat /sys/fs/cgroup/memory/memory.swappiness
5
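Running the echo above a second time would leave duplicate lines in /etc/sysctl.conf. A sketch of an idempotent version (SYSCTL_CONF is a stand-in for /etc/sysctl.conf so the snippet runs unprivileged; on a real host follow it with sysctl -p):

```shell
SYSCTL_CONF=$(mktemp)   # stand-in for /etc/sysctl.conf

# Append the setting only if no vm.swappiness line exists yet.
persist_swappiness() {
  grep -q '^vm.swappiness' "$SYSCTL_CONF" || echo "vm.swappiness = $1" >> "$SYSCTL_CONF"
}

persist_swappiness 5
persist_swappiness 5    # second call is a no-op

result=$(grep -c '^vm.swappiness' "$SYSCTL_CONF")
rm -f "$SYSCTL_CONF"
```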
Modify open files and max user processes
[root@m1 ~]# vi /etc/systemd/system.conf
DefaultLimitNOFILE=65535
#DefaultLimitAS=
DefaultLimitNPROC=65535
#DefaultLimitMEMLOCK=
#DefaultLimitLOCKS=
[root@m1 ~]#
[root@m1 ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 3834
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 3834
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[root@m1 ~]#
After the change (the new limits appear after a reboot, or after systemctl daemon-reexec and a fresh login):
[root@m1 ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 3834
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[root@m1 ~]#
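The DefaultLimit* settings in /etc/systemd/system.conf apply to services started by systemd; interactive logins get their limits from PAM via /etc/security/limits.conf. For completeness, the equivalent entries there would look like this (the mysql user name is an assumption):

```
# /etc/security/limits.conf
mysql  soft  nofile  65535
mysql  hard  nofile  65535
mysql  soft  nproc   65535
mysql  hard  nproc   65535
```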
Install MySQL 5.7
[root@m1 mysqlinstall]# groupadd mysql
[root@m1 mysqlinstall]# useradd -r -g mysql -s /bin/false mysql
[root@m1 mysqlinstall]# cd /usr/local/
[root@m1 local]# ln -s /home/mysqlinstall/mysql-5.7.22-linux-glibc2.12-x86_64 mysql
[root@m1 local]# ll
总用量 0
drwxr-xr-x. 2 root root 6 4月 11 12:59 bin
drwxr-xr-x. 2 root root 6 4月 11 12:59 etc
drwxr-xr-x. 2 root root 6 4月 11 12:59 games
drwxr-xr-x. 2 root root 6 4月 11 12:59 include
drwxr-xr-x. 2 root root 6 4月 11 12:59 lib
drwxr-xr-x. 2 root root 6 4月 11 12:59 lib64
drwxr-xr-x. 2 root root 6 4月 11 12:59 libexec
lrwxrwxrwx 1 root root 54 8月 14 16:20 mysql -> /home/mysqlinstall/mysql-5.7.22-linux-glibc2.12-x86_64
drwxr-xr-x. 2 root root 6 4月 11 12:59 sbin
drwxr-xr-x. 5 root root 49 8月 14 10:03 share
drwxr-xr-x. 2 root root 6 4月 11 12:59 src
[root@m1 local]# cd mysql/
[root@m1 mysql]# ls
bin COPYING docs include lib man README share support-files
[root@m1 mysql]#
Initialize:
[root@node2 mysqldir]# mysqld --defaults-file=/home/mysqldir/my.cnf --basedir=/home/mysqldir --datadir=/home/mysqldir/data --user=mysql --initialize
[root@node2 mysqldir]#
[root@node2 mysqldir]#
[root@node2 mysqldir]# ls
data my.cnf mysql-bin.000001 mysql-bin.index mysql-error.log mysql-slow.log
[root@node2 mysqldir]# cd data/
[root@node2 data]# ls
auto.cnf ib_buffer_pool ibdata1 ib_logfile0 ib_logfile1 mysql performance_schema sys
Start the database
mysqld --defaults-file=/home/mysqldir/my.cnf --user=mysql --datadir=/home/mysqldir/data &
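The trailing & above backgrounds mysqld but does not survive a reboot. A sketch of a systemd unit as an alternative (the unit name is hypothetical; paths follow this log's layout and assume the binary lives under /usr/local/mysql/bin):

```ini
# /etc/systemd/system/mysqld3380.service  (hypothetical unit name)
[Unit]
Description=MySQL 5.7 instance on port 3380
After=network.target

[Service]
User=mysql
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/home/mysqldir/my.cnf --datadir=/home/mysqldir/data
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl daemon-reload followed by systemctl enable --now mysqld3380.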
Log in
[root@node3 mysqldir]# mysql -uroot -p -P3380 -S /home/mysqldir/mysql.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.22-log
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
mysql>
mysql> use mysql
ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.
Change the password
mysql> set password = 'root001';
Query OK, 0 rows affected (0.00 sec)
mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql>
[root@node2 mysqldir]# mysql -uroot -p -P3380 -S /home/mysqldir/mysql.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.22-log
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
mysql>
mysql> set password = 'root001';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
mysql> select host,user from user;
+-----------+---------------+
| host | user |
+-----------+---------------+
| localhost | mysql.session |
| localhost | mysql.sys |
| localhost | root |
+-----------+---------------+
3 rows in set (0.00 sec)
mysql> update user set host='%' where user='root';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
A problem encountered on one node:
[root@node1 mysqldir]# mysql -uroot -p -P3380 -S /home/mysqldir/mysql.sock
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
[root@node1 mysqldir]#
[root@node1 mysqldir]#
[root@node1 mysqldir]# mysql -uroot -p -P3380 -S /home/mysqldir/mysql.sock
Enter password:
ERROR 1862 (HY000): Your password has expired. To log in you must change it using a client that supports expired passwords.
MySQL [mysql]>
MySQL [mysql]> update user set password_expired='N' where user='root';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
MySQL [mysql]> flush privileges;
Query OK, 0 rows affected (0.02 sec)
MySQL [mysql]> exit
Set up the one-master, two-slave GTID replication environment:
Plan:
node1: master; install MHA node
node2: slave; install MHA node and MHA manager
node3: slave; install MHA node
Configure master-slave replication
grant replication slave on *.* to 'repl'@'%' identified by 'repl';
GTID
Add the following parameters to the configuration file:
gtid-mode=on
enforce-gtid-consistency=true
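Because MHA may promote any slave, every node also needs binary logging and log_slave_updates in addition to the two GTID options. A minimal [mysqld] sketch; server_id and file names are assumptions matching this log's layout, and server_id must differ on each node:

```ini
[mysqld]
server_id = 18                   # unique per node; a duplicate triggers the ERROR 1593 seen later
log-bin = mysql-bin
binlog_format = ROW
relay-log = relay-bin
gtid-mode = on
enforce-gtid-consistency = true
log_slave_updates = on           # slaves record replicated transactions in their own binlog
```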
Create the replication account on the master
[root@node2 mysqldir]# mysql -uroot -proot001 -P3380 -S /home/mysqldir/mysql.sock
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.7.22-log MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant replication slave on *.* to 'repl'@'%' identified by 'repl';
Query OK, 0 rows affected, 1 warning (0.05 sec)
mysql> show warnings;
+---------+------+------------------------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message |
+---------+------+------------------------------------------------------------------------------------------------------------------------------------+
| Warning | 1287 | Using GRANT for creating new user is deprecated and will be removed in future release. Create new user with CREATE USER statement. |
+---------+------+------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
In 5.7 the recommendation is to create the user with CREATE USER first, then grant privileges with GRANT:
create user 'repl'@'%' identified by 'repl';
grant replication slave on *.* to 'repl'@'%';
[root@node3 mysqldir]# mysql -uroot -p -P3380 -S /home/mysqldir/mysql.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.22-log MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant replication slave on *.* to 'repl'@'%' identified by 'repl';
Query OK, 0 rows affected, 1 warning (0.01 sec)
[root@node1 mysqldir]# mysql -uroot -proot001 -P3308 -S /home/mysqldir/mysql.sock
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.7.22-log MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> grant replication slave on *.* to 'repl'@'%' identified by 'repl';
Query OK, 0 rows affected, 1 warning (0.03 sec)
mysql> show slave status \G;
Empty set (0.00 sec)
ERROR:
No query specified
(This ERROR is harmless: the trailing ';' after \G sends an empty second statement.)
mysql>
Configure replication
mysql> CHANGE MASTER TO MASTER_HOST='192.168.88.20', MASTER_PORT=3380, MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_AUTO_POSITION=1;
Query OK, 0 rows affected, 2 warnings (0.06 sec)
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.88.20
Master_User: repl
Master_Port: 3380
Connect_Retry: 60
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File:
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 0
Relay_Log_Space: 154
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_UUID:
Master_Info_File: /home/mysqldir/data/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set: da94ed38-a036-11e8-bbc7-525400cdd46d:1-2
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
ERROR:
No query specified
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.88.20
Master_User: repl
Master_Port: 3380
Connect_Retry: 60
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File:
Slave_IO_Running: No
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 0
Relay_Log_Space: 154
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 1593
Last_IO_Error: Fatal error: The slave I/O thread stops because master and slave have equal MySQL server ids; these ids must be different for replication to work (or the --replicate-same-server-id option must be used on slave but this does not always make sense; please check the manual before using it).
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 18
Master_UUID:
Master_Info_File: /home/mysqldir/data/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp: 180815 13:16:16
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set: da94ed38-a036-11e8-bbc7-525400cdd46d:1-2
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
The error above is caused by both nodes having the same server_id.
After changing server_id and restarting MySQL, replication is healthy again:
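A sketch of scripting that fix (demonstrated on a temp file; on a real host the target would be /home/mysqldir/my.cnf, followed by a mysqld restart; NODE_ID is a hypothetical per-host value such as the last octet of the node's IP):

```shell
NODE_ID=19                 # hypothetical: choose a distinct value per host
MY_CNF=$(mktemp)           # stand-in for /home/mysqldir/my.cnf
printf '[mysqld]\nserver_id = 18\n' > "$MY_CNF"

# Rewrite the server_id line in place; on a real host, restart mysqld afterwards.
sed -i "s/^server_id.*/server_id = ${NODE_ID}/" "$MY_CNF"

result=$(grep '^server_id' "$MY_CNF")
rm -f "$MY_CNF"
```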
[root@node3 mysqldir]# ps -ef |grep mysql
mysql 4844 4721 0 10:57 pts/1 00:00:04 mysqld --defaults-file=/home/mysqldir/my.cnf --user=mysql --datadir=/home/mysqldir/data
root 5094 4721 0 13:16 pts/1 00:00:00 grep --color=auto mysql
[root@node3 mysqldir]# kill -9 4844
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.88.20
Master_User: repl
Master_Port: 3380
Connect_Retry: 60
Master_Log_File: mysql-bin.000008
Read_Master_Log_Pos: 194
Relay_Log_File: relay-bin.000009
Relay_Log_Pos: 407
Relay_Master_Log_File: mysql-bin.000008
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 194
Relay_Log_Space: 1136
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 20
Master_UUID: e69195e8-9fdb-11e8-8473-525400e25850
Master_Info_File: /home/mysqldir/data/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: e69195e8-9fdb-11e8-8473-525400e25850:1-5
Executed_Gtid_Set: da94ed38-a036-11e8-bbc7-525400cdd46d:1-2,
e69195e8-9fdb-11e8-8473-525400e25850:1-5
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
ERROR:
No query specified
mysql>
[root@node2 mysqldir]# mysql -uroot -proot001 -P3380 -S /home/mysqldir/mysql.sock
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 5.7.22-log MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
mysql>
mysql> show slave status \G;
Empty set (0.00 sec)
ERROR:
No query specified
mysql> CHANGE MASTER TO MASTER_HOST='192.168.88.20', MASTER_PORT=3380, MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_AUTO_POSITION=1;
Query OK, 0 rows affected, 2 warnings (0.12 sec)
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.88.20
Master_User: repl
Master_Port: 3380
Connect_Retry: 60
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File:
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 0
Relay_Log_Space: 154
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_UUID:
Master_Info_File: /home/mysqldir/data/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set: 7f07d504-9fd8-11e8-80c9-525400829ae9:1-5
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
ERROR:
No query specified
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.88.20
Master_User: repl
Master_Port: 3380
Connect_Retry: 60
Master_Log_File: mysql-bin.000008
Read_Master_Log_Pos: 194
Relay_Log_File: relay-bin.000008
Relay_Log_Pos: 367
Relay_Master_Log_File: mysql-bin.000008
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 194
Relay_Log_Space: 662
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 20
Master_UUID: e69195e8-9fdb-11e8-8473-525400e25850
Master_Info_File: /home/mysqldir/data/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set: 7f07d504-9fd8-11e8-80c9-525400829ae9:1-5
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
ERROR:
No query specified
mysql>
Test: create a database on the master
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)
MySQL [(none)]> create database t_mha DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| t_mha |
+--------------------+
5 rows in set (0.01 sec)
Check the first slave: the database replicated
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| t_mha |
+--------------------+
5 rows in set (0.00 sec)
Check the second slave: the database replicated
mysql>
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| t_mha |
+--------------------+
5 rows in set (0.00 sec)
mysql>
Install the semi-synchronous replication plugins:
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.01 sec)
mysql> show plugins;
+----------------------------+----------+--------------------+--------------------+---------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+--------------------+---------+
| binlog | ACTIVE | STORAGE ENGINE | NULL | GPL |
| mysql_native_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| sha256_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| InnoDB | ACTIVE | STORAGE ENGINE | NULL | GPL |
| INNODB_TRX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCKS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCK_WAITS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE_LRU | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_POOL_STATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_TEMP_TABLE_INFO | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_METRICS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DEFAULT_STOPWORD | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_BEING_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_CONFIG | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_CACHE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_TABLE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESTATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_INDEXES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_COLUMNS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FIELDS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN_COLS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESPACES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_DATAFILES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_VIRTUAL | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| MyISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MRG_MYISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MEMORY | ACTIVE | STORAGE ENGINE | NULL | GPL |
| CSV | ACTIVE | STORAGE ENGINE | NULL | GPL |
| PERFORMANCE_SCHEMA | ACTIVE | STORAGE ENGINE | NULL | GPL |
| BLACKHOLE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| partition | ACTIVE | STORAGE ENGINE | NULL | GPL |
| ARCHIVE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| FEDERATED | DISABLED | STORAGE ENGINE | NULL | GPL |
| ngram | ACTIVE | FTPARSER | NULL | GPL |
| rpl_semi_sync_master | ACTIVE | REPLICATION | semisync_master.so | GPL |
+----------------------------+----------+--------------------+--------------------+---------+
45 rows in set (0.00 sec)
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.01 sec)
mysql> show plugins;
+----------------------------+----------+--------------------+--------------------+---------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+--------------------+---------+
| binlog | ACTIVE | STORAGE ENGINE | NULL | GPL |
| mysql_native_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| sha256_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| InnoDB | ACTIVE | STORAGE ENGINE | NULL | GPL |
| INNODB_TRX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCKS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCK_WAITS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE_LRU | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_POOL_STATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_TEMP_TABLE_INFO | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_METRICS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DEFAULT_STOPWORD | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_BEING_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_CONFIG | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_CACHE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_TABLE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESTATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_INDEXES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_COLUMNS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FIELDS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN_COLS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESPACES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_DATAFILES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_VIRTUAL | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| MyISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MRG_MYISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MEMORY | ACTIVE | STORAGE ENGINE | NULL | GPL |
| CSV | ACTIVE | STORAGE ENGINE | NULL | GPL |
| PERFORMANCE_SCHEMA | ACTIVE | STORAGE ENGINE | NULL | GPL |
| BLACKHOLE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| partition | ACTIVE | STORAGE ENGINE | NULL | GPL |
| ARCHIVE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| FEDERATED | DISABLED | STORAGE ENGINE | NULL | GPL |
| ngram | ACTIVE | FTPARSER | NULL | GPL |
| rpl_semi_sync_master | ACTIVE | REPLICATION | semisync_master.so | GPL |
| rpl_semi_sync_slave | ACTIVE | REPLICATION | semisync_slave.so | GPL |
+----------------------------+----------+--------------------+--------------------+---------+
mysql> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | OFF |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
[root@node2 ~]# mysql -uroot -proot001 -P3380 -S /home/mysqldir/mysql.sock
Server version: 5.7.22-log MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.04 sec)
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.01 sec)
mysql> show plugins;
+----------------------------+----------+--------------------+--------------------+---------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+--------------------+---------+
| binlog | ACTIVE | STORAGE ENGINE | NULL | GPL |
| mysql_native_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| sha256_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| InnoDB | ACTIVE | STORAGE ENGINE | NULL | GPL |
| INNODB_TRX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCKS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCK_WAITS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE_LRU | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_POOL_STATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_TEMP_TABLE_INFO | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_METRICS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DEFAULT_STOPWORD | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_BEING_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_CONFIG | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_CACHE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_TABLE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESTATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_INDEXES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_COLUMNS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FIELDS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN_COLS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESPACES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_DATAFILES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_VIRTUAL | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| MyISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MRG_MYISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MEMORY | ACTIVE | STORAGE ENGINE | NULL | GPL |
| CSV | ACTIVE | STORAGE ENGINE | NULL | GPL |
| PERFORMANCE_SCHEMA | ACTIVE | STORAGE ENGINE | NULL | GPL |
| BLACKHOLE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| partition | ACTIVE | STORAGE ENGINE | NULL | GPL |
| ARCHIVE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| FEDERATED | DISABLED | STORAGE ENGINE | NULL | GPL |
| ngram | ACTIVE | FTPARSER | NULL | GPL |
| rpl_semi_sync_master | ACTIVE | REPLICATION | semisync_master.so | GPL |
| rpl_semi_sync_slave | ACTIVE | REPLICATION | semisync_slave.so | GPL |
+----------------------------+----------+--------------------+--------------------+---------+
46 rows in set (0.00 sec)
mysql> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | OFF |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
Enable semi-synchronous replication.
On the master:
MySQL [(none)]> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | OFF |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.01 sec)
MySQL [(none)]> set global rpl_semi_sync_master_enabled=on;
Query OK, 0 rows affected (0.00 sec)
MySQL [(none)]> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
MySQL [(none)]>
In summary, the dynamic switches are:
set global rpl_semi_sync_master_enabled=on;   -- on the master
set global rpl_semi_sync_slave_enabled=on;    -- on the slaves
On slave 1:
mysql> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | OFF |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
mysql>
mysql> set global rpl_semi_sync_slave_enabled=on;
Query OK, 0 rows affected (0.00 sec)
mysql> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | OFF |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.01 sec)
mysql>
Check the semi-sync status on the master. Rpl_semi_sync_master_clients is 0 because the slaves' I/O threads have not been restarted since the plugin was enabled:
MySQL [(none)]>
MySQL [(none)]> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 0 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
Restart the I/O thread on a slave:
mysql> stop slave io_thread;
Query OK, 0 rows affected (0.00 sec)
mysql> start slave io_thread;
Query OK, 0 rows affected (0.00 sec)
Check again on the master: Rpl_semi_sync_master_clients is now 1.
MySQL [(none)]>
MySQL [(none)]> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 1 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
MySQL [(none)]>
Restart the I/O thread on the other slave as well; the master then reports two clients:
MySQL [(none)]> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
Add the following to the configuration file so semi-synchronous replication is also enabled after the next restart:
rpl_semi_sync_master_enabled=on
rpl_semi_sync_slave_enabled=on
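A fuller my.cnf sketch for persisting these settings is shown below. The plugin-load line is optional here: plugins registered with INSTALL PLUGIN (as done earlier, confirmed by SHOW PLUGINS) reload automatically, so it is included only for completeness. Loading both plugins on every node lets any node take the master role after an MHA failover.

```ini
[mysqld]
# Optional if the plugins were registered with INSTALL PLUGIN;
# loading both lets the node act as master or slave after failover.
plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl_semi_sync_master_enabled = ON
rpl_semi_sync_slave_enabled  = ON
rpl_semi_sync_master_timeout = 10000   # ms to wait for a slave ACK before falling back to async
```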
Fallback from semi-synchronous to asynchronous replication.
On the master:
MySQL [t_mha]> show variables like '%rpl_semi_sync_master_timeout%';
+------------------------------+-------+
| Variable_name | Value |
+------------------------------+-------+
| rpl_semi_sync_master_timeout | 10000 |
+------------------------------+-------+
1 row in set (0.01 sec)
MySQL [t_mha]> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 2 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 692 |
| Rpl_semi_sync_master_tx_wait_time | 692 |
| Rpl_semi_sync_master_tx_waits | 1 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 1 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
Stop the I/O thread on one slave:
mysql> stop slave io_thread;
Query OK, 0 rows affected (0.01 sec)
mysql> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
On the master, Rpl_semi_sync_master_clients drops to 1:
MySQL [t_mha]> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 1 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 2 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 692 |
| Rpl_semi_sync_master_tx_wait_time | 692 |
| Rpl_semi_sync_master_tx_waits | 1 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 1 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.01 sec)
An insert on the master still returns immediately, because rpl_semi_sync_master_wait_for_slave_count is 1: an acknowledgment from a single slave counts as success.
MySQL [t_mha]> insert into test values(1,'xx');
Query OK, 1 row affected (0.01 sec)
Stop the I/O thread on the second slave:
mysql> stop slave io_thread;
Query OK, 0 rows affected (0.00 sec)
mysql>
mysql> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
The master now reports Rpl_semi_sync_master_clients = 0:
MySQL [t_mha]> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 0 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 3 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 726 |
| Rpl_semi_sync_master_tx_wait_time | 1453 |
| Rpl_semi_sync_master_tx_waits | 2 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 2 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
An insert now blocks for 10 seconds: with both slave I/O threads stopped there is no semi-sync client to acknowledge the transaction, so the master waits until rpl_semi_sync_master_timeout (10000 ms) expires and then falls back to asynchronous replication.
MySQL [t_mha]> insert into test values(2,'2xx');
Query OK, 1 row affected (10.00 sec)
MySQL [t_mha]>
Rpl_semi_sync_master_status is now OFF: the master has switched from semi-synchronous to asynchronous replication.
MySQL [t_mha]> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 0 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 3 |
| Rpl_semi_sync_master_no_times | 1 |
| Rpl_semi_sync_master_no_tx | 1 |
| Rpl_semi_sync_master_status | OFF |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 726 |
| Rpl_semi_sync_master_tx_wait_time | 1453 |
| Rpl_semi_sync_master_tx_waits | 2 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 2 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
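Whether the master is still in semi-sync mode can also be checked from a script by parsing this status output. A minimal sketch follows; the heredoc stands in for the real `mysql -e "show global status like 'Rpl_semi_sync_master_status'"` result:

```shell
# Decide the replication mode from SHOW GLOBAL STATUS output.
# The heredoc below is sample input mimicking the query result above.
status_output=$(cat <<'EOF'
Rpl_semi_sync_master_status OFF
EOF
)
# Pick the value column of the status row.
mode=$(echo "$status_output" | awk '/Rpl_semi_sync_master_status/ {print $2}')
if [ "$mode" = "ON" ]; then
  echo "semi-sync active"
else
  echo "fell back to async"
fi
```

In a real monitoring script the heredoc would be replaced by the live query, and the else branch could raise an alert.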
Restart the I/O thread on both slaves to re-enable semi-synchronous replication:
mysql> start slave io_thread;
Query OK, 0 rows affected (0.00 sec)
mysql> start slave io_thread;
Query OK, 0 rows affected (0.00 sec)
Semi-synchronous replication is active again on the master:
MySQL [(none)]> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 4 |
| Rpl_semi_sync_master_no_times | 1 |
| Rpl_semi_sync_master_no_tx | 1 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 726 |
| Rpl_semi_sync_master_tx_wait_time | 1453 |
| Rpl_semi_sync_master_tx_waits | 2 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 2 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
Set up passwordless SSH trust among node1, node2, and node3
[root@node2 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
90:d8:fb:18:6b:1a:3b:31:e6:c3:63:5a:d6:53:a7:ca root@node2
The key's randomart image is:
+--[ RSA 2048]----+
| |
| o . |
| . + |
| o |
| o S . |
| +. * o |
| ++o* o |
| oB* o |
| .oo+E |
+-----------------+
[root@node2 .ssh]#
[root@node2 .ssh]# ls
id_rsa id_rsa.pub known_hosts
[root@node2 .ssh]# ls -al
total 16
drwx------. 2 root root 54 Aug 15 17:54 .
dr-xr-x---. 4 root root 4096 Aug 15 14:11 ..
-rw-------. 1 root root 1679 Aug 15 17:54 id_rsa
-rw-r--r--. 1 root root 392 Aug 15 17:54 id_rsa.pub
-rw-r--r--. 1 root root 181 Jun 29 13:40 known_hosts
[root@node2 .ssh]# cat id_rsa.pub >> authorized_keys
[root@node2 .ssh]#
[root@node2 .ssh]# pwd
/root/.ssh
[root@node2 .ssh]# ll
total 16
-rw-r--r--. 1 root root 392 Aug 15 17:55 authorized_keys
-rw-------. 1 root root 1679 Aug 15 17:54 id_rsa
-rw-r--r--. 1 root root 392 Aug 15 17:54 id_rsa.pub
-rw-r--r--. 1 root root 181 Jun 29 13:40 known_hosts
[root@node2 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuAIaOxDS7ijf/18WCwWVXRyn5+apcIXFoz30KtlM45yd2/HOEYNQZzJEoLpw8luQ64vbwHwAyFGfzKXwoMB8S3niSV8NKRcjhtZPoDxGH9pwdQUgfPJ2+Lry04z2wooBlv/PuQgJUy6WI3EJo0AiplOtQDsKUNpilaWL4nKw8FmxLuc4qguEgq6lCO8IIyIghK3vAAR63ZdDSIHNTs0e+hAKrDpKNDAPvpsGhVcRyBbN/iK4T4T11uYs17ySVjuwWsXUqDXv9+mHWUUQZMeZKF88rftjnpVh+yCv9L9GFYK3wsOlZdlmQ8+PJo5n7QQmUYxKHcYH3bMiAHbiGvWI5 root@node2
[root@node2 .ssh]#
[root@node2 .ssh]#
[root@node2 .ssh]# scp ~/.ssh/authorized_keys node2:~/.ssh/
The authenticity of host 'node2 (192.168.88.18)' can't be established.
ECDSA key fingerprint is 46:38:bc:12:99:36:10:2a:55:a3:84:84:e0:41:4c:47.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.88.18' (ECDSA) to the list of known hosts.
authorized_keys 100% 392 0.4KB/s 00:00
[root@node2 .ssh]# scp ~/.ssh/authorized_keys node3:~/.ssh/
The authenticity of host 'node3 (192.168.88.19)' can't be established.
ECDSA key fingerprint is 2c:53:6f:4a:4b:e3:69:1b:5a:d6:6c:14:1b:0c:36:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3,192.168.88.19' (ECDSA) to the list of known hosts.
root@node3's password:
Permission denied, please try again.
root@node3's password:
authorized_keys 100% 392 0.4KB/s 00:00
[root@node2 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuAIaOxDS7ijf/18WCwWVXRyn5+apcIXFoz30KtlM45yd2/HOEYNQZzJEoLpw8luQ64vbwHwAyFGfzKXwoMB8S3niSV8NKRcjhtZPoDxGH9pwdQUgfPJ2+Lry04z2wooBlv/PuQgJUy6WI3EJo0AiplOtQDsKUNpilaWL4nKw8FmxLuc4qguEgq6lCO8IIyIghK3vAAR63ZdDSIHNTs0e+hAKrDpKNDAPvpsGhVcRyBbN/iK4T4T11uYs17ySVjuwWsXUqDXv9+mHWUUQZMeZKF88rftjnpVh+yCv9L9GFYK3wsOlZdlmQ8+PJo5n7QQmUYxKHcYH3bMiAHbiGvWI5 root@node2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCx16tSC60Vafe0i6uEpQ1JnPL6lcDstuGsLEYfhxo1G1b1cksex6A7CeDTht6p9DaAaP2axfOno/N1fCiuDRTz7xcsXS59M0kTTtwGB6xNXfvr+R58RFlQ7K6c0d46IWoVaYZTBCoXEcTuA1Z72j8tTISGvG2mPv24H2icXQANQUz59kBks+fokLAqp7z8B5j/+UDy57JBfwSNqIFFhzrPnk6SAPYyTaOZXP/BfKuFp4pRscvZU00ORl+oZkO2lmcOl/1iXkKVXIIHYl+LzhIqtbT/+qSJNS4zjBHlvej61WtWeyyZNqsfjTmfwBIE4OwpgVuXayWqcqJq5Xn5MJ4j root@node3
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNl0eXkF2z6ryD4cOixg6YicpJInqKUcsTFsPfrzttrgWXCYNtwACOIuVQ9KqSnjfh+CtGR4LbgQculiasdq64cDYJFYfrOvodh20p237ZKMMsmdnUAuvDGuhLsdPJ+WdG+akYs8Npe71OpRFAmAniRMH47K/dZ/Zv0J1IA772QKQz4Uv/N9hpYME0ofAuw88i8UohAdrg+7531he8RqzCkEBYwLddz/IhlPcdQj6kz9Yb417SKHmWnHsDrwWmvD5epNO/m5/q02xw1Ad7SGbwFmHt5tPbQinR/34ldwDEU9keLHAZuYcLjp1AS2aTX5BLY9cvonclt2MlRKfmSSOx root@node1
[root@node2 .ssh]#
[root@node3 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
69:ad:51:91:e5:46:48:4b:54:de:38:29:16:e4:77:b6 root@node3
The key's randomart image is:
+--[ RSA 2048]----+
| +B=+ |
| ooB + |
| * O + |
| = + + . |
| S . E |
| . o |
| . |
| |
| |
+-----------------+
[root@node3 .ssh]# cat id_rsa.pub >> authorized_keys
[root@node3 .ssh]# ll
total 16
-rw-r--r-- 1 root root 392 Aug 15 17:57 authorized_keys
-rw------- 1 root root 1675 Aug 15 17:56 id_rsa
-rw-r--r-- 1 root root 392 Aug 15 17:56 id_rsa.pub
-rw-r--r-- 1 root root 181 Aug 15 09:29 known_hosts
[root@node3 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCx16tSC60Vafe0i6uEpQ1JnPL6lcDstuGsLEYfhxo1G1b1cksex6A7CeDTht6p9DaAaP2axfOno/N1fCiuDRTz7xcsXS59M0kTTtwGB6xNXfvr+R58RFlQ7K6c0d46IWoVaYZTBCoXEcTuA1Z72j8tTISGvG2mPv24H2icXQANQUz59kBks+fokLAqp7z8B5j/+UDy57JBfwSNqIFFhzrPnk6SAPYyTaOZXP/BfKuFp4pRscvZU00ORl+oZkO2lmcOl/1iXkKVXIIHYl+LzhIqtbT/+qSJNS4zjBHlvej61WtWeyyZNqsfjTmfwBIE4OwpgVuXayWqcqJq5Xn5MJ4j root@node3
[root@node3 .ssh]#
[root@node3 .ssh]#
[root@node3 .ssh]#
[root@node3 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuAIaOxDS7ijf/18WCwWVXRyn5+apcIXFoz30KtlM45yd2/HOEYNQZzJEoLpw8luQ64vbwHwAyFGfzKXwoMB8S3niSV8NKRcjhtZPoDxGH9pwdQUgfPJ2+Lry04z2wooBlv/PuQgJUy6WI3EJo0AiplOtQDsKUNpilaWL4nKw8FmxLuc4qguEgq6lCO8IIyIghK3vAAR63ZdDSIHNTs0e+hAKrDpKNDAPvpsGhVcRyBbN/iK4T4T11uYs17ySVjuwWsXUqDXv9+mHWUUQZMeZKF88rftjnpVh+yCv9L9GFYK3wsOlZdlmQ8+PJo5n7QQmUYxKHcYH3bMiAHbiGvWI5 root@node2
[root@node3 .ssh]# cat id_rsa.pub >> authorized_keys
[root@node3 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuAIaOxDS7ijf/18WCwWVXRyn5+apcIXFoz30KtlM45yd2/HOEYNQZzJEoLpw8luQ64vbwHwAyFGfzKXwoMB8S3niSV8NKRcjhtZPoDxGH9pwdQUgfPJ2+Lry04z2wooBlv/PuQgJUy6WI3EJo0AiplOtQDsKUNpilaWL4nKw8FmxLuc4qguEgq6lCO8IIyIghK3vAAR63ZdDSIHNTs0e+hAKrDpKNDAPvpsGhVcRyBbN/iK4T4T11uYs17ySVjuwWsXUqDXv9+mHWUUQZMeZKF88rftjnpVh+yCv9L9GFYK3wsOlZdlmQ8+PJo5n7QQmUYxKHcYH3bMiAHbiGvWI5 root@node2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCx16tSC60Vafe0i6uEpQ1JnPL6lcDstuGsLEYfhxo1G1b1cksex6A7CeDTht6p9DaAaP2axfOno/N1fCiuDRTz7xcsXS59M0kTTtwGB6xNXfvr+R58RFlQ7K6c0d46IWoVaYZTBCoXEcTuA1Z72j8tTISGvG2mPv24H2icXQANQUz59kBks+fokLAqp7z8B5j/+UDy57JBfwSNqIFFhzrPnk6SAPYyTaOZXP/BfKuFp4pRscvZU00ORl+oZkO2lmcOl/1iXkKVXIIHYl+LzhIqtbT/+qSJNS4zjBHlvej61WtWeyyZNqsfjTmfwBIE4OwpgVuXayWqcqJq5Xn5MJ4j root@node3
[root@node3 .ssh]# scp ~/.ssh/authorized_keys node1:~/.ssh/
The authenticity of host 'node1 (192.168.88.20)' can't be established.
ECDSA key fingerprint is 9b:cb:93:41:32:7e:89:8b:46:73:d0:5d:cb:9d:ab:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.88.20' (ECDSA) to the list of known hosts.
root@node1's password:
authorized_keys 100% 784 0.8KB/s 00:00
[root@node3 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuAIaOxDS7ijf/18WCwWVXRyn5+apcIXFoz30KtlM45yd2/HOEYNQZzJEoLpw8luQ64vbwHwAyFGfzKXwoMB8S3niSV8NKRcjhtZPoDxGH9pwdQUgfPJ2+Lry04z2wooBlv/PuQgJUy6WI3EJo0AiplOtQDsKUNpilaWL4nKw8FmxLuc4qguEgq6lCO8IIyIghK3vAAR63ZdDSIHNTs0e+hAKrDpKNDAPvpsGhVcRyBbN/iK4T4T11uYs17ySVjuwWsXUqDXv9+mHWUUQZMeZKF88rftjnpVh+yCv9L9GFYK3wsOlZdlmQ8+PJo5n7QQmUYxKHcYH3bMiAHbiGvWI5 root@node2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCx16tSC60Vafe0i6uEpQ1JnPL6lcDstuGsLEYfhxo1G1b1cksex6A7CeDTht6p9DaAaP2axfOno/N1fCiuDRTz7xcsXS59M0kTTtwGB6xNXfvr+R58RFlQ7K6c0d46IWoVaYZTBCoXEcTuA1Z72j8tTISGvG2mPv24H2icXQANQUz59kBks+fokLAqp7z8B5j/+UDy57JBfwSNqIFFhzrPnk6SAPYyTaOZXP/BfKuFp4pRscvZU00ORl+oZkO2lmcOl/1iXkKVXIIHYl+LzhIqtbT/+qSJNS4zjBHlvej61WtWeyyZNqsfjTmfwBIE4OwpgVuXayWqcqJq5Xn5MJ4j root@node3
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNl0eXkF2z6ryD4cOixg6YicpJInqKUcsTFsPfrzttrgWXCYNtwACOIuVQ9KqSnjfh+CtGR4LbgQculiasdq64cDYJFYfrOvodh20p237ZKMMsmdnUAuvDGuhLsdPJ+WdG+akYs8Npe71OpRFAmAniRMH47K/dZ/Zv0J1IA772QKQz4Uv/N9hpYME0ofAuw88i8UohAdrg+7531he8RqzCkEBYwLddz/IhlPcdQj6kz9Yb417SKHmWnHsDrwWmvD5epNO/m5/q02xw1Ad7SGbwFmHt5tPbQinR/34ldwDEU9keLHAZuYcLjp1AS2aTX5BLY9cvonclt2MlRKfmSSOx root@node1
[root@node3 .ssh]#
[root@node1 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2d:ad:97:2c:b0:09:ff:59:8d:ed:22:7b:d4:32:16:14 root@node1
The key's randomart image is:
+--[ RSA 2048]----+
| E. |
| . |
| . |
| o. |
| . . S oo |
| o + +==. |
| + oo*oo |
| ..=o. |
| ++ .. |
+-----------------+
[root@node1 .ssh]#
[root@node1 .ssh]# ls
id_rsa id_rsa.pub known_hosts
[root@node1 .ssh]# ll
total 12
-rw-------. 1 root root 1675 Aug 15 17:56 id_rsa
-rw-r--r--. 1 root root 392 Aug 15 17:56 id_rsa.pub
-rw-r--r--. 1 root root 531 Jun 29 14:07 known_hosts
[root@node1 .ssh]# cat id_rsa.pub >> authorized_keys
[root@node1 .ssh]#
[root@node1 .ssh]#
[root@node1 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNl0eXkF2z6ryD4cOixg6YicpJInqKUcsTFsPfrzttrgWXCYNtwACOIuVQ9KqSnjfh+CtGR4LbgQculiasdq64cDYJFYfrOvodh20p237ZKMMsmdnUAuvDGuhLsdPJ+WdG+akYs8Npe71OpRFAmAniRMH47K/dZ/Zv0J1IA772QKQz4Uv/N9hpYME0ofAuw88i8UohAdrg+7531he8RqzCkEBYwLddz/IhlPcdQj6kz9Yb417SKHmWnHsDrwWmvD5epNO/m5/q02xw1Ad7SGbwFmHt5tPbQinR/34ldwDEU9keLHAZuYcLjp1AS2aTX5BLY9cvonclt2MlRKfmSSOx root@node1
[root@node1 .ssh]#
[root@node1 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuAIaOxDS7ijf/18WCwWVXRyn5+apcIXFoz30KtlM45yd2/HOEYNQZzJEoLpw8luQ64vbwHwAyFGfzKXwoMB8S3niSV8NKRcjhtZPoDxGH9pwdQUgfPJ2+Lry04z2wooBlv/PuQgJUy6WI3EJo0AiplOtQDsKUNpilaWL4nKw8FmxLuc4qguEgq6lCO8IIyIghK3vAAR63ZdDSIHNTs0e+hAKrDpKNDAPvpsGhVcRyBbN/iK4T4T11uYs17ySVjuwWsXUqDXv9+mHWUUQZMeZKF88rftjnpVh+yCv9L9GFYK3wsOlZdlmQ8+PJo5n7QQmUYxKHcYH3bMiAHbiGvWI5 root@node2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCx16tSC60Vafe0i6uEpQ1JnPL6lcDstuGsLEYfhxo1G1b1cksex6A7CeDTht6p9DaAaP2axfOno/N1fCiuDRTz7xcsXS59M0kTTtwGB6xNXfvr+R58RFlQ7K6c0d46IWoVaYZTBCoXEcTuA1Z72j8tTISGvG2mPv24H2icXQANQUz59kBks+fokLAqp7z8B5j/+UDy57JBfwSNqIFFhzrPnk6SAPYyTaOZXP/BfKuFp4pRscvZU00ORl+oZkO2lmcOl/1iXkKVXIIHYl+LzhIqtbT/+qSJNS4zjBHlvej61WtWeyyZNqsfjTmfwBIE4OwpgVuXayWqcqJq5Xn5MJ4j root@node3
[root@node1 .ssh]#
[root@node1 .ssh]#
[root@node1 .ssh]# cat id_rsa.pub >> authorized_keys
[root@node1 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuAIaOxDS7ijf/18WCwWVXRyn5+apcIXFoz30KtlM45yd2/HOEYNQZzJEoLpw8luQ64vbwHwAyFGfzKXwoMB8S3niSV8NKRcjhtZPoDxGH9pwdQUgfPJ2+Lry04z2wooBlv/PuQgJUy6WI3EJo0AiplOtQDsKUNpilaWL4nKw8FmxLuc4qguEgq6lCO8IIyIghK3vAAR63ZdDSIHNTs0e+hAKrDpKNDAPvpsGhVcRyBbN/iK4T4T11uYs17ySVjuwWsXUqDXv9+mHWUUQZMeZKF88rftjnpVh+yCv9L9GFYK3wsOlZdlmQ8+PJo5n7QQmUYxKHcYH3bMiAHbiGvWI5 root@node2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCx16tSC60Vafe0i6uEpQ1JnPL6lcDstuGsLEYfhxo1G1b1cksex6A7CeDTht6p9DaAaP2axfOno/N1fCiuDRTz7xcsXS59M0kTTtwGB6xNXfvr+R58RFlQ7K6c0d46IWoVaYZTBCoXEcTuA1Z72j8tTISGvG2mPv24H2icXQANQUz59kBks+fokLAqp7z8B5j/+UDy57JBfwSNqIFFhzrPnk6SAPYyTaOZXP/BfKuFp4pRscvZU00ORl+oZkO2lmcOl/1iXkKVXIIHYl+LzhIqtbT/+qSJNS4zjBHlvej61WtWeyyZNqsfjTmfwBIE4OwpgVuXayWqcqJq5Xn5MJ4j root@node3
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNl0eXkF2z6ryD4cOixg6YicpJInqKUcsTFsPfrzttrgWXCYNtwACOIuVQ9KqSnjfh+CtGR4LbgQculiasdq64cDYJFYfrOvodh20p237ZKMMsmdnUAuvDGuhLsdPJ+WdG+akYs8Npe71OpRFAmAniRMH47K/dZ/Zv0J1IA772QKQz4Uv/N9hpYME0ofAuw88i8UohAdrg+7531he8RqzCkEBYwLddz/IhlPcdQj6kz9Yb417SKHmWnHsDrwWmvD5epNO/m5/q02xw1Ad7SGbwFmHt5tPbQinR/34ldwDEU9keLHAZuYcLjp1AS2aTX5BLY9cvonclt2MlRKfmSSOx root@node1
[root@node1 .ssh]# scp ~/.ssh/authorized_keys node2:~/.ssh/
root@node2's password:
authorized_keys 100% 1176 1.2KB/s 00:00
[root@node1 .ssh]# scp ~/.ssh/authorized_keys node3:~/.ssh/
root@node3's password:
authorized_keys 100% 1176 1.2KB/s 00:00
ssh node1 date
ssh node2 date
ssh node3 date
All three commands should now complete without prompting for a password.
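The three checks above can be wrapped in one loop. For safety this sketch only prints the commands (swap `echo` for direct execution to actually run them); the `-o BatchMode=yes` option makes ssh fail instead of falling back to a password prompt, which is exactly what a trust check wants:

```shell
# Print the passwordless-login check for every node in the cluster.
# Replace `echo` with direct execution once the key exchange is done.
hosts="node1 node2 node3"
for h in $hosts; do
  echo "ssh -o BatchMode=yes $h date"
done
```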
The replication user (repl) was already created during the master-slave setup.
Create the management user for MHA:
create user 'muser'@'%' identified by '123456';
grant all privileges on *.* to 'muser'@'%';
[root@node1 .ssh]# mysql -uroot -p -P3380 -S /home/mysqldir/mysql.sock
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 5.7.22-log MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]>
MySQL [(none)]>
MySQL [(none)]> create user 'muser'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MySQL [(none)]> grant all privileges on *.* to 'muser'@'%';
Query OK, 0 rows affected (0.01 sec)
MySQL [(none)]> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MySQL [mysql]> select host,user from user;
+-----------+---------------+
| host | user |
+-----------+---------------+
| % | muser |
| % | repl |
| % | root |
| localhost | mysql.session |
| localhost | mysql.sys |
+-----------+---------------+
5 rows in set (0.00 sec)
MySQL [mysql]>
Install the MHA Node package
[root@node1 home]# cd mha-node/
[root@node1 mha-node]# ls
mha4mysql-node-master.zip
[root@node1 mha-node]# unzip mha4mysql-node-master.zip
Archive: mha4mysql-node-master.zip
6709262947c58b2f19bbbc8e451a35c14a85e1fb
creating: mha4mysql-node-master/
inflating: mha4mysql-node-master/.gitignore
inflating: mha4mysql-node-master/AUTHORS
inflating: mha4mysql-node-master/COPYING
inflating: mha4mysql-node-master/MANIFEST
inflating: mha4mysql-node-master/MANIFEST.SKIP
inflating: mha4mysql-node-master/Makefile.PL
inflating: mha4mysql-node-master/README
creating: mha4mysql-node-master/bin/
inflating: mha4mysql-node-master/bin/apply_diff_relay_logs
inflating: mha4mysql-node-master/bin/filter_mysqlbinlog
inflating: mha4mysql-node-master/bin/purge_relay_logs
inflating: mha4mysql-node-master/bin/save_binary_logs
creating: mha4mysql-node-master/debian/
inflating: mha4mysql-node-master/debian/changelog
extracting: mha4mysql-node-master/debian/compat
inflating: mha4mysql-node-master/debian/control
inflating: mha4mysql-node-master/debian/copyright
extracting: mha4mysql-node-master/debian/docs
extracting: mha4mysql-node-master/debian/rules
creating: mha4mysql-node-master/lib/
creating: mha4mysql-node-master/lib/MHA/
inflating: mha4mysql-node-master/lib/MHA/BinlogHeaderParser.pm
inflating: mha4mysql-node-master/lib/MHA/BinlogManager.pm
inflating: mha4mysql-node-master/lib/MHA/BinlogPosFindManager.pm
inflating: mha4mysql-node-master/lib/MHA/BinlogPosFinder.pm
inflating: mha4mysql-node-master/lib/MHA/BinlogPosFinderElp.pm
inflating: mha4mysql-node-master/lib/MHA/BinlogPosFinderXid.pm
inflating: mha4mysql-node-master/lib/MHA/NodeConst.pm
inflating: mha4mysql-node-master/lib/MHA/NodeUtil.pm
inflating: mha4mysql-node-master/lib/MHA/SlaveUtil.pm
creating: mha4mysql-node-master/rpm/
inflating: mha4mysql-node-master/rpm/masterha_node.spec
creating: mha4mysql-node-master/t/
inflating: mha4mysql-node-master/t/99-perlcritic.t
inflating: mha4mysql-node-master/t/perlcriticrc
[root@node1 mha-node]# ls
mha4mysql-node-master mha4mysql-node-master.zip
[root@node1 mha-node]# cd mha4mysql-node-master
[root@node1 mha4mysql-node-master]# ls
AUTHORS bin COPYING debian lib Makefile.PL MANIFEST MANIFEST.SKIP README rpm t
[root@node1 mha4mysql-node-master]# yum install perl-DBD-MySQL
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
epel/x86_64/metalink | 5.8 kB 00:00:00
epel | 3.2 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/4): epel/x86_64/updateinfo | 934 kB 00:00:00
(2/4): epel/x86_64/primary | 3.6 MB 00:00:00
(3/4): extras/7/x86_64/primary_db | 174 kB 00:00:00
(4/4): updates/7/x86_64/primary_db | 5.0 MB 00:00:01
Loading mirror speeds from cached hostfile
* base: centos.ustc.edu.cn
* epel: mirrors.huaweicloud.com
* extras: centos.ustc.edu.cn
* updates: linux.cs.nctu.edu.tw
epel 12646/12646
Package perl-DBD-MySQL-4.023-6.el7.x86_64 already installed and latest version
Nothing to do
[root@node1 mha4mysql-node-master]#
[root@node1 mha4mysql-node-master]# yum -y install perl-CPAN*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.ustc.edu.cn
* epel: mirrors.huaweicloud.com
* extras: centos.ustc.edu.cn
* updates: linux.cs.nctu.edu.tw
Resolving Dependencies
--> Running transaction check
---> Package perl-CPAN.noarch 0:1.9800-292.el7 will be installed
--> Processing Dependency: perl(local::lib) for package: perl-CPAN-1.9800-292.el7.noarch
--> Processing Dependency: perl(ExtUtils::MakeMaker) for package: perl-CPAN-1.9800-292.el7.noarch
[root@node1 mha4mysql-node-master]# perl Makefile.PL
Can't locate inc/Module/Install.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 1.
BEGIN failed--compilation aborted at Makefile.PL line 1.
[root@node1 mha4mysql-node-master]# yum install *Module::Install*
[root@node1 mha4mysql-node-master]# perl Makefile.PL
include /home/mha-node/mha4mysql-node-master/inc/Module/Install.pm
include inc/Module/Install/Metadata.pm
include inc/Module/Install/Base.pm
include inc/Module/Install/Makefile.pm
include inc/Module/Install/Scripts.pm
include inc/Module/Install/AutoInstall.pm
include inc/Module/Install/Include.pm
include inc/Module/AutoInstall.pm
*** Module::AutoInstall version 1.06
*** Checking for Perl dependencies...
[Core Features]
- DBI ...loaded. (1.627)
- DBD::mysql ...loaded. (4.023)
*** Module::AutoInstall configuration finished.
include inc/Module/Install/WriteAll.pm
include inc/Module/Install/Win32.pm
include inc/Module/Install/Can.pm
include inc/Module/Install/Fetch.pm
Checking if your kit is complete...
Warning: the following files are missing in your kit:
META.yml
Please inform the author.
Writing Makefile for mha4mysql::node
Writing MYMETA.yml and MYMETA.json
Writing META.yml
[root@node1 mha4mysql-node-master]#
[root@node1 mha4mysql-node-master]#
[root@node1 mha4mysql-node-master]# make
cp lib/MHA/BinlogManager.pm blib/lib/MHA/BinlogManager.pm
cp lib/MHA/BinlogPosFindManager.pm blib/lib/MHA/BinlogPosFindManager.pm
cp lib/MHA/BinlogPosFinderXid.pm blib/lib/MHA/BinlogPosFinderXid.pm
cp lib/MHA/BinlogHeaderParser.pm blib/lib/MHA/BinlogHeaderParser.pm
cp lib/MHA/BinlogPosFinder.pm blib/lib/MHA/BinlogPosFinder.pm
cp lib/MHA/BinlogPosFinderElp.pm blib/lib/MHA/BinlogPosFinderElp.pm
cp lib/MHA/NodeUtil.pm blib/lib/MHA/NodeUtil.pm
cp lib/MHA/SlaveUtil.pm blib/lib/MHA/SlaveUtil.pm
cp lib/MHA/NodeConst.pm blib/lib/MHA/NodeConst.pm
cp bin/filter_mysqlbinlog blib/script/filter_mysqlbinlog
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/filter_mysqlbinlog
cp bin/apply_diff_relay_logs blib/script/apply_diff_relay_logs
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/apply_diff_relay_logs
cp bin/purge_relay_logs blib/script/purge_relay_logs
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/purge_relay_logs
cp bin/save_binary_logs blib/script/save_binary_logs
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/save_binary_logs
Manifying blib/man1/filter_mysqlbinlog.1
Manifying blib/man1/apply_diff_relay_logs.1
Manifying blib/man1/purge_relay_logs.1
Manifying blib/man1/save_binary_logs.1
[root@node1 mha4mysql-node-master]# make install
Installing /usr/local/share/perl5/MHA/BinlogManager.pm
Installing /usr/local/share/perl5/MHA/BinlogPosFindManager.pm
Installing /usr/local/share/perl5/MHA/BinlogPosFinderXid.pm
Installing /usr/local/share/perl5/MHA/BinlogHeaderParser.pm
Installing /usr/local/share/perl5/MHA/BinlogPosFinder.pm
Installing /usr/local/share/perl5/MHA/BinlogPosFinderElp.pm
Installing /usr/local/share/perl5/MHA/NodeUtil.pm
Installing /usr/local/share/perl5/MHA/SlaveUtil.pm
Installing /usr/local/share/perl5/MHA/NodeConst.pm
Installing /usr/local/share/man/man1/filter_mysqlbinlog.1
Installing /usr/local/share/man/man1/apply_diff_relay_logs.1
Installing /usr/local/share/man/man1/purge_relay_logs.1
Installing /usr/local/share/man/man1/save_binary_logs.1
Installing /usr/local/bin/filter_mysqlbinlog
Installing /usr/local/bin/apply_diff_relay_logs
Installing /usr/local/bin/purge_relay_logs
Installing /usr/local/bin/save_binary_logs
Appending installation info to /usr/lib64/perl5/perllocal.pod
[root@node1 mha4mysql-node-master]#
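With `make install` finished, the four node utilities should now be on the PATH. A quick sanity check (just a sketch; it prints `missing` for anything that did not install):

```shell
# check that the MHA node utilities installed by `make install` are on the PATH
for tool in save_binary_logs apply_diff_relay_logs filter_mysqlbinlog purge_relay_logs; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: missing"
  fi
done
```

Run this on every node (node1, node2, node3): the manager calls these utilities over SSH on all hosts during a failover, so the node package must be installed everywhere.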
[root@node2 mha-master]# ls
mha4mysql-manager-master.zip
[root@node2 mha-master]# unzip mha4mysql-manager-master.zip
Archive: mha4mysql-manager-master.zip
abe11f9abb74bc5886012ff5cc9e209fb9b093c1
creating: mha4mysql-manager-master/
inflating: mha4mysql-manager-master/.gitignore
inflating: mha4mysql-manager-master/AUTHORS
inflating: mha4mysql-manager-master/COPYING
inflating: mha4mysql-manager-master/MANIFEST
inflating: mha4mysql-manager-master/MANIFEST.SKIP
inflating: mha4mysql-manager-master/Makefile.PL
inflating: mha4mysql-manager-master/README
creating: mha4mysql-manager-master/bin/
inflating: mha4mysql-manager-master/bin/masterha_check_repl
inflating: mha4mysql-manager-master/bin/masterha_check_ssh
inflating: mha4mysql-manager-master/bin/masterha_check_status
inflating: mha4mysql-manager-master/bin/masterha_conf_host
inflating: mha4mysql-manager-master/bin/masterha_manager
inflating: mha4mysql-manager-master/bin/masterha_master_monitor
inflating: mha4mysql-manager-master/bin/masterha_master_switch
inflating: mha4mysql-manager-master/bin/masterha_secondary_check
inflating: mha4mysql-manager-master/bin/masterha_stop
creating: mha4mysql-manager-master/debian/
inflating: mha4mysql-manager-master/debian/changelog
extracting: mha4mysql-manager-master/debian/compat
inflating: mha4mysql-manager-master/debian/control
inflating: mha4mysql-manager-master/debian/copyright
extracting: mha4mysql-manager-master/debian/docs
extracting: mha4mysql-manager-master/debian/rules
creating: mha4mysql-manager-master/lib/
creating: mha4mysql-manager-master/lib/MHA/
inflating: mha4mysql-manager-master/lib/MHA/Config.pm
inflating: mha4mysql-manager-master/lib/MHA/DBHelper.pm
inflating: mha4mysql-manager-master/lib/MHA/FileStatus.pm
inflating: mha4mysql-manager-master/lib/MHA/HealthCheck.pm
inflating: mha4mysql-manager-master/lib/MHA/ManagerAdmin.pm
inflating: mha4mysql-manager-master/lib/MHA/ManagerAdminWrapper.pm
inflating: mha4mysql-manager-master/lib/MHA/ManagerConst.pm
inflating: mha4mysql-manager-master/lib/MHA/ManagerUtil.pm
inflating: mha4mysql-manager-master/lib/MHA/MasterFailover.pm
inflating: mha4mysql-manager-master/lib/MHA/MasterMonitor.pm
inflating: mha4mysql-manager-master/lib/MHA/MasterRotate.pm
inflating: mha4mysql-manager-master/lib/MHA/SSHCheck.pm
inflating: mha4mysql-manager-master/lib/MHA/Server.pm
inflating: mha4mysql-manager-master/lib/MHA/ServerManager.pm
creating: mha4mysql-manager-master/rpm/
inflating: mha4mysql-manager-master/rpm/masterha_manager.spec
creating: mha4mysql-manager-master/samples/
creating: mha4mysql-manager-master/samples/conf/
inflating: mha4mysql-manager-master/samples/conf/app1.cnf
inflating: mha4mysql-manager-master/samples/conf/masterha_default.cnf
creating: mha4mysql-manager-master/samples/scripts/
inflating: mha4mysql-manager-master/samples/scripts/master_ip_failover
inflating: mha4mysql-manager-master/samples/scripts/master_ip_online_change
inflating: mha4mysql-manager-master/samples/scripts/power_manager
inflating: mha4mysql-manager-master/samples/scripts/send_report
creating: mha4mysql-manager-master/t/
inflating: mha4mysql-manager-master/t/99-perlcritic.t
inflating: mha4mysql-manager-master/t/perlcriticrc
creating: mha4mysql-manager-master/tests/
inflating: mha4mysql-manager-master/tests/intro.txt
inflating: mha4mysql-manager-master/tests/run_suites.sh
creating: mha4mysql-manager-master/tests/t/
inflating: mha4mysql-manager-master/tests/t/bulk_tran_insert.pl
inflating: mha4mysql-manager-master/tests/t/change_relay_log_info.sh
inflating: mha4mysql-manager-master/tests/t/check
inflating: mha4mysql-manager-master/tests/t/env.sh
inflating: mha4mysql-manager-master/tests/t/force_start_m.sh
inflating: mha4mysql-manager-master/tests/t/grant.sql
inflating: mha4mysql-manager-master/tests/t/grant_nopass.sql
inflating: mha4mysql-manager-master/tests/t/init.sh
inflating: mha4mysql-manager-master/tests/t/insert.pl
inflating: mha4mysql-manager-master/tests/t/insert_binary.pl
inflating: mha4mysql-manager-master/tests/t/kill_m.sh
inflating: mha4mysql-manager-master/tests/t/master_ip_failover
inflating: mha4mysql-manager-master/tests/t/master_ip_failover_blank
inflating: mha4mysql-manager-master/tests/t/master_ip_online_change
inflating: mha4mysql-manager-master/tests/t/mha_test.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_binlog.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_connect.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_err1.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_err2.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_gtid_fail1.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_gtid_fail2.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_gtid_ok.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_ignore.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_large.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_latest.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_mm.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_mm_online.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_multi.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_multi_online.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_nobinlog.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_nopass.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_online.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_online_pass.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_pass.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_reset.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/mha_test_ssh.cnf.tmpl
inflating: mha4mysql-manager-master/tests/t/my-row.cnf
inflating: mha4mysql-manager-master/tests/t/my.cnf
inflating: mha4mysql-manager-master/tests/t/run.sh
inflating: mha4mysql-manager-master/tests/t/run_bg.sh
inflating: mha4mysql-manager-master/tests/t/run_tests
extracting: mha4mysql-manager-master/tests/t/start_m.sh
extracting: mha4mysql-manager-master/tests/t/start_s1.sh
extracting: mha4mysql-manager-master/tests/t/start_s2.sh
extracting: mha4mysql-manager-master/tests/t/start_s4.sh
extracting: mha4mysql-manager-master/tests/t/stop_m.sh
extracting: mha4mysql-manager-master/tests/t/stop_s1.sh
extracting: mha4mysql-manager-master/tests/t/stop_s2.sh
extracting: mha4mysql-manager-master/tests/t/stop_s4.sh
inflating: mha4mysql-manager-master/tests/t/t_4tier.sh
inflating: mha4mysql-manager-master/tests/t/t_4tier_subm_dead.sh
inflating: mha4mysql-manager-master/tests/t/t_advisory_connect.sh
inflating: mha4mysql-manager-master/tests/t/t_advisory_select.sh
inflating: mha4mysql-manager-master/tests/t/t_apply_many_logs.sh
inflating: mha4mysql-manager-master/tests/t/t_apply_many_logs2.sh
inflating: mha4mysql-manager-master/tests/t/t_apply_many_logs3.sh
inflating: mha4mysql-manager-master/tests/t/t_binary.sh
inflating: mha4mysql-manager-master/tests/t/t_conf.sh
inflating: mha4mysql-manager-master/tests/t/t_data_io_error.sh
inflating: mha4mysql-manager-master/tests/t/t_dual_master_error.sh
inflating: mha4mysql-manager-master/tests/t/t_filter_incorrect.sh
inflating: mha4mysql-manager-master/tests/t/t_ignore_nostart.sh
inflating: mha4mysql-manager-master/tests/t/t_ignore_recovery1.sh
inflating: mha4mysql-manager-master/tests/t/t_ignore_recovery2.sh
inflating: mha4mysql-manager-master/tests/t/t_ignore_recovery3.sh
inflating: mha4mysql-manager-master/tests/t/t_ignore_recovery4.sh
inflating: mha4mysql-manager-master/tests/t/t_ignore_start.sh
inflating: mha4mysql-manager-master/tests/t/t_keep_relay_log_purge.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data_bulk.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data_bulk_slow.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data_slow.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data_slow2.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data_slow3.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data_sql_fail.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data_sql_stop.sh
inflating: mha4mysql-manager-master/tests/t/t_large_data_tran.sh
inflating: mha4mysql-manager-master/tests/t/t_latest_recovery1.sh
inflating: mha4mysql-manager-master/tests/t/t_latest_recovery2.sh
inflating: mha4mysql-manager-master/tests/t/t_latest_recovery3.sh
inflating: mha4mysql-manager-master/tests/t/t_manual.sh
inflating: mha4mysql-manager-master/tests/t/t_mm.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_3tier.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_3tier_subm_dead.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_normal.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_normal_skip_reset.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_noslaves.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_ro_fail.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_subm_dead.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_subm_dead_many.sh
inflating: mha4mysql-manager-master/tests/t/t_mm_subm_dead_noslave.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_1.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_1_nocm.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_1_nopass.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_1_pass.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_1_ssh.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_2.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_2_nobinlog.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_2_pass.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_2_ssh.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_binlog.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_fail.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_flush.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_flush2.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_flush3.sh
inflating: mha4mysql-manager-master/tests/t/t_needsync_flush_slave.sh
inflating: mha4mysql-manager-master/tests/t/t_new_master_heavy.sh
inflating: mha4mysql-manager-master/tests/t/t_new_master_heavy_wait.sh
inflating: mha4mysql-manager-master/tests/t/t_no_relay_log.sh
inflating: mha4mysql-manager-master/tests/t/t_no_relay_log_gtid.sh
inflating: mha4mysql-manager-master/tests/t/t_normal_crash.sh
inflating: mha4mysql-manager-master/tests/t/t_normal_crash_nocm.sh
inflating: mha4mysql-manager-master/tests/t/t_online_3tier.sh
inflating: mha4mysql-manager-master/tests/t/t_online_3tier_slave.sh
inflating: mha4mysql-manager-master/tests/t/t_online_3tier_slave_keep.sh
inflating: mha4mysql-manager-master/tests/t/t_online_busy.sh
inflating: mha4mysql-manager-master/tests/t/t_online_filter.sh
inflating: mha4mysql-manager-master/tests/t/t_online_mm.sh
inflating: mha4mysql-manager-master/tests/t/t_online_mm_3tier.sh
inflating: mha4mysql-manager-master/tests/t/t_online_mm_3tier_slave.sh
inflating: mha4mysql-manager-master/tests/t/t_online_mm_skip_reset.sh
inflating: mha4mysql-manager-master/tests/t/t_online_normal.sh
inflating: mha4mysql-manager-master/tests/t/t_online_slave.sh
inflating: mha4mysql-manager-master/tests/t/t_online_slave_fail.sh
inflating: mha4mysql-manager-master/tests/t/t_online_slave_pass.sh
inflating: mha4mysql-manager-master/tests/t/t_online_slave_sql_stop.sh
inflating: mha4mysql-manager-master/tests/t/t_recover_master_fail.sh
inflating: mha4mysql-manager-master/tests/t/t_recover_slave_fail.sh
inflating: mha4mysql-manager-master/tests/t/t_recover_slave_fail2.sh
inflating: mha4mysql-manager-master/tests/t/t_recover_slave_ok.sh
inflating: mha4mysql-manager-master/tests/t/t_save_binlog_gtid.sh
inflating: mha4mysql-manager-master/tests/t/t_save_master_log.sh
inflating: mha4mysql-manager-master/tests/t/t_save_master_log_pass.sh
inflating: mha4mysql-manager-master/tests/t/t_save_master_log_ssh.sh
inflating: mha4mysql-manager-master/tests/t/t_slave_incorrect.sh
inflating: mha4mysql-manager-master/tests/t/t_slave_sql_start.sh
inflating: mha4mysql-manager-master/tests/t/t_slave_sql_start2.sh
inflating: mha4mysql-manager-master/tests/t/t_slave_sql_start3.sh
inflating: mha4mysql-manager-master/tests/t/t_slave_stop.sh
inflating: mha4mysql-manager-master/tests/t/tran_insert.pl
inflating: mha4mysql-manager-master/tests/t/waitpid
Install the MHA-manager node
[root@node2 mha-master]# ls
mha4mysql-manager-master mha4mysql-manager-master.zip
[root@node2 mha-master]# cd mha4mysql-manager-master
[root@node2 mha4mysql-manager-master]# ls
AUTHORS bin COPYING debian lib Makefile.PL MANIFEST MANIFEST.SKIP README rpm samples t tests
[root@node2 mha4mysql-manager-master]# perl Makefile.PL
include /home/mha-master/mha4mysql-manager-master/inc/Module/Install.pm
include inc/Module/Install/Metadata.pm
include inc/Module/Install/Base.pm
include inc/Module/Install/Makefile.pm
include inc/Module/Install/Scripts.pm
include inc/Module/Install/AutoInstall.pm
include inc/Module/Install/Include.pm
include inc/Module/AutoInstall.pm
*** Module::AutoInstall version 1.06
*** Checking for Perl dependencies...
[Core Features]
- DBI ...loaded. (1.627)
- DBD::mysql ...loaded. (4.023)
- Time::HiRes ...loaded. (1.9725)
- Config::Tiny ...missing.
- Log::Dispatch ...missing.
- Parallel::ForkManager ...missing.
- MHA::NodeConst ...loaded. (0.58)
==> Auto-install the 3 mandatory module(s) from CPAN? [y] y
*** Dependencies will be installed the next time you type 'make'.
*** Module::AutoInstall configuration finished.
include inc/Module/Install/WriteAll.pm
include inc/Module/Install/Win32.pm
include inc/Module/Install/Can.pm
include inc/Module/Install/Fetch.pm
Checking if your kit is complete...
Warning: the following files are missing in your kit:
META.yml
Please inform the author.
Warning: prerequisite Config::Tiny 0 not found.
Warning: prerequisite Log::Dispatch 0 not found.
Warning: prerequisite Parallel::ForkManager 0 not found.
Writing Makefile for mha4mysql::manager
Writing MYMETA.yml and MYMETA.json
Writing META.yml
[root@node2 mha4mysql-manager-master]# yum install *Tiny*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.163.com
* updates: mirrors.163.com
Package perl-HTTP-Tiny-0.033-3.el7.noarch already installed and latest version
Package perl-YAML-Tiny-1.51-6.el7.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package perl-CSS-Tiny.noarch 0:1.19-5.el7 will be installed
--> Processing Dependency: perl(Clone) for package: perl-CSS-Tiny-1.19-5.el7.noarch
---> Package perl-Capture-Tiny.noarch 0:0.24-1.el7 will be installed
---> Package perl-Config-Tiny.noarch 0:2.14-7.el7 will be installed
---> Package perl-Try-Tiny.noarch 0:0.12-2.el7 will be installed
--> Running transaction check
---> Package perl-Clone.x86_64 0:0.34-5.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================================================
Installing:
perl-CSS-Tiny noarch 1.19-5.el7 base 23 k
perl-Capture-Tiny noarch 0.24-1.el7 base 31 k
perl-Config-Tiny noarch 2.14-7.el7 base 25 k
perl-Try-Tiny noarch 0.12-2.el7 base 23 k
Installing for dependencies:
perl-Clone x86_64 0.34-5.el7 base 18 k
Transaction Summary
=================================================================================================================================================================================================================
Install 4 Packages (+1 Dependent package)
Total download size: 120 k
Installed size: 196 k
Is this ok [y/d/N]:
Install the manager tool:
#tar -zxvf mha4mysql-manager-*.*.tar.gz
#perl Makefile.PL
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
[Core Features]
- DBI ...loaded. (1.616)--------------if shown as missing, install the DBI package
- DBD::mysql ...loaded. (4.020)--------------if shown as missing, you must install the DBD::mysql package
- Time::HiRes ...loaded. (1.972101)-----------if missing, run perl -MCPAN -e "install Time::HiRes"
- Config::Tiny ...loaded. (2.20)---------------if missing, run perl -MCPAN -e "install Config::Tiny"
- Log::Dispatch ...loaded. (2.41)---------------if missing, run perl -MCPAN -e "install Log::Dispatch"
- Parallel::ForkManager ...loaded. (1.06)---------------if missing, run perl -MCPAN -e "install Parallel::ForkManager"
- MHA::NodeConst ...loaded. (0.54) install the node side (the mha4mysql-node package above) first, otherwise this check cannot pass
#make
#make install
[root@node2 mha4mysql-manager-master]# perl Makefile.PL
include /home/mha-master/mha4mysql-manager-master/inc/Module/Install.pm
include inc/Module/Install/Metadata.pm
include inc/Module/Install/Base.pm
include inc/Module/Install/Makefile.pm
include inc/Module/Install/Scripts.pm
include inc/Module/Install/AutoInstall.pm
include inc/Module/Install/Include.pm
include inc/Module/AutoInstall.pm
*** Module::AutoInstall version 1.06
*** Checking for Perl dependencies...
[Core Features]
- DBI ...loaded. (1.627)
- DBD::mysql ...loaded. (4.023)
- Time::HiRes ...loaded. (1.9725)
- Config::Tiny ...loaded. (2.14)
- Log::Dispatch ...loaded. (2.67)
- Parallel::ForkManager ...loaded. (1.20)
- MHA::NodeConst ...loaded. (0.58)
*** Module::AutoInstall configuration finished.
include inc/Module/Install/WriteAll.pm
include inc/Module/Install/Win32.pm
include inc/Module/Install/Can.pm
include inc/Module/Install/Fetch.pm
Writing Makefile for mha4mysql::manager
Writing MYMETA.yml and MYMETA.json
Writing META.yml
[root@node2 mha4mysql-manager-master]#
[root@node2 mha4mysql-manager-master]# make
cp lib/MHA/ManagerUtil.pm blib/lib/MHA/ManagerUtil.pm
cp lib/MHA/Config.pm blib/lib/MHA/Config.pm
cp lib/MHA/HealthCheck.pm blib/lib/MHA/HealthCheck.pm
cp lib/MHA/ManagerConst.pm blib/lib/MHA/ManagerConst.pm
cp lib/MHA/ServerManager.pm blib/lib/MHA/ServerManager.pm
cp lib/MHA/FileStatus.pm blib/lib/MHA/FileStatus.pm
cp lib/MHA/ManagerAdmin.pm blib/lib/MHA/ManagerAdmin.pm
cp lib/MHA/ManagerAdminWrapper.pm blib/lib/MHA/ManagerAdminWrapper.pm
cp lib/MHA/MasterFailover.pm blib/lib/MHA/MasterFailover.pm
cp lib/MHA/MasterMonitor.pm blib/lib/MHA/MasterMonitor.pm
cp lib/MHA/MasterRotate.pm blib/lib/MHA/MasterRotate.pm
cp lib/MHA/SSHCheck.pm blib/lib/MHA/SSHCheck.pm
cp lib/MHA/Server.pm blib/lib/MHA/Server.pm
cp lib/MHA/DBHelper.pm blib/lib/MHA/DBHelper.pm
cp bin/masterha_stop blib/script/masterha_stop
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_stop
cp bin/masterha_conf_host blib/script/masterha_conf_host
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_conf_host
cp bin/masterha_check_repl blib/script/masterha_check_repl
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_check_repl
cp bin/masterha_check_status blib/script/masterha_check_status
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_check_status
cp bin/masterha_master_monitor blib/script/masterha_master_monitor
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_master_monitor
cp bin/masterha_check_ssh blib/script/masterha_check_ssh
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_check_ssh
cp bin/masterha_master_switch blib/script/masterha_master_switch
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_master_switch
cp bin/masterha_secondary_check blib/script/masterha_secondary_check
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_secondary_check
cp bin/masterha_manager blib/script/masterha_manager
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/masterha_manager
Manifying blib/man1/masterha_stop.1
Manifying blib/man1/masterha_conf_host.1
Manifying blib/man1/masterha_check_repl.1
Manifying blib/man1/masterha_check_status.1
Manifying blib/man1/masterha_master_monitor.1
Manifying blib/man1/masterha_check_ssh.1
Manifying blib/man1/masterha_master_switch.1
Manifying blib/man1/masterha_secondary_check.1
Manifying blib/man1/masterha_manager.1
[root@node2 mha4mysql-manager-master]# make install
Installing /usr/local/share/perl5/MHA/ManagerUtil.pm
Installing /usr/local/share/perl5/MHA/Config.pm
Installing /usr/local/share/perl5/MHA/HealthCheck.pm
Installing /usr/local/share/perl5/MHA/ManagerConst.pm
Installing /usr/local/share/perl5/MHA/ServerManager.pm
Installing /usr/local/share/perl5/MHA/FileStatus.pm
Installing /usr/local/share/perl5/MHA/ManagerAdmin.pm
Installing /usr/local/share/perl5/MHA/ManagerAdminWrapper.pm
Installing /usr/local/share/perl5/MHA/MasterFailover.pm
Installing /usr/local/share/perl5/MHA/MasterMonitor.pm
Installing /usr/local/share/perl5/MHA/MasterRotate.pm
Installing /usr/local/share/perl5/MHA/SSHCheck.pm
Installing /usr/local/share/perl5/MHA/Server.pm
Installing /usr/local/share/perl5/MHA/DBHelper.pm
Installing /usr/local/share/man/man1/masterha_stop.1
Installing /usr/local/share/man/man1/masterha_conf_host.1
Installing /usr/local/share/man/man1/masterha_check_repl.1
Installing /usr/local/share/man/man1/masterha_check_status.1
Installing /usr/local/share/man/man1/masterha_master_monitor.1
Installing /usr/local/share/man/man1/masterha_check_ssh.1
Installing /usr/local/share/man/man1/masterha_master_switch.1
Installing /usr/local/share/man/man1/masterha_secondary_check.1
Installing /usr/local/share/man/man1/masterha_manager.1
Installing /usr/local/bin/masterha_stop
Installing /usr/local/bin/masterha_conf_host
Installing /usr/local/bin/masterha_check_repl
Installing /usr/local/bin/masterha_check_status
Installing /usr/local/bin/masterha_master_monitor
Installing /usr/local/bin/masterha_check_ssh
Installing /usr/local/bin/masterha_master_switch
Installing /usr/local/bin/masterha_secondary_check
Installing /usr/local/bin/masterha_manager
Appending installation info to /usr/lib64/perl5/perllocal.pod
[root@node2 mha4mysql-manager-master]#
[root@node2 ~]# cd /home
[root@node2 home]# ls
hadoop mha-master mha-node mysqldata mysqldata2 mysqldir mysqlinstall
[root@node2 home]# cd mha-master/
[root@node2 mha-master]# ls
mha4mysql-manager-master Parallel-ForkManager-1.20 perl-parallel-forkmanager-1.19-3-any.pkg.tar usr
mha4mysql-manager-master.zip Parallel-ForkManager-1.20.tar.gz perl-Parallel-ForkManager-1.20-alt1.noarch.rpm
[root@node2 mha-master]# cd mha4mysql-manager-master
[root@node2 mha4mysql-manager-master]# ls
AUTHORS bin blib COPYING debian inc lib Makefile Makefile.PL MANIFEST MANIFEST.SKIP META.yml MYMETA.json MYMETA.yml pm_to_blib README rpm samples t tests
[root@node2 mha4mysql-manager-master]# cd samples/
[root@node2 samples]# ls
conf scripts
[root@node2 etc]# mkdir mha
[root@node2 etc]# cd mha/
[root@node2 mha]# ls
[root@node2 mha]# pwd
/etc/mha
MHA configuration file
[root@node2 mha]# vi mha.conf
[server default]
user=muser
password=123456
manager_workdir=/usr/local/mha
manager_log=/usr/local/mha/manager.log
remote_workdir=/usr/local/mha
ssh_user=root
repl_user=repl
repl_password=repl
ping_interval=3
master_ip_failover_script= /usr/local/scripts/master_ip_failover
master_ip_online_change_script= /usr/local/scripts/master_ip_online_change
[server1]
hostname=node1
ssh_port=22
master_binlog_dir=/home/mysqldir
candidate_master=1
port=3380
[server2]
hostname=node2
ssh_port=22
master_binlog_dir=/home/mysqldir
candidate_master=1
port=3380
[server3]
hostname=node3
ssh_port=22
master_binlog_dir=/home/mysqldir
candidate_master=1
port=3380
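Before handing this file to masterha_check_repl, a lightweight structural check can catch typos early. This is only a sketch (the helper name check_mha_conf and the /tmp demo paths are invented here): it verifies that every [serverN] section defines both hostname and port.

```shell
# minimal structural check for an MHA ini file:
# each [serverN] section must define hostname and port
check_mha_conf() {
  awk '
    function flush() { if (sec != "" && !(h && p)) { print sec " is missing hostname or port"; bad = 1 } }
    /^\[server[0-9]+\]/ { flush(); sec = $0; h = 0; p = 0 }
    /^hostname=/ { h = 1 }
    /^port=/     { p = 1 }
    END { flush(); exit bad }
  ' "$1"
}

# demo on a throwaway file shaped like the config above
cat > /tmp/mha.conf.demo <<'EOF'
[server default]
user=muser
[server1]
hostname=node1
port=3380
EOF
check_mha_conf /tmp/mha.conf.demo && echo "config looks complete"
```

masterha_check_repl performs a far deeper validation (connectivity, replication state, binlog access); this only rules out an obviously incomplete file before those slower checks run.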
[root@node2 scripts]# chmod +x master_ip_*
[root@node2 scripts]# ll
total 16
-rwxr-xr-x. 1 root root 3648 Aug 16 09:42 master_ip_failover
-rwxr-xr-x. 1 root root 9870 Aug 16 09:42 master_ip_online_change
Switchover scripts
[root@node2 scripts]# cat master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.88.222/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
return 0 unless ($ssh_user);
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
[root@node2 scripts]#
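Everything the script above actually executes boils down to two SSH one-liners built from those variables. A dry-run sketch that only prints the commands (node1/node2 here are stand-ins for the dead master and the promoted slave; nothing is executed):

```shell
# print the two commands master_ip_failover would run, without executing them
vip='192.168.88.222/24'
key='1'
ssh_user='root'
orig_master_host='node1'   # assumed: the failed master
new_master_host='node2'    # assumed: the slave being promoted

echo "stop : ssh $ssh_user@$orig_master_host \"/sbin/ifconfig eth0:$key down\""
echo "start: ssh $ssh_user@$new_master_host \"/sbin/ifconfig eth0:$key $vip\""
```

So on failover the VIP 192.168.88.222 is torn down on the alias interface eth0:1 of the old master and brought up on the new one; applications keep connecting to the VIP and never need to know which node is currently the master.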
[root@node2 scripts]# cat master_ip_online_change
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
use MHA::DBHelper;
use MHA::NodeUtil;
use Time::HiRes qw( sleep gettimeofday tv_interval );
use Data::Dumper;
my $_tstart;
my $_running_interval = 0.1;
my $vip = "192.168.88.222";
my $if = "eth0";
my (
$command, $orig_master_is_new_slave, $orig_master_host,
$orig_master_ip, $orig_master_port, $orig_master_user,
$orig_master_password, $orig_master_ssh_user, $new_master_host,
$new_master_ip, $new_master_port, $new_master_user,
$new_master_password, $new_master_ssh_user,
);
GetOptions(
'command=s' => \$command,
'orig_master_is_new_slave' => \$orig_master_is_new_slave,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'orig_master_user=s' => \$orig_master_user,
'orig_master_password=s' => \$orig_master_password,
'orig_master_ssh_user=s' => \$orig_master_ssh_user,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
'new_master_user=s' => \$new_master_user,
'new_master_password=s' => \$new_master_password,
'new_master_ssh_user=s' => \$new_master_ssh_user,
);
exit &main();
sub drop_vip {
my $output = `ssh -o ConnectTimeout=15 -o ConnectionAttempts=3 $orig_master_host /sbin/ip addr del $vip/32 dev $if`;
}
sub add_vip {
my $output = `ssh -o ConnectTimeout=15 -o ConnectionAttempts=3 $new_master_host /sbin/ip addr add $vip/32 dev $if`;
}
sub current_time_us {
my ( $sec, $microsec ) = gettimeofday();
my $curdate = localtime($sec);
return $curdate . " " . sprintf( "%06d", $microsec );
}
sub sleep_until {
my $elapsed = tv_interval($_tstart);
if ( $_running_interval > $elapsed ) {
sleep( $_running_interval - $elapsed );
}
}
sub get_threads_util {
my $dbh = shift;
my $my_connection_id = shift;
my $running_time_threshold = shift;
my $type = shift;
$running_time_threshold = 0 unless ($running_time_threshold);
$type = 0 unless ($type);
my @threads;
my $sth = $dbh->prepare("SHOW PROCESSLIST");
$sth->execute();
while ( my $ref = $sth->fetchrow_hashref() ) {
my $id = $ref->{Id};
my $user = $ref->{User};
my $host = $ref->{Host};
my $command = $ref->{Command};
my $state = $ref->{State};
my $query_time = $ref->{Time};
my $info = $ref->{Info};
$info =~ s/^\s*(.*?)\s*$/$1/ if defined($info);
next if ( $my_connection_id == $id );
next if ( defined($query_time) && $query_time < $running_time_threshold );
next if ( defined($command) && $command eq "Binlog Dump" );
next if ( defined($user) && $user eq "system user" );
next
if ( defined($command)
&& $command eq "Sleep"
&& defined($query_time)
&& $query_time >= 1 );
if ( $type >= 1 ) {
next if ( defined($command) && $command eq "Sleep" );
next if ( defined($command) && $command eq "Connect" );
}
if ( $type >= 2 ) {
next if ( defined($info) && $info =~ m/^select/i );
next if ( defined($info) && $info =~ m/^show/i );
}
push @threads, $ref;
}
return @threads;
}
sub main {
  if ( $command eq "stop" ) {
    ## Gracefully killing connections on the current master
    # 1. Set read_only= 1 on the new master
    # 2. DROP USER so that no app user can establish new connections
    # 3. Set read_only= 1 on the current master
    # 4. Kill current queries
    # * Any database access failure will result in script die.
    my $exit_code = 1;
    eval {
      ## Setting read_only=1 on the new master (to avoid accident)
      my $new_master_handler = new MHA::DBHelper();
      # args: hostname, port, user, password, raise_error(die_on_error)_or_not
      $new_master_handler->connect( $new_master_ip, $new_master_port,
        $new_master_user, $new_master_password, 1 );
      print current_time_us() . " Set read_only on the new master.. ";
      $new_master_handler->enable_read_only();
      if ( $new_master_handler->is_read_only() ) {
        print "ok.\n";
      }
      else {
        die "Failed!\n";
      }
      $new_master_handler->disconnect();
      # Connecting to the orig master, die if any database error happens
      my $orig_master_handler = new MHA::DBHelper();
      $orig_master_handler->connect( $orig_master_ip, $orig_master_port,
        $orig_master_user, $orig_master_password, 1 );
      ## Drop application user so that nobody can connect. Disabling per-session binlog beforehand
      $orig_master_handler->disable_log_bin_local();
      # print current_time_us() . " Dropping app user on the orig master..\n";
      print current_time_us() . " drop vip $vip..\n";
      #drop_app_user($orig_master_handler);
      &drop_vip();
      ## Waiting for N * 100 milliseconds so that current connections can exit
      my $time_until_read_only = 15;
      $_tstart = [gettimeofday];
      my @threads = get_threads_util( $orig_master_handler->{dbh},
        $orig_master_handler->{connection_id} );
      while ( $time_until_read_only > 0 && $#threads >= 0 ) {
        if ( $time_until_read_only % 5 == 0 ) {
          printf
"%s Waiting all running %d threads are disconnected.. (max %d milliseconds)\n",
            current_time_us(), $#threads + 1, $time_until_read_only * 100;
          if ( $#threads < 5 ) {
            print Data::Dumper->new( [$_] )->Indent(0)->Terse(1)->Dump . "\n"
              foreach (@threads);
          }
        }
        sleep_until();
        $_tstart = [gettimeofday];
        $time_until_read_only--;
        @threads = get_threads_util( $orig_master_handler->{dbh},
          $orig_master_handler->{connection_id} );
      }
      ## Setting read_only=1 on the current master so that nobody(except SUPER) can write
      print current_time_us() . " Set read_only=1 on the orig master.. ";
      $orig_master_handler->enable_read_only();
      if ( $orig_master_handler->is_read_only() ) {
        print "ok.\n";
      }
      else {
        die "Failed!\n";
      }
      ## Waiting for M * 100 milliseconds so that current update queries can complete
      my $time_until_kill_threads = 5;
      @threads = get_threads_util( $orig_master_handler->{dbh},
        $orig_master_handler->{connection_id} );
      while ( $time_until_kill_threads > 0 && $#threads >= 0 ) {
        if ( $time_until_kill_threads % 5 == 0 ) {
          printf
"%s Waiting all running %d queries are disconnected.. (max %d milliseconds)\n",
            current_time_us(), $#threads + 1, $time_until_kill_threads * 100;
          if ( $#threads < 5 ) {
            print Data::Dumper->new( [$_] )->Indent(0)->Terse(1)->Dump . "\n"
              foreach (@threads);
          }
        }
        sleep_until();
        $_tstart = [gettimeofday];
        $time_until_kill_threads--;
        @threads = get_threads_util( $orig_master_handler->{dbh},
          $orig_master_handler->{connection_id} );
      }
      ## Terminating all threads
      print current_time_us() . " Killing all application threads..\n";
      $orig_master_handler->kill_threads(@threads) if ( $#threads >= 0 );
      print current_time_us() . " done.\n";
      $orig_master_handler->enable_log_bin_local();
      $orig_master_handler->disconnect();
      ## After finishing the script, MHA executes FLUSH TABLES WITH READ LOCK
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "start" ) {
    ## Activating master ip on the new master
    # 1. Create app user with write privileges
    # 2. Moving backup script if needed
    # 3. Register new master's ip to the catalog database
    # We don't return error even though activating updatable accounts/ip failed so that we don't interrupt slaves' recovery.
    # If exit code is 0 or 10, MHA does not abort
    my $exit_code = 10;
    eval {
      my $new_master_handler = new MHA::DBHelper();
      # args: hostname, port, user, password, raise_error_or_not
      $new_master_handler->connect( $new_master_ip, $new_master_port,
        $new_master_user, $new_master_password, 1 );
      ## Set read_only=0 on the new master
      $new_master_handler->disable_log_bin_local();
      print current_time_us() . " Set read_only=0 on the new master.\n";
      $new_master_handler->disable_read_only();
      ## Creating an app user on the new master
      #print current_time_us() . " Creating app user on the new master..\n";
      print current_time_us() . " Add vip $vip on $if..\n";
      # create_app_user($new_master_handler);
      &add_vip();
      $new_master_handler->enable_log_bin_local();
      $new_master_handler->disconnect();
      ## Update master ip on the catalog database, etc
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "status" ) {
    # do nothing
    exit 0;
  }
  else {
    &usage();
    exit 1;
  }
}
sub usage {
  print
"Usage: master_ip_online_change --command=start|stop|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
  die;
}
[root@node2 scripts]#
[root@node2 scripts]# /usr/local/bin/masterha_check_ssh -conf=/etc/mha/mha.conf
Thu Aug 16 09:45:36 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 09:45:36 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 09:45:36 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 09:45:36 2018 - [info] Starting SSH connection tests..
Thu Aug 16 09:45:39 2018 - [debug]
Thu Aug 16 09:45:37 2018 - [debug] Connecting via SSH from root@node2(192.168.88.18:22) to root@node1(192.168.88.20:22)..
Thu Aug 16 09:45:38 2018 - [debug] ok.
Thu Aug 16 09:45:38 2018 - [debug] Connecting via SSH from root@node2(192.168.88.18:22) to root@node3(192.168.88.19:22)..
Thu Aug 16 09:45:38 2018 - [debug] ok.
Thu Aug 16 09:45:39 2018 - [debug]
Thu Aug 16 09:45:36 2018 - [debug] Connecting via SSH from root@node1(192.168.88.20:22) to root@node2(192.168.88.18:22)..
Thu Aug 16 09:45:37 2018 - [debug] ok.
Thu Aug 16 09:45:37 2018 - [debug] Connecting via SSH from root@node1(192.168.88.20:22) to root@node3(192.168.88.19:22)..
Thu Aug 16 09:45:38 2018 - [debug] ok.
Thu Aug 16 09:45:39 2018 - [debug]
Thu Aug 16 09:45:37 2018 - [debug] Connecting via SSH from root@node3(192.168.88.19:22) to root@node1(192.168.88.20:22)..
Thu Aug 16 09:45:38 2018 - [debug] ok.
Thu Aug 16 09:45:38 2018 - [debug] Connecting via SSH from root@node3(192.168.88.19:22) to root@node2(192.168.88.18:22)..
Thu Aug 16 09:45:38 2018 - [debug] ok.
Thu Aug 16 09:45:39 2018 - [info] All SSH connection tests passed successfully.
[root@node2 scripts]#
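The mutual trust that this check verifies (step 3 of the outline) comes down to exchanging root SSH keys between all three nodes. A minimal sketch, assuming the node1/node2/node3 hostnames from this setup; the `ssh-copy-id` line is printed rather than executed here so the loop is safe to run anywhere:

```shell
# Run on each of node1/node2/node3:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate a key pair once per node (skipped if one already exists).
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q
# Push the public key to every node, including the local one.
for host in node1 node2 node3; do
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host"   # drop 'echo' to actually copy
done
```

After the keys are in place on all nodes, `masterha_check_ssh` should report exactly the pairwise "ok." lines shown above.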
[root@node2 scripts]# /usr/local/bin/masterha_check_repl -conf=/etc/mha/mha.conf
Thu Aug 16 10:11:20 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 10:11:20 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 10:11:20 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 10:11:20 2018 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 16 10:11:22 2018 - [info] GTID failover mode = 1
Thu Aug 16 10:11:22 2018 - [info] Dead Servers:
Thu Aug 16 10:11:22 2018 - [info] Alive Servers:
Thu Aug 16 10:11:22 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 10:11:22 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 10:11:22 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 10:11:22 2018 - [info] Alive Slaves:
Thu Aug 16 10:11:22 2018 - [info] node2(192.168.88.18:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 10:11:22 2018 - [info] GTID ON
Thu Aug 16 10:11:22 2018 - [info] Replicating from 192.168.88.20(192.168.88.20:3380)
Thu Aug 16 10:11:22 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 10:11:22 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 10:11:22 2018 - [info] GTID ON
Thu Aug 16 10:11:22 2018 - [info] Replicating from 192.168.88.20(192.168.88.20:3380)
Thu Aug 16 10:11:22 2018 - [info] Not candidate for the new Master (no_master is set)
Thu Aug 16 10:11:22 2018 - [info] Current Alive Master: node1(192.168.88.20:3380)
Thu Aug 16 10:11:22 2018 - [info] Checking slave configurations..
Thu Aug 16 10:11:22 2018 - [info] read_only=1 is not set on slave node2(192.168.88.18:3380).
Thu Aug 16 10:11:22 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 10:11:22 2018 - [info] Checking replication filtering settings..
Thu Aug 16 10:11:22 2018 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 16 10:11:22 2018 - [info] Replication filtering check ok.
Thu Aug 16 10:11:22 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Aug 16 10:11:22 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 16 10:11:22 2018 - [info] HealthCheck: SSH to node1 is reachable.
Thu Aug 16 10:11:22 2018 - [info]
node1(192.168.88.20:3380) (current master)
+--node2(192.168.88.18:3380)
+--node3(192.168.88.19:3380)
Thu Aug 16 10:11:22 2018 - [info] Checking replication health on node2..
Thu Aug 16 10:11:22 2018 - [info] ok.
Thu Aug 16 10:11:22 2018 - [info] Checking replication health on node3..
Thu Aug 16 10:11:22 2018 - [info] ok.
Thu Aug 16 10:11:22 2018 - [info] Checking master_ip_failover_script status:
Thu Aug 16 10:11:22 2018 - [info] /usr/local/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=node1 --orig_master_ip=192.168.88.20 --orig_master_port=3380
Bareword "FIXME_xxx" not allowed while "strict subs" in use at /usr/local/scripts/master_ip_failover line 93.
Execution of /usr/local/scripts/master_ip_failover aborted due to compilation errors.
Thu Aug 16 10:11:22 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln229] Failed to get master_ip_failover_script status with return code 255:0.
Thu Aug 16 10:11:22 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48.
Thu Aug 16 10:11:22 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Thu Aug 16 10:11:22 2018 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK!
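The compile error above ("Bareword FIXME_xxx not allowed") comes from the placeholder lines that ship in the sample master_ip_failover script; they must be deleted or commented out before the check can pass. A sketch of the fix, demonstrated here on a scratch file rather than on /usr/local/scripts/master_ip_failover itself:

```shell
# Stand-in for the real script: one valid line plus the template placeholder.
script=$(mktemp)
printf '%s\n' 'print "ok\n";' 'FIXME_xxx;' > "$script"
# Comment out every line containing the FIXME_xxx placeholder.
sed -i '/FIXME_xxx/ s/^/#/' "$script"
grep '^#' "$script"   # the placeholder line is now commented out
```

On the real script, follow up with `perl -c /usr/local/scripts/master_ip_failover` to confirm it compiles cleanly, which is exactly what the successful re-run of `masterha_check_repl` below reflects.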
[root@node2 scripts]# /usr/local/bin/masterha_check_repl -conf=/etc/mha/mha.conf
Thu Aug 16 10:24:58 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 10:24:58 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 10:24:58 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 10:24:58 2018 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 16 10:24:59 2018 - [info] GTID failover mode = 1
Thu Aug 16 10:24:59 2018 - [info] Dead Servers:
Thu Aug 16 10:24:59 2018 - [info] Alive Servers:
Thu Aug 16 10:24:59 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 10:24:59 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 10:24:59 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 10:24:59 2018 - [info] Alive Slaves:
Thu Aug 16 10:24:59 2018 - [info] node2(192.168.88.18:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 10:24:59 2018 - [info] GTID ON
Thu Aug 16 10:24:59 2018 - [info] Replicating from 192.168.88.20(192.168.88.20:3380)
Thu Aug 16 10:24:59 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 10:24:59 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 10:24:59 2018 - [info] GTID ON
Thu Aug 16 10:24:59 2018 - [info] Replicating from 192.168.88.20(192.168.88.20:3380)
Thu Aug 16 10:24:59 2018 - [info] Not candidate for the new Master (no_master is set)
Thu Aug 16 10:24:59 2018 - [info] Current Alive Master: node1(192.168.88.20:3380)
Thu Aug 16 10:24:59 2018 - [info] Checking slave configurations..
Thu Aug 16 10:24:59 2018 - [info] read_only=1 is not set on slave node2(192.168.88.18:3380).
Thu Aug 16 10:24:59 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 10:24:59 2018 - [info] Checking replication filtering settings..
Thu Aug 16 10:24:59 2018 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 16 10:24:59 2018 - [info] Replication filtering check ok.
Thu Aug 16 10:24:59 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Aug 16 10:24:59 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 16 10:24:59 2018 - [info] HealthCheck: SSH to node1 is reachable.
Thu Aug 16 10:24:59 2018 - [info]
node1(192.168.88.20:3380) (current master)
+--node2(192.168.88.18:3380)
+--node3(192.168.88.19:3380)
Thu Aug 16 10:24:59 2018 - [info] Checking replication health on node2..
Thu Aug 16 10:24:59 2018 - [info] ok.
Thu Aug 16 10:24:59 2018 - [info] Checking replication health on node3..
Thu Aug 16 10:24:59 2018 - [info] ok.
Thu Aug 16 10:24:59 2018 - [info] Checking master_ip_failover_script status:
Thu Aug 16 10:24:59 2018 - [info] /usr/local/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=node1 --orig_master_ip=192.168.88.20 --orig_master_port=3380
Thu Aug 16 10:25:00 2018 - [info] OK.
Thu Aug 16 10:25:00 2018 - [warning] shutdown_script is not defined.
Thu Aug 16 10:25:00 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
[root@node2 scripts]#
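Both check runs above note that read_only=1 is not set on the slaves. MHA treats this as informational only, but setting it on node2/node3 prevents accidental writes on the slaves; one common approach is a my.cnf entry (section name per standard MySQL configuration). This is optional and works with the scripts shown earlier, since master_ip_online_change/master_ip_failover call disable_read_only() on whichever node is promoted:

```ini
[mysqld]
# Applied on the slaves; the promoted master clears it at runtime via
# SET GLOBAL read_only=0 from the MHA scripts.
read_only = 1
```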
[root@node1 ~]# ip addr add 192.168.88.222 dev eth0
[root@node1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:e2:58:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.88.20/24 brd 192.168.88.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.88.222/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fee2:5850/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:55:8e:6e:fd brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
4: br-baa8d5ef8e9f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:f2:5d:f8:98 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 scope global br-baa8d5ef8e9f
valid_lft forever preferred_lft forever
inet6 fe80::42:f2ff:fe5d:f898/64 scope link
valid_lft forever preferred_lft forever
6: veth08dcede: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-baa8d5ef8e9f state UP
link/ether 66:6c:03:32:b7:57 brd ff:ff:ff:ff:ff:ff
inet6 fe80::646c:3ff:fe32:b757/64 scope link
valid_lft forever preferred_lft forever
10: veth6b14645: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-baa8d5ef8e9f state UP
link/ether 52:73:0f:40:fb:ad brd ff:ff:ff:ff:ff:ff
inet6 fe80::5073:fff:fe40:fbad/64 scope link
valid_lft forever preferred_lft forever
12: veth0f06c44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-baa8d5ef8e9f state UP
link/ether 6e:93:a9:b9:c0:f0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::6c93:a9ff:feb9:c0f0/64 scope link
valid_lft forever preferred_lft forever
14: veth83d007f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-baa8d5ef8e9f state UP
link/ether 02:1d:51:af:08:dd brd ff:ff:ff:ff:ff:ff
inet6 fe80::1d:51ff:feaf:8dd/64 scope link
valid_lft forever preferred_lft forever
20: vethc01bfd0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-baa8d5ef8e9f state UP
link/ether 96:9b:4b:f1:78:bc brd ff:ff:ff:ff:ff:ff
inet6 fe80::949b:4bff:fef1:78bc/64 scope link
valid_lft forever preferred_lft forever
33144: veth855a5c9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-baa8d5ef8e9f state UP
link/ether e2:5c:ec:0d:c2:ef brd ff:ff:ff:ff:ff:ff
inet6 fe80::e05c:ecff:fe0d:c2ef/64 scope link
valid_lft forever preferred_lft forever
33148: veth9870301: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-baa8d5ef8e9f state UP
link/ether e6:b3:a4:f2:02:1a brd ff:ff:ff:ff:ff:ff
inet6 fe80::e4b3:a4ff:fef2:21a/64 scope link
valid_lft forever preferred_lft forever
[root@node1 ~]#
[root@node2 scripts]# vi /tmp/mha_manager.log
Start the MHA manager
[root@node2 scripts]# nohup masterha_manager --conf /etc/mha/mha.conf > /tmp/mha_manager.log </dev/null 2>&1 &
[1] 20198
[root@node2 scripts]# vi /tmp/mha_manager.log
[root@node2 scripts]# masterha_check_status --conf=/etc/mha/mha.conf
mha (pid:20198) is running(0:PING_OK), master:node1
[root@node2 scripts]#
[root@node2 scripts]# masterha_check_repl --conf=/etc/mha/mha.conf
Thu Aug 16 11:35:47 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 11:35:47 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 11:35:47 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 11:35:47 2018 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 16 11:35:49 2018 - [info] GTID failover mode = 1
Thu Aug 16 11:35:49 2018 - [info] Dead Servers:
Thu Aug 16 11:35:49 2018 - [info] Alive Servers:
Thu Aug 16 11:35:49 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 11:35:49 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 11:35:49 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 11:35:49 2018 - [info] Alive Slaves:
Thu Aug 16 11:35:49 2018 - [info] node2(192.168.88.18:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 11:35:49 2018 - [info] GTID ON
Thu Aug 16 11:35:49 2018 - [info] Replicating from 192.168.88.20(192.168.88.20:3380)
Thu Aug 16 11:35:49 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 11:35:49 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 11:35:49 2018 - [info] GTID ON
Thu Aug 16 11:35:49 2018 - [info] Replicating from 192.168.88.20(192.168.88.20:3380)
Thu Aug 16 11:35:49 2018 - [info] Not candidate for the new Master (no_master is set)
Thu Aug 16 11:35:49 2018 - [info] Current Alive Master: node1(192.168.88.20:3380)
Thu Aug 16 11:35:49 2018 - [info] Checking slave configurations..
Thu Aug 16 11:35:49 2018 - [info] read_only=1 is not set on slave node2(192.168.88.18:3380).
Thu Aug 16 11:35:49 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 11:35:49 2018 - [info] Checking replication filtering settings..
Thu Aug 16 11:35:49 2018 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 16 11:35:49 2018 - [info] Replication filtering check ok.
Thu Aug 16 11:35:49 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Aug 16 11:35:49 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 16 11:35:49 2018 - [info] HealthCheck: SSH to node1 is reachable.
Thu Aug 16 11:35:49 2018 - [info]
node1(192.168.88.20:3380) (current master)
+--node2(192.168.88.18:3380)
+--node3(192.168.88.19:3380)
Thu Aug 16 11:35:49 2018 - [info] Checking replication health on node2..
Thu Aug 16 11:35:49 2018 - [info] ok.
Thu Aug 16 11:35:49 2018 - [info] Checking replication health on node3..
Thu Aug 16 11:35:49 2018 - [info] ok.
Thu Aug 16 11:35:49 2018 - [info] Checking master_ip_failover_script status:
Thu Aug 16 11:35:49 2018 - [info] /usr/local/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=node1 --orig_master_ip=192.168.88.20 --orig_master_port=3380
IN SCRIPT TEST====/sbin/ifconfig eth1:1 down==/sbin/ifconfig eth1:1 192.168.0.88/24===
Checking the Status of the script.. OK
Thu Aug 16 11:35:49 2018 - [info] OK.
Thu Aug 16 11:35:49 2018 - [warning] shutdown_script is not defined.
Thu Aug 16 11:35:49 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
[root@node2 scripts]#
Check master-slave replication
[root@node2 bin]# masterha_check_repl --conf=/etc/mha/mha.conf
Thu Aug 16 14:56:56 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 14:56:56 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 14:56:56 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 14:56:56 2018 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 16 14:56:57 2018 - [info] GTID failover mode = 0
Thu Aug 16 14:56:57 2018 - [info] Dead Servers:
Thu Aug 16 14:56:57 2018 - [info] Alive Servers:
Thu Aug 16 14:56:57 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 14:56:57 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 14:56:57 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 14:56:57 2018 - [info] Alive Slaves:
Thu Aug 16 14:56:57 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 14:56:57 2018 - [info] GTID ON
Thu Aug 16 14:56:57 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 14:56:57 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 14:56:57 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 14:56:57 2018 - [info] GTID ON
Thu Aug 16 14:56:57 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 14:56:57 2018 - [info] Not candidate for the new Master (no_master is set)
Thu Aug 16 14:56:57 2018 - [info] Current Alive Master: node2(192.168.88.18:3380)
Thu Aug 16 14:56:57 2018 - [info] Checking slave configurations..
Thu Aug 16 14:56:57 2018 - [info] read_only=1 is not set on slave node1(192.168.88.20:3380).
Thu Aug 16 14:56:57 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 14:56:57 2018 - [info] Checking replication filtering settings..
Thu Aug 16 14:56:57 2018 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 16 14:56:57 2018 - [info] Replication filtering check ok.
Thu Aug 16 14:56:57 2018 - [info] GTID (with auto-pos) is not supported
Thu Aug 16 14:56:57 2018 - [info] Starting SSH connection tests..
Thu Aug 16 14:57:00 2018 - [info] All SSH connection tests passed successfully.
Thu Aug 16 14:57:00 2018 - [info] Checking MHA Node version..
Thu Aug 16 14:57:00 2018 - [info] Version check ok.
Thu Aug 16 14:57:00 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 16 14:57:01 2018 - [info] HealthCheck: SSH to node2 is reachable.
Thu Aug 16 14:57:01 2018 - [info] Checking recovery script configurations on node2(192.168.88.18:3380)..
Thu Aug 16 14:57:01 2018 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/home/mysqldir --output_file=/usr/local/mha/save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000001
Thu Aug 16 14:57:01 2018 - [info] Connecting to [email protected](node2:22)..
Creating /usr/local/mha if not exists.. ok.
Checking output directory is accessible or not..
ok.
Binlog found at /home/mysqldir, up to mysql-bin.000001
Thu Aug 16 14:57:01 2018 - [info] Binlog setting check done.
Thu Aug 16 14:57:01 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Thu Aug 16 14:57:01 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='muser' --slave_host=node1 --slave_ip=192.168.88.20 --slave_port=3380 --workdir=/usr/local/mha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/home/mysqldir/data/relay-log.info --relay_dir=/home/mysqldir/data/ --slave_pass=xxx
Thu Aug 16 14:57:01 2018 - [info] Connecting to [email protected](node1:22)..
Checking slave recovery environment settings..
Opening /home/mysqldir/data/relay-log.info ... ok.
Relay log found at /home/mysqldir, up to relay-bin.000002
Temporary relay log file is /home/mysqldir/relay-bin.000002
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Thu Aug 16 14:57:02 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='muser' --slave_host=node3 --slave_ip=192.168.88.19 --slave_port=3380 --workdir=/usr/local/mha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/home/mysqldir/data/relay-log.info --relay_dir=/home/mysqldir/data/ --slave_pass=xxx
Thu Aug 16 14:57:02 2018 - [info] Connecting to [email protected](node3:22)..
Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 106.
mysqlbinlog version command failed with rc 1:0, please verify PATH, LD_LIBRARY_PATH, and client options
at /usr/local/bin/apply_diff_relay_logs line 532.
Thu Aug 16 14:57:02 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln208] Slaves settings check failed!
Thu Aug 16 14:57:02 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln416] Slave configuration failed.
Thu Aug 16 14:57:02 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48.
Thu Aug 16 14:57:02 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Thu Aug 16 14:57:02 2018 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK!
[root@node2 bin]#
[root@node2 bin]#
[root@node2 bin]# masterha_check_repl --conf=/etc/mha/mha.conf
Thu Aug 16 15:00:20 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 15:00:20 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 15:00:20 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 15:00:20 2018 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 16 15:00:21 2018 - [info] GTID failover mode = 0
Thu Aug 16 15:00:21 2018 - [info] Dead Servers:
Thu Aug 16 15:00:21 2018 - [info] Alive Servers:
Thu Aug 16 15:00:21 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 15:00:21 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 15:00:21 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 15:00:21 2018 - [info] Alive Slaves:
Thu Aug 16 15:00:21 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 15:00:21 2018 - [info] GTID ON
Thu Aug 16 15:00:21 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 15:00:21 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 15:00:21 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 15:00:21 2018 - [info] GTID ON
Thu Aug 16 15:00:21 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 15:00:21 2018 - [info] Not candidate for the new Master (no_master is set)
Thu Aug 16 15:00:21 2018 - [info] Current Alive Master: node2(192.168.88.18:3380)
Thu Aug 16 15:00:21 2018 - [info] Checking slave configurations..
Thu Aug 16 15:00:21 2018 - [info] read_only=1 is not set on slave node1(192.168.88.20:3380).
Thu Aug 16 15:00:21 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 15:00:21 2018 - [info] Checking replication filtering settings..
Thu Aug 16 15:00:21 2018 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 16 15:00:21 2018 - [info] Replication filtering check ok.
Thu Aug 16 15:00:21 2018 - [info] GTID (with auto-pos) is not supported
Thu Aug 16 15:00:21 2018 - [info] Starting SSH connection tests..
Thu Aug 16 15:00:25 2018 - [info] All SSH connection tests passed successfully.
Thu Aug 16 15:00:25 2018 - [info] Checking MHA Node version..
Thu Aug 16 15:00:25 2018 - [info] Version check ok.
Thu Aug 16 15:00:25 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 16 15:00:26 2018 - [info] HealthCheck: SSH to node2 is reachable.
Thu Aug 16 15:00:26 2018 - [info] Checking recovery script configurations on node2(192.168.88.18:3380)..
Thu Aug 16 15:00:26 2018 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/home/mysqldir --output_file=/usr/local/mha/save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000001
Thu Aug 16 15:00:26 2018 - [info] Connecting to [email protected](node2:22)..
Creating /usr/local/mha if not exists.. ok.
Checking output directory is accessible or not..
ok.
Binlog found at /home/mysqldir, up to mysql-bin.000001
Thu Aug 16 15:00:26 2018 - [info] Binlog setting check done.
Thu Aug 16 15:00:26 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Thu Aug 16 15:00:26 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='muser' --slave_host=node1 --slave_ip=192.168.88.20 --slave_port=3380 --workdir=/usr/local/mha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/home/mysqldir/data/relay-log.info --relay_dir=/home/mysqldir/data/ --slave_pass=xxx
Thu Aug 16 15:00:26 2018 - [info] Connecting to [email protected](node1:22)..
Checking slave recovery environment settings..
Opening /home/mysqldir/data/relay-log.info ... ok.
Relay log found at /home/mysqldir, up to relay-bin.000002
Temporary relay log file is /home/mysqldir/relay-bin.000002
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Thu Aug 16 15:00:27 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='muser' --slave_host=node3 --slave_ip=192.168.88.19 --slave_port=3380 --workdir=/usr/local/mha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/home/mysqldir/data/relay-log.info --relay_dir=/home/mysqldir/data/ --slave_pass=xxx
Thu Aug 16 15:00:27 2018 - [info] Connecting to [email protected](node3:22)..
Creating directory /usr/local/mha.. done.
Checking slave recovery environment settings..
Opening /home/mysqldir/data/relay-log.info ... ok.
Relay log found at /home/mysqldir, up to relay-bin.000003
Temporary relay log file is /home/mysqldir/relay-bin.000003
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
sh: mysql: command not found
mysql command failed with rc 127:0!
at /usr/local/bin/apply_diff_relay_logs line 404.
main::check() called at /usr/local/bin/apply_diff_relay_logs line 536
eval {...} called at /usr/local/bin/apply_diff_relay_logs line 514
main::main() called at /usr/local/bin/apply_diff_relay_logs line 121
Thu Aug 16 15:00:27 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln208] Slaves settings check failed!
Thu Aug 16 15:00:27 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln416] Slave configuration failed.
Thu Aug 16 15:00:27 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48.
Thu Aug 16 15:00:27 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Thu Aug 16 15:00:27 2018 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK! --报错解决如下
[root@node2 bin]# type mysql
mysql 是 /usr/bin/mysql
[root@node3 data]# ls /usr/local/mysql/bin/mysqlbinlog
/usr/local/mysql/bin/mysqlbinlog
[root@node3 data]# type mysqlbilog
-bash: type: mysqlbilog: δ?μ?
[root@node3 data]# type mysqlbinlog
mysqlbinlog is /usr/local/mysql/bin/mysqlbinlog
[root@node3 data]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/local/bin/mysqlbinlog
[root@node3 data]# type mysql
mysql is hashed (/usr/local/mysql/bin/mysql)
[root@node3 data]#
[root@node3 data]# ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
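The root cause of the `rc 127` failure above is that MHA's `apply_diff_relay_logs` shells out to `mysql` and `mysqlbinlog` over SSH, so both clients must be resolvable on every node for the SSH user. The two `ln -s` commands above can be wrapped into a small helper; this is a minimal sketch, assuming the binaries live under `/usr/local/mysql/bin` as in this install:

```shell
#!/bin/sh
# link_mysql_clients [MYSQL_HOME] [LINK_DIR]
# Symlink the mysql/mysqlbinlog clients into a directory on PATH.
# Defaults match this install; adjust if your layout differs.
link_mysql_clients() {
    mysql_home=${1:-/usr/local/mysql}
    link_dir=${2:-/usr/local/bin}
    for tool in mysql mysqlbinlog; do
        # Link only when the binary exists and nothing is already in the way.
        if [ -x "$mysql_home/bin/$tool" ] && [ ! -e "$link_dir/$tool" ]; then
            ln -s "$mysql_home/bin/$tool" "$link_dir/$tool"
        fi
    done
}
```

Run it as root on each node (node1/node2/node3), then verify with `type mysql` and `type mysqlbinlog` before re-running `masterha_check_repl`.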
[root@node2 bin]#
[root@node2 bin]# masterha_check_repl --conf=/etc/mha/mha.conf
Thu Aug 16 15:01:37 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 15:01:37 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 15:01:37 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 15:01:37 2018 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 16 15:01:38 2018 - [info] GTID failover mode = 0
Thu Aug 16 15:01:38 2018 - [info] Dead Servers:
Thu Aug 16 15:01:38 2018 - [info] Alive Servers:
Thu Aug 16 15:01:38 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 15:01:38 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 15:01:38 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 15:01:38 2018 - [info] Alive Slaves:
Thu Aug 16 15:01:38 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 15:01:38 2018 - [info] GTID ON
Thu Aug 16 15:01:38 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 15:01:38 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 15:01:38 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 15:01:38 2018 - [info] GTID ON
Thu Aug 16 15:01:38 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 15:01:38 2018 - [info] Not candidate for the new Master (no_master is set)
Thu Aug 16 15:01:38 2018 - [info] Current Alive Master: node2(192.168.88.18:3380)
Thu Aug 16 15:01:38 2018 - [info] Checking slave configurations..
Thu Aug 16 15:01:38 2018 - [info] read_only=1 is not set on slave node1(192.168.88.20:3380).
Thu Aug 16 15:01:38 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 15:01:38 2018 - [info] Checking replication filtering settings..
Thu Aug 16 15:01:38 2018 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 16 15:01:38 2018 - [info] Replication filtering check ok.
Thu Aug 16 15:01:38 2018 - [info] GTID (with auto-pos) is not supported
Thu Aug 16 15:01:38 2018 - [info] Starting SSH connection tests..
Thu Aug 16 15:01:41 2018 - [info] All SSH connection tests passed successfully.
Thu Aug 16 15:01:41 2018 - [info] Checking MHA Node version..
Thu Aug 16 15:01:41 2018 - [info] Version check ok.
Thu Aug 16 15:01:41 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 16 15:01:42 2018 - [info] HealthCheck: SSH to node2 is reachable.
Thu Aug 16 15:01:42 2018 - [info] Checking recovery script configurations on node2(192.168.88.18:3380)..
Thu Aug 16 15:01:42 2018 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/home/mysqldir --output_file=/usr/local/mha/save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000001
Thu Aug 16 15:01:42 2018 - [info] Connecting to root@192.168.88.18(node2:22)..
Creating /usr/local/mha if not exists.. ok.
Checking output directory is accessible or not..
ok.
Binlog found at /home/mysqldir, up to mysql-bin.000001
Thu Aug 16 15:01:42 2018 - [info] Binlog setting check done.
Thu Aug 16 15:01:42 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Thu Aug 16 15:01:42 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='muser' --slave_host=node1 --slave_ip=192.168.88.20 --slave_port=3380 --workdir=/usr/local/mha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/home/mysqldir/data/relay-log.info --relay_dir=/home/mysqldir/data/ --slave_pass=xxx
Thu Aug 16 15:01:42 2018 - [info] Connecting to root@192.168.88.20(node1:22)..
Checking slave recovery environment settings..
Opening /home/mysqldir/data/relay-log.info ... ok.
Relay log found at /home/mysqldir, up to relay-bin.000002
Temporary relay log file is /home/mysqldir/relay-bin.000002
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Thu Aug 16 15:01:43 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='muser' --slave_host=node3 --slave_ip=192.168.88.19 --slave_port=3380 --workdir=/usr/local/mha --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/home/mysqldir/data/relay-log.info --relay_dir=/home/mysqldir/data/ --slave_pass=xxx
Thu Aug 16 15:01:43 2018 - [info] Connecting to root@192.168.88.19(node3:22)..
Checking slave recovery environment settings..
Opening /home/mysqldir/data/relay-log.info ... ok.
Relay log found at /home/mysqldir, up to relay-bin.000003
Temporary relay log file is /home/mysqldir/relay-bin.000003
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Thu Aug 16 15:01:43 2018 - [info] Slaves settings check done.
Thu Aug 16 15:01:43 2018 - [info]
node2(192.168.88.18:3380) (current master)
+--node1(192.168.88.20:3380)
+--node3(192.168.88.19:3380)
Thu Aug 16 15:01:43 2018 - [info] Checking replication health on node1..
Thu Aug 16 15:01:43 2018 - [info] ok.
Thu Aug 16 15:01:43 2018 - [info] Checking replication health on node3..
Thu Aug 16 15:01:43 2018 - [info] ok.
Thu Aug 16 15:01:43 2018 - [info] Checking master_ip_failover_script status:
Thu Aug 16 15:01:43 2018 - [info] /usr/local/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=node2 --orig_master_ip=192.168.88.18 --orig_master_port=3380
IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 192.168.88.222/24===
Checking the Status of the script.. OK
Thu Aug 16 15:01:43 2018 - [info] OK.
Thu Aug 16 15:01:43 2018 - [warning] shutdown_script is not defined.
Thu Aug 16 15:01:43 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
[root@node2 bin]#
Manual online master switchover
[root@node2 ~]# masterha_master_switch --conf=/etc/mha/mha.conf --master_state=alive --new_master_host=node1 --orig_master_is_new_slave
Thu Aug 16 16:45:54 2018 - [info] MHA::MasterRotate version 0.58.
Thu Aug 16 16:45:54 2018 - [info] Starting online master switch..
Thu Aug 16 16:45:54 2018 - [info]
Thu Aug 16 16:45:54 2018 - [info] * Phase 1: Configuration Check Phase..
Thu Aug 16 16:45:54 2018 - [info]
Thu Aug 16 16:45:54 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 16:45:54 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 16:45:54 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 16:45:55 2018 - [info] GTID failover mode = 1
Thu Aug 16 16:45:55 2018 - [info] Current Alive Master: 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 16:45:55 2018 - [info] Alive Slaves:
Thu Aug 16 16:45:55 2018 - [info] 192.168.88.20(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 16:45:55 2018 - [info] GTID ON
Thu Aug 16 16:45:55 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 16:45:55 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 16:45:55 2018 - [info] 192.168.88.19(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 16:45:55 2018 - [info] GTID ON
Thu Aug 16 16:45:55 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 16:45:55 2018 - [info] Primary candidate for the new Master (candidate_master is set)
It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before switching. Is it ok to execute on 192.168.88.18(192.168.88.18:3380)? (YES/no): YES
Thu Aug 16 16:45:57 2018 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This may take long time..
Thu Aug 16 16:45:57 2018 - [info] ok.
Thu Aug 16 16:45:57 2018 - [info] Checking MHA is not monitoring or doing failover..
Thu Aug 16 16:45:57 2018 - [info] Checking replication health on 192.168.88.20..
Thu Aug 16 16:45:57 2018 - [info] ok.
Thu Aug 16 16:45:57 2018 - [info] Checking replication health on 192.168.88.19..
Thu Aug 16 16:45:57 2018 - [info] ok.
Thu Aug 16 16:45:57 2018 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln1218] node1 is not alive!
Thu Aug 16 16:45:57 2018 - [error][/usr/local/share/perl5/MHA/MasterRotate.pm, ln233] Failed to get new master!
Thu Aug 16 16:45:57 2018 - [error][/usr/local/share/perl5/MHA/ManagerUtil.pm, ln177] Got ERROR: at /usr/local/bin/masterha_master_switch line 53.
[root@node2 ~]# vi /etc/mha/mha.conf
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# masterha_master_switch --conf=/etc/mha/mha.conf --master_state=alive --new_master_host=node1 --orig_master_is_new_slave
Thu Aug 16 16:48:04 2018 - [info] MHA::MasterRotate version 0.58.
Thu Aug 16 16:48:04 2018 - [info] Starting online master switch..
Thu Aug 16 16:48:04 2018 - [info]
Thu Aug 16 16:48:04 2018 - [info] * Phase 1: Configuration Check Phase..
Thu Aug 16 16:48:04 2018 - [info]
Thu Aug 16 16:48:04 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 16:48:04 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 16:48:04 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 16:48:05 2018 - [info] GTID failover mode = 1
Thu Aug 16 16:48:05 2018 - [info] Current Alive Master: node2(192.168.88.18:3380)
Thu Aug 16 16:48:05 2018 - [info] Alive Slaves:
Thu Aug 16 16:48:05 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 16:48:05 2018 - [info] GTID ON
Thu Aug 16 16:48:05 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 16:48:05 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 16:48:05 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 16:48:05 2018 - [info] GTID ON
Thu Aug 16 16:48:05 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 16:48:05 2018 - [info] Primary candidate for the new Master (candidate_master is set)
It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before switching. Is it ok to execute on node2(192.168.88.18:3380)? (YES/no): YES
Thu Aug 16 16:48:07 2018 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This may take long time..
Thu Aug 16 16:48:07 2018 - [info] ok.
Thu Aug 16 16:48:07 2018 - [info] Checking MHA is not monitoring or doing failover..
Thu Aug 16 16:48:07 2018 - [info] Checking replication health on node1..
Thu Aug 16 16:48:07 2018 - [info] ok.
Thu Aug 16 16:48:07 2018 - [info] Checking replication health on node3..
Thu Aug 16 16:48:07 2018 - [info] ok.
Thu Aug 16 16:48:07 2018 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln1218] node1 is not alive!
Thu Aug 16 16:48:07 2018 - [error][/usr/local/share/perl5/MHA/MasterRotate.pm, ln233] Failed to get new master!
Thu Aug 16 16:48:07 2018 - [error][/usr/local/share/perl5/MHA/ManagerUtil.pm, ln177] Got ERROR: at /usr/local/bin/masterha_master_switch line 53.
[root@node2 ~]# ping node1
PING node1 (192.168.88.20) 56(84) bytes of data.
64 bytes from node1 (192.168.88.20): icmp_seq=1 ttl=64 time=0.477 ms
64 bytes from node1 (192.168.88.20): icmp_seq=2 ttl=64 time=0.335 ms
64 bytes from node1 (192.168.88.20): icmp_seq=3 ttl=64 time=0.442 ms
^C
--- node1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.335/0.418/0.477/0.060 ms
[root@node2 ~]# ping node2
PING node2 (192.168.88.18) 56(84) bytes of data.
64 bytes from node2 (192.168.88.18): icmp_seq=1 ttl=64 time=0.021 ms
64 bytes from node2 (192.168.88.18): icmp_seq=2 ttl=64 time=0.031 ms
^C
--- node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.021/0.026/0.031/0.005 ms
[root@node2 ~]# ping node3
PING node3 (192.168.88.19) 56(84) bytes of data.
64 bytes from node3 (192.168.88.19): icmp_seq=1 ttl=64 time=0.455 ms
64 bytes from node3 (192.168.88.19): icmp_seq=2 ttl=64 time=0.535 ms
^C
--- node3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.455/0.495/0.535/0.040 ms
Adding --new_master_port=3380 resolves the error above:
[root@node2 ~]# masterha_master_switch --conf=/etc/mha/mha.conf --master_state=alive --new_master_host=node1 --new_master_port=3380 --orig_master_is_new_slave
Thu Aug 16 16:59:24 2018 - [info] MHA::MasterRotate version 0.58.
Thu Aug 16 16:59:24 2018 - [info] Starting online master switch..
Thu Aug 16 16:59:24 2018 - [info]
Thu Aug 16 16:59:24 2018 - [info] * Phase 1: Configuration Check Phase..
Thu Aug 16 16:59:24 2018 - [info]
Thu Aug 16 16:59:24 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 16:59:24 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 16:59:24 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 16:59:25 2018 - [info] GTID failover mode = 1
Thu Aug 16 16:59:25 2018 - [info] Current Alive Master: node2(192.168.88.18:3380)
Thu Aug 16 16:59:25 2018 - [info] Alive Slaves:
Thu Aug 16 16:59:25 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 16:59:25 2018 - [info] GTID ON
Thu Aug 16 16:59:25 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 16:59:25 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 16:59:25 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 16:59:25 2018 - [info] GTID ON
Thu Aug 16 16:59:25 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 16:59:25 2018 - [info] Primary candidate for the new Master (candidate_master is set)
It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before switching. Is it ok to execute on node2(192.168.88.18:3380)? (YES/no): YES
Thu Aug 16 16:59:28 2018 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This may take long time..
Thu Aug 16 16:59:28 2018 - [info] ok.
Thu Aug 16 16:59:28 2018 - [info] Checking MHA is not monitoring or doing failover..
Thu Aug 16 16:59:28 2018 - [info] Checking replication health on node1..
Thu Aug 16 16:59:28 2018 - [info] ok.
Thu Aug 16 16:59:28 2018 - [info] Checking replication health on node3..
Thu Aug 16 16:59:28 2018 - [info] ok.
Thu Aug 16 16:59:28 2018 - [info] node1 can be new master.
Thu Aug 16 16:59:28 2018 - [info]
From:
node2(192.168.88.18:3380) (current master)
+--node1(192.168.88.20:3380)
+--node3(192.168.88.19:3380)
To:
node1(192.168.88.20:3380) (new master)
+--node3(192.168.88.19:3380)
+--node2(192.168.88.18:3380)
Starting master switch from node2(192.168.88.18:3380) to node1(192.168.88.20:3380)? (yes/NO): YES
Thu Aug 16 16:59:46 2018 - [info] Checking whether node1(192.168.88.20:3380) is ok for the new master..
Thu Aug 16 16:59:46 2018 - [info] ok.
Thu Aug 16 16:59:46 2018 - [info] node2(192.168.88.18:3380): SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE MASTER to a dummy host.
Thu Aug 16 16:59:46 2018 - [info] node2(192.168.88.18:3380): Resetting slave pointing to the dummy host.
Thu Aug 16 16:59:46 2018 - [info] ** Phase 1: Configuration Check Phase completed.
Thu Aug 16 16:59:46 2018 - [info]
Thu Aug 16 16:59:46 2018 - [info] * Phase 2: Rejecting updates Phase..
Thu Aug 16 16:59:46 2018 - [info]
Thu Aug 16 16:59:46 2018 - [info] Executing master ip online change script to disable write on the current master:
Thu Aug 16 16:59:46 2018 - [info] /usr/local/scripts/master_ip_online_change --command=stop --orig_master_host=node2 --orig_master_ip=192.168.88.18 --orig_master_port=3380 --orig_master_user='muser' --new_master_host=node1 --new_master_ip=192.168.88.20 --new_master_port=3380 --new_master_user='muser' --orig_master_ssh_user=root --new_master_ssh_user=root --orig_master_is_new_slave --orig_master_password=xxx --new_master_password=xxx
Thu Aug 16 16:59:46 2018 452519 Set read_only on the new master.. ok.
Thu Aug 16 16:59:46 2018 455830 drop vip 192.168.88.222..
RTNETLINK answers: Cannot assign requested address
Thu Aug 16 16:59:46 2018 617226 Waiting all running 2 threads are disconnected.. (max 1500 milliseconds)
{'Time' => '1170','db' => undef,'Id' => '2','User' => 'repl','State' => 'Master has sent all binlog to slave; waiting for more updates','Command' => 'Binlog Dump GTID','Info' => undef,'Host' => '192.168.88.20:44731'}
{'Time' => '1034','db' => undef,'Id' => '3','User' => 'repl','State' => 'Master has sent all binlog to slave; waiting for more updates','Command' => 'Binlog Dump GTID','Info' => undef,'Host' => '192.168.88.19:39830'}
Thu Aug 16 16:59:47 2018 116762 Waiting all running 2 threads are disconnected.. (max 1000 milliseconds)
{'Time' => '1171','db' => undef,'Id' => '2','User' => 'repl','State' => 'Master has sent all binlog to slave; waiting for more updates','Command' => 'Binlog Dump GTID','Info' => undef,'Host' => '192.168.88.20:44731'}
{'Time' => '1035','db' => undef,'Id' => '3','User' => 'repl','State' => 'Master has sent all binlog to slave; waiting for more updates','Command' => 'Binlog Dump GTID','Info' => undef,'Host' => '192.168.88.19:39830'}
Thu Aug 16 16:59:47 2018 617500 Waiting all running 2 threads are disconnected.. (max 500 milliseconds)
{'Time' => '1171','db' => undef,'Id' => '2','User' => 'repl','State' => 'Master has sent all binlog to slave; waiting for more updates','Command' => 'Binlog Dump GTID','Info' => undef,'Host' => '192.168.88.20:44731'}
{'Time' => '1035','db' => undef,'Id' => '3','User' => 'repl','State' => 'Master has sent all binlog to slave; waiting for more updates','Command' => 'Binlog Dump GTID','Info' => undef,'Host' => '192.168.88.19:39830'}
Thu Aug 16 16:59:48 2018 118255 Set read_only=1 on the orig master.. ok.
Thu Aug 16 16:59:48 2018 119471 Waiting all running 2 queries are disconnected.. (max 500 milliseconds)
{'Time' => '1172','db' => undef,'Id' => '2','User' => 'repl','State' => 'Master has sent all binlog to slave; waiting for more updates','Command' => 'Binlog Dump GTID','Info' => undef,'Host' => '192.168.88.20:44731'}
{'Time' => '1036','db' => undef,'Id' => '3','User' => 'repl','State' => 'Master has sent all binlog to slave; waiting for more updates','Command' => 'Binlog Dump GTID','Info' => undef,'Host' => '192.168.88.19:39830'}
Thu Aug 16 16:59:48 2018 619092 Killing all application threads..
Thu Aug 16 16:59:48 2018 630238 done.
Thu Aug 16 16:59:48 2018 - [info] ok.
Thu Aug 16 16:59:48 2018 - [info] Locking all tables on the orig master to reject updates from everybody (including root):
Thu Aug 16 16:59:48 2018 - [info] Executing FLUSH TABLES WITH READ LOCK..
Thu Aug 16 16:59:48 2018 - [info] ok.
Thu Aug 16 16:59:48 2018 - [info] Orig master binlog:pos is mysql-bin.000010:194.
Thu Aug 16 16:59:48 2018 - [info] Waiting to execute all relay logs on node1(192.168.88.20:3380)..
Thu Aug 16 16:59:48 2018 - [info] master_pos_wait(mysql-bin.000010:194) completed on node1(192.168.88.20:3380). Executed 0 events.
Thu Aug 16 16:59:48 2018 - [info] done.
Thu Aug 16 16:59:48 2018 - [info] Getting new master's binlog name and position..
Thu Aug 16 16:59:48 2018 - [info] mysql-bin.000017:194
Thu Aug 16 16:59:48 2018 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='node1 or 192.168.88.20', MASTER_PORT=3380, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';
Thu Aug 16 16:59:48 2018 - [info] Executing master ip online change script to allow write on the new master:
Thu Aug 16 16:59:48 2018 - [info] /usr/local/scripts/master_ip_online_change --command=start --orig_master_host=node2 --orig_master_ip=192.168.88.18 --orig_master_port=3380 --orig_master_user='muser' --new_master_host=node1 --new_master_ip=192.168.88.20 --new_master_port=3380 --new_master_user='muser' --orig_master_ssh_user=root --new_master_ssh_user=root --orig_master_is_new_slave --orig_master_password=xxx --new_master_password=xxx
Thu Aug 16 16:59:49 2018 157075 Set read_only=0 on the new master.
Thu Aug 16 16:59:49 2018 158229 Add vip 192.168.88.222 on eth0..
Thu Aug 16 16:59:49 2018 - [info] ok.
Thu Aug 16 16:59:49 2018 - [info]
Thu Aug 16 16:59:49 2018 - [info] * Switching slaves in parallel..
Thu Aug 16 16:59:49 2018 - [info]
Thu Aug 16 16:59:49 2018 - [info] -- Slave switch on host node3(192.168.88.19:3380) started, pid: 2141
Thu Aug 16 16:59:49 2018 - [info]
Thu Aug 16 16:59:50 2018 - [info] Log messages from node3 ...
Thu Aug 16 16:59:50 2018 - [info]
Thu Aug 16 16:59:49 2018 - [info] Waiting to execute all relay logs on node3(192.168.88.19:3380)..
Thu Aug 16 16:59:49 2018 - [info] master_pos_wait(mysql-bin.000010:194) completed on node3(192.168.88.19:3380). Executed 0 events.
Thu Aug 16 16:59:49 2018 - [info] done.
Thu Aug 16 16:59:49 2018 - [info] Resetting slave node3(192.168.88.19:3380) and starting replication from the new master node1(192.168.88.20:3380)..
Thu Aug 16 16:59:49 2018 - [info] Executed CHANGE MASTER.
Thu Aug 16 16:59:49 2018 - [info] Slave started.
Thu Aug 16 16:59:50 2018 - [info] End of log messages from node3 ...
Thu Aug 16 16:59:50 2018 - [info]
Thu Aug 16 16:59:50 2018 - [info] -- Slave switch on host node3(192.168.88.19:3380) succeeded.
Thu Aug 16 16:59:50 2018 - [info] Unlocking all tables on the orig master:
Thu Aug 16 16:59:50 2018 - [info] Executing UNLOCK TABLES..
Thu Aug 16 16:59:50 2018 - [info] ok.
Thu Aug 16 16:59:50 2018 - [info] Starting orig master as a new slave..
Thu Aug 16 16:59:50 2018 - [info] Resetting slave node2(192.168.88.18:3380) and starting replication from the new master node1(192.168.88.20:3380)..
Thu Aug 16 16:59:50 2018 - [info] Executed CHANGE MASTER.
Thu Aug 16 16:59:51 2018 - [error][/usr/local/share/perl5/MHA/Server.pm, ln775] SQL Thread could not be started on node2(192.168.88.18:3380)! Check slave status.
Thu Aug 16 16:59:51 2018 - [error][/usr/local/share/perl5/MHA/Server.pm, ln779] Last Error= 1007, Last Error=Error 'Can't create database 't_mha'; database exists' on query. Default database: 't_mha'. Query: 'create database t_mha DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci'
Thu Aug 16 16:59:51 2018 - [error][/usr/local/share/perl5/MHA/Server.pm, ln867] Starting slave IO/SQL thread on node2(192.168.88.18:3380) failed!
Thu Aug 16 16:59:51 2018 - [error][/usr/local/share/perl5/MHA/MasterRotate.pm, ln584] Failed!
Thu Aug 16 16:59:51 2018 - [error][/usr/local/share/perl5/MHA/MasterRotate.pm, ln613] Switching master to node1(192.168.88.20:3380) done, but switching slaves partially failed.
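The log does not show the recovery for the Error 1007 above. One common way to clear a duplicate-object error under GTID replication (an assumption, not taken from this log) is to commit an empty transaction on node2 for the offending GTID and then restart the slave. The `UUID:N` below is a placeholder for the failing transaction reported by `SHOW SLAVE STATUS`:

```sql
-- On node2: skip the failing 'create database t_mha' transaction.
STOP SLAVE;
-- Placeholder: substitute the GTID of the failing transaction
-- as shown in SHOW SLAVE STATUS (Retrieved_Gtid_Set / Last_Error).
SET GTID_NEXT = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:N';
BEGIN;
COMMIT;
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```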
[root@node2 ~]#
[root@node2 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:82:9a:e9 brd ff:ff:ff:ff:ff:ff
inet 192.168.88.18/24 brd 192.168.88.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.88.222/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe82:9ae9/64 scope link
valid_lft forever preferred_lft forever
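Note that the VIP above sits directly on eth0 as `192.168.88.222/32` (added with `ip`, no alias label), while the failover script tears it down with `/sbin/ifconfig eth0:1 down`; this mismatch is the likely source of the `RTNETLINK answers: Cannot assign requested address` error seen earlier. A hedged sketch of symmetric add/drop helpers that use `ip` for both directions (VIP and device values are assumptions from this setup):

```shell
#!/bin/sh
# VIP helpers for master_ip_failover / master_ip_online_change.
# Assumed values from this setup; override via environment if needed.
VIP=${VIP:-192.168.88.222/24}
DEV=${DEV:-eth0}

# Add the VIP on the new master; drop it on the old master.
add_vip()  { /sbin/ip addr add "$VIP" dev "$DEV"; }
drop_vip() { /sbin/ip addr del "$VIP" dev "$DEV"; }
```

Because both helpers name the address the same way, a drop on the old master removes exactly what an earlier add created, instead of mixing `ip` adds with `ifconfig` alias teardowns.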
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# mysqladmin -uroot -proot001 -P3380 -S /home/mysqldir/mysql.sock shutdown
[root@node2 ~]#
[root@node2 mha]# tail -f manager.log
Thu Aug 16 17:33:08 2018 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Thu Aug 16 17:33:08 2018 - [info] Starting ping health check on node2(192.168.88.18:3380)..
Thu Aug 16 17:33:08 2018 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
Thu Aug 16 17:36:32 2018 - [warning] Got error on MySQL select ping: 2006 (MySQL server has gone away)
Thu Aug 16 17:36:32 2018 - [info] Executing SSH check script: exit 0
Thu Aug 16 17:36:32 2018 - [info] HealthCheck: SSH to node2 is reachable.
Thu Aug 16 17:36:35 2018 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.88.18' (111))
Thu Aug 16 17:36:35 2018 - [warning] Connection failed 2 time(s)..
Thu Aug 16 17:36:38 2018 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.88.18' (111))
Thu Aug 16 17:36:38 2018 - [warning] Connection failed 3 time(s)..
Thu Aug 16 17:36:41 2018 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.88.18' (111))
Thu Aug 16 17:36:41 2018 - [warning] Connection failed 4 time(s)..
Thu Aug 16 17:36:41 2018 - [warning] Master is not reachable from health checker!
Thu Aug 16 17:36:41 2018 - [warning] Master node2(192.168.88.18:3380) is not reachable!
Thu Aug 16 17:36:41 2018 - [warning] SSH is reachable.
Thu Aug 16 17:36:41 2018 - [info] Connecting to a master server failed. Reading configuration file /etc/masterha_default.cnf and /etc/mha/mha.conf again, and trying to connect to all servers to check server status..
Thu Aug 16 17:36:41 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 17:36:41 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 17:36:41 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 17:36:42 2018 - [info] GTID failover mode = 1
Thu Aug 16 17:36:42 2018 - [info] Dead Servers:
Thu Aug 16 17:36:42 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 17:36:42 2018 - [info] Alive Servers:
Thu Aug 16 17:36:42 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 17:36:42 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 17:36:42 2018 - [info] Alive Slaves:
Thu Aug 16 17:36:42 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:42 2018 - [info] GTID ON
Thu Aug 16 17:36:42 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:42 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:42 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:42 2018 - [info] GTID ON
Thu Aug 16 17:36:42 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:42 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:42 2018 - [info] Checking slave configurations..
Thu Aug 16 17:36:42 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 17:36:42 2018 - [info] Checking replication filtering settings..
Thu Aug 16 17:36:42 2018 - [info] Replication filtering check ok.
Thu Aug 16 17:36:42 2018 - [info] Master is down!
Thu Aug 16 17:36:42 2018 - [info] Terminating monitoring script.
Thu Aug 16 17:36:42 2018 - [info] Got exit code 20 (Master dead).
Thu Aug 16 17:36:42 2018 - [info] MHA::MasterFailover version 0.58.
Thu Aug 16 17:36:42 2018 - [info] Starting master failover.
Thu Aug 16 17:36:42 2018 - [info]
Thu Aug 16 17:36:42 2018 - [info] * Phase 1: Configuration Check Phase..
Thu Aug 16 17:36:42 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] GTID failover mode = 1
Thu Aug 16 17:36:44 2018 - [info] Dead Servers:
Thu Aug 16 17:36:44 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Checking master reachability via MySQL(double check)...
Thu Aug 16 17:36:44 2018 - [info] ok.
Thu Aug 16 17:36:44 2018 - [info] Alive Servers:
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 17:36:44 2018 - [info] Alive Slaves:
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] Starting GTID based failover.
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] ** Phase 1: Configuration Check Phase completed.
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 2: Dead Master Shutdown Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] Forcing shutdown so that applications never connect to the current master..
Thu Aug 16 17:36:44 2018 - [info] Executing master IP deactivation script:
Thu Aug 16 17:36:44 2018 - [info] /usr/local/scripts/master_ip_failover --orig_master_host=node2 --orig_master_ip=192.168.88.18 --orig_master_port=3380 --command=stopssh --ssh_user=root
IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 192.168.88.222/24===
Disabling the VIP on old master: node2
SIOCSIFFLAGS: Cannot assign requested address
Thu Aug 16 17:36:44 2018 - [info] done.
Thu Aug 16 17:36:44 2018 - [warning] shutdown_script is not set. Skipping explicit shutting down of the dead master.
Thu Aug 16 17:36:44 2018 - [info] * Phase 2: Dead Master Shutdown Phase completed.
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 3: Master Recovery Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 3.1: Getting Latest Slaves Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] The latest binary log file/position on all slaves is mysql-bin.000001:418
Thu Aug 16 17:36:44 2018 - [info] Retrieved Gtid Set: 7f07d504-9fd8-11e8-80c9-525400829ae9:1
Thu Aug 16 17:36:44 2018 - [info] Latest slaves (Slaves that received relay log files to the latest):
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] The oldest binary log file/position on all slaves is mysql-bin.000001:418
Thu Aug 16 17:36:44 2018 - [info] Retrieved Gtid Set: 7f07d504-9fd8-11e8-80c9-525400829ae9:1
Thu Aug 16 17:36:44 2018 - [info] Oldest slaves:
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 3.3: Determining New Master Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] Searching new master from slaves..
Thu Aug 16 17:36:44 2018 - [info] Candidate masters from the configuration file:
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] Non-candidate masters:
Thu Aug 16 17:36:44 2018 - [info] Searching from candidate_master slaves which have received the latest relay log events..
Thu Aug 16 17:36:44 2018 - [info] New master is node1(192.168.88.20:3380)
Thu Aug 16 17:36:44 2018 - [info] Starting master failover..
Thu Aug 16 17:36:44 2018 - [info]
From:
node2(192.168.88.18:3380) (current master)
+--node1(192.168.88.20:3380)
+--node3(192.168.88.19:3380)
To:
node1(192.168.88.20:3380) (new master)
+--node3(192.168.88.19:3380)
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 3.3: New Master Recovery Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] Waiting all logs to be applied..
Thu Aug 16 17:36:44 2018 - [info] done.
Thu Aug 16 17:36:44 2018 - [info] Getting new master's binlog name and position..
Thu Aug 16 17:36:44 2018 - [info] mysql-bin.000001:154
Thu Aug 16 17:36:44 2018 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='node1 or 192.168.88.20', MASTER_PORT=3380, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';
Thu Aug 16 17:36:44 2018 - [info] Master Recovery succeeded. File:Pos:Exec_Gtid_Set: mysql-bin.000001, 154, 7f07d504-9fd8-11e8-80c9-525400829ae9:1
Thu Aug 16 17:36:44 2018 - [info] Executing master IP activate script:
Thu Aug 16 17:36:44 2018 - [info] /usr/local/scripts/master_ip_failover --command=start --ssh_user=root --orig_master_host=node2 --orig_master_ip=192.168.88.18 --orig_master_port=3380 --new_master_host=node1 --new_master_ip=192.168.88.20 --new_master_port=3380 --new_master_user='muser' --new_master_password=xxx
Unknown option: new_master_user
Unknown option: new_master_password
IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 192.168.88.222/24===
Enabling the VIP - 192.168.88.222/24 on the new master - node1
Thu Aug 16 17:36:44 2018 - [info] OK.
Thu Aug 16 17:36:44 2018 - [info] Setting read_only=0 on node1(192.168.88.20:3380)..
Thu Aug 16 17:36:44 2018 - [info] ok.
Thu Aug 16 17:36:44 2018 - [info] ** Finished master recovery successfully.
Thu Aug 16 17:36:44 2018 - [info] * Phase 3: Master Recovery Phase completed.
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 4: Slaves Recovery Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 4.1: Starting Slaves in parallel..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] -- Slave recovery on host node3(192.168.88.19:3380) started, pid: 2390. Check tmp log /usr/local/mha/node3_3380_20180816173642.log if it takes time..
Thu Aug 16 17:36:45 2018 - [info]
Thu Aug 16 17:36:45 2018 - [info] Log messages from node3 ...
Thu Aug 16 17:36:45 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] Resetting slave node3(192.168.88.19:3380) and starting replication from the new master node1(192.168.88.20:3380)..
Thu Aug 16 17:36:44 2018 - [info] Executed CHANGE MASTER.
Thu Aug 16 17:36:44 2018 - [info] Slave started.
Thu Aug 16 17:36:44 2018 - [info] gtid_wait(7f07d504-9fd8-11e8-80c9-525400829ae9:1) completed on node3(192.168.88.19:3380). Executed 0 events.
Thu Aug 16 17:36:45 2018 - [info] End of log messages from node3.
Thu Aug 16 17:36:45 2018 - [info] -- Slave on host node3(192.168.88.19:3380) started.
Thu Aug 16 17:36:45 2018 - [info] All new slave servers recovered successfully.
Thu Aug 16 17:36:45 2018 - [info]
Thu Aug 16 17:36:45 2018 - [info] * Phase 5: New master cleanup phase..
Thu Aug 16 17:36:45 2018 - [info]
Thu Aug 16 17:36:45 2018 - [info] Resetting slave info on the new master..
Thu Aug 16 17:36:45 2018 - [info] node1: Resetting slave info succeeded.
Thu Aug 16 17:36:45 2018 - [info] Master failover to node1(192.168.88.20:3380) completed successfully.
Thu Aug 16 17:36:45 2018 - [info]
----- Failover Report -----
mha: MySQL Master failover node2(192.168.88.18:3380) to node1(192.168.88.20:3380) succeeded
Master node2(192.168.88.18:3380) is down!
Check MHA Manager logs at node2:/usr/local/mha/manager.log for details.
Started automated(non-interactive) failover.
Invalidated master IP address on node2(192.168.88.18:3380)
Selected node1(192.168.88.20:3380) as a new master.
node1(192.168.88.20:3380): OK: Applying all logs succeeded.
node1(192.168.88.20:3380): OK: Activated master IP address.
node3(192.168.88.19:3380): OK: Slave started, replicating from node1(192.168.88.20:3380)
node1(192.168.88.20:3380): Resetting slave info succeeded.
Master failover to node1(192.168.88.20:3380) completed successfully.
Check the replication topology:
[root@node2 mha]# masterha_check_repl --conf=/etc/mha/mha.conf
Thu Aug 16 17:32:12 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 17:32:12 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 17:32:12 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 17:32:12 2018 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 16 17:32:13 2018 - [info] GTID failover mode = 1
Thu Aug 16 17:32:13 2018 - [info] Dead Servers:
Thu Aug 16 17:32:13 2018 - [info] Alive Servers:
Thu Aug 16 17:32:13 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 17:32:13 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 17:32:13 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 17:32:13 2018 - [info] Alive Slaves:
Thu Aug 16 17:32:13 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:32:13 2018 - [info] GTID ON
Thu Aug 16 17:32:13 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:32:13 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:32:13 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:32:13 2018 - [info] GTID ON
Thu Aug 16 17:32:13 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:32:13 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:32:13 2018 - [info] Current Alive Master: node2(192.168.88.18:3380)
Thu Aug 16 17:32:13 2018 - [info] Checking slave configurations..
Thu Aug 16 17:32:13 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 17:32:13 2018 - [info] Checking replication filtering settings..
Thu Aug 16 17:32:13 2018 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 16 17:32:13 2018 - [info] Replication filtering check ok.
Thu Aug 16 17:32:13 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Aug 16 17:32:13 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 16 17:32:13 2018 - [info] HealthCheck: SSH to node2 is reachable.
Thu Aug 16 17:32:13 2018 - [info]
node2(192.168.88.18:3380) (current master)
+--node1(192.168.88.20:3380)
+--node3(192.168.88.19:3380)
Thu Aug 16 17:32:13 2018 - [info] Checking replication health on node1..
Thu Aug 16 17:32:13 2018 - [info] ok.
Thu Aug 16 17:32:13 2018 - [info] Checking replication health on node3..
Thu Aug 16 17:32:13 2018 - [info] ok.
Thu Aug 16 17:32:13 2018 - [info] Checking master_ip_failover_script status:
Thu Aug 16 17:32:13 2018 - [info] /usr/local/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=node2 --orig_master_ip=192.168.88.18 --orig_master_port=3380
IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 192.168.88.222/24===
Checking the Status of the script.. OK
Thu Aug 16 17:32:14 2018 - [info] OK.
Thu Aug 16 17:32:14 2018 - [warning] shutdown_script is not defined.
Thu Aug 16 17:32:14 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
[root@node2 mha]# masterha_check_status --conf=/etc/mha/mha.conf
mha is stopped(2:NOT_RUNNING).
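The /etc/mha/mha.conf referenced by these commands is not shown in the log. For reference, a minimal sketch consistent with the hosts, ports, and paths that do appear above (manager_workdir, manager.log, the failover script path, port 3380, candidate_master on node1 and node3). The user names and both passwords are placeholders, and ping_interval is an assumption:

```ini
[server default]
manager_workdir=/usr/local/mha
manager_log=/usr/local/mha/manager.log
master_ip_failover_script=/usr/local/scripts/master_ip_failover
user=muser
password=xxx
repl_user=repl
repl_password=xxx
ssh_user=root
ping_interval=3

[server1]
hostname=node2
port=3380
candidate_master=1

[server2]
hostname=node1
port=3380
candidate_master=1

[server3]
hostname=node3
port=3380
candidate_master=1
```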
Start the MHA manager service
[root@node2 mha]# nohup masterha_manager --conf /etc/mha/mha.conf > /tmp/mha_manager.log </dev/null 2>&1 &
[2] 2239
[root@node2 mha]#
[root@node2 mha]#
Check the MHA status after startup
[root@node2 mha]# masterha_check_status --conf=/etc/mha/mha.conf
mha (pid:2239) is running(0:PING_OK), master:node2
Shut down the current master to trigger an automatic failover
[root@node2 ~]# mysqladmin -uroot -proot001 -P3380 -S /home/mysqldir/mysql.sock shutdown
[root@node2 ~]#
The other slave is now replicating from the new master:
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.88.20
Master_User: repl
Master_Port: 3380
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 367
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 154
Relay_Log_Space: 568
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 20
Master_UUID: e69195e8-9fdb-11e8-8473-525400e25850
Master_Info_File: /home/mysqldir/data/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set: 7f07d504-9fd8-11e8-80c9-525400829ae9:1
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
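Rather than eyeballing the `\G` output, the key health fields can be pulled out with awk. A sketch, assuming the output has been captured to a variable (in practice via something like `mysql -P3380 -e 'show slave status\G'`); the sample text below is a trimmed copy of the output above:

```shell
#!/bin/sh
# Trimmed sample of SHOW SLAVE STATUS\G output; capture the real thing with:
#   mysql -uroot -p -P3380 -e 'show slave status\G'
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

# Split "Field: Value" lines on the colon and grab the value.
io=$(printf '%s\n' "$status"  | awk -F': *' '/Slave_IO_Running:/  {print $2}')
sql=$(printf '%s\n' "$status" | awk -F': *' '/Slave_SQL_Running:/ {print $2}')
lag=$(printf '%s\n' "$status" | awk -F': *' '/Seconds_Behind_Master:/ {print $2}')

if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    echo "replication healthy, lag=${lag}s"
else
    echo "replication BROKEN: IO=$io SQL=$sql"
fi
```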
The failover log:
[root@node2 mha]# tail -f manager.log
Thu Aug 16 17:33:08 2018 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Thu Aug 16 17:33:08 2018 - [info] Starting ping health check on node2(192.168.88.18:3380)..
Thu Aug 16 17:33:08 2018 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
Thu Aug 16 17:36:32 2018 - [warning] Got error on MySQL select ping: 2006 (MySQL server has gone away)
Thu Aug 16 17:36:32 2018 - [info] Executing SSH check script: exit 0
Thu Aug 16 17:36:32 2018 - [info] HealthCheck: SSH to node2 is reachable.
Thu Aug 16 17:36:35 2018 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.88.18' (111))
Thu Aug 16 17:36:35 2018 - [warning] Connection failed 2 time(s)..
Thu Aug 16 17:36:38 2018 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.88.18' (111))
Thu Aug 16 17:36:38 2018 - [warning] Connection failed 3 time(s)..
Thu Aug 16 17:36:41 2018 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.88.18' (111))
Thu Aug 16 17:36:41 2018 - [warning] Connection failed 4 time(s)..
Thu Aug 16 17:36:41 2018 - [warning] Master is not reachable from health checker!
Thu Aug 16 17:36:41 2018 - [warning] Master node2(192.168.88.18:3380) is not reachable!
Thu Aug 16 17:36:41 2018 - [warning] SSH is reachable.
Thu Aug 16 17:36:41 2018 - [info] Connecting to a master server failed. Reading configuration file /etc/masterha_default.cnf and /etc/mha/mha.conf again, and trying to connect to all servers to check server status..
Thu Aug 16 17:36:41 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 16 17:36:41 2018 - [info] Reading application default configuration from /etc/mha/mha.conf..
Thu Aug 16 17:36:41 2018 - [info] Reading server configuration from /etc/mha/mha.conf..
Thu Aug 16 17:36:42 2018 - [info] GTID failover mode = 1
Thu Aug 16 17:36:42 2018 - [info] Dead Servers:
Thu Aug 16 17:36:42 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 17:36:42 2018 - [info] Alive Servers:
Thu Aug 16 17:36:42 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 17:36:42 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 17:36:42 2018 - [info] Alive Slaves:
Thu Aug 16 17:36:42 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:42 2018 - [info] GTID ON
Thu Aug 16 17:36:42 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:42 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:42 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:42 2018 - [info] GTID ON
Thu Aug 16 17:36:42 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:42 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:42 2018 - [info] Checking slave configurations..
Thu Aug 16 17:36:42 2018 - [info] read_only=1 is not set on slave node3(192.168.88.19:3380).
Thu Aug 16 17:36:42 2018 - [info] Checking replication filtering settings..
Thu Aug 16 17:36:42 2018 - [info] Replication filtering check ok.
Thu Aug 16 17:36:42 2018 - [info] Master is down!
Thu Aug 16 17:36:42 2018 - [info] Terminating monitoring script.
Thu Aug 16 17:36:42 2018 - [info] Got exit code 20 (Master dead).
Thu Aug 16 17:36:42 2018 - [info] MHA::MasterFailover version 0.58.
Thu Aug 16 17:36:42 2018 - [info] Starting master failover.
Thu Aug 16 17:36:42 2018 - [info]
Thu Aug 16 17:36:42 2018 - [info] * Phase 1: Configuration Check Phase..
Thu Aug 16 17:36:42 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] GTID failover mode = 1
Thu Aug 16 17:36:44 2018 - [info] Dead Servers:
Thu Aug 16 17:36:44 2018 - [info] node2(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Checking master reachability via MySQL(double check)...
Thu Aug 16 17:36:44 2018 - [info] ok.
Thu Aug 16 17:36:44 2018 - [info] Alive Servers:
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380)
Thu Aug 16 17:36:44 2018 - [info] Alive Slaves:
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] Starting GTID based failover.
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] ** Phase 1: Configuration Check Phase completed.
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 2: Dead Master Shutdown Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] Forcing shutdown so that applications never connect to the current master..
Thu Aug 16 17:36:44 2018 - [info] Executing master IP deactivation script:
Thu Aug 16 17:36:44 2018 - [info] /usr/local/scripts/master_ip_failover --orig_master_host=node2 --orig_master_ip=192.168.88.18 --orig_master_port=3380 --command=stopssh --ssh_user=root
IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 192.168.88.222/24===
Disabling the VIP on old master: node2
SIOCSIFFLAGS: Cannot assign requested address
Thu Aug 16 17:36:44 2018 - [info] done.
Thu Aug 16 17:36:44 2018 - [warning] shutdown_script is not set. Skipping explicit shutting down of the dead master.
Thu Aug 16 17:36:44 2018 - [info] * Phase 2: Dead Master Shutdown Phase completed.
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 3: Master Recovery Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 3.1: Getting Latest Slaves Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] The latest binary log file/position on all slaves is mysql-bin.000001:418
Thu Aug 16 17:36:44 2018 - [info] Retrieved Gtid Set: 7f07d504-9fd8-11e8-80c9-525400829ae9:1
Thu Aug 16 17:36:44 2018 - [info] Latest slaves (Slaves that received relay log files to the latest):
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] The oldest binary log file/position on all slaves is mysql-bin.000001:418
Thu Aug 16 17:36:44 2018 - [info] Retrieved Gtid Set: 7f07d504-9fd8-11e8-80c9-525400829ae9:1
Thu Aug 16 17:36:44 2018 - [info] Oldest slaves:
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 3.3: Determining New Master Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] Searching new master from slaves..
Thu Aug 16 17:36:44 2018 - [info] Candidate masters from the configuration file:
Thu Aug 16 17:36:44 2018 - [info] node1(192.168.88.20:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] node3(192.168.88.19:3380) Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Aug 16 17:36:44 2018 - [info] GTID ON
Thu Aug 16 17:36:44 2018 - [info] Replicating from 192.168.88.18(192.168.88.18:3380)
Thu Aug 16 17:36:44 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 16 17:36:44 2018 - [info] Non-candidate masters:
Thu Aug 16 17:36:44 2018 - [info] Searching from candidate_master slaves which have received the latest relay log events..
Thu Aug 16 17:36:44 2018 - [info] New master is node1(192.168.88.20:3380)
Thu Aug 16 17:36:44 2018 - [info] Starting master failover..
Thu Aug 16 17:36:44 2018 - [info]
From:
node2(192.168.88.18:3380) (current master)
+--node1(192.168.88.20:3380)
+--node3(192.168.88.19:3380)
To:
node1(192.168.88.20:3380) (new master)
+--node3(192.168.88.19:3380)
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 3.3: New Master Recovery Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] Waiting all logs to be applied..
Thu Aug 16 17:36:44 2018 - [info] done.
Thu Aug 16 17:36:44 2018 - [info] Getting new master's binlog name and position..
Thu Aug 16 17:36:44 2018 - [info] mysql-bin.000001:154
Thu Aug 16 17:36:44 2018 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='node1 or 192.168.88.20', MASTER_PORT=3380, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';
Thu Aug 16 17:36:44 2018 - [info] Master Recovery succeeded. File:Pos:Exec_Gtid_Set: mysql-bin.000001, 154, 7f07d504-9fd8-11e8-80c9-525400829ae9:1
Thu Aug 16 17:36:44 2018 - [info] Executing master IP activate script:
Thu Aug 16 17:36:44 2018 - [info] /usr/local/scripts/master_ip_failover --command=start --ssh_user=root --orig_master_host=node2 --orig_master_ip=192.168.88.18 --orig_master_port=3380 --new_master_host=node1 --new_master_ip=192.168.88.20 --new_master_port=3380 --new_master_user='muser' --new_master_password=xxx
Unknown option: new_master_user
Unknown option: new_master_password
IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 192.168.88.222/24===
Enabling the VIP - 192.168.88.222/24 on the new master - node1
Thu Aug 16 17:36:44 2018 - [info] OK.
Thu Aug 16 17:36:44 2018 - [info] Setting read_only=0 on node1(192.168.88.20:3380)..
Thu Aug 16 17:36:44 2018 - [info] ok.
Thu Aug 16 17:36:44 2018 - [info] ** Finished master recovery successfully.
Thu Aug 16 17:36:44 2018 - [info] * Phase 3: Master Recovery Phase completed.
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 4: Slaves Recovery Phase..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] * Phase 4.1: Starting Slaves in parallel..
Thu Aug 16 17:36:44 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] -- Slave recovery on host node3(192.168.88.19:3380) started, pid: 2390. Check tmp log /usr/local/mha/node3_3380_20180816173642.log if it takes time..
Thu Aug 16 17:36:45 2018 - [info]
Thu Aug 16 17:36:45 2018 - [info] Log messages from node3 ...
Thu Aug 16 17:36:45 2018 - [info]
Thu Aug 16 17:36:44 2018 - [info] Resetting slave node3(192.168.88.19:3380) and starting replication from the new master node1(192.168.88.20:3380)..
Thu Aug 16 17:36:44 2018 - [info] Executed CHANGE MASTER.
Thu Aug 16 17:36:44 2018 - [info] Slave started.
Thu Aug 16 17:36:44 2018 - [info] gtid_wait(7f07d504-9fd8-11e8-80c9-525400829ae9:1) completed on node3(192.168.88.19:3380). Executed 0 events.
Thu Aug 16 17:36:45 2018 - [info] End of log messages from node3.
Thu Aug 16 17:36:45 2018 - [info] -- Slave on host node3(192.168.88.19:3380) started.
Thu Aug 16 17:36:45 2018 - [info] All new slave servers recovered successfully.
Thu Aug 16 17:36:45 2018 - [info]
Thu Aug 16 17:36:45 2018 - [info] * Phase 5: New master cleanup phase..
Thu Aug 16 17:36:45 2018 - [info]
Thu Aug 16 17:36:45 2018 - [info] Resetting slave info on the new master..
Thu Aug 16 17:36:45 2018 - [info] node1: Resetting slave info succeeded.
Thu Aug 16 17:36:45 2018 - [info] Master failover to node1(192.168.88.20:3380) completed successfully.
Thu Aug 16 17:36:45 2018 - [info]
----- Failover Report -----
mha: MySQL Master failover node2(192.168.88.18:3380) to node1(192.168.88.20:3380) succeeded
Master node2(192.168.88.18:3380) is down!
Check MHA Manager logs at node2:/usr/local/mha/manager.log for details.
Started automated(non-interactive) failover.
Invalidated master IP address on node2(192.168.88.18:3380)
Selected node1(192.168.88.20:3380) as a new master.
node1(192.168.88.20:3380): OK: Applying all logs succeeded.
node1(192.168.88.20:3380): OK: Activated master IP address.
node3(192.168.88.19:3380): OK: Slave started, replicating from node1(192.168.88.20:3380)
node1(192.168.88.20:3380): Resetting slave info succeeded.
Master failover to node1(192.168.88.20:3380) completed successfully.
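Two notes on the log above. The "Unknown option: new_master_user / new_master_password" warnings come from the master_ip_failover script itself: its option parser does not declare those two options, which MHA passes along when they are configured; the script ignores them and the VIP switch still succeeds, as the subsequent "OK." shows. Second, the outcome can be confirmed programmatically from the final report line; a sketch parsing that line (in practice grep it out of /usr/local/mha/manager.log, the path reported above):

```shell
#!/bin/sh
# In practice: line=$(grep 'completed successfully' /usr/local/mha/manager.log | tail -1)
line='Master failover to node1(192.168.88.20:3380) completed successfully.'

# Extract the hostname between "failover to " and the opening parenthesis.
new_master=$(printf '%s\n' "$line" | sed -n 's/.*failover to \([^(]*\)(.*/\1/p')
echo "new master: $new_master"
```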
On the new master (the empty set confirms it is no longer a slave; the "No query specified" error is only from the stray ";" typed after \G):
MySQL [t_mha]> show slave status \G;
Empty set (0.00 sec)
ERROR: No query specified
MySQL [t_mha]> exit
Bye
The VIP is now bound to the new master:
[root@node1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:e2:58:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.88.20/24 brd 192.168.88.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.88.222/24 brd 192.168.88.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fee2:5850/64 scope link
valid_lft forever preferred_lft forever
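This check can also be scripted. A sketch that greps a captured `ip addr show` snapshot for the VIP (192.168.88.222 is the VIP configured in the failover script above; the sample snapshot is trimmed from the output just shown):

```shell
#!/bin/sh
VIP=192.168.88.222
# In practice: snapshot=$(ip addr show eth0)
snapshot='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.88.20/24 brd 192.168.88.255 scope global eth0
    inet 192.168.88.222/24 brd 192.168.88.255 scope global secondary eth0:1'

# Match "inet <VIP>/" so e.g. 192.168.88.22 does not false-positive.
if printf '%s\n' "$snapshot" | grep -q "inet ${VIP}/"; then
    bound=yes
else
    bound=no
fi
echo "VIP ${VIP} bound here: $bound"
```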