MySQL Distributed High-Availability Cluster: A PXC Case Study
1. Install the Percona yum repository (on all three nodes)
Install the repository:
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
(If the download is slow on Linux, you can download the rpm on a Windows machine first and copy it over, then install it locally. The author saved it as PXC官方源.rpm:)
[root@yulong-mysql1 ~]# yum localinstall PXC官方源.rpm
2. Install the packages (on all three nodes)
[root@yulong-mysql1 ~]# yum install Percona-XtraDB-Cluster-57 -y
(Installing online is slow; I recommend downloading the offline bundle from the official site and installing it locally.)
Official offline bundle download: https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/
[root@yulong-mysql1 ~]# mkdir ./pxc
[root@yulong-mysql1 ~]# tar -xvf Percona-XtraDB-Cluster-5.7.28-31.41-r514-el7-x86_64-bundle.tar -C ./pxc
[root@yulong-mysql1 ~]# cd pxc/
[root@yulong-mysql1 pxc]# yum localinstall Percona-XtraDB-Cluster-* -y
3. Configure /etc/hosts for mutual hostname resolution (on all three nodes)
[root@yulong-mysql1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.136.105 yulong-mysql1
192.168.136.106 yulong-mysql2
192.168.136.107 yulong-mysql3
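As a quick sanity check (a sketch I'm adding, not part of the original walkthrough), you can verify that every cluster hostname appears in the hosts file before going further. To keep it runnable anywhere, the snippet checks a local copy of the three entries from this guide rather than the live /etc/hosts:

```shell
# Sketch: confirm each PXC node name is present in a hosts-style file.
# /tmp/hosts.sample stands in for /etc/hosts here; on a real node,
# point the grep at /etc/hosts instead.
cat > /tmp/hosts.sample <<'EOF'
192.168.136.105 yulong-mysql1
192.168.136.106 yulong-mysql2
192.168.136.107 yulong-mysql3
EOF

for h in yulong-mysql1 yulong-mysql2 yulong-mysql3; do
    if grep -qw "$h" /tmp/hosts.sample; then
        echo "$h: ok"
    else
        echo "$h: MISSING"
    fi
done
```

A missing entry here would later surface as SST or group-communication failures, so it is cheap insurance.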
4. Enable and start the service, and confirm it is running (on all three nodes)
[root@yulong-mysql1 ~]# systemctl enable mysql
[root@yulong-mysql1 ~]# systemctl start mysql
[root@yulong-mysql1 ~]# systemctl status mysql | grep Active
Active: active (running) since Sun 2020-03-08 22:29:34 CST; 22min ago
5. Open the required firewall ports and disable SELinux (on all three nodes)
[root@yulong-mysql1 ~]# firewall-cmd --add-port=3306/tcp --permanent
success
[root@yulong-mysql1 ~]# firewall-cmd --add-port=4444/tcp --permanent
success
[root@yulong-mysql1 ~]# firewall-cmd --add-port=4567/tcp --permanent
success
[root@yulong-mysql1 ~]# firewall-cmd --add-port=4568/tcp --permanent
success
[root@yulong-mysql1 ~]# firewall-cmd --reload
success
[root@yulong-mysql1 ~]# setenforce 0
[root@yulong-mysql1 ~]# getenforce
Permissive
(Note: setenforce 0 only disables SELinux until the next reboot; edit /etc/selinux/config to make the change persistent.)
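The four ports serve distinct roles: 3306 is normal client traffic, 4444 is State Snapshot Transfer (SST), 4567 is Galera group communication, and 4568 is Incremental State Transfer (IST). The repetitive firewall-cmd calls can be generated with a loop; this dry-run sketch (my addition, not from the original) only prints the commands so it can run anywhere — drop the `echo` to actually apply them as root on a firewalld host:

```shell
# Dry-run sketch: print the firewall-cmd invocations for the PXC ports.
# 3306=MySQL client, 4444=SST, 4567=group communication, 4568=IST.
# Remove "echo" to execute for real (requires root and firewalld).
for port in 3306 4444 4567 4568; do
    echo firewall-cmd --add-port=${port}/tcp --permanent
done
echo firewall-cmd --reload
```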
6. Change the initial database password (on all three nodes)
[root@yulong-mysql1 ~]# grep 'temporary password' /var/log/mysqld.log
2020-03-08T14:29:23.633083Z 1 [Note] A temporary password is generated for root@localhost: _OrqlE7d,uYr
[root@yulong-mysql1 ~]# mysql -uroot -p'_OrqlE7d,uYr'
mysql> alter user 'root'@'localhost' identified by 'Com.123!';
Query OK, 0 rows affected (0.14 sec)
mysql> exit
Bye
7. Create the SST replication account and grant privileges (on all three nodes)
mysql> create user 'sstuser'@'localhost' identified by 'Com.123!';
Query OK, 0 rows affected (0.00 sec)
mysql> grant reload, lock tables, process, replication client on *.* to 'sstuser'@'localhost';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
8. On node one, stop the MySQL service and configure write-set replication
Edit the configuration files (node one)
[root@yulong-mysql1 ~]# systemctl stop mysql
[root@yulong-mysql1 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.136.105,192.168.136.106,192.168.136.107
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.136.105
wsrep_cluster_name=pxc-cluster
wsrep_node_name=yulong-mysql1
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:Com.123!"
[root@yulong-mysql1 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
[client]
socket=/var/lib/mysql/mysql.sock
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
symbolic-links=0
9. Bootstrap the first node to initialize the cluster.
[root@yulong-mysql1 ~]# systemctl start mysql@bootstrap.service
# Side note: to bootstrap again, you must first kill the existing mysqld process before a second bootstrap will work
[root@yulong-mysql1 ~]# ps aux | grep ^mysql
mysql 29817 2.9 21.3 1361928 212788 ? Sl 18:15 0:03 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --wsrep-provider=/usr/lib64/galera3/libgalera_smm.so --wsrep-new-cluster --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock --wsrep_start_position=70317cc3-61d4-11ea-a388-e250f22e8ee5:4
[root@yulong-mysql1 ~]# kill -9 29817
10. Check the bootstrap status (node one)
[root@yulong-mysql1 ~]# mysql -uroot -p'Com.123!'
mysql> show status like 'wsrep%';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| wsrep_local_state_uuid           | 70317cc3-61d4-11ea-a388-e250f22e8ee5 |
| wsrep_protocol_version           | 9                                    |
| wsrep_last_applied               | 4                                    |
| wsrep_last_committed             | 4                                    |
| wsrep_replicated                 | 0                                    |
| wsrep_replicated_bytes           | 0                                    |
| wsrep_repl_keys                  | 0                                    |
| wsrep_repl_keys_bytes            | 0                                    |
| wsrep_repl_data_bytes            | 0                                    |
| wsrep_repl_other_bytes           | 0                                    |
| wsrep_received                   | 2                                    |
| wsrep_received_bytes             | 152                                  |
| wsrep_local_commits              | 0                                    |
| wsrep_local_cert_failures        | 0                                    |
| wsrep_local_replays              | 0                                    |
| wsrep_local_send_queue           | 0                                    |
| wsrep_local_send_queue_max       | 1                                    |
| wsrep_local_send_queue_min       | 0                                    |
| wsrep_local_send_queue_avg       | 0.000000                             |
| wsrep_local_recv_queue           | 0                                    |
| wsrep_local_recv_queue_max       | 2                                    |
| wsrep_local_recv_queue_min       | 0                                    |
| wsrep_local_recv_queue_avg       | 0.500000                             |
| wsrep_local_cached_downto        | 0                                    |
| wsrep_flow_control_paused_ns     | 0                                    |
| wsrep_flow_control_paused        | 0.000000                             |
| wsrep_flow_control_sent          | 0                                    |
| wsrep_flow_control_recv          | 0                                    |
| wsrep_flow_control_interval      | [ 100, 100 ]                         |
| wsrep_flow_control_interval_low  | 100                                  |
| wsrep_flow_control_interval_high | 100                                  |
| wsrep_flow_control_status        | OFF                                  |
| wsrep_cert_deps_distance         | 0.000000                             |
| wsrep_apply_oooe                 | 0.000000                             |
| wsrep_apply_oool                 | 0.000000                             |
| wsrep_apply_window               | 0.000000                             |
| wsrep_commit_oooe                | 0.000000                             |
| wsrep_commit_oool                | 0.000000                             |
| wsrep_commit_window              | 0.000000                             |
| wsrep_local_state                | 4                                    |
| wsrep_local_state_comment        | Synced                               |
| wsrep_cert_index_size            | 0                                    |
| wsrep_cert_bucket_count          | 22                                   |
| wsrep_gcache_pool_size           | 1320                                 |
| wsrep_causal_reads               | 0                                    |
| wsrep_cert_interval              | 0.000000                             |
| wsrep_open_transactions          | 0                                    |
| wsrep_open_connections           | 0                                    |
| wsrep_ist_receive_status         |                                      |
| wsrep_ist_receive_seqno_start    | 0                                    |
| wsrep_ist_receive_seqno_current  | 0                                    |
| wsrep_ist_receive_seqno_end      | 0                                    |
| wsrep_incoming_addresses         | 192.168.136.105:3306                 |
| wsrep_cluster_weight             | 1                                    |
| wsrep_desync_count               | 0                                    |
| wsrep_evs_delayed                |                                      |
| wsrep_evs_evict_list             |                                      |
| wsrep_evs_repl_latency           | 0/0/0/0/0                            |
| wsrep_evs_state                  | OPERATIONAL                          |
| wsrep_gcomm_uuid                 | 666a19fd-61ec-11ea-90bc-8f6458220847 |
| wsrep_cluster_conf_id            | 1                                    |
| wsrep_cluster_size               | 1                                    |
| wsrep_cluster_state_uuid         | 70317cc3-61d4-11ea-a388-e250f22e8ee5 |
| wsrep_cluster_status             | Primary                              |
| wsrep_connected                  | ON                                   |
| wsrep_local_bf_aborts            | 0                                    |
| wsrep_local_index                | 0                                    |
| wsrep_provider_name              | Galera                               |
| wsrep_provider_vendor            | Codership Oy <info@codership.com>    |
| wsrep_provider_version           | 3.41(rb3295e6)                       |
| wsrep_ready                      | ON                                   |
+----------------------------------+--------------------------------------+
71 rows in set (0.01 sec)
11. Configure node two
Stop the MySQL service
[root@yulong-mysql2 ~]# systemctl stop mysql
Edit the configuration files
[root@yulong-mysql2 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.136.105,192.168.136.106,192.168.136.107
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.136.106
wsrep_cluster_name=pxc-cluster
wsrep_node_name=yulong-mysql2
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:Com.123!"
[root@yulong-mysql2 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
[client]
socket=/var/lib/mysql/mysql.sock
[mysqld]
server-id=2
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
symbolic-links=0
12. Add node two to node one's cluster
[root@yulong-mysql2 ~]# systemctl start mysql
13. Edit the configuration files (node three)
[root@yulong-mysql3 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.136.105,192.168.136.106,192.168.136.107
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.136.107
wsrep_cluster_name=pxc-cluster
wsrep_node_name=yulong-mysql3
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:Com.123!"
[root@yulong-mysql3 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
[client]
socket=/var/lib/mysql/mysql.sock
[mysqld]
server-id=3
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
symbolic-links=0
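Across the three nodes, only server-id, wsrep_node_address, and wsrep_node_name differ; the rest of wsrep.cnf and mysqld.cnf is identical. A small sketch (my addition, using the hostnames and IPs from this guide) that prints exactly the per-node values, which helps avoid copy-paste slips when editing the third config by hand:

```shell
# Sketch: emit the three settings that differ between the PXC nodes.
# Format of each entry: server-id:ip:hostname (values from this guide).
nodes="1:192.168.136.105:yulong-mysql1 2:192.168.136.106:yulong-mysql2 3:192.168.136.107:yulong-mysql3"
for n in $nodes; do
    id=${n%%:*}                 # part before the first colon
    rest=${n#*:}
    ip=${rest%%:*}              # middle field
    name=${rest#*:}             # part after the last colon
    echo "--- node $id ---"
    echo "server-id=$id"
    echo "wsrep_node_address=$ip"
    echo "wsrep_node_name=$name"
done
```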
14. Add node three to node one's cluster
[root@yulong-mysql3 ~]# systemctl start mysql
15. Check the cluster status from any node:
mysql> show status like 'wsrep%';
+----------------------------------+----------------------------------------------------------------+
| Variable_name                    | Value                                                          |
+----------------------------------+----------------------------------------------------------------+
| wsrep_local_state_uuid           | 70317cc3-61d4-11ea-a388-e250f22e8ee5                           |
| wsrep_protocol_version           | 9                                                              |
| wsrep_last_applied               | 4                                                              |
| wsrep_last_committed             | 4                                                              |
| wsrep_replicated                 | 0                                                              |
| wsrep_replicated_bytes           | 0                                                              |
| wsrep_repl_keys                  | 0                                                              |
| wsrep_repl_keys_bytes            | 0                                                              |
| wsrep_repl_data_bytes            | 0                                                              |
| wsrep_repl_other_bytes           | 0                                                              |
| wsrep_received                   | 10                                                             |
| wsrep_received_bytes             | 846                                                            |
| wsrep_local_commits              | 0                                                              |
| wsrep_local_cert_failures        | 0                                                              |
| wsrep_local_replays              | 0                                                              |
| wsrep_local_send_queue           | 0                                                              |
| wsrep_local_send_queue_max       | 1                                                              |
| wsrep_local_send_queue_min       | 0                                                              |
| wsrep_local_send_queue_avg       | 0.000000                                                       |
| wsrep_local_recv_queue           | 0                                                              |
| wsrep_local_recv_queue_max       | 2                                                              |
| wsrep_local_recv_queue_min       | 0                                                              |
| wsrep_local_recv_queue_avg       | 0.100000                                                       |
| wsrep_local_cached_downto        | 0                                                              |
| wsrep_flow_control_paused_ns     | 0                                                              |
| wsrep_flow_control_paused        | 0.000000                                                       |
| wsrep_flow_control_sent          | 0                                                              |
| wsrep_flow_control_recv          | 0                                                              |
| wsrep_flow_control_interval      | [ 173, 173 ]                                                   |
| wsrep_flow_control_interval_low  | 173                                                            |
| wsrep_flow_control_interval_high | 173                                                            |
| wsrep_flow_control_status        | OFF                                                            |
| wsrep_cert_deps_distance         | 0.000000                                                       |
| wsrep_apply_oooe                 | 0.000000                                                       |
| wsrep_apply_oool                 | 0.000000                                                       |
| wsrep_apply_window               | 0.000000                                                       |
| wsrep_commit_oooe                | 0.000000                                                       |
| wsrep_commit_oool                | 0.000000                                                       |
| wsrep_commit_window              | 0.000000                                                       |
| wsrep_local_state                | 4                                                              |
| wsrep_local_state_comment        | Synced                                                         |
| wsrep_cert_index_size            | 0                                                              |
| wsrep_cert_bucket_count          | 22                                                             |
| wsrep_gcache_pool_size           | 3088                                                           |
| wsrep_causal_reads               | 0                                                              |
| wsrep_cert_interval              | 0.000000                                                       |
| wsrep_open_transactions          | 0                                                              |
| wsrep_open_connections           | 0                                                              |
| wsrep_ist_receive_status         |                                                                |
| wsrep_ist_receive_seqno_start    | 0                                                              |
| wsrep_ist_receive_seqno_current  | 0                                                              |
| wsrep_ist_receive_seqno_end      | 0                                                              |
| wsrep_incoming_addresses         | 192.168.136.105:3306,192.168.136.106:3306,192.168.136.107:3306 |
| wsrep_cluster_weight             | 3                                                              |
| wsrep_desync_count               | 0                                                              |
| wsrep_evs_delayed                |                                                                |
| wsrep_evs_evict_list             |                                                                |
| wsrep_evs_repl_latency           | 0/0/0/0/0                                                      |
| wsrep_evs_state                  | OPERATIONAL                                                    |
| wsrep_gcomm_uuid                 | e12b7b36-61ee-11ea-a11c-0b10452ad3e7                           |
| wsrep_cluster_conf_id            | 3                                                              |
| wsrep_cluster_size               | 3                                                              |
| wsrep_cluster_state_uuid         | 70317cc3-61d4-11ea-a388-e250f22e8ee5                           |
| wsrep_cluster_status             | Primary                                                        |
| wsrep_connected                  | ON                                                             |
| wsrep_local_bf_aborts            | 0                                                              |
| wsrep_local_index                | 0                                                              |
| wsrep_provider_name              | Galera                                                         |
| wsrep_provider_vendor            | Codership Oy <info@codership.com>                              |
| wsrep_provider_version           | 3.41(rb3295e6)                                                 |
| wsrep_ready                      | ON                                                             |
+----------------------------------+----------------------------------------------------------------+
71 rows in set (0.00 sec)
16. Test that written data replicates
(1) Create a database on node one
mysql> create database yulong;
Query OK, 1 row affected (0.00 sec)
(2) Check on node two
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| yulong             |
+--------------------+
5 rows in set (0.01 sec)
(3) Check on node three
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| yulong             |
+--------------------+
5 rows in set (0.00 sec)
# As we can see, the database has replicated to all three nodes.
17. Stop node one, then restart it and let it rejoin the cluster
# First, stop the MySQL service on node one
[root@yulong-mysql1 ~]# systemctl stop mysql
[root@yulong-mysql1 ~]# systemctl stop mysql@bootstrap.service
# Create a table in the yulong database on node two
mysql> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.00 sec)
# Then start the MySQL service on node one again
[root@yulong-mysql1 ~]# systemctl start mysql
[root@yulong-mysql1 ~]# mysql -uroot -p'Com.123!'
mysql> use yulong;
Database changed
mysql> show tables;
+------------------+
| Tables_in_yulong |
+------------------+
| example          |
+------------------+
1 row in set (0.00 sec)
# Check the cluster size
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)
# Node one has caught up with the latest data, and the cluster is back to three members.
18. Power off all three servers, boot them again, and restore the cluster
(1) Shutdown order: node1 first, node2 second, node3 last.
(2) Now safe_to_bootstrap in /var/lib/mysql/grastate.dat is 0 on node1 and node2.
(3) On node3, safe_to_bootstrap in /var/lib/mysql/grastate.dat is 1.
(4) Bootstrap again from the node that was shut down last, i.e. node3:
[root@yulong-mysql3 ~]# systemctl start mysql@bootstrap.service
(5) Start the MySQL service on node1 and node2:
[root@yulong-mysql1 ~]# systemctl start mysql
[root@yulong-mysql2 ~]# systemctl start mysql
(6) Check the cluster status from node3:
[root@yulong-mysql3 ~]# mysql -uroot -pCom.123!
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.05 sec)
(7) If the node that was shut down last cannot boot, pick any other node, set safe_to_bootstrap to 1 in its /var/lib/mysql/grastate.dat, bootstrap that node, and then run systemctl start mysql on the remaining nodes to restore the cluster.
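The safe_to_bootstrap flag the steps above rely on lives in /var/lib/mysql/grastate.dat. A sketch (my addition) that reads it, shown against a sample file so it runs anywhere; on a real node, point the awk at /var/lib/mysql/grastate.dat instead:

```shell
# Sketch: read safe_to_bootstrap from a grastate.dat-style file.
# The node whose file says "safe_to_bootstrap: 1" was the last to
# leave the cluster and is the one to bootstrap from.
# Sample contents modeled on the grastate.dat format; the uuid is
# the cluster state uuid from this guide.
cat > /tmp/grastate.sample <<'EOF'
# GALERA saved state
version: 2.1
uuid:    70317cc3-61d4-11ea-a388-e250f22e8ee5
seqno:   -1
safe_to_bootstrap: 1
EOF

awk '/^safe_to_bootstrap:/ {print $2}' /tmp/grastate.sample
```

Running this on each node before step (4) tells you immediately which node is safe to bootstrap, instead of guessing from the shutdown order.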