How To Install Percona XtraDB Cluster and Configure It To Work With ScaleArc
Release | Classification | DB Platform | Categories
3.8+    | How To         | MySQL       | Installation, Configuration
QUESTION
- How to install Percona XtraDB?
- How to create a Percona cluster and verify replication?
- How to configure a Percona cluster and add it to ScaleArc?
ANSWER
The goal of this KB article is to guide the user through the complete process of installing and configuring a Percona XtraDB cluster and adding it to ScaleArc.
The manual is divided into the following sections: Prerequisites, Install and Configure Percona Cluster, and Configure Percona XtraDB for ScaleArc.
*PREREQUISITES:
1. Get Linux CentOS 7 minimal installed first:
Use the ISO image CentOS-7-x86_64-Minimal-1611.iso to install the operating system.
2. Get your IP plan ready, for example:
Hostname    IP address
---------------------------------
percona1    192.168.1.222
percona2    192.168.1.223
percona3    192.168.1.224
scalearc1   192.168.1.209
scalearc2   192.168.1.210
scale-vip   192.168.1.211
Here percona1, percona2, and percona3 are the Percona XtraDB nodes; scalearc1 and scalearc2 are the ScaleArc HA nodes; and scale-vip is the virtual IP address for the Percona cluster, defined in ScaleArc.
3. Choose a cluster name, e.g.:
pxc-cluster
*INSTALL AND CONFIGURE PERCONA CLUSTER
After installing CentOS 7 minimal, configure the following on all three future Percona cluster nodes:
1. Configure networking using nmtui tool:
[root@localhost ~]# nmtui
2. Change the hostname to percona1, percona2, percona3 respectively:
[root@localhost ~]# vi /etc/hostname
3. Disable the firewall daemon:
[root@percona1 ~]# systemctl disable firewalld
[root@percona1 ~]# systemctl stop firewalld
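If disabling the firewall entirely is not acceptable in your environment, an alternative sketch is to leave firewalld running and open only the ports Percona XtraDB Cluster uses: 3306 (MySQL), 4444 (SST), 4567 (group communication), and 4568 (IST):
[root@percona1 ~]# firewall-cmd --permanent --add-port=3306/tcp
[root@percona1 ~]# firewall-cmd --permanent --add-port=4444/tcp
[root@percona1 ~]# firewall-cmd --permanent --add-port=4567/tcp
[root@percona1 ~]# firewall-cmd --permanent --add-port=4568/tcp
[root@percona1 ~]# firewall-cmd --reload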
4. Disable SELinux:
[root@percona1 ~]# setenforce 0
Change the SELINUX parameter in /etc/selinux/config to 'disabled' to make the change permanent:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
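You can verify the runtime mode immediately; setenforce 0 switches SELinux to Permissive for the current boot, and the 'disabled' setting takes effect after the next reboot:
[root@percona1 ~]# getenforce
Permissive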
5. Install Percona repository:
[root@percona1 ~]# sudo yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
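Optionally, confirm that the Percona repositories are now enabled (exact repository names may vary between percona-release versions):
[root@percona1 ~]# yum repolist | grep -i percona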
6. Install Percona XtraDB cluster:
[root@percona1 ~]# sudo yum install Percona-XtraDB-Cluster-57
7. Start mysql service:
[root@percona1 ~]# sudo service mysql start
8. Display the temporary root password generated during the installation:
[root@percona1 ~]# sudo grep 'temporary password' /var/log/mysqld.log
9. Change the mysql root password. Log in to mysql using the temporary password from the previous step:
[root@percona1 ~]# mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPass';
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
10. Stop mysql service:
[root@percona1 ~]# sudo service mysql stop
11. Configure all three Percona nodes as in the example files below, located at /etc/percona-xtradb-cluster.conf.d/wsrep.cnf:
[root@percona1 ~]# cat /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# Cluster connection URL contains IPs of nodes
# If no IP is found, this implies that a new cluster needs to be created,
# in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://192.168.1.222,192.168.1.223,192.168.1.224
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# Slave threads to use
wsrep_slave_threads=8
wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node IP address
wsrep_node_address=192.168.1.222
# Cluster name
wsrep_cluster_name=pxc-cluster
# If wsrep_node_name is not specified, then system hostname will be used
wsrep_node_name=percona1
# pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING
# SST method
wsrep_sst_method=xtrabackup-v2
# Authentication for SST method
wsrep_sst_auth=sstuser:passw0rd
[root@percona2 ~]# cat /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# Cluster connection URL contains IPs of nodes
# If no IP is found, this implies that a new cluster needs to be created,
# in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://192.168.1.222,192.168.1.223,192.168.1.224
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# Slave threads to use
wsrep_slave_threads=8
wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node IP address
wsrep_node_address=192.168.1.223
# Cluster name
wsrep_cluster_name=pxc-cluster
# If wsrep_node_name is not specified, then system hostname will be used
wsrep_node_name=percona2
# pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING
# SST method
wsrep_sst_method=xtrabackup-v2
# Authentication for SST method
wsrep_sst_auth=sstuser:passw0rd
[root@percona3 ~]# cat /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# Cluster connection URL contains IPs of nodes
# If no IP is found, this implies that a new cluster needs to be created,
# in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://192.168.1.222,192.168.1.223,192.168.1.224
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# Slave threads to use
wsrep_slave_threads=8
wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node IP address
wsrep_node_address=192.168.1.224
# Cluster name
wsrep_cluster_name=pxc-cluster
# If wsrep_node_name is not specified, then system hostname will be used
wsrep_node_name=percona3
# pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING
# SST method
wsrep_sst_method=xtrabackup-v2
# Authentication for SST method
wsrep_sst_auth=sstuser:passw0rd
Please note: ScaleArc requires the parameters max_connections and max_connect_errors to be added to the end of each wsrep.cnf configuration file on every node.
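For reference, the appended lines might look like the following sketch. The values are illustrative assumptions, not values mandated by this article: size max_connections for your expected client load, and keep max_connect_errors very high so that ScaleArc's frequent health-check connections are never blocked by the host cache:
max_connections=10000
max_connect_errors=4294967295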
12. Bootstrap the first node, percona1:
[root@percona1 ~]# systemctl start mysql@bootstrap.service
13. Check the cluster initialization status:
[root@percona1 ~]# mysql -u root -p
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
| ...                        | ...                                  |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| ...                        | ...                                  |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| ...                        | ...                                  |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)
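To watch cluster membership without scanning the full status list, you can query the single variable of interest:
mysql> SHOW STATUS LIKE 'wsrep_cluster_size';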
14. Create the replication user 'sstuser':
mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;
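The username and password must match the wsrep_sst_auth value configured in step 11 (sstuser:passw0rd). The grants can be confirmed with:
mysql> SHOW GRANTS FOR 'sstuser'@'localhost';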
15. Add the second Percona node, percona2, to the cluster, using its configuration from step 11:
[root@percona2 ~]# sudo service mysql start
16. Check that the node has joined the cluster:
[root@percona2 ~]# mysql -u root -p
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
| ...                        | ...                                  |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| ...                        | ...                                  |
| wsrep_cluster_size         | 2                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| ...                        | ...                                  |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)
The wsrep_cluster_size value increases by 1 when a node successfully joins the cluster.
17. Add the third node to the Percona cluster by repeating steps 15 and 16 on percona3.
The wsrep_cluster_size value increases by 1 again when the node successfully joins the cluster.
18. Verify replication:
a. Create a test database on the second node, percona2:
mysql> CREATE DATABASE percona;
Query OK, 1 row affected (0.01 sec)
b. Create a table on the third node, percona3:
mysql> USE percona;
Database changed
mysql> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.05 sec)
c. Insert records on the first node, percona1:
mysql> INSERT INTO percona.example VALUES (1, 'percona1');
Query OK, 1 row affected (0.02 sec)
d. Retrieve the table content on the second node, percona2:
mysql> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec)
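Once replication is verified, the test objects can be removed from any node; the DROP statement replicates across the whole cluster:
mysql> DROP DATABASE percona;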
*CONFIGURE PERCONA XtraDB FOR ScaleArc
1. Create the DB user before adding the Percona cluster to ScaleArc:
mysql> CREATE USER 'scalearc'@'%' IDENTIFIED BY 'Test1234$';
mysql> GRANT SELECT, INSERT, UPDATE, CREATE, DROP, RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'scalearc'@'%' WITH GRANT OPTION;
mysql> CREATE USER 'scalearc'@'192.168.1.209' IDENTIFIED BY 'Test1234$';
mysql> CREATE USER 'scalearc'@'192.168.1.210' IDENTIFIED BY 'Test1234$';
mysql> CREATE USER 'scalearc'@'192.168.1.211' IDENTIFIED BY 'Test1234$';
mysql> GRANT SELECT, INSERT, UPDATE, CREATE, DROP, RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'scalearc'@'192.168.1.209' WITH GRANT OPTION;
mysql> GRANT SELECT, INSERT, UPDATE, CREATE, DROP, RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'scalearc'@'192.168.1.210' WITH GRANT OPTION;
mysql> GRANT SELECT, INSERT, UPDATE, CREATE, DROP, RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'scalearc'@'192.168.1.211' WITH GRANT OPTION;
2. Additionally, create ScaleArc users on the Percona node addresses for role-change handling:
mysql> CREATE USER 'scalearc'@'192.168.1.222' IDENTIFIED BY 'Test1234$';
mysql> CREATE USER 'scalearc'@'192.168.1.223' IDENTIFIED BY 'Test1234$';
mysql> CREATE USER 'scalearc'@'192.168.1.224' IDENTIFIED BY 'Test1234$';
mysql> GRANT SELECT, INSERT, UPDATE, CREATE, DROP, RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'scalearc'@'192.168.1.222' WITH GRANT OPTION;
mysql> GRANT SELECT, INSERT, UPDATE, CREATE, DROP, RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'scalearc'@'192.168.1.223' WITH GRANT OPTION;
mysql> GRANT SELECT, INSERT, UPDATE, CREATE, DROP, RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'scalearc'@'192.168.1.224' WITH GRANT OPTION;
The output should look similar to the example below:
mysql> select user,host,Grant_priv from mysql.user;
+-----------+---------------+------------+
| user      | host          | Grant_priv |
+-----------+---------------+------------+
| root      | localhost     | Y          |
| mysql.sys | localhost     | N          |
| sstuser   | localhost     | N          |
| scalearc  | 192.168.1.224 | Y          |
| scalearc  | 192.168.1.223 | Y          |
| scalearc  | 192.168.1.222 | Y          |
| scalearc  | %             | Y          |
| scalearc  | 192.168.1.211 | Y          |
| scalearc  | 192.168.1.210 | Y          |
| scalearc  | 192.168.1.209 | Y          |
+-----------+---------------+------------+
10 rows in set (0.00 sec)
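Before adding the cluster, it is worth confirming that the 'scalearc' user can log in to each node over the network. A minimal check, run from any host with a mysql client that the grants above allow:
$ mysql -h 192.168.1.222 -u scalearc -p -e "SELECT @@hostname;"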
3. Add the Percona cluster to ScaleArc:
a. In ScaleArc, go to the CLUSTERS menu and press the '+Add Cluster' button.
b. Add each DB server from the cluster (the percona1, percona2, and percona3 IP addresses), using the 'scalearc' user credentials.
c. For example, use the server role 'Read + Write' for percona1 and 'Standby + Read' for percona2 and percona3.
d. Create the cluster by clicking the 'Setup Cluster' button.
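After the cluster is created, a quick end-to-end check is to connect through the VIP defined in ScaleArc (scale-vip, 192.168.1.211) and confirm that a query is served by one of the Percona nodes. This sketch assumes the ScaleArc cluster listens on the default MySQL port 3306:
$ mysql -h 192.168.1.211 -u scalearc -p -e "SELECT @@hostname;"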
If you are experiencing issues with ScaleArc or with any of its features, please contact ScaleArc Support. We are available 24x7 by phone at 855 800 7225 or +1 408 412 7315. For general support inquiries, you can also e-mail us at support@scalearc.com.
Permalink: https://support.scalearc.com/kb/articles/4375