
MySQL InnoDB Cluster Installation Procedure (Version 5.7)


NOTE: This runbook is for version 5.7. If you are looking for the 8.0 install procedure, we have a separate page for it.

Procedure differences: 5.7 vs 8.0

  • In 5.7 there is no cloning feature, so adding secondary nodes to the cluster has to be done using MEB/Xtrabackup/mysqldump
  • In 8.0 you can use configureInstance and run it from one node remotely to fix all the cluster nodes, but in 5.7 you need to run configureLocalInstance on each node separately (see the sketch below)
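For illustration, a minimal sketch of the difference from MySQL Shell (JS mode); the idbclusr account and innodbcln* host names are the ones used later in this runbook:

// 8.0 only: can be run from any one node, configuring each member remotely
dba.configureInstance('idbclusr@innodbcln2:3306')

// 5.7: open mysqlsh on the node itself and repeat this on every node
dba.configureLocalInstance('idbclusr@innodbcln1:3306')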

Pre-install checks

SELinux settings / AppArmor settings

For CentOS machines set SELinux to permissive and validate. NOTE: if you skip this step your build may fail.

sestatus
sed -i "s/SELINUX=enforcing/SELINUX=permissive/" /etc/sysconfig/selinux
setenforce 0
sestatus
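The commands above cover SELinux on CentOS. If your nodes run Ubuntu/Debian, the equivalent check is AppArmor; a minimal sketch (assuming the apparmor-utils package) to confirm status and put the mysqld profile into complain mode:

aa-status
apt-get install -y apparmor-utils
aa-complain /usr/sbin/mysqld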

Host mapping

Modify /etc/hosts on all the cluster nodes to add the host mappings. In this example we have a 3-node cluster.

202.222.12.2 innodbcln1
202.222.12.3 innodbcln2
202.222.12.4 innodbcln3
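You can optionally verify that every name resolves the same way from each node, for example:

getent hosts innodbcln1
getent hosts innodbcln2
getent hosts innodbcln3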

Firewall

Please ensure that firewall rules allow the below ports between all the cluster nodes.

3306 --> MySQL port
33060 --> MySQL X port
33061 --> Group replication communication port
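For example, on CentOS 7 with firewalld (adjust accordingly if you use iptables or cloud security groups) the ports can be opened as below:

firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=33060/tcp
firewall-cmd --permanent --add-port=33061/tcp
firewall-cmd --reload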

Install MySQL on all the cluster nodes

NOTE: Before you proceed further, please confirm that all pre-install steps listed above are complete. You can choose to install either the community version or the Percona version. In a real deployment you may have your own custom my.cnf which needs to be put in place and used; a minimal example follows.
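If you do bring your own my.cnf, the settings below are the ones Group Replication on 5.7 cares about. This is only an illustrative sketch with placeholder values; dba.configureLocalInstance(), run later in this runbook, will fix these for you if they are missing:

[mysqld]
server_id = 1                              # must be unique on each node
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-bin
log_slave_updates = ON
binlog_format = ROW
binlog_checksum = NONE
master_info_repository = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64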

Install community version

wget https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm
yum localinstall mysql57-community-release-el7-11.noarch.rpm
yum install mysql-community-server

Install percona version

yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

yum install Percona-Server-server-57

yum install percona-mysql-shell.x86_64
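Whichever flavor you install, start the server (and enable it on boot) before looking for the temporary root password. Assuming systemd on CentOS 7 (the unit is typically named mysqld for both builds):

systemctl enable mysqld
systemctl start mysqld
systemctl status mysqld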

Get the temporary root password from the error log and reset it after you log in:

 grep -i pass /var/log/mysqld.log
2020-09-10T14:34:25.825963Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: xxxx
ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';

Add the cluster user which will be used as the admin user for your InnoDB Cluster. NOTE: the password and user ID in the commands below have to be changed to meet your requirements.

We set sql_log_bin to 0 while creating the user to avoid any errant GTIDs, and turn it back on afterwards:
SET SQL_LOG_BIN=0;

CREATE USER 'idbclusr'@'%' IDENTIFIED BY 'idclPass#2989';
GRANT ALL ON *.* TO 'idbclusr'@'%' WITH GRANT OPTION;
SET SQL_LOG_BIN=1;
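Since the binlog was disabled, this user is not replicated anywhere, so it has to be created on every node. A quick sanity check on each node:

SELECT user, host FROM mysql.user WHERE user = 'idbclusr';
SHOW GRANTS FOR 'idbclusr'@'%';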

Setting up the cluster

Login to shell on one of the nodes

[root@ip-172-31-18-222 ~]# mysqlsh
MySQL Shell 8.0.20

Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Check the instance configuration before creating the cluster. Note that you will be asked whether to save the password; if you answer Y it is saved to the MySQL Shell credential store and you will not be prompted again.

NODE 1 >>
dba.checkInstanceConfiguration('idbclusr@18.216.203.216:3306')
Please provide the password for 'idbclusr@18.216.203.216:3306': *************
Save password for 'idbclusr@18.216.203.216:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
NODE 2 >>
dba.checkInstanceConfiguration('idbclusr@3.128.120.157:3306')
Please provide the password for 'idbclusr@3.128.120.157:3306': *************
Save password for 'idbclusr@3.128.120.157:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
NODE 3 >>
dba.checkInstanceConfiguration('idbclusr@18.217.197.22:3306')
Please provide the password for 'idbclusr@18.217.197.22:3306': *************
Save password for 'idbclusr@18.217.197.22:3306'? [Y]es/[N]o/Ne[v]er (default No): Y

If there are any configuration issues, the above commands produce output like the following:

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
NOTE: Please use the dba.configureLocalInstance() command to repair these issues.

Below is a screenshot from a live run for your reference. When you run checkInstanceConfiguration you should see something similar to below:

Fix the config issues by running the command below on each node. NOTE: you need to log in to each node separately and issue the command. In version 8.0 this can be done remotely, but in 5.7 you need to log in to each instance.

NODE 1 >>
dba.configureLocalInstance('idbclusr@18.216.203.216:3306')
NODE 2 >>
dba.configureLocalInstance('idbclusr@3.128.120.157:3306')
NODE 3 >>
dba.configureLocalInstance('idbclusr@18.217.197.22:3306')

Once you fix the config issues, run the below commands to check again.

NODE 1 >>
dba.checkInstanceConfiguration('idbclusr@18.216.203.216:3306')
NODE 2 >>
dba.checkInstanceConfiguration('idbclusr@3.128.120.157:3306')
NODE 3 >>
dba.checkInstanceConfiguration('idbclusr@18.217.197.22:3306')

You should see output like the following:


The instance 'ip-172-31-18-138.us-east-2.compute.internal:3306' is valid to be used in an InnoDB cluster.

{
    "status": "ok"
}

Below is a screenshot from a live run for your reference. You should get "status": "ok".

Create the cluster after config checks

If you run createCluster before connecting the shell to a member instance, it fails with a LogicError:

 MySQL  JS >  cluster = dba.createCluster('prd_cluster');
Dba.createCluster: The shell must be connected to a member of the InnoDB cluster being managed (LogicError)
 MySQL  JS >

So connect to the first node, then create the cluster:

MySQL  JS > shell.connect('idbclusr@18.216.203.216:3306')
cluster = dba.createCluster('prd_cluster');

NOTE: One major difference between versions 8.0 and 5.7 is the CLONE feature. In 8.0 it is very easy to add a new node to the cluster by cloning, but in 5.7 you need to restore the data using Xtrabackup/MEB/mysqldump before adding the node with the procedure below (see the sketch after this note).
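A rough sketch of that 5.7 flow using Percona XtraBackup 2.4 (paths, hosts and credentials are placeholders, adjust to your environment): back up the primary, restore on the new node, then run addInstance as shown below.

# on the primary (18.216.203.216)
xtrabackup --backup --user=root --password=xxxx --target-dir=/backups/full
xtrabackup --prepare --target-dir=/backups/full

# copy /backups/full to the new node, then on the new node with mysqld stopped
systemctl stop mysqld
rm -rf /var/lib/mysql/*
xtrabackup --copy-back --target-dir=/backups/full --datadir=/var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
systemctl start mysqld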

cluster.status();
cluster.addInstance('idbclusr@3.128.120.157:3306');

MySQL  18.216.203.216:3306 ssl  JS > cluster.addInstance('idbclusr@18.217.197.22:3306');

NOTE: The target instance 'ip-172-31-18-222.us-east-2.compute.internal:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'ip-172-31-18-222.us-east-2.compute.internal:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
cluster.addInstance('idbclusr@18.217.197.22:3306');
cluster.status();

Below is a screenshot from a live run for your reference. When you add secondary nodes you should see something like below

Validate the cluster health by issuing cluster.status()
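An abridged example of a healthy status output for reference (your host names and addresses will differ, and the real output carries more fields per member):

{
    "clusterName": "prd_cluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "18.216.203.216:3306",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "18.216.203.216:3306": { "mode": "R/W", "status": "ONLINE" },
            "3.128.120.157:3306": { "mode": "R/O", "status": "ONLINE" },
            "18.217.197.22:3306": { "mode": "R/O", "status": "ONLINE" }
        }
    }
}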