
MySQL InnoDB Cluster Installation procedure (Version 8.0)


NOTE: The below runbook is for Version 8.0. If you are looking for the 5.7 install procedure, we have a separate page for it.

Pre-install checks

SELinux settings / AppArmor settings

For CentOS machines, set SELinux to permissive and validate. NOTE: If you skip this step your build may fail.

sestatus
sed -i "s/SELINUX=enforcing/SELINUX=permissive/" /etc/sysconfig/selinux
setenforce 0
sestatus
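
The commands above cover SELinux on CentOS. If your nodes run Ubuntu/Debian with AppArmor instead, a minimal equivalent (assuming the apparmor-utils package is installed) is to put the mysqld profile in complain mode:

# Requires apparmor-utils; the profile path may differ on your distribution
aa-status
aa-complain /usr/sbin/mysqld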

Host mapping

Modify /etc/hosts on all the cluster nodes to add the host mappings. In this example we have a 3-node cluster:

202.222.12.2 innodbcln1
202.222.12.3 innodbcln2
202.222.12.4 innodbcln3
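
To confirm the mappings resolve identically on every node, a quick check (using this example's hostnames) is:

getent hosts innodbcln1 innodbcln2 innodbcln3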

Firewall

Please ensure that firewall rules are set for the below ports:

3306 --> MySQL port
33060 --> MySQL X port
33061 --> Group replication communication port
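
If the nodes run firewalld (the CentOS default), a sketch of the corresponding rules is below; adjust the zone to match your setup:

firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=33060/tcp
firewall-cmd --permanent --add-port=33061/tcp
firewall-cmd --reload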

Install MySQL on all the cluster nodes

NOTE: Before you proceed further, please confirm that all pre-install steps listed above are complete. You can choose to install the Community version or the Percona version. In a real deployment you may have your own custom my.cnf which needs to be imported and used.

Install community version

# The sed below assumes the MySQL community yum repository is already installed;
# if it is not, install the release rpm first (EL7 example shown, adjust for your OS):
# yum install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/mysql-community.repo
yum --enablerepo=mysql80-community install mysql-community-server

Install percona version

yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

percona-release setup ps80

yum install percona-server-server
yum install percona-mysql-shell
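
Whichever build you chose, start the server on each node before proceeding; the temporary root password is generated on first start. On CentOS/EL the service name is mysqld for both builds:

systemctl enable --now mysqld
systemctl status mysqld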

Get the default root password from the error log, log in with it (mysql -uroot -p), and reset it:

 grep -i pass /var/log/mysqld.log
2020-09-10T14:34:25.825963Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: xxxx
ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';

Add a cluster user which will be used as the admin user for your InnoDB cluster. NOTE: the password and user ID in the below commands have to be changed to meet your requirements.

We are setting sql_log_bin to 0 so that creating this user does not write to the binary log and cannot create errant GTIDs:

SET SQL_LOG_BIN=0;

CREATE USER 'idbclusr'@'%' IDENTIFIED BY 'idclPass#2989';
GRANT ALL ON *.* TO 'idbclusr'@'%' WITH GRANT OPTION;

SET SQL_LOG_BIN=1;
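
As an optional sanity check (not part of the original runbook), you can verify the grants for the new admin user:

SHOW GRANTS FOR 'idbclusr'@'%';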

Setting up the cluster

Log in to MySQL Shell on one of the nodes:

[root@ip-172-31-18-222 ~]# mysqlsh
MySQL Shell 8.0.20

Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Check the configuration before creating the cluster. Note that you will be asked if you want to save the password; when you hit Y it is saved to the Shell's credential store and you will not be prompted for the password again.

NODE 1 >>
dba.checkInstanceConfiguration('idbclusr@18.216.203.216:3306')
Please provide the password for 'idbclusr@18.216.203.216:3306': *************
Save password for 'idbclusr@18.216.203.216:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
NODE 2 >>
dba.checkInstanceConfiguration('idbclusr@3.128.120.157:3306')
Please provide the password for 'idbclusr@3.128.120.157:3306': *************
Save password for 'idbclusr@3.128.120.157:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
NODE 3 >>
dba.checkInstanceConfiguration('idbclusr@18.217.197.22:3306')
Please provide the password for 'idbclusr@18.217.197.22:3306': *************
Save password for 'idbclusr@18.217.197.22:3306'? [Y]es/[N]o/Ne[v]er (default No): Y

If there are any configuration issues, you should see output like the below when you run the above commands:

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
NOTE: Please use the dba.configureInstance() command to repair these issues.


Fix the config issues by running the below on each node:

NODE1>>
dba.configureInstance('idbclusr@18.216.203.216:3306')
NODE2>>
dba.configureInstance('idbclusr@3.128.120.157:3306')
NODE3>>
dba.configureInstance('idbclusr@18.217.197.22:3306')

When you run configureInstance it asks for your permission to restart the server (a restart is required for some of the variables).

Once you fix the config issues, you can run the below commands to check again:

NODE 1 >>
dba.checkInstanceConfiguration('idbclusr@18.216.203.216:3306')
NODE 2 >>
dba.checkInstanceConfiguration('idbclusr@3.128.120.157:3306')
NODE 3 >>
dba.checkInstanceConfiguration('idbclusr@18.217.197.22:3306')

You should see output like:


The instance 'ip-172-31-18-138.us-east-2.compute.internal:3306' is valid to be used in an InnoDB cluster.

{
    "status": "ok"
}


Create the cluster after config checks

If you run createCluster without first connecting the shell to an instance, it fails with a LogicError:

 MySQL  JS > cluster = dba.createCluster('prd_cluster');
Dba.createCluster: The shell must be connected to a member of the InnoDB cluster being managed (LogicError)

So connect to the node that will be the primary first, then create the cluster:

 MySQL  JS > shell.connect('idbclusr@18.216.203.216:3306')
cluster = dba.createCluster('prd_cluster');

cluster.status();
cluster.addInstance('idbclusr@3.128.120.157:3306');

MySQL  18.216.203.216:3306 ssl  JS > cluster.addInstance('idbclusr@18.217.197.22:3306');

NOTE: The target instance 'ip-172-31-18-222.us-east-2.compute.internal:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'ip-172-31-18-222.us-east-2.compute.internal:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Choose Clone (the default) at the prompt; you will typically see the same prompt for each fresh instance you add. Once both secondaries are added, check the cluster:

cluster.status();


Validate the cluster health by issuing cluster.status()
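
As a supplementary check (not part of the original runbook), you can also cross-check Group Replication membership from any node with plain SQL:

SELECT member_host, member_state, member_role
FROM performance_schema.replication_group_members;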
