Monday, September 21, 2020

MySQL mysql_config_editor & expect

This is just a note to help out anyone who might want to use the mysql_config_editor command in their automation tools.

mysql_config_editor does not take a password argument, so automation tools that used to set your password in the .my.cnf file will fail when they try to use mysql_config_editor instead.

It is possible, and quite simple, with the expect tool.

 yum -y install expect  

It works with apt-get also.
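For example, on Debian or Ubuntu:

 apt-get -y install expect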


So in this example, I will show a simple bash script version. 

First, my login path does not work:

mysql --login-path=local

ERROR 1045 (28000): Access denied for user


Set it with expect.

You would execute this from your bash script:

expect <<EOD

spawn mysql_config_editor set --login-path=local --host=localhost --user=root --password 

expect "password"

send  -- "<PASSWORD>\r"

interact

EOD
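Putting that together as a standalone script makes it easier to drop into automation. This is only a minimal sketch: the MYSQL_LOCAL_PW variable and the script name are examples of mine, not anything mysql_config_editor itself knows about, so supply the password however your tooling normally handles secrets.

#!/bin/bash
# set_login_path.sh -- store a login path non-interactively via expect
# Usage: MYSQL_LOCAL_PW='secret' ./set_login_path.sh
set -e

# remove any existing entry so the set command does not stop to ask about overwriting
mysql_config_editor remove --login-path=local 2>/dev/null || true

expect <<EOD
spawn mysql_config_editor set --login-path=local --host=localhost --user=root --password
expect "password"
send -- "${MYSQL_LOCAL_PW}\r"
interact
EOD

# confirm the entry was stored (the password is only ever shown obfuscated)
mysql_config_editor print --login-path=local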


Now it works...

mysql --login-path=local

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 1002



Sunday, March 15, 2020

MySQL & Docker... a simple setup

MySQL and Docker are not new concepts; people have been running MySQL in containers for some time now. For someone who is just moving to this for development, though, there can be a few hurdles.

While MySQL works just fine running locally, if you are testing code across different versions of MySQL it is nice to have several versions easily available.

One option for years has of course been https://mysqlsandbox.net/ by Giuseppe Maxia. It is still a perfectly valid way to get several instances up and test replication and so on.

Docker containers are another frequently used option when it comes to testing across different versions of MySQL. The following goes over some of the steps to get several versions installed easily. I use OSX, so these examples are for OSX.

You need Docker installed first, of course, and Docker Desktop is a handy tool for getting easy access to everything.

Once Docker is set up, I can get my environment ready for MySQL.

Here I created a Docker folder that contains the MySQL data directories and config files, as well as a mysql-files directory in case I need it.

mkdir ~/Docker ;

mkdir ~/Docker/mysql_data;
mkdir ~/Docker/mysql-files;
mkdir ~/Docker/cnf;

Now, inside mysql_data, create a directory per version:


cd  ~/Docker/mysql_data;
mkdir 8.0;
mkdir 5.7;
mkdir 5.6;
mkdir 5.5;


Next I set up simple cnf files for this example. The primary thing to note is bind-address: it is set so that MySQL is reachable from outside the container. These files can also hold whatever additional configuration you see fit for each MySQL instance.



cd  ~/Docker/cnf;

$ cat my.8.0.cnf
[mysqld]
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
secure-file-priv= /var/lib/mysql-files
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
bind-address = 0.0.0.0
port=3306
server-id=80


# Custom config should go here
!includedir /etc/mysql/conf.d/

$ cat my.5.7.cnf
[mysqld]
bind-address = 0.0.0.0
server-id=57
max_allowed_packet=32M

$ cat my.5.6.cnf
[mysqld]
bind-address = 0.0.0.0
server-id=56

$ cat my.5.5.cnf
[mysqld]
bind-address = 0.0.0.0
server-id=55


OK, so now that we have the configuration files set up, we need to create the containers. A few things to note about the run commands.

--name   We set a named reference for the container.

Here we map the configuration file, data directory and mysql-files directory into the container. This allows us to adjust the my.cnf file and so on easily.
-v ~/Docker/cnf/my.8.0.cnf:/etc/mysql/my.cnf 
-v ~/Docker/mysql_data/8.0:/var/lib/mysql
-v ~/Docker/mysql-files:/var/lib/mysql-files

We want to be able to reach these MySQL instances from outside the container, so we need to publish and map the ports accordingly.
-p  3306:3306  This means 3306 local to 3306 inside docker
-p  3307:3306  This means 3307 local to 3306 inside docker
-p  3308:3306  This means 3308 local to 3306 inside docker
-p  3309:3306  This means 3309 local to 3306 inside docker

Then we also pass a couple of environment variables. 
-e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=<set a password here>

So putting it all together...


docker run --restart always --name mysql8.0   -v ~/Docker/cnf/my.8.0.cnf:/etc/mysql/my.cnf -v ~/Docker/mysql_data/8.0:/var/lib/mysql -v ~/Docker/mysql-files:/var/lib/mysql-files -p  3306:3306 -d -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=<set a password here> mysql:8.0

docker run --restart always --name mysql5.7   -v ~/Docker/cnf/my.5.7.cnf:/etc/mysql/my.cnf -v ~/Docker/mysql_data/5.7:/var/lib/mysql -v ~/Docker/mysql-files:/var/lib/mysql-files -p  3307:3306 -d -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=<set a password here> mysql:5.7

docker run --restart always --name mysql5.6   -v ~/Docker/cnf/my.5.6.cnf:/etc/mysql/my.cnf -v ~/Docker/mysql_data/5.6:/var/lib/mysql -v ~/Docker/mysql-files:/var/lib/mysql-files -p  3308:3306 -d -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=<set a password here> mysql:5.6

docker run --restart always --name mysql5.5   -v ~/Docker/cnf/my.5.5.cnf:/etc/mysql/my.cnf -v ~/Docker/mysql_data/5.5:/var/lib/mysql -v ~/Docker/mysql-files:/var/lib/mysql-files -p  3309:3306 -d -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=<set a password here> mysql:5.5

After each execution of the above commands, you should get an id returned. 
example: 3cb07d7c21476fbf298648986208f3429ec664167d8eef7fed17bf9ee3ce6316

You can start/restart and access each container's terminal easily via Docker Desktop, or just keep note of the related IDs and run commands from your own terminal.

Docker Desktop also shows you all the variables you passed, so you can validate them.
You can of course also access the CLI there, and stop, start or destroy a container easily.
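The same information is available from the terminal. docker ps lists the running containers with their IDs, names and port mappings (the --format string just trims the output), and because we used --name you can also exec in by name instead of by ID:

$ docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}"
$ docker exec -it mysql8.0 /bin/sh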



$ docker exec -it 3cb07d7c21476fbf298648986208f3429ec664167d8eef7fed17bf9ee3ce6316 /bin/sh; exit
# mysql -p 

If the Docker container is already running you can now access MySQL via your localhost terminal.

mysql --host=localhost  --protocol=tcp --port=3306 -p -u root 

Now if you are having any access issues, remember to ensure that the MySQL accounts are correct and that your ports are mapped correctly. Typical errors look like:
  • Lost connection to MySQL server at 'reading initial communication packet'
  • ERROR 1045 (28000): Access denied for user 'root'@'192.168.0.5' (using password: YES)

Now you can see that all are up and available, and the server IDs match what we set per cnf file earlier.

$ mysql --host=localhost --protocol=tcp --port=3306 -e "Select @@hostname, @@version, @@server_id "
+--------------+-----------+-------------+
| @@hostname   | @@version | @@server_id |
+--------------+-----------+-------------+
| 58e9663afe8d | 8.0.19    |          80 |
+--------------+-----------+-------------+
$ mysql --host=localhost --protocol=tcp --port=3307 -e "Select @@hostname, @@version, @@server_id "
+--------------+-----------+-------------+
| @@hostname   | @@version | @@server_id |
+--------------+-----------+-------------+
| b240917f051a | 5.7.29    |          57 |
+--------------+-----------+-------------+
$ mysql --host=localhost --protocol=tcp --port=3308 -e "Select @@hostname, @@version, @@server_id "
+--------------+-----------+-------------+
| @@hostname   | @@version | @@server_id |
+--------------+-----------+-------------+
| b4653850cfe9 | 5.6.47    |          56 |
+--------------+-----------+-------------+
$ mysql --host=localhost --protocol=tcp --port=3309 -e "Select @@hostname, @@version, @@server_id "
+--------------+-----------+-------------+
| @@hostname   | @@version | @@server_id |
+--------------+-----------+-------------+
| 22e169004583 | 5.5.62    |          55 |
+--------------+-----------+-------------+
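To spot-check all four instances in one pass, a small loop works too. A sketch only; it will prompt for the root password on each port unless you supply credentials some other way:

for p in 3306 3307 3308 3309; do
    mysql --host=localhost --protocol=tcp --port=$p -u root -p \
        -e "SELECT @@hostname, @@version, @@server_id;"
done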


Saturday, July 13, 2019

MySQL How do you restore tablespace

How do you restore a tablespace in MySQL?

This is not new information, but I have not covered it much, so I am addressing it now for those who need it.

If you lose your ibd files, you lose your data. But if you have a copy of one available, or you can sync one from another database, you can still import it. So what does losing a tablespace look like, and how do you recover it?

Here is a simple example to recover tablespace.



mysql> Create database demo;

mysql> use demo;

mysql> CREATE TABLE `demotable` (
    ->   `id` int(11) NOT NULL AUTO_INCREMENT,
    ->   `dts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    ->   PRIMARY KEY (`id`)
    -> ) ENGINE=InnoDB;


Now we store some data...


mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.10 sec)

mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.08 sec)

mysql> SELECT * FROM demotable;
+----+---------------------+
| id | dts                 |
+----+---------------------+
|  1 | 2019-07-12 23:31:34 |
|  2 | 2019-07-12 23:31:35 |
+----+---------------------+
2 rows in set (0.00 sec)


OK, now let's break it.


# systemctl stop mysqld
# cd /var/lib/mysql/demo/
# ls -ltr
total 80
-rw-r-----. 1 mysql mysql 114688 Jul 12 23:31 demotable.ibd
# mv demotable.ibd /tmp/

# systemctl start mysqld
# mysql demo

mysql> show tables;
+----------------+
| Tables_in_demo |
+----------------+
| demotable      |
+----------------+
1 row in set (0.00 sec)

mysql> desc demotable;
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
| Field | Type      | Null | Key | Default           | Extra                                         |
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
| id    | int(11)   | NO   | PRI | NULL              | auto_increment                                |
| dts   | timestamp | NO   |     | CURRENT_TIMESTAMP | DEFAULT_GENERATED on update CURRENT_TIMESTAMP |
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
2 rows in set (0.01 sec)

mysql> INSERT INTO demotable (id) VALUES (NULL);
ERROR 1812 (HY000): Tablespace is missing for table `demo`.`demotable`.


Tablespace broken and lost... now we can recover it.


mysql> ALTER TABLE demotable DISCARD TABLESPACE;

demo]# cp /tmp/demotable.ibd .
demo]# ls -ltr
total 112
-rw-r-----. 1 root root 114688 Jul 12 23:50 demotable.ibd
demo]# chown mysql:mysql demotable.ibd
demo]# mysql demo
mysql> ALTER TABLE demotable IMPORT TABLESPACE;
ERROR 1034 (HY000): Incorrect key file for table 'demotable'; try to repair it

mysql> REPAIR TABLE demotable;
+----------------+--------+----------+---------------------------------------------------------+
| Table          | Op     | Msg_type | Msg_text                                                |
+----------------+--------+----------+---------------------------------------------------------+
| demo.demotable | repair | note     | The storage engine for the table doesn't support repair |
+----------------+--------+----------+---------------------------------------------------------+


Notice that we also got another error. The "Incorrect key file" error is usually tied to the space available in tmpdir, and REPAIR doesn't work for InnoDB .ibd files anyway.


mysql> select @@tmpdir;
+----------+
| @@tmpdir |
+----------+
| /tmp     |
+----------+

# vi /etc/my.cnf
tmpdir=/var/lib/mysql-files/

# systemctl restart mysqld
# mysql demo


OK, I used the mysql-files directory just as an example.
Now we can try again.


mysql> ALTER TABLE demotable IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (0.61 sec)

mysql>  INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.11 sec)

mysql>  SELECT * FROM demotable;
+----+---------------------+
| id | dts                 |
+----+---------------------+
|  1 | 2019-07-12 23:31:34 |
|  2 | 2019-07-12 23:31:35 |
|  3 | 2019-07-12 23:56:08 |
+----+---------------------+


OK, that worked.
Now, this is all nice and simple if you have just one table, but what about hundreds?

Automate it, of course, and use information_schema to help.

Make a few more copies to test with.

mysql> create table demotable1 like demotable;
Query OK, 0 rows affected (0.51 sec)

mysql> create table demotable2 like demotable;
Query OK, 0 rows affected (1.04 sec)

mysql> create table demotable3 like demotable;
Query OK, 0 rows affected (0.74 sec)

mysql> create table demotable4 like demotable;
Query OK, 0 rows affected (2.21 sec)


Break them all:

demo]# mv *.ibd /tmp/


Now using your information_schema.tables table, you can build out all the commands you will need.

# vi build_discard.sql
# cat build_discard.sql
SELECT CONCAT(" ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," DISCARD TABLESPACE;  ") as CMD FROM information_schema.TABLES WHERE TABLE_SCHEMA='demo';

# vi build_import.sql
# cat build_import.sql
SELECT CONCAT(" ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," IMPORT TABLESPACE;  ") as CMD FROM information_schema.TABLES WHERE TABLE_SCHEMA='demo';



# mysql -N < build_import.sql > import_tablespace.sql
# mysql -N < build_discard.sql  | mysql demo

demo]# cp /tmp/*.ibd .
demo]# chown mysql:mysql *.ibd
# systemctl restart mysqld
# mysql demo < import_tablespace.sql
# mysql demo

mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.08 sec)

mysql> INSERT INTO demotable1 (id) VALUES (NULL);
Query OK, 1 row affected (0.05 sec)

mysql> INSERT INTO demotable2 (id) VALUES (NULL);
Query OK, 1 row affected (0.09 sec)

mysql> INSERT INTO demotable3 (id) VALUES (NULL);
Query OK, 1 row affected (0.37 sec)

mysql> INSERT INTO demotable4 (id) VALUES (NULL);
Query OK, 1 row affected (0.12 sec)



And it worked.
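For completeness, the whole discard / copy / import cycle above can be wrapped into one script. This is only a rough sketch of the same steps: it assumes the schema is demo, that root credentials are picked up from ~/.my.cnf, and that the rescued .ibd copies are sitting in /tmp.

#!/bin/bash
SCHEMA=demo
DATADIR=/var/lib/mysql/$SCHEMA

# 1. discard the orphaned tablespaces
mysql -N -e "SELECT CONCAT('ALTER TABLE ',TABLE_SCHEMA,'.',TABLE_NAME,' DISCARD TABLESPACE;') FROM information_schema.TABLES WHERE TABLE_SCHEMA='$SCHEMA';" | mysql $SCHEMA

# 2. put the rescued .ibd files back and fix ownership
cp /tmp/*.ibd "$DATADIR"/
chown mysql:mysql "$DATADIR"/*.ibd

# 3. import them again
mysql -N -e "SELECT CONCAT('ALTER TABLE ',TABLE_SCHEMA,'.',TABLE_NAME,' IMPORT TABLESPACE;') FROM information_schema.TABLES WHERE TABLE_SCHEMA='$SCHEMA';" | mysql $SCHEMA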


Friday, July 12, 2019

MySQL Binlogs: How to recover

So I realized, after this situation recently came up, that I had not made a post about it.

Here is the scenario: a backup was taken at midnight, using MySQL dumps per database. Then at ten a.m. the next day the database crashed. A series of events happened before I was called in, but they got it back up to a state where the MyISAM tables were there and the IBD files were missing from the tablespace.

So, option 1: restore from backup, which would get us to midnight and lose hours of data. Option 2: reimport the thousands of ibd files and keep everything. And option 3: restore from backup, then apply the binlogs for the recent changes.

To make it more interesting, I was told they didn't have all of the ibd files, and I did see some missing. I am not sure how that was possible, but it made option 2 invalid. They, of course, wanted the least data loss possible, so we went with option 3.

To do this safely I started up another instance of MySQL under port 3307. This allowed me a safe place to work while traffic had read access to the MyISAM data on the port 3306 instance.

Once all the backup dump files were uncompressed and imported into the 3307 instance, I was able to focus on the binlog files.

At first this concept sounds much harder and riskier than it really is. It is actually pretty straightforward and simple.

So first you have to find the data you're after. A review of the binlog files gives you a head start as to which files are relevant. In my case, somehow they had managed to reset the binlogs, so the 117 file had two date ranges within it.

First for binlog review, the following command outputs the data in human-readable format.
mysqlbinlog --defaults-file=/root/.my.cnf  --base64-output=DECODE-ROWS  --verbose mysql-bin.000117 >   review_mysql-bin.000117.sql

*Note... Be careful running the above command. Notice I have it dumping the output file directly into the same location as the binlogs, so VALIDATE that your output file name is correct. mysql-bin.000117.sql is different from mysql-bin.000117 .sql; with the second form (a space before .sql) the redirect overwrites the binlog itself and you will lose it.

Now to save the data so it can be applied. Since I had several binlogs, I built everything into a single restore file, and I wanted to double-check the time ranges anyway.
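If you want a quick way to double-check what time range each binlog actually covers, peek at the first event headers in each file; in mysqlbinlog output the lines starting with # and a date carry the event timestamps. A rough sketch (grep -m just stops after the first few matches):

for f in mysql-bin.0001*; do
    echo "== $f"
    mysqlbinlog --defaults-file=/root/.my.cnf "$f" | grep -m 3 '^#[0-9]'
done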


mysqlbinlog --defaults-file=/root/.my.cnf --start-datetime="2019-07-09 00:00:00" --stop-datetime="2019-07-10 00:00:00" mysql-bin.000117 > binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf mysql-bin.000118 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf mysql-bin.000119 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --start-datetime="2019-07-10 00:00:00" --stop-datetime="2019-07-10 10:00:00" mysql-bin.000117 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --stop-datetime="2019-07-10 10:00:00" mysql-bin.000120 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --stop-datetime="2019-07-10 10:00:00" mysql-bin.000121 >> binlog_restore.sql

mysql --socket=/var/lib/mysql_restore/mysql.sock -e "source /var/lib/mysql/binlog_restore.sql"


Now I applied all the data from those binlogs for the given time ranges. The client double-checked all data and was very happy to have it all back.

Several different options existed for this situation; this one happened to work out best for the client.

Once they validated that all was OK on the restored version, it was a simple matter of stopping both databases, moving the data directories (I wanted to keep the datadir defaults intact), chowning the directories just to be safe, and starting MySQL back up. The restored instance was now up on port 3306.


Monday, June 17, 2019

MySQL Group Replication

So MySQL's group replication came out with MySQL 5.7. Now that it has been out a little while, people are starting to ask more about it.
Below is an example of how to set this up, plus a few pain points I hit as I poked around with it.
I am using three different servers:

 Server CENTOSA

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.02 sec)

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE

log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE

transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.17:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off

mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER repl@'%' IDENTIFIED BY 'replpassword';
mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;


CHANGE MASTER TO
MASTER_USER='repl',
MASTER_PASSWORD='replpassword'
FOR CHANNEL 'group_replication_recovery';


mysql> SET GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)


mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.11 sec)


mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)


mysql> SELECT * FROM performance_schema.replication_group_members \G

*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 1ab30239-5ef6-11e9-9b4a-08002712f4b1
MEMBER_HOST: centosa
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.15

So now we can add more servers.
Server CENTOSB

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE

log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE


transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.89:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off

mysql> CHANGE MASTER TO
MASTER_USER='repl',
MASTER_PASSWORD='replpassword'
FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> CHANGE MASTER TO GET_MASTER_PUBLIC_KEY=1;
Query OK, 0 rows affected (0.02 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (4.03 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | RECOVERING   | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)


Server CENTOSC

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=3
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE

transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.124:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off

mysql> CHANGE MASTER TO
-> MASTER_USER='repl',
-> MASTER_PASSWORD='replpassword'
-> FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> CHANGE MASTER TO GET_MASTER_PUBLIC_KEY=1;
Query OK, 0 rows affected (0.02 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.58 sec)
mysql> SELECT * FROM performance_schema.replication_group_members \G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 1ab30239-5ef6-11e9-9b4a-08002712f4b1
MEMBER_HOST: centosa
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.15

*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 572ca2fa-5eff-11e9-8df9-08002712f4b1
MEMBER_HOST: centosb
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.15

*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: c5f3d1d2-8dd8-11e9-858d-08002773d1b6
MEMBER_HOST: centosc
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.15
3 rows in set (0.00 sec)


So this is all great, but it doesn't always mean the members go ONLINE; they can often sit in recovery mode.
I have also seen this fail with MySQL crashes, so you need to make sure it is stable.
mysql> create database testcentosb;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
A side note to address some of those factors --
mysql> START GROUP_REPLICATION;
ERROR 3094 (HY000): The START GROUP_REPLICATION command failed as the applier module failed to start.

mysql> reset slave all;
Query OK, 0 rows affected (0.03 sec)

-- Then start over from Change master command
mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

[ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error on opening a connection to 192.168.111.17:33061 on local port: 33061.'
[ERROR] [MY-011526] [Repl] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: c5f3d1d2-8dd8-11e9-858d-08002773d1b6:1-4 >
[ERROR] [MY-011522] [Repl] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'

https://ronniethedba.wordpress.com/2017/04/22/this-member-has-more-executed-transactions-than-those-present-in-the-group/ 


 [ERROR] [MY-011620] [Repl] Plugin group_replication reported: 'Fatal error during the recovery process of Group Replication. The server will leave the group.'
[ERROR] [MY-013173] [Repl] Plugin group_replication reported: 'The plugin encountered a critical error and will abort: Fatal error during execution of Group Replication'

SELECT * FROM performance_schema.replication_connection_status\G


My thoughts...
Keep in mind that group replication can be set up in single-primary mode or multi-primary mode.
mysql> select @@group_replication_single_primary_mode\G
*************************** 1. row ***************************
@@group_replication_single_primary_mode: 1

mysql> create database testcentosb;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
You will of course get an error if you write to a non-primary node.
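In single-primary mode, whatever sits in front of the group (or you, from a shell) needs a way to find the current writable node. Any member can answer that with a standard performance_schema query, for example:

mysql -e "SELECT MEMBER_HOST, MEMBER_PORT FROM performance_schema.replication_group_members WHERE MEMBER_ROLE='PRIMARY';"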


group-replication-single-primary-mode=off  <-- added to the cnf files. 
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | RECOVERING   | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | RECOVERING   | PRIMARY     | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+

3 rows in set (0.00 sec)


It is nice, however, if you use Keepalived, MySQL Router, ProxySQL, etc. to handle your traffic, since it can automatically roll over in case of a failover. We can see from below that it failed over right away when I stopped the primary.

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

[root@centosa]# systemctl stop mysqld

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

[root@centosa]# systemctl start mysqld
[root@centosa]# mysql
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.34 sec)

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | RECOVERING   | SECONDARY   | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)


Now, recovery was still an issue, as the node would not simply join back. I had to review all the accounts and steps again, but I did get it back eventually.

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)


I need to test this more, as I am not 100% sold on it yet; I still lean towards Galera replication.

URLS of Interest


  • https://dev.mysql.com/doc/refman/8.0/en/group-replication.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-deploying-in-single-primary-mode.html
  • http://datacharmer.blogspot.com/2017/01/mysql-group-replication-vs-multi-source.html 
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-launching.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-configuring-instances.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-adding-instances.html
  • https://ronniethedba.wordpress.com/2017/04/22/how-to-setup-mysql-group-replication/
  • https://www.digitalocean.com/community/tutorials/how-to-configure-mysql-group-replication-on-ubuntu-16-04 
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-options.html#sysvar_group_replication_group_seeds 
  • https://bugs.mysql.com/bug.php?id=90534
  • https://www.percona.com/blog/2017/02/24/battle-for-synchronous-replication-in-mysql-galera-vs-group-replication/
  • https://lefred.be/content/mysql-group-replication-is-sweet-but-can-be-sour-if-you-misunderstand-it/
  • https://www.youtube.com/watch?v=IfZK-Up03Mw
  • https://mysqlhighavailability.com/mysql-group-replication-a-quick-start-guide/