Saturday, July 13, 2019

MySQL How do you restore tablespace?

This is not new information, but I have not covered it much, so I am addressing it now for those who need it.

If you lose your .ibd files... you lose your data. But if you have a copy of one available, or you are syncing the table from another database, you can still import it. So how do you lose a tablespace, and how do you get it back?

Here is a simple example of recovering a tablespace.



mysql> Create database demo;

mysql> use demo;

mysql> CREATE TABLE `demotable` (
    ->   `id` int(11) NOT NULL AUTO_INCREMENT,
    ->   `dts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    ->   PRIMARY KEY (`id`)
    -> ) ENGINE=InnoDB;


Now we store some data...


mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.10 sec)

mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.08 sec)

mysql> SELECT * FROM demotable;
+----+---------------------+
| id | dts                 |
+----+---------------------+
|  1 | 2019-07-12 23:31:34 |
|  2 | 2019-07-12 23:31:35 |
+----+---------------------+
2 rows in set (0.00 sec)


OK, now let's break it...


# systemctl stop mysqld
# cd /var/lib/mysql/demo/
# ls -ltr
total 80
-rw-r-----. 1 mysql mysql 114688 Jul 12 23:31 demotable.ibd
# mv demotable.ibd /tmp/

# systemctl start mysqld
# mysql demo

mysql> show tables;
+----------------+
| Tables_in_demo |
+----------------+
| demotable      |
+----------------+
1 row in set (0.00 sec)

mysql> desc demotable;
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
| Field | Type      | Null | Key | Default           | Extra                                         |
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
| id    | int(11)   | NO   | PRI | NULL              | auto_increment                                |
| dts   | timestamp | NO   |     | CURRENT_TIMESTAMP | DEFAULT_GENERATED on update CURRENT_TIMESTAMP |
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
2 rows in set (0.01 sec)

mysql> INSERT INTO demotable (id) VALUES (NULL);
ERROR 1812 (HY000): Tablespace is missing for table `demo`.`demotable`.


The tablespace is broken and lost... Now we can recover it.



mysql> ALTER TABLE demotable DISCARD TABLESPACE;

demo]# cp /tmp/demotable.ibd .
demo]# ls -ltr
total 112
-rw-r-----. 1 root root 114688 Jul 12 23:50 demotable.ibd
demo]# chown mysql:mysql demotable.ibd
demo]# mysql demo
mysql> ALTER TABLE demotable IMPORT TABLESPACE;
ERROR 1034 (HY000): Incorrect key file for table 'demotable'; try to repair it

mysql> REPAIR TABLE demotable;
+----------------+--------+----------+---------------------------------------------------------+
| Table          | Op     | Msg_type | Msg_text                                                |
+----------------+--------+----------+---------------------------------------------------------+
| demo.demotable | repair | note     | The storage engine for the table doesn't support repair |
+----------------+--------+----------+---------------------------------------------------------+


Notice that we also got another error. The "Incorrect key file" error is usually tied to the space available in tmpdir, and REPAIR TABLE does not work for InnoDB (.ibd) tables anyway.


mysql> select @@tmpdir;
+----------+
| @@tmpdir |
+----------+
| /tmp     |
+----------+

# vi /etc/my.cnf
tmpdir=/var/lib/mysql-files/

# systemctl restart mysqld
# mysql demo


OK, I used the mysql-files directory just as an example.
Now we can try again.


mysql> ALTER TABLE demotable IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (0.61 sec)

mysql>  INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.11 sec)

mysql>  SELECT * FROM demotable;
+----+---------------------+
| id | dts                 |
+----+---------------------+
|  1 | 2019-07-12 23:31:34 |
|  2 | 2019-07-12 23:31:35 |
|  3 | 2019-07-12 23:56:08 |
+----+---------------------+


OK, that worked.
Now, this is all nice and simple if you just have one table. But what about hundreds?

Automate it, of course, and use your information_schema to help.

Make a few more copies to test with.

mysql> create table demotable1 like demotable;
Query OK, 0 rows affected (0.51 sec)

mysql> create table demotable2 like demotable;
Query OK, 0 rows affected (1.04 sec)

mysql> create table demotable3 like demotable;
Query OK, 0 rows affected (0.74 sec)

mysql> create table demotable4 like demotable;
Query OK, 0 rows affected (2.21 sec)


Break them all...

demo]# mv *.ibd /tmp/


Now using your information_schema.tables table, you can build out all the commands you will need.

# vi build_discard.sql
# cat build_discard.sql
SELECT CONCAT(" ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," DISCARD TABLESPACE;  ") as CMD FROM information_schema.TABLES WHERE TABLE_SCHEMA='demo';

# vi build_import.sql
# cat build_import.sql
SELECT CONCAT(" ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," IMPORT TABLESPACE;  ") as CMD FROM information_schema.TABLES WHERE TABLE_SCHEMA='demo';



# mysql -N < build_import.sql > import_tablespace.sql
# mysql -N < build_discard.sql  | mysql demo

demo]# cp /tmp/*.ibd .
demo]# chown mysql:mysql *.ibd
# systemctl restart mysqld
# mysql demo < import_tablespace.sql
# mysql demo

mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.08 sec)

mysql> INSERT INTO demotable1 (id) VALUES (NULL);
Query OK, 1 row affected (0.05 sec)

mysql> INSERT INTO demotable2 (id) VALUES (NULL);
Query OK, 1 row affected (0.09 sec)

mysql> INSERT INTO demotable3 (id) VALUES (NULL);
Query OK, 1 row affected (0.37 sec)

mysql> INSERT INTO demotable4 (id) VALUES (NULL);
Query OK, 1 row affected (0.12 sec)



And it worked.
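
To double-check that everything came back cleanly, you can build a CHECK TABLE pass the same way the discard/import statements were built. A sketch using the same information_schema trick (build_check.sql is just a name I made up for this example):

# cat build_check.sql
SELECT CONCAT(" CHECK TABLE ",TABLE_SCHEMA,".",TABLE_NAME,";  ") as CMD FROM information_schema.TABLES WHERE TABLE_SCHEMA='demo';

# mysql -N < build_check.sql | mysql -t demo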


Friday, July 12, 2019

MySQL Binlogs:: How to recover

After this situation recently came up, I realized I had not made a post about it.

Here is the scenario: a backup was taken at midnight, using MySQL dumps, one per database. Then at ten a.m. the next day the database crashed. A series of events happened before I was called in, but they got the server back up to a state where the MyISAM tables were intact and the InnoDB tables were missing their .ibd files from the tablespace.

So, option 1: restore from backup, which would get us to midnight, and we would lose hours of data. Option 2: reimport the thousands of .ibd files and keep everything. And option 3: restore from backup, then apply the binlogs for the recent changes.

To make it more interesting, I was told they didn't have all of the .ibd files, and I did see some missing. I am not sure how that was possible, but it made option 2 invalid. They, of course, wanted the least data loss possible, so we went with option 3.

To do this safely I started up another instance of MySQL on port 3307. This gave me a safe place to work while traffic still had read access to the MyISAM data on the port 3306 instance.
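
Roughly, the second instance ran with its own datadir, port, and socket. A minimal sketch, assuming a fresh datadir at /var/lib/mysql_restore (the same socket path is used below when applying the binlogs):

# initialize a separate datadir for the restore instance (paths here are assumptions)
mysqld --no-defaults --initialize-insecure --user=mysql --datadir=/var/lib/mysql_restore

# start it on its own port and socket so it does not touch the 3306 instance
mysqld --no-defaults --user=mysql --datadir=/var/lib/mysql_restore \
  --port=3307 --socket=/var/lib/mysql_restore/mysql.sock &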

Once all the backup dump files were uncompressed and imported into the 3307 instance, I was able to focus on the binlog files.

At first this concept sounds much harder and riskier than it really is. It is actually pretty straightforward and simple.

So first you have to find the data you're after. A review of the binlog files gives you a head start on which files are relevant. In my case, they somehow managed to reset the binlog, so the 117 file had two date ranges within it.
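
To get a rough idea of which binlog files exist and what window each one covers, you can list them and peek at the first and last event timestamps, assuming the server that owns the binlogs is still up. A quick sketch:

mysql> SHOW BINARY LOGS;
# mysqlbinlog mysql-bin.000117 | head -n 5     # first event timestamp
# mysqlbinlog mysql-bin.000117 | tail -n 20    # last event timestamps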

For a full review of a binlog, the following command outputs the contents in a human-readable format.
mysqlbinlog --defaults-file=/root/.my.cnf  --base64-output=DECODE-ROWS  --verbose mysql-bin.000117 >   review_mysql-bin.000117.sql

*Note... Be careful running the above command. Notice I have it dumping the output file directly into the same location as the binlogs, so VALIDATE that your output file name is valid. mysql-bin.000117.sql is very different from mysql-bin.000117 .sql: with a space before .sql, the shell redirects the output into mysql-bin.000117 itself and you will lose your binlog.

Now to save the data so it can be applied. Since I had several binlogs, I collected the output into a single file, which also let me double-check the time ranges.


mysqlbinlog --defaults-file=/root/.my.cnf --start-datetime="2019-07-09 00:00:00" --stop-datetime="2019-07-10 00:00:00" mysql-bin.000117 > binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf mysql-bin.000118 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf mysql-bin.000119 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --start-datetime="2019-07-10 00:00:00" --stop-datetime="2019-07-10 10:00:00" mysql-bin.000117 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --stop-datetime="2019-07-10 10:00:00" mysql-bin.000120 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --stop-datetime="2019-07-10 10:00:00" mysql-bin.000121 >> binlog_restore.sql

mysql --socket=/var/lib/mysql_restore/mysql.sock -e "source /var/lib/mysql/binlog_restore.sql"


Now I applied all the data from those binlogs for the given time ranges. The client double-checked all data and was very happy to have it all back.

Several different options existed for this situation; this one happened to work out best for the client.

Once they validated that all was OK on the restored version, it was a simple matter of stopping both databases, moving the data directories (I wanted to keep the datadir defaults intact), chowning the directories just to be safe, and starting MySQL back up. Now the restored instance was up on port 3306.
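
Roughly, that switchover looked like this. A sketch, assuming the restore instance's datadir is /var/lib/mysql_restore and both instances are already stopped:

# mv /var/lib/mysql /var/lib/mysql_old
# mv /var/lib/mysql_restore /var/lib/mysql
# chown -R mysql:mysql /var/lib/mysql
# systemctl start mysqld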


Monday, June 17, 2019

MySQL Group Replication

So MySQL's group replication came out with MySQL 5.7. Now that it has been out a little while, people are starting to ask more about it.
Below is an example of how to set this up, along with a few pain points I ran into as I poked around with it.
I am using three different servers.

 Server CENTOSA

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.02 sec)

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE

log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE

transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.17:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off

mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER repl@'%' IDENTIFIED BY 'replpassword';
mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;


CHANGE MASTER TO
MASTER_USER='repl',
MASTER_PASSWORD='replpassword'
FOR CHANNEL 'group_replication_recovery';


mysql> SET GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)


mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.11 sec)


mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)


mysql> SELECT * FROM performance_schema.replication_group_members \G

*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 1ab30239-5ef6-11e9-9b4a-08002712f4b1
MEMBER_HOST: centosa
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.15

So now we can add more servers.
Server CENTOSB

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE

log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE


transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.89:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off

mysql> CHANGE MASTER TO
MASTER_USER='repl',
MASTER_PASSWORD='replpassword'
FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> CHANGE MASTER TO GET_MASTER_PUBLIC_KEY=1;
Query OK, 0 rows affected (0.02 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (4.03 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | RECOVERING   | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)


Server CENTOSC

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=3
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE

transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.124:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off

mysql> CHANGE MASTER TO
-> MASTER_USER='repl',
-> MASTER_PASSWORD='replpassword'
-> FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> CHANGE MASTER TO GET_MASTER_PUBLIC_KEY=1;
Query OK, 0 rows affected (0.02 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.58 sec)
mysql> SELECT * FROM performance_schema.replication_group_members \G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 1ab30239-5ef6-11e9-9b4a-08002712f4b1
MEMBER_HOST: centosa
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.15

*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 572ca2fa-5eff-11e9-8df9-08002712f4b1
MEMBER_HOST: centosb
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.15

*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: c5f3d1d2-8dd8-11e9-858d-08002773d1b6
MEMBER_HOST: centosc
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.15
3 rows in set (0.00 sec)


So this is all great, but it doesn't always mean the members go online; they can often sit in recovery mode.
I have seen this fail with MySQL crashes so far, so you need to make sure it is stable.
Side Note to address some of those factors --
mysql> START GROUP_REPLICATION;
ERROR 3094 (HY000): The START GROUP_REPLICATION command failed as the applier module failed to start.

mysql> reset slave all;
Query OK, 0 rows affected (0.03 sec)

-- Then start over from Change master command
mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

[ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error on opening a connection to 192.168.111.17:33061 on local port: 33061.'
[ERROR] [MY-011526] [Repl] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: c5f3d1d2-8dd8-11e9-858d-08002773d1b6:1-4 >
[ERROR] [MY-011522] [Repl] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'

https://ronniethedba.wordpress.com/2017/04/22/this-member-has-more-executed-transactions-than-those-present-in-the-group/ 
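
When the extra local transactions on that member are disposable (test writes and the like), the way I have cleared this before rejoining is roughly the following. A sketch only; RESET MASTER wipes the member's binlogs and GTID history, so only do this when you are sure nothing in there is needed:

mysql> STOP GROUP_REPLICATION;
mysql> RESET MASTER;
mysql> START GROUP_REPLICATION;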


 [ERROR] [MY-011620] [Repl] Plugin group_replication reported: 'Fatal error during the recovery process of Group Replication. The server will leave the group.'
[ERROR] [MY-013173] [Repl] Plugin group_replication reported: 'The plugin encountered a critical error and will abort: Fatal error during execution of Group Replication'

SELECT * FROM performance_schema.replication_connection_status\G


My thoughts...
Keep in mind that group replication can be set up in single-primary mode or multi-primary mode.
mysql> select @@group_replication_single_primary_mode\G
*************************** 1. row ***************************
@@group_replication_single_primary_mode: 1

mysql> create database testcentosb;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
You will, of course, get an error if you write to a non-primary node.


group-replication-single-primary-mode=off  <-- added to the cnf files. 
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | RECOVERING   | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | RECOVERING   | PRIMARY     | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+

3 rows in set (0.00 sec)
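
As a side note, on 8.0.13 and later you can also switch between modes at runtime, without editing the cnf files and restarting, using the group replication functions. A quick sketch (the UUID is centosa's member id from above, used to pick it as the new primary):

mysql> SELECT group_replication_switch_to_multi_primary_mode();
mysql> SELECT group_replication_switch_to_single_primary_mode('1ab30239-5ef6-11e9-9b4a-08002712f4b1');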


It is nice, however, if you use Keepalived, MySQL Router, ProxySQL, etc. to handle your traffic, so that it automatically rolls over in case of a failover. We can see from below that it failed over right away when I stopped the primary.

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

[root@centosa]# systemctl stop mysqld

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

[root@centosa]# systemctl start mysqld
[root@centosa]# mysql
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.34 sec)

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | RECOVERING   | SECONDARY   | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)


Now, recovery was still an issue, as it would not simply join back. I had to review all the accounts and steps again, but I did get it back eventually.

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)


I need to test this more, as I am not 100% sold on it yet; I still lean towards Galera replication.

URLs of Interest


  • https://dev.mysql.com/doc/refman/8.0/en/group-replication.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-deploying-in-single-primary-mode.html
  • http://datacharmer.blogspot.com/2017/01/mysql-group-replication-vs-multi-source.html 
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-launching.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-configuring-instances.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-adding-instances.html
  • https://ronniethedba.wordpress.com/2017/04/22/how-to-setup-mysql-group-replication/
  • https://www.digitalocean.com/community/tutorials/how-to-configure-mysql-group-replication-on-ubuntu-16-04 
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-options.html#sysvar_group_replication_group_seeds 
  • https://bugs.mysql.com/bug.php?id=90534
  • https://www.percona.com/blog/2017/02/24/battle-for-synchronous-replication-in-mysql-galera-vs-group-replication/
  • https://lefred.be/content/mysql-group-replication-is-sweet-but-can-be-sour-if-you-misunderstand-it/
  • https://www.youtube.com/watch?v=IfZK-Up03Mw
  • https://mysqlhighavailability.com/mysql-group-replication-a-quick-start-guide/


Monday, June 3, 2019

Max_connections 214 4.15.0-46-generic #49-Ubuntu


So the issue of max_connections dropping from the value set in your my.cnf file down to 214 has been around for a little while on Ubuntu.

As an example, it was noted here back in 2015



I ran into this again recently, and it was resolved with the following steps (the Limit directives go under the [Service] section of the copied unit file).


# cp /lib/systemd/system/mysql.service /etc/systemd/system/
# cd /etc/systemd/system/
# vi mysql.service

LimitNOFILE=infinity
LimitMEMLOCK=infinity

# systemctl daemon-reload
# systemctl restart mysql


Once those steps were completed, the MySQL connections were stable at the value set in the my.cnf file.
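
Once restarted, you can sanity-check both the limit systemd applied and the value MySQL ended up with:

# systemctl show mysql | grep LimitNOFILE
# mysql -e "SELECT @@max_connections;"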

Thursday, April 25, 2019

Slack

Just a simple post here to give support to the Slack community channels.

If you have not joined some of these... this is a great way to communicate with the community.

MySQL -- https://mysqlcommunity.slack.com  https://mysql-casual.slack.com/

PostgreSQL - https://postgresteam.slack.com  https://postgres-slack.herokuapp.com/

MongoDB -- https://mongo-db.slack.com

Of course, Freenode is still available as well -- http://chat.freenode.net (https://freenode.net/)