Monday, June 17, 2019

MySQL Group Replication

So MySQL's group replication came out with MySQL 5.7. Now that it has been out a little while, people are starting to ask more about it.
Below is an example of how to set this up, along with a few pain points I ran into as I poked around with it.
I am using three different servers.

 Server CENTOSA

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.02 sec)

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE

log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE

transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.17:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off
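Before bootstrapping, a quick sanity check that the cnf actually contains what group replication needs can save a START GROUP_REPLICATION error later. A minimal sketch (the settings list and the sample file here are illustrative, not exhaustive; point CNF at your real my.cnf):

```shell
#!/bin/bash
# Sanity-check that a cnf contains the settings group replication depends on.
# A sample cnf is written to a temp file so this runs standalone; in practice,
# point CNF at the real file (e.g. /etc/my.cnf).
CNF=$(mktemp)
cat > "$CNF" <<'EOF'
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_format=ROW
transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_local_address= "192.168.111.17:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
EOF

missing=0
for setting in gtid_mode enforce_gtid_consistency binlog_format \
               transaction_write_set_extraction group_replication_group_name \
               group_replication_local_address group_replication_group_seeds; do
    grep -q "^${setting}" "$CNF" || { echo "MISSING: $setting"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required group replication settings present"
rm -f "$CNF"
```

With the sample file above, every setting is found and the "all required" line prints.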

mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER repl@'%' IDENTIFIED BY 'replpassword';
mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;


CHANGE MASTER TO
MASTER_USER='repl',
MASTER_PASSWORD='replpassword'
FOR CHANNEL 'group_replication_recovery';


mysql> SET GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)


mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.11 sec)


mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)


mysql> SELECT * FROM performance_schema.replication_group_members \G

*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 1ab30239-5ef6-11e9-9b4a-08002712f4b1
MEMBER_HOST: centosa
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.15

So now we can add more servers.
Server CENTOSB

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE

log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE


transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.89:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off

mysql> CHANGE MASTER TO
MASTER_USER='repl',
MASTER_PASSWORD='replpassword'
FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> CHANGE MASTER TO GET_MASTER_PUBLIC_KEY=1;
Query OK, 0 rows affected (0.02 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (4.03 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | RECOVERING   | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)


Server CENTOSC

vi my.cnf
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=3
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE

transaction_write_set_extraction=XXHASH64
group_replication_group_name="90d8b7c8-5ce1-490e-a448-9c8d176b54a8"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.111.124:33061"
group_replication_group_seeds= "192.168.111.17:33061,192.168.111.89:33061,192.168.111.124:33061"
group_replication_bootstrap_group=off

mysql> CHANGE MASTER TO
-> MASTER_USER='repl',
-> MASTER_PASSWORD='replpassword'
-> FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> CHANGE MASTER TO GET_MASTER_PUBLIC_KEY=1;
Query OK, 0 rows affected (0.02 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.58 sec)
mysql> SELECT * FROM performance_schema.replication_group_members \G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 1ab30239-5ef6-11e9-9b4a-08002712f4b1
MEMBER_HOST: centosa
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.15

*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 572ca2fa-5eff-11e9-8df9-08002712f4b1
MEMBER_HOST: centosb
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.15

*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: c5f3d1d2-8dd8-11e9-858d-08002773d1b6
MEMBER_HOST: centosc
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.15
3 rows in set (0.00 sec)


So this is all great, but it doesn't always mean the members come online; they can often sit in recovery mode.
I have seen this fail with MySQL crashes, so you need to make sure the setup is stable.
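Since members can sit in RECOVERING for a while, it is worth watching member state from a script. A sketch that parses the tab-separated rows you would get from `mysql -N -B -e "SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members"` (sample text is inlined so the sketch runs standalone):

```shell
#!/bin/bash
# Flag any group member that is not ONLINE.
# In practice, feed this from:
#   mysql -N -B -e "SELECT MEMBER_HOST, MEMBER_STATE
#                   FROM performance_schema.replication_group_members"
members=$(printf 'centosa\tONLINE\ncentosb\tRECOVERING\ncentosc\tONLINE\n')

not_online=$(echo "$members" | awk -F'\t' '$2 != "ONLINE" { print $1 " is " $2 }')
if [ -n "$not_online" ]; then
    echo "WARNING: $not_online"
else
    echo "all members ONLINE"
fi
```

With the sample data this prints a warning that centosb is RECOVERING; wired to a real query, it could feed cron or your monitoring system.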
A side note on addressing some of those failures --
mysql> START GROUP_REPLICATION;
ERROR 3094 (HY000): The START GROUP_REPLICATION command failed as the applier module failed to start.

mysql> reset slave all;
Query OK, 0 rows affected (0.03 sec)

-- Then start over from the CHANGE MASTER command
mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

[ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error on opening a connection to 192.168.111.17:33061 on local port: 33061.'
[ERROR] [MY-011526] [Repl] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: c5f3d1d2-8dd8-11e9-858d-08002773d1b6:1-4 >
[ERROR] [MY-011522] [Repl] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'

https://ronniethedba.wordpress.com/2017/04/22/this-member-has-more-executed-transactions-than-those-present-in-the-group/ 


 [ERROR] [MY-011620] [Repl] Plugin group_replication reported: 'Fatal error during the recovery process of Group Replication. The server will leave the group.'
[ERROR] [MY-013173] [Repl] Plugin group_replication reported: 'The plugin encountered a critical error and will abort: Fatal error during execution of Group Replication'

SELECT * FROM performance_schema.replication_connection_status\G


My thoughts...
Keep in mind that group replication can be set up in single-primary mode or multi-primary mode.
mysql> select @@group_replication_single_primary_mode\G
*************************** 1. row ***************************
@@group_replication_single_primary_mode: 1

mysql> create database testcentosb;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
You will, of course, get an error if you write to a non-primary node.


group-replication-single-primary-mode=off  <-- added to the cnf files. 
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | RECOVERING   | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | RECOVERING   | PRIMARY     | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+

3 rows in set (0.00 sec)


This is where Keepalived, MySQL Router, ProxySQL, etc. come in to handle your traffic and roll over automatically in case of a failover. We can see from below that the group failed over right away when I stopped the primary.

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

[root@centosa]# systemctl stop mysqld

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

[root@centosa]# systemctl start mysqld
[root@centosa]# mysql
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.34 sec)

mysql> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | RECOVERING   | SECONDARY   | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)


Now, recovery was still an issue, as the node would not simply join back. I had to review all the accounts and steps again, but I did get it back eventually.

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ab30239-5ef6-11e9-9b4a-08002712f4b1 | centosa     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | 572ca2fa-5eff-11e9-8df9-08002712f4b1 | centosb     |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | c5f3d1d2-8dd8-11e9-858d-08002773d1b6 | centosc     |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)


I need to test this more, as I am not 100% sold on it yet; I still lean toward Galera replication.

URLS of Interest


  • https://dev.mysql.com/doc/refman/8.0/en/group-replication.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-deploying-in-single-primary-mode.html
  • http://datacharmer.blogspot.com/2017/01/mysql-group-replication-vs-multi-source.html 
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-launching.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-configuring-instances.html
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-adding-instances.html
  • https://ronniethedba.wordpress.com/2017/04/22/how-to-setup-mysql-group-replication/
  • https://www.digitalocean.com/community/tutorials/how-to-configure-mysql-group-replication-on-ubuntu-16-04 
  • https://dev.mysql.com/doc/refman/8.0/en/group-replication-options.html#sysvar_group_replication_group_seeds 
  • https://bugs.mysql.com/bug.php?id=90534
  • https://www.percona.com/blog/2017/02/24/battle-for-synchronous-replication-in-mysql-galera-vs-group-replication/
  • https://lefred.be/content/mysql-group-replication-is-sweet-but-can-be-sour-if-you-misunderstand-it/
  • https://www.youtube.com/watch?v=IfZK-Up03Mw
  • https://mysqlhighavailability.com/mysql-group-replication-a-quick-start-guide/


    Monday, June 3, 2019

    Max_connections 214 4.15.0-46-generic #49-Ubuntu


    So the issue of max_connections dropping from the value set in your my.cnf file down to 214 has been around for a little while on Ubuntu.

    As an example, it was noted here back in 2015



    I ran into this again recently, and it was resolved with the following steps.


    # cp /lib/systemd/system/mysql.service /etc/systemd/system/
    # cd /etc/systemd/system/
    # vi mysql.service

    Add the following under the [Service] section:

    LimitNOFILE=infinity
    LimitMEMLOCK=infinity

    # systemctl daemon-reload
    # systemctl restart mysql


    Once those steps were completed, max_connections held steady at the value set in the my.cnf file.
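As a side note, the same result can be had with a systemd drop-in instead of copying the whole unit file, which has the advantage of surviving package upgrades. A sketch (this is the conventional drop-in path; `systemctl edit mysql` creates it for you):

```
# /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=infinity
LimitMEMLOCK=infinity
```

Then run `systemctl daemon-reload` and `systemctl restart mysql` as above.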

    Thursday, April 25, 2019

    Slack

    Just a simple post here to give support to the slack community channels.

    If you have not joined some of these... this is a great way to communicate with the community.

    MySQL -- https://mysqlcommunity.slack.com  https://mysql-casual.slack.com/

    PostgreSQL - https://postgresteam.slack.com  https://postgres-slack.herokuapp.com/

    MongoDB -- https://mongo-db.slack.com

    Of course, Freenode is still available as well -- http://chat.freenode.net (https://freenode.net/)

    Sunday, April 14, 2019

    Simple KeepaliveD set up

    So keepalived has been around for quite a while now... however, it is still a mystery to many.
    So this is a very simple example of how keepalived can work with MySQL. Hopefully, this can help those with questions.

    We will use a simple master-to-slave setup, meaning we write to one node unless a failover event moves writes to the second.

    1st - install keepalived


    # yum search keepalived
    keepalived.x86_64 : Load balancer and high availability service

      Name and summary matches only, use "search all" for everything.
    # yum -y install keepalived

    You should now have a config file 

    # ls -ltr /etc/keepalived/keepalived.conf 

    Keep the original, as you always back up... right?
    # cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig

    So you need to figure out an IP address you can use for your virtual IP. I picked 192.168.0.123 for this example. 

    Next, we will set up a script to be used for our new config file. 

    A few things I did here:
    I kept the .cnf files for keepalived and its log all in /etc/keepalived.
    This keeps the example simple; you can do the same or use your own .cnf files.

    A script:

    cat /etc/keepalived/keepalived_check.sh 
    #!/bin/bash
    # Monitor MySQL status.
    # If MySQL on this node is dead and the other node's MySQL is alive,
    # stop keepalived here so the other node binds the virtual IP.

    export MYSQL_HOME=/etc/keepalived/
    export PATH=$MYSQL_HOME/bin:$PATH

    mysql="/usr/bin/mysql"
    mysqladmin="/usr/bin/mysqladmin"
    delay_file="$MYSQL_HOME/slave_delay_second.log"

    ALIVE=$($mysqladmin --defaults-file=$MYSQL_HOME/.my.localhost.cnf ping | grep -c alive)
    REMOTEALIVE=$($mysqladmin --defaults-file=$MYSQL_HOME/.my.remotehost.cnf ping | grep -c alive)

    if [[ $ALIVE -ne 1 ]]
    then
        # local MySQL is down; only fail over if the remote MySQL is up
        if [[ $REMOTEALIVE -eq 1 ]]
        then
            systemctl stop keepalived
        fi
    fi

    exit 0  # all done
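The failover decision in that script can be exercised without a live MySQL by stubbing `mysqladmin` as a shell function. This is test scaffolding only, not part of the real check script:

```shell
#!/bin/bash
# Stub mysqladmin so the failover decision can be tested without MySQL.
LOCAL_UP=0    # pretend the local mysqld is down
REMOTE_UP=1   # pretend the remote mysqld is up

mysqladmin() {
    case "$*" in
        *localhost*)  [[ $LOCAL_UP  -eq 1 ]] && echo "mysqld is alive" ;;
        *remotehost*) [[ $REMOTE_UP -eq 1 ]] && echo "mysqld is alive" ;;
    esac
    return 0
}

# Same counting logic as the real check script
ALIVE=$(mysqladmin --defaults-file=.my.localhost.cnf ping | grep -c alive)
REMOTEALIVE=$(mysqladmin --defaults-file=.my.remotehost.cnf ping | grep -c alive)

if [[ $ALIVE -ne 1 && $REMOTEALIVE -eq 1 ]]; then
    echo "would stop keepalived"   # the real script runs: systemctl stop keepalived
else
    echo "keepalived stays put"
fi
```

With local down and remote up, this prints "would stop keepalived", which is exactly the case where the VIP should move.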

    New config file

    cat /etc/keepalived/keepalived.conf
    global_defs {

          notification_email {
            anothermysqldba@gmail.com 
          }

          notification_email_from anothermysqldba@gmail.com 
          smtp_server localhost
          smtp_connect_timeout 30
    }

    vrrp_script check_mysql {
       script "/etc/keepalived/keepalived_check.sh"
       interval 2
       weight 2
    }

    vrrp_instance VI_1 {

          state MASTER
          interface enp0s8   # the interface that holds your real IP (ip link show)
          virtual_router_id 51
          priority 101
          advert_int 1
          nopreempt          # only needed on the higher-priority node

          authentication {
            auth_type PASS
            auth_pass 1111
          }

          track_script {
            check_mysql
          }

          virtual_ipaddress {
            192.168.0.123 
          }
    }



    This is all great but does it work....

    So we have 2 hosts

    [root@centosa keepalived]# hostname

    centosa

    [root@centosb keepalived]# hostname
    centosb

    Start keepalived  

    [root@centosa keepalived]# systemctl status keepalived
    ● keepalived.service - LVS and VRRP High Availability Monitor
       Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    [root@centosa keepalived]# systemctl restart keepalived
    [root@centosa keepalived]# systemctl status keepalived
    keepalived.service - LVS and VRRP High Availability Monitor
       Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
       Active: active (running)

    [root@centosa keepalived]# ssh 192.168.0.123 'hostname'
    root@192.168.0.123's password: 

    centosa

    Prove the connections already work

    [root@centosa keepalived]# mysql --defaults-file=.my.remotehost.cnf --host=192.168.0.101   -e "select @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | centosb    |
    +------------+
    [root@centosa keepalived]# mysql --defaults-file=.my.remotehost.cnf --host=192.168.0.102   -e "select @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | centosa    |
    +------------+

    Double check that it is running... 

    [root@centosa keepalived]# systemctl status keepalived | grep active
       Active: active 

    [root@centosb keepalived]# systemctl status keepalived | grep active
       Active: active 

    Test the current VIP... stop MySQL and watch the same VIP change hosts... 

    [root@centosa keepalived]# mysql --defaults-file=.my.remotehost.cnf --host=192.168.0.123   -e "select @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | centosa    |
    +------------+
    [root@centosa keepalived]# systemctl stop mysqld 
    [root@centosa keepalived]# mysql --defaults-file=.my.remotehost.cnf --host=192.168.0.123   -e "select @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | centosb    |
    +------------+




    Tuesday, April 9, 2019

    Sometimes the slow database.. is not the database...

    So I was recently asked to look into why the updated MySQL 5.6 was slower than the older 5.5.

    So I started by poking around looking over the standard variables and caches and etc.

    The test case was a simple routine that took about twice as long to run on 5.6 as it did on 5.5. 

    To add to the mix, the 5.6 instance had double the innodb_buffer_pool_size and, of course, more RAM overall.  

    So I started some tests with mysqlslap...

    The mysqlslap tests show it slower on 5.6:

    5.6:
    mysqlslap --defaults-file=./.my.cnf --concurrency=150 --iterations=130 --query=/test.sql --create-schema=applicationdata --verbose 
    Benchmark
    Average number of seconds to run all queries: 0.028 seconds
    Minimum number of seconds to run all queries: 0.019 seconds
    Maximum number of seconds to run all queries: 0.071 seconds
    Number of clients running queries: 150
    Average number of queries per client: 1

    5.5:
    mysqlslap --defaults-file=./.my.cnf --concurrency=150 --iterations=130 --query=/test.sql --create-schema=applicationdata --verbose 
    Benchmark
    Average number of seconds to run all queries: 0.015 seconds
    Minimum number of seconds to run all queries: 0.011 seconds
    Maximum number of seconds to run all queries: 0.024 seconds
    Number of clients running queries: 150
    Average number of queries per client: 1
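As a quick check on the "about twice as long" observation, the ratio of the two averages can be computed directly (numbers taken from the runs above):

```shell
# 5.6 averaged 0.028s per run vs 0.015s on 5.5
ratio=$(awk 'BEGIN { printf "%.1f", 0.028 / 0.015 }')
echo "5.6 took ${ratio}x as long as 5.5 on average"
```

So roughly 1.9x, in line with the observed slowdown.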


    All of this goes against the public benchmarks 
    http://dimitrik.free.fr/blog/archives/2013/02/mysql-performance-mysql-56-ga-vs-mysql-55-32cores.html 

    So I checked at the disk level --

    5.6:
    # dd if=/dev/zero of=test bs=1048576 count=2048
    2048+0 records in
    2048+0 records out
    2147483648 bytes (2.1 GB) copied, 25.7401 s, 83.4 MB/s

    # dd if=test of=/dev/null bs=1048576
    2048+0 records in
    2048+0 records out
    2147483648 bytes (2.1 GB) copied, 29.1527 s, 73.7 MB/s

    5.5:
    # dd if=/dev/zero of=test bs=1048576 count=2048
    2048+0 records in
    2048+0 records out
    2147483648 bytes (2.1 GB) copied, 19.9214 seconds, 108 MB/s

    # dd if=test of=/dev/null bs=1048576
    2048+0 records in
    2048+0 records out
    2147483648 bytes (2.1 GB) copied, 20.0243 seconds, 107 MB/s
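dd's MB/s figure is just bytes divided by elapsed seconds (in SI megabytes), so the reported rates can be double-checked; for the 5.6 host's write test:

```shell
# 2147483648 bytes written in 25.7401 s on the 5.6 host
rate=$(awk 'BEGIN { printf "%.1f", 2147483648 / 25.7401 / 1e6 }')
echo "write throughput: ${rate} MB/s"
```

which matches the 83.4 MB/s dd reported.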



    Here the disks on the 5.6 host are slower regardless of MySQL. So in this case... look to fixing the disk speed; MySQL itself was running fine.