Thursday, April 25, 2019

Slack

Just a simple post here to show support for the Slack community channels.

If you have not joined some of these yet, take a look: they are a great way to communicate with the community.

MySQL -- https://mysqlcommunity.slack.com  https://mysql-casual.slack.com/

PostgreSQL - https://postgresteam.slack.com  https://postgres-slack.herokuapp.com/

MongoDB -- https://mongo-db.slack.com

Of course, Freenode is still available as well -- http://chat.freenode.net (https://freenode.net/)

Sunday, April 14, 2019

Simple KeepaliveD set up

So keepalived has been around for quite a while now... however, it is still a mystery to many.
So this is a very simple example of how keepalived can work with MySQL. Hopefully this can help those with questions.

We will have a simple master-to-slave setup, meaning we write to one node unless some event fails us over to the second.

1st - install keepalived


# yum search keepalived
keepalived.x86_64 : Load balancer and high availability service

  Name and summary matches only, use "search all" for everything.
# yum -y install keepalived

You should now have a config file 

# ls -ltr /etc/keepalived/keepalived.conf 

Keep a copy of the original, since you always back up... right?
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig

So you need to figure out an IP address you can use for your virtual IP (VIP). I picked 192.168.0.123 for this example. 

Next, we will set up a script to be used for our new config file. 

A few things I did here:
I kept the .cnf files and the log for keepalived together in /etc/keepalived.
This keeps the example simple, so you can do the same or use your own .cnf files.
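For reference, the two option files the check script below reads could look like this. The user and password here are placeholders (pick your own monitoring account), and the remote file should point at the other node's address:

```ini
# /etc/keepalived/.my.localhost.cnf
[client]
user=keepalived_chk
password=ChangeMe123
host=localhost

# /etc/keepalived/.my.remotehost.cnf
[client]
user=keepalived_chk
password=ChangeMe123
host=192.168.0.102
```

Since these hold credentials, chmod 600 them.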

A script (make it executable with chmod +x so keepalived can run it):

cat /etc/keepalived/keepalived_check.sh 
#!/bin/bash

# monitor mysql status:
# if this node's mysql is dead and its slave is alive,
# then stop keepalived here so the other node binds the VIP.

export MYSQL_HOME=/etc/keepalived/
export PATH=$MYSQL_HOME/bin:$PATH

mysql="/usr/bin/mysql"
mysqladmin="/usr/bin/mysqladmin"
delay_file="$MYSQL_HOME/slave_delay_second.log"
slave_host=$1

ALIVE=$($mysqladmin --defaults-file=$MYSQL_HOME/.my.localhost.cnf ping | grep alive | wc -l);
REMOTEALIVE=$($mysqladmin --defaults-file=$MYSQL_HOME/.my.remotehost.cnf ping | grep alive | wc -l);

if [[ $ALIVE -ne 1 ]]
then
        #echo "MySQL is down"
        if [[ $REMOTEALIVE -eq 1 ]]
        then
                #echo "Shutting down keepalived"
                systemctl stop keepalived
        fi
#else
        #echo "MySQL is up"
        #date
fi

exit 0 # all done
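The ALIVE/REMOTEALIVE checks rely on mysqladmin ping printing "mysqld is alive" on success; on failure it typically prints a connect error with no "alive" in it. So the grep | wc pipeline yields 1 or 0:

```shell
# What the check script's pipeline sees (canned output, so it runs anywhere):
up=$(printf 'mysqld is alive\n' | grep alive | wc -l)
down=$(printf 'connect to server at localhost failed\n' | grep alive | wc -l)
echo "$up $down"
# prints 1 0
```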

New config file:

cat /etc/keepalived/keepalived.conf
global_defs {
      notification_email {
        anothermysqldba@gmail.com
      }
      notification_email_from anothermysqldba@gmail.com
      smtp_server localhost
      smtp_connect_timeout 30
}

vrrp_script check_mysql {
   script "/etc/keepalived/keepalived_check.sh"
   interval 2
   weight 2
}

vrrp_instance VI_1 {
      state MASTER
      interface enp0s8  # <--- the interface that holds your real IP (find it with ip link show or /sbin/ifconfig)
      virtual_router_id 51
      priority 101
      advert_int 1
      nopreempt  # only needed on the higher-priority node
      authentication {
        auth_type PASS
        auth_pass 1111
      }
      track_script {
        check_mysql
      }
      virtual_ipaddress {
        192.168.0.123
      }
}
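Once both nodes are running, you can tell which one currently holds the VIP by looking for it in the output of ip -4 addr show on the interface. A tiny helper makes that scriptable (a sketch: has_vip is a made-up name, and it takes the command output as text so it can be tested offline):

```shell
# has_vip: succeed if the given VIP appears in the supplied `ip -4 addr` output.
has_vip() {
    local addr_output="$1" vip="$2"
    printf '%s\n' "$addr_output" | grep -qF "inet ${vip}/"
}

# On a live node you would call:
#   has_vip "$(ip -4 addr show enp0s8)" 192.168.0.123 && echo "this node holds the VIP"
```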



This is all great, but does it work?

So we have 2 hosts

[root@centosa keepalived]# hostname

centosa

[root@centosb keepalived]# hostname
centosb

Start keepalived  

[root@centosa keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@centosa keepalived]# systemctl restart keepalived
[root@centosa keepalived]# systemctl status keepalived
keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running)

[root@centosa keepalived]# ssh 192.168.0.123 'hostname'
root@192.168.0.123's password: 

centosa

Prove the connections already work

[root@centosa keepalived]# mysql --defaults-file=.my.remotehost.cnf --host=192.168.0.101   -e "select @@hostname"
+------------+
| @@hostname |
+------------+
| centosb    |
+------------+
[root@centosa keepalived]# mysql --defaults-file=.my.remotehost.cnf --host=192.168.0.102   -e "select @@hostname"
+------------+
| @@hostname |
+------------+
| centosa    |
+------------+

Double check that it is running... 

[root@centosa keepalived]# systemctl status keepalived | grep active
   Active: active 

[root@centosb keepalived]# systemctl status keepalived | grep active
   Active: active 

Test the current VIP... stop MySQL and watch the same VIP move to the other host:

[root@centosa keepalived]# mysql --defaults-file=.my.remotehost.cnf --host=192.168.0.123   -e "select @@hostname"
+------------+
| @@hostname |
+------------+
| centosa    |
+------------+
[root@centosa keepalived]# systemctl stop mysqld 
[root@centosa keepalived]# mysql --defaults-file=.my.remotehost.cnf --host=192.168.0.123   -e "select @@hostname"
+------------+
| @@hostname |
+------------+
| centosb    |
+------------+
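Why did centosb pick up the VIP? The check script on centosa stopped keepalived, taking centosa out of the VRRP election, and the election simply goes to the highest advertised priority among the nodes still running. A toy sketch of that rule (vrrp_winner is a made-up helper; real VRRP also breaks priority ties by IP address):

```shell
# vrrp_winner: given name:priority pairs, print the node with the highest priority.
vrrp_winner() {
    printf '%s\n' "$@" | sort -t: -k2,2 -rn | head -1 | cut -d: -f1
}

vrrp_winner centosa:101 centosb:100   # both alive -> centosa
vrrp_winner centosb:100               # centosa's keepalived stopped -> centosb
```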




Tuesday, April 9, 2019

Sometimes the slow database.. is not the database...

So I was recently asked to look into why an updated MySQL 5.6 was slower than the older 5.5.

So I started by poking around, looking over the standard variables, caches, and so on.

The test case was a simple routine that took about twice as long to run on 5.6 than it did on 5.5. 

To add to the mix, the 5.6 version had double the innodb_buffer_pool_size and, of course, more RAM overall.

So I started some tests with mysqlslap...

The mysqlslap tests show it slower on 5.6:

5.6:
mysqlslap --defaults-file=./.my.cnf --concurrency=150 --iterations=130 --query=/test.sql --create-schema=applicationdata --verbose 
Benchmark
Average number of seconds to run all queries: 0.028 seconds
Minimum number of seconds to run all queries: 0.019 seconds
Maximum number of seconds to run all queries: 0.071 seconds
Number of clients running queries: 150
Average number of queries per client: 1

5.5:
mysqlslap --defaults-file=./.my.cnf --concurrency=150 --iterations=130 --query=/test.sql --create-schema=applicationdata --verbose 
Benchmark
Average number of seconds to run all queries: 0.015 seconds
Minimum number of seconds to run all queries: 0.011 seconds
Maximum number of seconds to run all queries: 0.024 seconds
Number of clients running queries: 150
Average number of queries per client: 1
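That averages out to roughly a 1.9x slowdown on 5.6 (0.028 s vs 0.015 s per iteration), in line with the application routine taking about twice as long:

```shell
awk 'BEGIN { printf "%.1fx slower\n", 0.028 / 0.015 }'
# prints 1.9x slower
```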


All of this goes against the public benchmarks 
http://dimitrik.free.fr/blog/archives/2013/02/mysql-performance-mysql-56-ga-vs-mysql-55-32cores.html 

So I checked disk level --

5.6:
# dd if=/dev/zero of=test bs=1048576 count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.7401 s, 83.4 MB/s

# dd if=test of=/dev/null bs=1048576
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 29.1527 s, 73.7 MB/s

5.5:
# dd if=/dev/zero of=test bs=1048576 count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 19.9214 seconds, 108 MB/s

# dd if=test of=/dev/null bs=1048576
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 20.0243 seconds, 107 MB/s
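dd's throughput figure is just bytes copied divided by elapsed seconds, in SI megabytes, so the numbers above are easy to reproduce:

```shell
# 2 GiB written in 25.7401 s on the 5.6 host:
awk 'BEGIN { printf "%.1f MB/s\n", 2147483648 / 25.7401 / 1000000 }'
# prints 83.4 MB/s
# 2 GiB written in 19.9214 s on the 5.5 host:
awk 'BEGIN { printf "%.0f MB/s\n", 2147483648 / 19.9214 / 1000000 }'
# prints 108 MB/s
```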



Here the disks on the 5.6 host are slower regardless of MySQL. So in this case, look to fixing the disk speed first; MySQL was running fine and will continue to.