Monday, March 31, 2014

MySQLUserclone or just Copy MySQL Users

I recently ran across a MySQL Forum post asking how to migrate users onto a new system.

While this could be done a number of ways, I figured it gave me a chance to play with and demo the mysqluserclone tool.

So I have two MySQL instances and only one wiki user:

root@localhost [(none)]> select VERSION();
| VERSION()  |
| 5.6.10-log |
root@localhost [(none)]> show grants for wikiuser@localhost;
| Grants for wikiuser@localhost                                                                                     |
| GRANT USAGE ON *.* TO 'wikiuser'@'localhost' IDENTIFIED BY PASSWORD '1e09502e61120480'                            |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, LOCK TABLES ON `wikidb`.* TO 'wikiuser'@'localhost' |

 select VERSION();
| VERSION()  |
| 5.5.30-log |
show grants for wikiuser@localhost;
ERROR 1141 (42000): There is no such grant defined for user 'wikiuser' on host 'localhost'

So now let's use the mysqluserclone tool.

Usage: mysqluserclone --source=user:pass@host:port:socket --destination=user:pass@host:port:socket joe@localhost sam:secret1@localhost

# mysqluserclone --source=root:<PASSWORD>@localhost:3306 --destination=root:@localhost:3307 wikiuser@localhost wikiuser@localhost
# Source on localhost: ... connected.
# Destination on localhost: ... connected.
# Cloning 1 users...
# Cloning wikiuser@localhost to user wikiuser@localhost
# ...done.

root@localhost [(none)]>  select VERSION();
| VERSION()  |
| 5.5.30-log |
1 row in set (0.00 sec)

root@localhost [(none)]> show grants for wikiuser@localhost;
| Grants for wikiuser@localhost                                                                                     |
| GRANT USAGE ON *.* TO 'wikiuser'@'localhost'                                                                      |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, LOCK TABLES ON `wikidb`.* TO 'wikiuser'@'localhost' |
2 rows in set (0.00 sec)

Notice that the cloned user is missing the password, because I did not set one for it. Oops, that is not good enough.

Granted, it works great if you want to set up new user accounts with new passwords. I could have passed the new password as part of the clone command:

mysqluserclone --source=root:<PASSWORD>@localhost:3306 --destination=root:@localhost:3307 wikiuser@localhost wikiuser:<PASSWORD>@localhost

But what if I do not know the passwords? I want to ensure all the clients keep their same passwords.

I could also use the process below:
  • Gather grants from DB 1 
  • Edit the file for grants I want to move
  • Edit the user file to add comments to make safe sql
  • Load the file into DB 2
select CONCAT('SHOW GRANTS FOR `',USER,'`@',HOST,';') as showgrants FROM mysql.user INTO OUTFILE '/tmp/showgrants.sql';

vi /tmp/showgrants.sql

mysql < /tmp/showgrants.sql  > /tmp/user_grants.sql

vi /tmp/user_grants.sql
:%s/Grants for/#Grants for/g
mysql --port=3307  -u root -p  < /tmp/user_grants.sql 
> show grants for wikiuser@localhost;
| Grants for wikiuser@localhost                                                                            |
| GRANT USAGE ON *.* TO 'wikiuser'@'localhost' IDENTIFIED BY PASSWORD '1e09502e61120480'                   |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, LOCK TABLES ON `wikidb`.* TO 'wikiuser'@'localhost' |

Now I have the same user, host, and password.
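For what it is worth, the vi step above can be scripted. Here is a minimal sketch, using a hypothetical sample file to stand in for the output of `mysql < /tmp/showgrants.sql`, that comments out the "Grants for" header rows with sed and also appends the statement terminators that SHOW GRANTS output lacks:

```shell
# Build a small sample of what the grants dump looks like
# (the file path and contents here are just for illustration).
cat > /tmp/user_grants.sample <<'EOF'
Grants for wikiuser@localhost
GRANT USAGE ON *.* TO 'wikiuser'@'localhost' IDENTIFIED BY PASSWORD '1e09502e61120480'
GRANT SELECT, INSERT ON `wikidb`.* TO 'wikiuser'@'localhost'
EOF

# Comment out the "Grants for ..." header rows so the file is safe SQL.
sed -i 's/^Grants for/#Grants for/' /tmp/user_grants.sample

# SHOW GRANTS rows carry no trailing semicolons; add one to each statement.
sed -i '/^#/!s/$/;/' /tmp/user_grants.sample

cat /tmp/user_grants.sample
```

The resulting file can then be loaded into the destination server with `mysql --port=3307 -u root -p < file`, the same as the hand-edited version.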

Thursday, March 27, 2014

WebScaleSQL Installation attempted... part 1

Well, if you have not seen the news yet, then by all means follow this link and read about WebScaleSQL. Many thanks to the team of engineers from Facebook, Twitter, and Google for their hard work on this. I am curious to see how this develops in the future, once I get it to work.

Here is what I encountered so far. I wanted this to be a successful blog post, instead it will be part one until I get it running.

I figured I would put up a little how-to on installing WebScaleSQL for those of you new to GitHub or any of this. It takes feedback from the community, after all, so the community needs to be able to install and try it out.

For this example I took the pre-built Oracle Linux VirtualBox image. I had one already, but it is available here:

It should be obvious that WebScaleSQL is not intended to run in VirtualBox in production, but this is just a demo. Since anyone can download and use VirtualBox, nobody has an excuse not to try it out. I did have to enable the additional disk that comes with the image so there is enough space. Formatting, partitioning, and mounting the disk can all be done via the disk utility. The finished product for my example is below.

[root@oraclelinux6 oracle]# ls -al /media/
drwx------  3 oracle oracle 4096 Mar 27 16:19 webscaledisk
drwx------  3 oracle oracle 4096 Mar 27 16:21 workspace
[root@oraclelinux6 oracle]# chmod 777 /media/workspace/
[root@oraclelinux6 oracle]# chmod 777 /media/webscaledisk/

The code for WebScaleSQL is available on GitHub. The zip file can be downloaded here. I chose to check out the code via the GitHub code repository instead. If you need help with the SSH keys, see the GitHub reference page.

The git package is not available on Oracle Linux, but that is quickly solved if you plan to use the GitHub version.

[oracle@oraclelinux6 ~]$ sudo bash
[root@oraclelinux6 oracle]# yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel
[root@oraclelinux6 oracle]# wget
[root@oraclelinux6 oracle]# tar -xvzf git-
[root@oraclelinux6 oracle]# cd git-
[root@oraclelinux6 git-]# make prefix=/usr/local all
[root@oraclelinux6 git-]# make prefix=/usr/local install
[root@oraclelinux6 git-]# exit
$ git --version
git version

Once git and the ssh-keys are ready you can download:

[oracle@oraclelinux6 ~]$ cd /media/workspace/
[oracle@oraclelinux6 workspace]$ git clone
Cloning into 'webscalesql-5.6'...
remote: Counting objects: 28637, done.
remote: Compressing objects: 100% (10926/10926), done.
remote: Total 28637 (delta 17074), reused 28419 (delta 16936)
Receiving objects: 100% (28637/28637), 48.01 MiB | 796 KiB/s, done.
Resolving deltas: 100% (17074/17074), done.

OK, so however you decided to get the code, let us now try to get it installed. To be clear, this should be nothing new; it is a basic MySQL source install, just using the WebScaleSQL package. It basically follows the same procedure I used here before.

I needed a few dependencies on this clean system first. You might find you need some as well; it all depends on your system and the results of the cmake command below.

[root@oraclelinux6 workspace]# yum -y install ncurses-devel  readline-devel cmake gcc gcc-c++ bison-devel bison
[root@oraclelinux6 workspace]# wget
[root@oraclelinux6 workspace]# unzip
[root@oraclelinux6 workspace]# cd gmock-1.6.0
[root@oraclelinux6 gmock-1.6.0]# ./configure
[root@oraclelinux6 gmock-1.6.0]# make
[root@oraclelinux6 gmock-1.6.0]# make check
1 of 4 tests failed
Please report to
make[2]: Leaving directory `/media/workspace/gmock-1.6.0/gtest'
make[1]: Leaving directory `/media/workspace/gmock-1.6.0/gtest'

OK, so the next phase (part 2) should be the following, once I get the Google code to work...

By the way, the -DENABLE_DOWNLOADS=1 option should download and install gmock as well, but it also fails.

[root@oraclelinux6 workspace]# groupadd mysql
[root@oraclelinux6 workspace]# useradd -r -g mysql mysql
[root@oraclelinux6 workspace]# ln -s /media/workspace/webscalesql-5.6/ /usr/local/mysql
[root@oraclelinux6 workspace]# chown -R mysql:mysql webscalesql-5.6/
[root@oraclelinux6 workspace]# cd webscalesql-5.6/
[root@oraclelinux6 webscalesql-5.6]# cmake -DENABLE_DOWNLOADS=1 .
make install
./scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data

So it is not ready just yet... but I will get back to you all once I get it working. I have faith I will.

Wednesday, March 26, 2014

Percona CLOUD TOOLS -- MySQL Metrics

So I really like the fact that Percona is reaching out and offering solutions to help analyze MySQL concerns or issues with Percona Cloud Tools.

It is pretty easy to install. The fastest way is to have the Percona YUM repo installed (I have another blog about that if needed), so you can get pt-agent installed from it.

This was done easily because I had a .my.cnf file set up for the root user; otherwise you have to set up the username and password information.

Log into the Percona website first, then go and install the agent:

# pt-agent --install
Step 1 of 12: Verify the user is root: OK
Step 2 of 12: Check Perl module dependencies: OK
Step 3 of 12: Check for crontab: OK
Step 4 of 12: Verify pt-agent is not installed: OK
Step 5 of 12: Verify the API key:
Enter your API key: <API KEY HERE provided on the percona website>
Step 5 of 12: Verify the API key: OK
Step 6 of 12: Connect to MySQL: OK
Step 7 of 12: Check if MySQL is a slave: NO
Step 8 of 12: Create a MySQL user for the agent: OK
Step 9 of 12: Initialize /etc/percona/agent/my.cnf: OK
Step 10 of 12: Initialize /root/.pt-agent.conf: OK
Step 11 of 12: Create the agent: OK
Step 12 of 12: Run the agent: pt-agent has daemonized and is running as PID 16333:

  --lib /var/lib/pt-agent
  --log /var/log/pt-agent.log
  --pid /var/run/

These values can change if a different configuration is received.

Simple as that. Then you just log back into the Percona website to enable and adjust any of your agent settings per system.

Give it about 15 minutes to gather the first batch of data and then you are all set to have data at your fingertips. Just click the "Query Analytics" button provided on the website.

Depending on your branch of MySQL you will get different analytics; obviously Percona would prefer you to use Percona Server 5.5.34 or higher with their tools, but it works with all flavors of MySQL.

Once your agent has sent data back to Percona, you will be able to display graphs on the server summary per query count, query time, lock time, rows sent, rows examined, and query length, as well as the information provided via the query profile.

"Filesort, Filesort on disk, Full join, Full scan, Query cache hits, Temporary tables, and Temporary tables on disk are available only for Percona Server." --

MySQL and OS X install and remove

So this is in reference to this bug:
I happened to run across it, so I figured, what the heck, test it out with 5.7 as well.
I was also able to reproduce this error with the latest DMR version (mysql-5.7.3-m13 at the time of this test). I was using the provided DMG and the tar.gz file.

dyld: Symbol not found: _strnlen
  Referenced from: /usr/local/mysql/bin/my_print_defaults
  Expected in: /usr/lib/libSystem.B.dylib

The DMG has a nice little installer and so on, but as you can see it did not work.

So to test the 5.5 version, you have to remove MySQL from OS X:

rm -rf /usr/local/mysql*
rm -rf /Library/StartupItems/MySQLCOM
rm -Rf /private/var/db/receipts/*mysql*

I just needed those, but this site has a list of other references if you need them.

I was able to confirm that 5.5.36 does install and run. So you are just out of luck for the moment if you want a higher version of MySQL.

One note: read the information screen or the readme.txt file. It gives helpful information.

 mysql> select @@hostname;
| @@hostname           |
| MacBook-Pro-15.local |
1 row in set (0.01 sec)

mysql> select VERSION();
| VERSION() |
| 5.5.36    |
1 row in set (0.00 sec)

Tuesday, March 11, 2014

Planning MySQL downtime and other Status variables for review

We all know that MySQL is stable and can run for a very long time without restarts. Every so often, though, we need to restart it to adjust some variables or to upgrade. Some might assume that overnight is the best time to do this because their users are asleep. However, overnight often also includes backups and other crontab-related scripts that could be doing data mining and so on.

So how can you get a guideline on when you have the least activity on the server? Each application and database is different, so you need to take into account what related activity is running during the different time frames. If you start to log the THREADS_CONNECTED status variable every minute, or every 5 to 10 minutes (this depends on how loaded your system is; once an hour might even be fine), you will soon gather the number of MySQL connections across your logged time frame.

This can be done in numerous ways. It is already provided in many enterprise-level tools, and you can write scripts and so on, but just to keep it in MySQL, I will use an event and a routine for this simple example.

Create a simple table:

CREATE TABLE `threads_count` (
`threads_count_id` int(11) NOT NULL AUTO_INCREMENT,
`variable_value` varchar(255) DEFAULT NULL,
`time_recorded` datetime DEFAULT NULL,
PRIMARY KEY (`threads_count_id`)
);

Create a simple routine:

delimiter //
CREATE PROCEDURE threads_tally()
BEGIN
INSERT INTO threads_count SELECT NULL, VARIABLE_VALUE, NOW() FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME IN ('Threads_connected');
END //
delimiter ;

Create a simple event:

SET GLOBAL event_scheduler = ON;
CREATE EVENT `threads_tally`
ON SCHEDULE EVERY 5 MINUTE
COMMENT 'keeps a count of Threads_connected'
DO CALL threads_tally();

Now, by implementing the above example, you will have a table that you can select the values from (likely to put into a graph, or just for review) so you can see when the least number of connections existed on your server. Depending on your application and traffic, you can keep this running long enough to gain a valid perspective and then disable the event.

select variable_value , time_recorded FROM threads_count;
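To turn that raw log into an actual guideline, a query along these lines (a sketch, assuming the threads_count table above) averages the connection count per hour of the day so the quietest window stands out:

```sql
-- Average and peak connections per hour of day; the smallest buckets
-- are the best candidate windows for planned downtime.
SELECT HOUR(time_recorded) AS hour_of_day,
       AVG(variable_value) AS avg_threads,
       MAX(variable_value + 0) AS max_threads  -- +0 forces a numeric compare
FROM threads_count
GROUP BY hour_of_day
ORDER BY avg_threads;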

Now you can of course take this a step further, if you do not have enterprise tools, and keep a log of all of your status variables for review:

Table for values:

CREATE TABLE `status_review` (
`status_review_id` int(11) NOT NULL AUTO_INCREMENT,
`variable_name` varchar(255) DEFAULT NULL,
`variable_value` varchar(255) DEFAULT NULL,
`time_recorded` datetime DEFAULT NULL,
PRIMARY KEY (`status_review_id`)
);

Routine to gather values:

delimiter //
CREATE PROCEDURE status_reviews()
BEGIN
INSERT INTO status_review SELECT NULL, VARIABLE_NAME, VARIABLE_VALUE, NOW() FROM information_schema.GLOBAL_STATUS;
END //
delimiter ;

Event to execute routine:

SET GLOBAL event_scheduler = ON;
CREATE EVENT `status_reviews`
ON SCHEDULE EVERY 1 HOUR
COMMENT 'keeps an eye on status values'
DO CALL status_reviews();

You might only want certain variable names and values for review, and you can adjust the query in the routine accordingly. The run interval depends on your needs; I used an hour in this example.

This is of course going to generate a lot of information over time, so some cleaning out of these tables might be needed. It all depends on how often you log and review the information provided.
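As a sketch of that cleanup (assuming both tables carry a time_recorded column), a periodic DELETE can cap the retained history, for example at 30 days:

```sql
-- Prune samples older than 30 days; run by hand or from another event.
DELETE FROM threads_count WHERE time_recorded < NOW() - INTERVAL 30 DAY;
DELETE FROM status_review WHERE time_recorded < NOW() - INTERVAL 30 DAY;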

While you will see a steady increase in many of the values because of your data growth and system use, you will also be able to pinpoint spikes in the data and correlate them to your application and system use.

MySQL Enterprise Monitor, for example, will already graph all of this type of information for you, but this is a way you can do it yourself with a little time and review of the data.