So of course Percona has documentation that explains the process. The goal of this blog is to go into a little more detail in the hope that it can help someone.
Hyperlinks for review:
- CentOS - http://www.percona.com/doc/percona-xtradb-cluster/5.5/howtos/cenots_howto.html
- Ubuntu - http://www.percona.com/doc/percona-xtradb-cluster/5.5/howtos/ubuntu_howto.html
- Installing Percona XtraDB Cluster from Binaries -
- Firewall has been set up to allow connecting to ports 3306, 4444, 4567 and 4568
- Stop iptables for internal local networks, or adjust the iptables rules.
/etc/init.d/iptables stop
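If you would rather keep iptables running, something like the following should work (just a sketch; adjust the 192.168.0.0/24 subnet to match your own internal network):
# iptables -I INPUT -s 192.168.0.0/24 -p tcp -m multiport --dports 3306,4444,4567,4568 -j ACCEPT
# service iptables save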
- SELinux is disabled
echo 0 >/selinux/enforce
vi /etc/selinux/config
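The echo above only turns enforcement off for the running system; to keep SELinux disabled across reboots, set the following in /etc/selinux/config:
SELINUX=disabled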
- Set up SSH keys and place them in authorized_keys, so that each server's id_rsa.pub value is present in authorized_keys on all servers.
# ssh-keygen -t rsa
# cd /root/.ssh/
# cp id_rsa.pub authorized_keys
# chmod 600 /root/.ssh/authorized_keys
# chmod 700 /root/.ssh/
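The steps above only set up the key locally; a quick way to get each node's public key onto the other servers is ssh-copy-id (the hostnames here are placeholders for your own):
# ssh-copy-id root@node2
# ssh-copy-id root@node3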
So I started with a basic server install of CentOS 6.5.
# yum -y install http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
# yum -y install http://mirror.pnl.gov/epel/6/x86_64/epel-release-6-8.noarch.rpm
# wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-percona http://www.percona.com/downloads/RPM-GPG-KEY-percona
# yum -y install socat
To avoid any conflicts I removed mysql-libs and its related dependencies.
# rpm -e mysql-libs postfix cronie redhat-lsb-core redhat-lsb-printing redhat-lsb-graphics libcgroup numad redhat-lsb sysstat crontabs cronie-anacron redhat-lsb-compat
Then I installed the Percona Cluster packages.
# yum -y install Percona-XtraDB-Cluster-full-56
[root@node1 ~]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)......... SUCCESS!
mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"
Now we can reinstall the packages we removed earlier, on each node:
yum -y install postfix cronie redhat-lsb-core redhat-lsb-printing redhat-lsb-graphics libcgroup numad redhat-lsb sysstat crontabs cronie-anacron redhat-lsb-compat
Repeating the above steps on nodes 2 and 3 installs the packages, so we can configure the cluster next.
[root@node2 ~]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)......... SUCCESS!
[root@node3 ~]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)........ SUCCESS!
While we have three instances of MySQL running, it is not yet a cluster.
Configuring the nodes
- http://www.percona.com/doc/percona-xtradb-cluster/5.5/howtos/cenots_howto.html
- http://galeracluster.com/documentation-webpages/configuration.html
- http://galeracluster.com/documentation-webpages/search.html
- http://www.magicposition.com/2012/01/16/installing-galera-mysql-clustering-on-ubuntu-10/
Node 1 /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
# node 1 192.168.0.33
# node 2 192.168.0.34
# node 3 192.168.0.35
wsrep_cluster_address=gcomm://192.168.0.33,192.168.0.34,192.168.0.35
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB auto increment locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=192.168.0.33
# SST method
# wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# wsrep_sst_method=rsync_wan
# wsrep_sst_method=mysqldump # SLOW
# Cluster name
wsrep_cluster_name=percona_cluster
# Authentication for SST method
wsrep_sst_auth="root:<password_here>"
# server_id
server_id=3232235553 # SELECT INET_ATON('192.168.0.33')
#[client]
socket=/var/lib/mysql/mysql.sock
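One detail worth calling out from the config above: using INET_ATON() to derive server_id from the node's IP guarantees each node a unique value. You can verify the math directly in MySQL:
mysql> SELECT INET_ATON('192.168.0.33');
+---------------------------+
| INET_ATON('192.168.0.33') |
+---------------------------+
|                3232235553 |
+---------------------------+
1 row in set (0.00 sec)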
Starting the 1st Cluster Node
/etc/init.d/mysql start --wsrep-cluster-address="gcomm://"
Starting MySQL (Percona XtraDB Cluster)...................................... SUCCESS!
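The empty gcomm:// address is what tells this first node to bootstrap a brand new cluster instead of trying to join an existing one. Newer PXC init scripts also accept a dedicated bootstrap argument that does the same thing:
/etc/init.d/mysql bootstrap-pxc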
[root@node1 mysql]# cat grastate.dat
# GALERA saved state
version: 2.1
uuid: 97c457f8-f3d2-11e3-9b4e-374ebb7427e6
seqno: -1
cert_index:
The cluster is only one node at the moment.
mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: node1.localdomain
1 row in set (0.01 sec)
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 1
OK, now that node 1 is up and running, we can start node 2.
Node 2 /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
# node 1 192.168.0.33
# node 2 192.168.0.34
# node 3 192.168.0.35
wsrep_cluster_address=gcomm://192.168.0.33,192.168.0.34,192.168.0.35
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB auto increment locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #2 address
wsrep_node_address=192.168.0.34
# SST method
# wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# wsrep_sst_method=rsync_wan
# wsrep_sst_method=mysqldump # SLOW
# Cluster name
wsrep_cluster_name=percona_cluster
# Authentication for SST method
wsrep_sst_auth="root:"
# to enable debug level logging, set this to 1
wsrep_debug=1
# server_id
server_id=3232235554 # SELECT INET_ATON('192.168.0.34')
#[client]
socket=/var/lib/mysql/mysql.sock
[root@node2 mysql]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)........................... SUCCESS!
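Before comparing values, it is worth confirming that node 2 actually finished its state snapshot transfer (SST) and is synced:
[root@node2 ~]# mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
The Value column should read Synced once the rsync SST completes.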
Now compare the values on each node.
mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: node1.localdomain
1 row in set (0.01 sec)
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 2
mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: node2.localdomain
1 row in set (0.00 sec)
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 2
1 row in set (0.18 sec)
Now we add node 3 into the mix.
Node 3 /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
# node 1 192.168.0.33
# node 2 192.168.0.34
# node 3 192.168.0.35
wsrep_cluster_address=gcomm://192.168.0.33,192.168.0.34,192.168.0.35
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB auto increment locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #3 address
wsrep_node_address=192.168.0.35
# SST method
# wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# wsrep_sst_method=rsync_wan
# wsrep_sst_method=mysqldump # SLOW
# Cluster name
wsrep_cluster_name=percona_cluster
# Authentication for SST method
wsrep_sst_auth="root:"
# to enable debug level logging, set this to 1
wsrep_debug=1
# server_id
server_id=3232235555 # SELECT INET_ATON('192.168.0.35')
#[client]
socket=/var/lib/mysql/mysql.sock
[root@node3 mysql]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)........................... SUCCESS!
[root@node3 mysql]# cat grastate.dat
# GALERA saved state
version: 2.1
uuid: 97c457f8-f3d2-11e3-9b4e-374ebb7427e6
seqno: -1
cert_index:
So how do all of our nodes look now?
mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: node1.localdomain
1 row in set (0.01 sec)
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 3
mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: node2.localdomain
1 row in set (0.00 sec)
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 3
mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: node3.localdomain
1 row in set (0.00 sec)
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 3
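Beyond wsrep_cluster_size, a couple of other status variables are worth a quick look: wsrep_cluster_status should report Primary, and wsrep_incoming_addresses should list all three nodes:
mysql> show global status like 'wsrep_cluster_status' \G
mysql> show global status like 'wsrep_incoming_addresses' \G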
Test the nodes
So now we can load some data and test it out.
[root@node2 ~]# wget http://downloads.mysql.com/docs/world_innodb.sql.gz
[root@node2 ~]# gzip -d world_innodb.sql.gz
[root@node2 ~]# mysql -e "create database world"
[root@node2 ~]# mysql world < world_innodb.sql
So now that everything is loaded… is it replicated across the cluster?
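The summary below comes from information_schema; a query along these lines (shown here as a sketch, not necessarily the exact statement) produces the same fields on each node:
mysql> SELECT @@hostname\G
mysql> SELECT table_schema AS DATABASE_SCHEMA, engine AS ENGINE,
       COUNT(*) AS count_tables,
       ROUND(SUM(data_length + index_length)/1024/1024/1024, 3) AS TOTAL_DB_GB
       FROM information_schema.tables
       WHERE table_schema = 'world'
       GROUP BY table_schema, engine \G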
@@hostname: node1.localdomain
DATABASE_SCHEMA: world
ENGINE: InnoDB
count_tables: 3
TOTAL_DB_GB: 0.001
@@hostname: node2.localdomain
DATABASE_SCHEMA: world
ENGINE: InnoDB
count_tables: 3
TOTAL_DB_GB: 0.001
@@hostname: node3.localdomain
DATABASE_SCHEMA: world
ENGINE: InnoDB
count_tables: 3
TOTAL_DB_GB: 0.001
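As one last check, we can write on one node and read it back from another; a quick sketch using the world.City table:
[root@node3 ~]# mysql -e "INSERT INTO world.City (Name, CountryCode, District, Population) VALUES ('Test City', 'USA', 'Test District', 1)"
[root@node1 ~]# mysql -e "SELECT ID, Name FROM world.City WHERE Name = 'Test City'"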
Looks like it is working.