So I recently posted: A MySQL DBA looks at PostgreSQL and part 2: MySQL to PostgreSQL.
This post will explore the migration from PostgreSQL to MySQL. Once again, the long-term goal with these posts is to show how the data works within the different databases, as well as how to solve similar issues in each database when examples arise.
MySQL pushes MySQL Workbench as the tool for database migration. I have to admit that I am curious why MySQL Utilities does not provide a command-line option. The previous blog post (part 2) showed how easy the migration via the command line was from MySQL to PostgreSQL. Keep in mind, though, that when bringing data back to MySQL the storage engine has to be considered.
To put it simply, if you are going to bring data back into MySQL from PostgreSQL, a fast option is likely MySQL Workbench. But that is not an overall solution, since we often prefer to stay within our terminal windows. So you will be doing the following:
- Dump the schema from PostgreSQL into a file.
- Review and edit the file for MySQL.
- Export the data out per desired schema and table.
- Import back into MySQL.
Anyway, first we still have the data in PostgreSQL from the World database example.
world=> \dt
List of relations
Schema | Name | Type | Owner
--------+-----------------+-------+----------
public | city | table | testuser
public | country | table | testuser
public | countrylanguage | table | testuser
(3 rows)
world=> select count(ID) from City;
count
-------
4079
(1 row)
world=>
Dump out the schema:
$ pg_dump -s world > world_schema.pgsql
Using pg_dump, take the --data-only and --inserts options to build a standard SQL file of the data.
$ pg_dump --data-only --inserts world > world_data.pgsql
You will see later that doing a dump per table would have been better, but this works.
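For example, dumping the data per table would look something like this (a sketch using pg_dump's -t option against the same world database):
$ pg_dump --data-only --inserts -t country world > world_data_country.pgsql
$ pg_dump --data-only --inserts -t countrylanguage world > world_data_countrylanguage.pgsql
$ pg_dump --data-only --inserts -t city world > world_data_city.pgsql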
Create a database in MySQL to put the data back into, as well as to test your new schema.
mysql> CREATE DATABASE world_back;
Query OK, 1 row affected (0.01 sec)
Edit your schema file: vi world_schema.pgsql
You have the new MySQL database created, so you can test the statements as you go.
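For example, as you convert each CREATE TABLE statement you can feed your edited file into the new database and confirm MySQL accepts it (a sketch; world_schema_mysql.sql is just a hypothetical name for your edited copy):
$ mysql -u root -p world_back < world_schema_mysql.sql
$ mysql -u root -p world_back -e "SHOW TABLES;"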
CREATE TABLE city (
id integer DEFAULT nextval('city_id_seq'::regclass) NOT NULL,
name character(35) DEFAULT ''::bpchar NOT NULL,
countrycode character(3) DEFAULT ''::bpchar NOT NULL,
district character(20) DEFAULT ''::bpchar NOT NULL,
population integer DEFAULT 0 NOT NULL
);
CREATE TABLE country (
code character(3) DEFAULT ''::bpchar NOT NULL,
name character(52) DEFAULT ''::bpchar NOT NULL,
continent character varying DEFAULT 'Asia'::character varying NOT NULL,
region character(26) DEFAULT ''::bpchar NOT NULL,
surfacearea double precision DEFAULT 0::double precision NOT NULL,
indepyear smallint,
population integer DEFAULT 0 NOT NULL,
lifeexpectancy double precision,
gnp double precision,
gnpold double precision,
localname character(45) DEFAULT ''::bpchar NOT NULL,
governmentform character(45) DEFAULT ''::bpchar NOT NULL,
headofstate character(60) DEFAULT NULL::bpchar,
capital integer,
code2 character(2) DEFAULT ''::bpchar NOT NULL,
CONSTRAINT country_continent_check CHECK (((continent)::text = ANY ((ARRAY['Asia'::character varying, 'Europe'::character varying, 'North America'::character varying, 'Africa'::character varying, 'Oceania'::character varying, 'Antarctica'::character varying, 'South America'::character varying])::text[])))
);
ALTER TABLE ONLY city
ADD CONSTRAINT city_pkey PRIMARY KEY (id);
CREATE INDEX city_countrycode_idx ON city USING btree (countrycode);
You will need to review the file for all of the related keys so you can create valid statements, and you will need to understand MySQL syntax so you can create valid CREATE TABLE statements.
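One quick way to pull the key-related lines out of the dump for review is something like this (a sketch, assuming GNU grep):
$ grep -n -E 'PRIMARY KEY|FOREIGN KEY|CREATE INDEX' world_schema.pgsql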
CREATE TABLE city (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` char(35) NOT NULL DEFAULT '',
`countrycode` char(3) NOT NULL DEFAULT '',
`district` char(20) NOT NULL DEFAULT '',
`population` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
);
CREATE TABLE `country` (
`code` char(3) NOT NULL DEFAULT '',
`name` char(52) NOT NULL DEFAULT '',
`continent` char(5) NOT NULL DEFAULT '',
`region` char(26) NOT NULL DEFAULT '',
`surfaceArea` float(10,2) NOT NULL DEFAULT '0.00',
`indepyear` smallint(6) DEFAULT NULL,
`population` int(11) NOT NULL DEFAULT '0',
`lifeexpectancy` float(3,1) DEFAULT NULL,
`gnp` float(10,2) DEFAULT NULL,
`gnpold` float(10,2) DEFAULT NULL,
`localname` char(45) NOT NULL DEFAULT '',
`governmentform` char(45) NOT NULL DEFAULT '',
`headofstate` char(60) DEFAULT NULL,
`capital` int(11) DEFAULT NULL,
`code2` char(2) NOT NULL DEFAULT '',
PRIMARY KEY (`code`)
);
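The countrylanguage table gets the same treatment. A minimal sketch of a converted version might look like the following; the PostgreSQL dump uses a boolean for isofficial, which I would map to an ENUM (or TINYINT) in MySQL, so double-check it against your own dump:
CREATE TABLE `countrylanguage` (
`countrycode` char(3) NOT NULL DEFAULT '',
`language` char(30) NOT NULL DEFAULT '',
`isofficial` enum('T','F') NOT NULL DEFAULT 'F',
`percentage` float(4,1) NOT NULL DEFAULT '0.0',
PRIMARY KEY (`countrycode`,`language`)
);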
It is of course up to you, but once you work out the PRIMARY KEY per table, I would create ALTER statements to update the new schema so you can ensure you catch everything. While, for the most part, they can all be added directly into the first CREATE statement, making ALTERs as you process the PostgreSQL dump file can keep you in check.
Some examples of the needed ALTER statements:
ALTER TABLE city ENGINE=InnoDB;
ALTER TABLE country ENGINE=InnoDB;
ALTER TABLE countrylanguage ENGINE=InnoDB;
ALTER TABLE country DROP continent;
ALTER TABLE country ADD continent enum('Asia','Europe','North America','Africa','Oceania','Antarctica','South America') NOT NULL DEFAULT 'Asia' AFTER name;
ALTER TABLE city ADD KEY `countrycode` (`countrycode`);
ALTER TABLE city ADD CONSTRAINT `city_ibfk_1` FOREIGN KEY (`countrycode`) REFERENCES `country` (`code`);
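Following the same pattern, the countrylanguage table would likely need its keys as well (a sketch based on the standard World schema, so verify it against your dump):
ALTER TABLE countrylanguage ADD KEY `countrycode` (`countrycode`);
ALTER TABLE countrylanguage ADD CONSTRAINT `countrylanguage_ibfk_1` FOREIGN KEY (`countrycode`) REFERENCES `country` (`code`);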
Once all of your schema is updated and valid, you can put the saved data back.
Edit world_data.pgsql (vi world_data.pgsql) to remove the SET statements at the top:
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET search_path = public, pg_catalog;
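If you would rather not strip those lines by hand, a quick pass with sed can do it (a sketch, assuming GNU sed and that you are happy editing the file in place):
$ sed -i '/^SET /d' world_data.pgsql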
Copy the file out per table, in this case because of the CONSTRAINTs, and edit each copy so it only contains the data for that table. I should have just dumped it that way in the first place, or simply dumped again per table.
$ cp world_data.pgsql world_data_city.pgsql
$ cp world_data.pgsql world_data_countrylanguage.pgsql
$ cp world_data.pgsql world_data_country.pgsql
$ mysql -u root -p world_back < world_data_country.pgsql
Enter password:
$ mysql -u root -p world_back < world_data_countrylanguage.pgsql
Enter password:
$ mysql -u root -p world_back < world_data_city.pgsql
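An alternative to splitting the data file per table would be to disable foreign key checks for the session and load the whole dump in one go (a sketch I have not run against this particular file, so treat it as an option rather than the method used here):
mysql> SET foreign_key_checks = 0;
mysql> SOURCE world_data.pgsql;
mysql> SET foreign_key_checks = 1;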
So, simply put, it is not as easy (I should say automated) to migrate into MySQL via the command line, because of the schema changes that will require your attention, but it can be done.
mysql> select count(id) from city;
+-----------+
| count(id) |
+-----------+
| 4079 |
+-----------+
1 row in set (0.14 sec)
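For a quick look across all three tables you could also check information_schema, keeping in mind that table_rows is only an estimate for InnoDB, so an exact COUNT(*) per table remains the safer check:
mysql> SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema = 'world_back';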
MySQL Workbench database migration can of course do the same process, and you can learn more about that tool here: http://www.mysql.com/products/workbench/migrate/
@AnotherMySQLDBA