Setting up a new MySQL slave database via streaming xtrabackup

The goal today is to build a new MySQL slave database off an existing database. We'll use Percona's xtrabackup to stream a hot backup over the wire to a new machine, while keeping the existing server online.


Install MySQL on New Server

Install MySQL in the way you normally would (e.g., apt-get etc).

The only really important thing you need to do is edit my.cnf and set a server-id for the server. This can be any positive integer, as long as it's unique among the servers in your replication topology. I tend to derive it from the current date and time just because it's generally pretty unique (like 01271730 for Jan 27th at 17:30).
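If you like the date-and-time trick, you can generate the id in the shell. One caveat worth noting: server-id must fit in an unsigned 32-bit integer, so the full YYYYMMDDHHMM form is too large; a shorter stamp fits. A sketch:

```shell
# Generate a server-id from the current date/time.
# Caveat: server-id must fit in an unsigned 32-bit integer
# (max 4294967295), so use MMDDHHMM rather than the full
# YYYYMMDDHHMM form, which overflows that range.
SERVER_ID=$(date +%m%d%H%M)
echo "server-id = ${SERVER_ID}"
```

Put the resulting server-id = ... line under the [mysqld] section of my.cnf.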

Install xtrabackup on both servers

Both servers need to have xtrabackup. If you use Debian, Ubuntu or CentOS, you're best off just using Percona's repositories.

For example, here's how I installed it on Ubuntu:

$ wget -O /tmp/percona-repos.deb https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
$ sudo dpkg -i /tmp/percona-repos.deb
$ sudo apt-get update
$ sudo apt-get install percona-xtrabackup

Create holding directory on new server

We'll shortly stream data from the existing server to the new server. You need a directory on the new server, so create it now:

root@new# mkdir /root/mysql-data

Note: For the rest of this post, I'll use new to refer to the new slave you're setting up, and old to refer to the old, existing server.

Increase maximum number of open file descriptors

You may run into issues with too many files being open at once. This can happen if you have many databases, and/or if you run MySQL with innodb_file_per_table enabled.

In your console, before you run any commands (on both old/new servers), it's smart to raise the limit:

root@old# ulimit -n 1024000
root@new# ulimit -n 1024000

Note that this does NOT persist the change. It'll reset back to the system default as soon as you disconnect.
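If you'd rather the higher limit persist across sessions, one option (assuming pam_limits is active, as it is on stock Debian/Ubuntu) is to add entries to /etc/security/limits.conf:

```
# /etc/security/limits.conf — raise the open-file limit for root persistently
root  soft  nofile  1024000
root  hard  nofile  1024000
```

Also note that mysqld has its own open_files_limit variable, which caps what the server process itself can use.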

Stream a hot backup (no downtime)

Now we start the stream from the old server to the new server. The general command looks something like this:

root@old# innobackupex \
    --no-lock \
    --user=root \
    --password=XXX \
    --stream=xbstream ./ \
    | ssh root@new "xbstream -x -C /root/mysql-data/"

--no-lock prevents locking the database tables. If you need your existing server to stay online and still accept writes, you need to use this option. Note however that this is ONLY safe if ALL of your tables are InnoDB. If you are using other storage engines, you have no choice but to lock the tables as the copy happens.

Note: Do not issue DDL queries (including creating new databases or tables) when you use --no-lock.
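Since --no-lock is only safe for pure-InnoDB data, it's worth checking for other engines before relying on it. Here's a sketch of such a check (the excluded schemas and credentials are assumptions; adjust to your setup). The script just prints the query, which you'd then feed to the mysql client on old:

```shell
# Build a query that lists any tables not stored in InnoDB,
# skipping MySQL's own system schemas. Feed the output to the
# mysql client on the old server, e.g.:
#   echo "$QUERY" | mysql -uroot -p
QUERY="SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine <> 'InnoDB'
  AND table_schema NOT IN ('mysql','information_schema','performance_schema');"
echo "$QUERY"
```

If this returns any rows, drop --no-lock and accept the table locks instead.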

--stream=xbstream tells innobackupex to emit the backup in xbstream format, which we then pipe through ssh to the remote server. Note that xbstream alone doesn't compress; if you're configuring a slave over the WAN and bandwidth matters, add the --compress option so less data crosses the wire.

Two other special cases

  • If the old server is itself a slave, use the --safe-slave-backup option. This stops the slave SQL thread until the backup completes.
  • If you want your new server to replicate from the same master as old, use the --slave-info option. This records the master's binlog position alongside the backup. More on that in a bit.

Process the backup

After the stream finishes, you'll be left with a bunch of data in /root/mysql-data. This data isn't ready to use yet; we need to apply the transaction log that innobackupex copied as it went:

root@new# cd /root/mysql-data
root@new# innobackupex --apply-log .

You also need to find the position info (we'll use it in a moment). Write down the contents of this file:

root@new# cat /root/mysql-data/xtrabackup_binlog_pos_innodb
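The file holds the binary log file name and position, separated by whitespace. A small sketch of pulling the two values out with awk — the sample file and values below are made up for illustration; point the commands at the real /root/mysql-data/xtrabackup_binlog_pos_innodb:

```shell
# Extract the binlog file name and position for CHANGE MASTER TO.
# /tmp/xtrabackup_binlog_pos_innodb is a fabricated sample here; on
# the new server, read the real file under /root/mysql-data instead.
printf 'mysql-bin.000042\t1337\n' > /tmp/xtrabackup_binlog_pos_innodb
BINLOG_FILE=$(awk '{print $1}' /tmp/xtrabackup_binlog_pos_innodb)
BINLOG_POS=$(awk '{print $2}' /tmp/xtrabackup_binlog_pos_innodb)
echo "MASTER_LOG_FILE='${BINLOG_FILE}', MASTER_LOG_POS=${BINLOG_POS}"
```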

Move data files into place

On the new server still, we now need to move the data files into the places where the MySQL server actually expects to find them. Stop MySQL first if it's running (e.g., service mysql stop). I typically move the old/default dirs into /tmp just in case I need them for some reason.

root@new# mv /var/lib/mysql /tmp/old-mysql-lib
root@new# mv /var/log/mysql /tmp/old-mysql-log
root@new# mv /root/mysql-data /var/lib/mysql
root@new# mkdir /var/log/mysql
root@new# chown -R mysql:mysql /var/lib/mysql /var/log/mysql

Now you can start the MySQL server again (e.g., service mysql start). It should come back online without a problem.


Note that we have literally just copied the whole database from the old to the new server. That's including MySQL's internal tables that define things like credentials.

In other words, any users and passwords you might have configured when you installed MySQL on the new server are now gone. They have all been replaced with the users that existed on the 'old' server.

On Debian/Ubuntu, the system inserts a special debian-sys-maint user for maintenance-related tasks. So you will need to manually modify /etc/mysql/debian.cnf and copy the password from the 'old' server. Alternatively, you can just reset that user's password after the server is started (next).
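To grab the password from the old server's config, something like this works (the sample file below is fabricated for illustration; on the real machines, read /etc/mysql/debian.cnf):

```shell
# Extract the debian-sys-maint password from a debian.cnf-style file.
# The sample below is made up; on the old server, read the real
# /etc/mysql/debian.cnf instead (the password appears under both the
# [client] and [mysql_upgrade] sections — keep them in sync).
cat > /tmp/debian.cnf <<'EOF'
[client]
host     = localhost
user     = debian-sys-maint
password = s3cretpass
EOF
PW=$(awk -F' *= *' '/^password/ {print $2; exit}' /tmp/debian.cnf)
echo "debian-sys-maint password: $PW"
```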

Start the slave

At this point, your new server is set up and it has the data up until the point where the innobackupex command finished. The next step is making this a real slave, so it can begin to replicate off of a real master.

Connect to the local mysql server on the new server (e.g., mysql -uroot ...), and point it at the master:

mysql> CHANGE MASTER TO
    MASTER_HOST='<MASTER HOST>',
    MASTER_USER='<REPLICATION USER>',
    MASTER_PASSWORD='<REPLICATION PASSWORD>',
    MASTER_LOG_FILE='<BIN FILE FROM xtrabackup_binlog_pos_innodb>',
    MASTER_LOG_POS=<POSITION FROM xtrabackup_binlog_pos_innodb>;

Fill in the details for your master host, and be sure to double check the bin log file and position that you got before (from the cat /root/mysql-data/xtrabackup_binlog_pos_innodb we ran above).

Then start the slave, wait a few seconds, and check the status to make sure it's running (look for Slave_IO_Running: Yes and Slave_SQL_Running: Yes):

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G

All Done

You should have a fully functional slave now :-)

Christopher Nadeau

London, UK