Zarafa High Availability setup with MySQL master-slave

From Zarafa wiki


This article describes how to create a manual High Availability (HA) setup for Zarafa, MySQL and Apache. MySQL replication is used to keep the database identical on the two servers.



In this whitepaper we describe how to create a High Availability system for Zarafa and MySQL between two servers. This HA system is based on an active-passive environment.

In this setup we do not use DRBD to keep exact copies of both servers. Before you start configuring the HA system, install Apache, Zarafa and an MTA on both servers. To monitor the servers we use the standard Linux tool Heartbeat.


The standby server monitors the main server. When the main server is down for more than 30 seconds, the standby server starts Zarafa and takes over the IP address of the main server. The database is replicated continuously between the two servers by standard MySQL master-slave replication.

To avoid inconsistent databases, we advise you to manually reconfigure the HA environment after a failover.

Zarafa in High Availability environment

Before starting to set up a High Availability system, make sure you have a complete backup of your Zarafa database and solid experience with MySQL configuration. Zarafa is NOT responsible for lost data or corrupt databases.

Setup MySQL replication

In a master-slave replication environment the slave executes all queries that are run on the master server. The master records all executed write queries in binary logfiles; the slave continuously reads these logfiles and replays the queries on its own database. An application such as Zarafa must therefore never perform write actions on the slave server. When the master server fails, the slave can no longer connect to it and stops reading the binary logfiles.
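You can inspect this mechanism on the master with standard MySQL statements; the file names and positions your server reports will of course differ:

```
mysql> SHOW BINARY LOGS;   -- lists the binlog files the slave reads
mysql> SHOW MASTER STATUS; -- current binlog file and write position
```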


To configure MySQL for replication you need two servers, both with the same MySQL version installed. First configure the master server with the following steps:

  • Open the my.cnf file
  • Add the following to this file below the option [mysqld]:

server-id = 1
log-bin = mysql-bin   # binary logging must be enabled for replication
# innodb_safe_binlog (obsolete since MySQL 5.x)

  • Restart the mysqld after you edited the my.cnf file
  • Log in to the MySQL console: mysql -u root -p (enter the MySQL root password when prompted)
  • Execute the following command:

 GRANT REPLICATION SLAVE ON *.* TO 'replication'@'' IDENTIFIED BY 'secret';

This command gives the user replication access to the master server with the password secret. The host part between the quotes (elided here) should be the slave server's hostname or IP address.

  • Execute the command: mysql> FLUSH TABLES WITH READ LOCK;

This command locks the tables on the master server against writes so you can take a consistent snapshot. Keep this MySQL session open until the copy in the next step is finished, then release the lock with UNLOCK TABLES;.

  • Copy the databases of the master server to the slave server; make sure the mysqld on the slave server is not running.

cd /var/lib/mysql
tar -cvf /tmp/mysql-snapshot.tar .
scp /tmp/mysql-snapshot.tar [email protected]:~

  • Execute the command show master status; on the mysql prompt.

Note the File and Position values from its output; you will need them when configuring the slave.
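The output of show master status; looks roughly like this (the file name and position are illustrative; use the values your server reports):

```
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      107 |              |                  |
+------------------+----------+--------------+------------------+
```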


  • Logon to the slave server
  • Stop the mysql server
  • Add the option server-id = 2 to the my.cnf file
  • Untar the copied databases of the master server
  • Start mysqld
  • Login on the mysql console
  • Execute the command: mysql> stop slave;
  • Execute the command:

CHANGE MASTER TO
  MASTER_HOST='master_host_name',
  MASTER_USER='replication',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='recorded_log_file_name',
  MASTER_LOG_POS=recorded_log_position;

Replace the host, username and password with the values you used in the GRANT command on the master server.

The recorded_log_file_name should be the value from the File column of the show master status; output.

The recorded_log_position should be the value from the Position column of the same output.

  • Execute the command mysql> start slave;
  • Your MySQL replication should now work correctly.

You can check the status of the MySQL replication by executing the command mysql> show slave status; on the MySQL prompt. Both Slave_IO_Running and Slave_SQL_Running should report Yes.
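As a sketch of how such a check can be scripted, the filter below extracts the interesting fields from SHOW SLAVE STATUS\G output. Here it runs against a canned sample; on a live system you would pipe mysql -u root -p -e "SHOW SLAVE STATUS\G" into the same grep:

```shell
#!/bin/sh
# Canned sample of the relevant SHOW SLAVE STATUS\G fields (illustrative values)
sample='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

# Keep only the fields that indicate replication health
echo "$sample" | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'
```

If either of the Running fields reports No, check the slave's error log before restarting replication.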


Since it is recommended to store external attachments under /var/lib/zarafa (or a similar path), don't forget to sync them to the slave periodically as well, using a command like

> rsync -abv --delete /var/lib/zarafa/ slave-system:/var/lib/zarafa/
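To run this automatically you can add a cron job; the 15-minute interval below is an assumption to adapt to your environment:

```
# /etc/crontab entry (illustrative): sync attachments to the slave every 15 minutes
*/15 * * * * root rsync -ab --delete /var/lib/zarafa/ slave-system:/var/lib/zarafa/
```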

Setup Heartbeat monitoring

Install the Heartbeat packages on both servers. In most distributions the Heartbeat packages are available in the default repositories.

To use Heartbeat it's very important that the hostnames of both servers are correctly configured and available in your hosts table.

/etc/hostname -> the server's full hostname
/etc/hosts -> both servers should be listed in this file

You can check your hostname via the command uname -n or hostname.
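For illustration, /etc/hosts on both machines could contain entries like these (the hostnames and IP addresses are placeholders):

```
192.168.0.10   node1.example.com   node1   # master
192.168.0.11   node2.example.com   node2   # slave/standby
```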

To configure Heartbeat, go to the Heartbeat configuration directory /etc/ha.d. Open or create the file ha.cf and add the following lines:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 10
deadtime 60
initdead 90
udpport 694
bcast eth1 # interface that's used for monitoring
auto_failback no
node # master hostname
node # slave hostname

Copy the ha.cf file to your slave server and switch the last two lines, so the slave node is on the first node line.

Open or create the file haresources on the master and add a single line consisting of the master's hostname followed by the resource name failover.

This line names the init-script that will be started when the master is down.

Open or create the file haresources on the slave and add the equivalent line for the slave node.

Open or create the file authkeys and add the following line:

auth 2
2 sha1 security!

Change the permissions of this file to 0600 and copy this file to the slave server.
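The string after sha1 is a shared secret; rather than the weak example above, you could generate a random one, for instance with a sketch like this (the od/tr pipeline simply hex-encodes 16 random bytes):

```shell
#!/bin/sh
# Generate a random 32-hex-character shared secret for Heartbeat's authkeys
key=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')

# Print an authkeys file using the generated secret
printf 'auth 2\n2 sha1 %s\n' "$key"
```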

Now you have to create the init-script that will be started when the master server is down. Below you will find an example failover script that starts the Zarafa services and takes over the IP address of the master server.

#! /bin/sh
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"

case "$1" in
  start)
    # Take over the master's IP address; the address itself was elided in the
    # original, so substitute your shared address and netmask here
    ip addr add <shared-ip>/<prefix> dev eth0
    /etc/init.d/zarafa-gateway start
    /etc/init.d/zarafa-ical start
    /etc/init.d/zarafa-monitor start
    /etc/init.d/zarafa-server start
    /etc/init.d/zarafa-spooler start
    ;;
  stop)
    # Restarting networking drops the taken-over address again
    /etc/init.d/networking restart
    /etc/init.d/zarafa-gateway stop
    /etc/init.d/zarafa-ical stop
    /etc/init.d/zarafa-monitor stop
    /etc/init.d/zarafa-server stop
    /etc/init.d/zarafa-spooler stop
    ;;
  restart)
    $0 stop
    $0 start
    echo "."
    ;;
  *)
    echo "Usage: /etc/init.d/failover {start|stop|restart}"
    exit 1
    ;;
esac

exit 0

Tips and hints

To start the HA cluster, we recommend starting the master server first and then the slave server. We strongly advise starting the Heartbeat monitoring manually once both servers are up and running.

It is recommended to use exactly the same Zarafa version on both the master and the slave server.

