MariaDB HA with Pacemaker and ldirectord (LVS)

The setup consists of:
  • Floating VIP
  • HA-servers
  • MariaDB servers

MariaDB server configuration

Add the VIP as a loopback adapter alias in /etc/sysconfig/network-scripts/ifcfg-lo.cfg so the server will accept packets forwarded by the HA servers (LVS direct routing).
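The alias file contents were not included here; a minimal sketch, assuming a placeholder VIP of 192.0.2.10 (substitute your actual floating IP), could look like:

```
DEVICE=lo:0
IPADDR=192.0.2.10
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback-vip
```

The /32 netmask keeps the alias from adding a conflicting route for the VIP's subnet.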


Restrict ARP announces for the alias interface in /etc/sysctl.conf so the real servers do not answer ARP requests for the VIP themselves

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2

Reload the rules from sysctl.conf and restart the loopback interface

sysctl -p /etc/sysctl.conf
ifdown lo && ifup lo

HA-server configuration

Add /etc/hosts entries so the HA servers can resolve each other by short name. Install the LVS and Pacemaker packages; perl-DBD-MySQL is needed for ldirectord's MySQL service check

[root@ha-1 ~]# yum install pacemaker pcs ldirectord perl-DBD-MySQL

Generate an authkey (/etc/corosync/authkey) for corosync and distribute the key to the other hosts

[root@ha-1 ~]# corosync-keygen

corosync-keygen blocks waiting for entropy; if you're on a remote connection with no local keyboard input, generate disk activity in another terminal to speed up the key generation

[root@ha-1 ~]# find /usr -type f -exec md5sum {} \;

Create the CMAN cluster configuration /etc/cluster/cluster.conf (pcs status below shows the cman stack)

<cluster name="mycluster" config_version="1">
    <cman two_node="1" expected_votes="1" keyfile="/etc/corosync/authkey"/>
    <clusternodes>
        <clusternode name="ha-1" nodeid="1">
            <fence>
                <method name="pcmk-redirect">
                    <device name="pcmk" port="ha-1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="ha-2" nodeid="2">
            <fence>
                <method name="pcmk-redirect">
                    <device name="pcmk" port="ha-2"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices>
        <fencedevice agent="fence_pcmk" name="pcmk"/>
    </fencedevices>
</cluster>

Create the ldirectord configuration under /etc/ha.d/ for the mysql service on the HA-servers.

# add your services here - check from docs for reference
    real= gate 1
    real= gate 1
    real= gate 1
    real= gate 1
    request="SELECT id FROM connection_check LIMIT 1"
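The fragment above has the virtual and real server addresses stripped out; a fuller sketch of the mysql section, assuming placeholder addresses 192.0.2.10 for the VIP and 192.0.2.21–24 for the MariaDB servers, plus the ldirector credentials created in the next step, might look like:

```
virtual=192.0.2.10:3306
        real=192.0.2.21:3306 gate 1
        real=192.0.2.22:3306 gate 1
        real=192.0.2.23:3306 gate 1
        real=192.0.2.24:3306 gate 1
        service=mysql
        checktype=negotiate
        login="ldirector"
        passwd="ldirector"
        database="ldirector"
        request="SELECT id FROM connection_check LIMIT 1"
```

"gate" selects LVS direct routing, which is why the VIP loopback alias on the MariaDB servers is required.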

Add user and database for ldirector connection checking on MariaDB servers

CREATE DATABASE ldirector;
USE ldirector;
CREATE TABLE connection_check (id int unsigned primary key auto_increment) ENGINE=InnoDB;
INSERT INTO connection_check () values ();
GRANT SELECT ON ldirector.* TO ldirector@'' IDENTIFIED BY 'ldirector';

Start the cluster and enable it on startup (run on both HA servers)

chkconfig pacemaker on
service pacemaker start

Check cluster status with

[root@ha-2 ~]# pcs status

Cluster name: mycluster
Last updated: Thu Nov 13 09:40:28 2014
Last change: Thu Nov  6 09:37:17 2014
Stack: cman
Current DC: ha-1 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured
0 Resources configured

Online: [ ha-1 ha-2 ]

Add the VIP and ldirectord as cluster resources.

pcs resource create VIP ocf:heartbeat:IPaddr2 ip= cidr_netmask=32 op monitor interval=30s
pcs resource create MrDirector lsb:ldirectord

Group the resources and add constraints for start order and location

pcs resource group add VIP-and-MrDirector VIP MrDirector
pcs constraint order VIP then MrDirector
pcs constraint location VIP-and-MrDirector rule score=pingd defined pingd
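The location rule above keys off a pingd node attribute, which is only set when a ping resource is running on the nodes; a sketch of creating one (the host_list gateway address is a placeholder, not part of the original setup):

```
pcs resource create pingd ocf:pacemaker:ping dampen=5s multiplier=1000 host_list=192.0.2.1 op monitor interval=15s --clone
```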

Make resources sticky, so they won't migrate back and forth when a cluster node is e.g. rebooted

pcs resource defaults resource-stickiness=100

Do not stop all resources when quorum is lost, since a two-node cluster cannot keep quorum after one node fails

pcs property set no-quorum-policy=ignore
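Note that cluster.conf above redirects fencing to Pacemaker (fence_pcmk), but no STONITH devices are configured on the Pacemaker side; on a test setup without real fence hardware, resources will not start until STONITH is either configured or disabled (do not disable it in production):

```
pcs property set stonith-enabled=false
```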
