MariaDB HA with Pacemaker and ldirectord (LVS)

  • Floating VIP 10.1.1.19
  • HA servers 10.1.1.20/31 (10.1.1.20-21)
  • MariaDB servers 10.1.1.40/30 (10.1.1.40-43)

MariaDB server configuration

Add the VIP as a loopback alias in /etc/sysconfig/network-scripts/ifcfg-lo.cfg so the server will accept packets forwarded to it by the HA servers.

IPADDR0=10.1.1.19
NETMASK0=255.255.255.255
BROADCAST0=255.255.255.255
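
For reference, on a stock CentOS install the complete ifcfg-lo could end up looking roughly like this (a sketch; the default loopback lines are assumed unchanged and only the three lines above are added):

DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
# VIP alias so the real server accepts traffic forwarded by LVS
IPADDR0=10.1.1.19
NETMASK0=255.255.255.255
BROADCAST0=255.255.255.255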

Restrict ARP behaviour for the alias in /etc/sysctl.conf so the real servers neither answer nor announce ARP for the VIP on eth0 (required for LVS direct routing)

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2

Reload the rules from sysctl.conf and restart the loopback interface

sysctl -p /etc/sysctl.conf
ifdown lo && ifup lo
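
A quick sanity check after the restart; the VIP should be configured on lo and both ARP settings should read back with the values set above:

# VIP on the loopback interface
ip addr show dev lo | grep 10.1.1.19
# should print 1 and 2 respectively
sysctl net.ipv4.conf.eth0.arp_ignore net.ipv4.conf.eth0.arp_announce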

HA-server configuration

Add /etc/hosts entries so the HA servers can resolve each other by short name. Install the packages for LVS and Pacemaker; perl-DBD-MySQL is needed for the ldirectord mysql check.

[root@ha-1 ~]# yum install pacemaker pcs ldirectord perl-DBD-MySQL
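
For example (which HA server owns which address is an assumption here, adjust to match your hosts):

10.1.1.20   ha-1
10.1.1.21   ha-2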

Generate the corosync authkey /etc/corosync/authkey and distribute the key to the other HA server

[root@ha-1 ~]# corosync-keygen

If key generation stalls over a remote connection (not enough entropy available), generate disk activity in another shell to feed the entropy pool

[root@ha-1 ~]# find /usr -type f -exec md5sum {} \;
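
One way to get the key onto the second HA server, assuming root SSH access between the nodes; scp -p preserves the restrictive file mode that corosync-keygen sets:

[root@ha-1 ~]# scp -p /etc/corosync/authkey ha-2:/etc/corosync/authkey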

Create the cluster configuration /etc/cluster/cluster.conf for the cman/corosync stack

<cluster name="mycluster" config_version="1">
    <cman two_node="1" expected_votes="1" keyfile="/etc/corosync/authkey"/>
    <clusternodes>
        <clusternode name="ha-1" nodeid="1">
            <fence>
                <method name="pcmk-redirect">
                    <device name="pcmk" port="ha-1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="ha-2" nodeid="2">
            <fence>
                <method name="pcmk-redirect">
                    <device name="pcmk" port="ha-2"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices>
        <fencedevice agent="fence_pcmk" name="pcmk"/>
    </fencedevices>
    <rm>
    </rm>
</cluster>
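
The same file goes on both HA servers. It can be sanity-checked before use; xmllint catches plain XML mistakes, and ccs_config_validate (if the cman tooling on your install provides it) validates against the cluster schema:

[root@ha-1 ~]# xmllint --noout /etc/cluster/cluster.conf
[root@ha-1 ~]# ccs_config_validate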

Create the ldirectord configuration /etc/ha.d/ldirectord.cf for the mysql service on the HA servers.

# add your services here - check ldirectord.cf from docs for reference
virtual=10.1.1.19:3306
    real=10.1.1.40:3306 gate 1
    real=10.1.1.41:3306 gate 1
    real=10.1.1.42:3306 gate 1
    real=10.1.1.43:3306 gate 1
    netmask=255.255.255.255
    service=mysql
    checktype=negotiate
    login="ldirector"
    passwd="ldirector"
    database="ldirector"
    request="SELECT id FROM connection_check LIMIT 1"
    scheduler=wlc
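
The configuration can optionally be tested by hand before handing it to Pacemaker; a running ldirectord fills the kernel IPVS table, which ipvsadm shows. Stop the service again afterwards, since the cluster will manage it below:

[root@ha-1 ~]# service ldirectord start
[root@ha-1 ~]# ipvsadm -L -n
[root@ha-1 ~]# service ldirectord stop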

Add a user and database for the ldirectord connection check on the MariaDB servers

CREATE DATABASE ldirector;
USE ldirector;
CREATE TABLE connection_check (id int unsigned primary key auto_increment) ENGINE=InnoDB;
INSERT INTO connection_check () values ();
GRANT SELECT ON ldirector.* TO ldirector@'10.1.1.20/255.255.255.254' IDENTIFIED BY 'ldirector';
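
The same check ldirectord will run can be tried by hand from an HA server, assuming the mysql client is installed there; repeat against each real server (10.1.1.40-43):

[root@ha-1 ~]# mysql -h 10.1.1.40 -u ldirector -pldirector ldirector \
    -e "SELECT id FROM connection_check LIMIT 1"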

Enable Pacemaker on boot and start the cluster on both HA servers

chkconfig pacemaker on
service pacemaker start
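
On the cman stack used here Pacemaker expects cman to be running as well; if nothing else in your setup starts it (an assumption about this environment), it needs to be enabled and started the same way before pacemaker:

chkconfig cman on
service cman start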

Check cluster status with

[root@ha-2 ~]# pcs status

Cluster name: mycluster
Last updated: Thu Nov 13 09:40:28 2014
Last change: Thu Nov  6 09:37:17 2014
Stack: cman
Current DC: ha-1 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured
0 Resources configured

Online: [ ha-1 ha-2 ]

Add the VIP and ldirectord as cluster resources.

pcs resource create VIP ocf:heartbeat:IPaddr2 ip=10.1.1.19 cidr_netmask=32 op monitor interval=30s
pcs resource create MrDirector lsb:ldirectord
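
A quick check that the resources were created; on the node currently running VIP, the address should show up on eth0 (the interface name is an assumption):

[root@ha-1 ~]# pcs resource show
[root@ha-1 ~]# ip addr show dev eth0 | grep 10.1.1.19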

Group the resources and add constraints for start order and location

pcs resource group add VIP-and-MrDirector VIP MrDirector
pcs constraint order VIP then MrDirector
pcs constraint location VIP-and-MrDirector rule score=pingd defined pingd
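
The location rule above scores on a pingd node attribute, which is normally maintained by a ping clone resource that is not shown on this page. A minimal sketch, assuming the default gateway 10.1.1.1 is a reasonable ping target; depending on the pcs version the rule may need score-attribute=pingd instead of score=pingd:

pcs resource create ping ocf:pacemaker:ping host_list=10.1.1.1 dampen=5s multiplier=1000 --clone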

Make resources sticky, so they won't migrate back and forth when a cluster node is e.g. rebooted

pcs resource defaults resource-stickiness=100

Do not stop resources when quorum is lost on a two-node cluster

pcs property set no-quorum-policy=ignore
