MaxScale on CentOS 7 - pacemaker cloned edition

MaxScale is a proxy for MariaDB/MySQL servers which works nicely with a Galera cluster, binlog slaves, or a mixed environment.

I'll go through a setup for a 3-node Galera cluster paired with some binlog slaves. The only quirk I had to work out was the handling of two different monitor modules poking at the same hosts. There will be an additional port for the slave connections (3307) on the MaxScale servers, which needs some tweaking on SELinux-enabled client machines.

Hosts used

  • MaxScale servers scale-1 and scale-2 (VIP 10.1.1.139 and 10.1.1.140/31)
  • Galera nodes gdb-1, gdb-2, gdb-3 (10.1.1.10-10.1.1.12)
  • Binlog slaves sdb-1 and sdb-2 (10.1.1.30/31)

Installation

Add a repository file /etc/yum.repos.d/maxscale.repo

[maxscale]
name=maxscale
baseurl=https://downloads.mariadb.com/files/MaxScale/latest/centos/7/x86_64
gpgkey=https://downloads.mariadb.com/software/MaxScale/MaxScale-GPG-KEY.public
enabled=1
gpgcheck=true

And install

yum install maxscale
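
A quick sanity check that the package landed and which version it is:

rpm -q maxscale
maxscale --version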

Users and privileges for MaxScale

MaxScale needs user and privilege information from the database, plus server health. I'll create two users for this purpose, 'maxscale-monitor' and 'maxscale-router'. MaxScale burps errors into the log if a db user's host part contains a netmask, so in order to lower log noise each MaxScale host will have its own two users.

The monitor user is granted REPLICATION CLIENT to allow SHOW SLAVE STATUS / SHOW MASTER STATUS. Without binlog slaves USAGE is sufficient (SHOW STATUS only).

GRANT REPLICATION CLIENT ON *.* TO 'maxscale-monitor'@'scale-1' IDENTIFIED BY 'maxscale-monitor';
GRANT REPLICATION CLIENT ON *.* TO 'maxscale-monitor'@'scale-2' IDENTIFIED BY 'maxscale-monitor';
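
It's worth verifying from each MaxScale host that the monitor user can log in and see replication status before going further, e.g. on scale-1:

mysql -u maxscale-monitor -pmaxscale-monitor -h gdb-1 -e 'SHOW MASTER STATUS'
mysql -u maxscale-monitor -pmaxscale-monitor -h sdb-1 -e 'SHOW SLAVE STATUS\G'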

The router user needs to dig up user, privilege and db info:

GRANT SHOW DATABASES ON *.* TO 'maxscale-router'@'scale-1' IDENTIFIED BY 'maxscale-router';
GRANT SELECT ON mysql.user TO 'maxscale-router'@'scale-1';
GRANT SELECT ON mysql.tables_priv TO 'maxscale-router'@'scale-1';
GRANT SELECT ON mysql.db TO 'maxscale-router'@'scale-1';

and the same set for the 'scale-2' host.
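
Spelled out, those are:

GRANT SHOW DATABASES ON *.* TO 'maxscale-router'@'scale-2' IDENTIFIED BY 'maxscale-router';
GRANT SELECT ON mysql.user TO 'maxscale-router'@'scale-2';
GRANT SELECT ON mysql.tables_priv TO 'maxscale-router'@'scale-2';
GRANT SELECT ON mysql.db TO 'maxscale-router'@'scale-2';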

MaxScale configuration

I'll use two routers: one for the "write master" (CUDdly operations, i.e. create/update/delete) on port 3306 and one for the read-only binlog slaves on port 3307.

Edit /etc/maxscale.cnf

[maxscale]
threads=4

[Master monitor]
type=monitor
module=galeramon
servers=gdb-1,gdb-2,gdb-3
user=maxscale-monitor
passwd=maxscale-monitor
disable_master_failback=1
monitor_interval=1000

[Slave monitor]
type=monitor
module=mysqlmon
# Please note: monitoring the same server with two different monitor modules
# results in weird behaviour, hence the aliases: gdb-1 is aka. mdb-1 and so forth.
servers=mdb-1,mdb-2,mdb-3,sdb-1,sdb-2
user=maxscale-monitor
passwd=maxscale-monitor
monitor_interval=1000

[Master router]
type=service
router=readconnroute
servers=gdb-1,gdb-2,gdb-3
# The router will designate one of the servers as "master" and the rest will be
# slaves. With the following router_options all queries are handed to the
# designated "write master".
router_options=master
user=maxscale-router
passwd=maxscale-router

[Slave router]
type=service
router=readconnroute
servers=mdb-1,mdb-2,mdb-3,sdb-1,sdb-2
# The galera cluster nodes are also present on this server list, but under another
# name (mdb-1 vs. gdb-1). Monitoring the same servers with both "galeramon" and
# "mysqlmon" results in an eternal state-change loop of lost_synced, new_slave,
# lost_master and so on.
# router_options=slave sends all queries to the binlog slaves.
router_options=slave
user=maxscale-router
passwd=maxscale-router

[Master listener]
type=listener
service=Master router
protocol=MySQLClient
port=3306

[Slave listener]
type=listener
service=Slave router
protocol=MySQLClient
# We'll use another port for the binlog slave connections 3307
port=3307

[CLI]
type=service
router=cli

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
port=6603

[gdb-1]
type=server
address=10.1.1.10
port=3306
protocol=MySQLBackend

[gdb-2]
...

[mdb-1]
type=server
address=10.1.1.10
port=3306
protocol=MySQLBackend

[mdb-2]
...

[sdb-1]
type=server
address=10.1.1.30
port=3306
protocol=MySQLBackend

[sdb-2]
type=server
address=10.1.1.31
port=3306
protocol=MySQLBackend
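
With the configuration in place, start MaxScale by hand for now (Pacemaker takes over later, so don't enable the unit):

systemctl start maxscale.service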

Running maxadmin list servers (the default password is mariadb) should produce output similar to this:

Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status              
-------------------+-----------------+-------+-------------+--------------------
gdb-1              | 10.1.1.10       |  3306 |           1 | Master, Synced, Running
gdb-2              | 10.1.1.11       |  3306 |           0 | Slave, Synced, Running
gdb-3              | 10.1.1.12       |  3306 |           0 | Slave, Synced, Running
mdb-1              | 10.1.1.10       |  3306 |           0 | Master, Running
mdb-2              | 10.1.1.11       |  3306 |           0 | Running
mdb-3              | 10.1.1.12       |  3306 |           0 | Running
sdb-1              | 10.1.1.30       |  3306 |           1 | Slave, Running
sdb-2              | 10.1.1.31       |  3306 |           1 | Slave, Running
-------------------+-----------------+-------+-------------+--------------------
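
To verify the routing itself, connect through each listener and ask which backend answered. 'someuser' below is a stand-in for any regular database user that exists on the backends:

# Port 3306 should always answer from the designated "write master"
mysql -u someuser -p -h scale-1 -P 3306 -e 'SELECT @@hostname'
# Port 3307 should answer from one of the binlog slaves
mysql -u someuser -p -h scale-1 -P 3307 -e 'SELECT @@hostname'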

MaxScale log checkup

If the router is working you should see something like this in /var/log/maxscale/maxscale1.log

2016-08-08 09:37:54   notice : Loaded 12 MySQL Users for service [Master router].

If the monitor is OK:

2016-08-08 14:08:30   notice : Server changed state: gdb-3[10.1.1.12:3306]: new_slave
2016-08-08 14:08:30   notice : Server changed state: mdb-1[10.1.1.10:3306]: new_master
2016-08-08 14:08:30   notice : Server changed state: sdb-1[10.1.1.30:3306]: new_slave
2016-08-08 14:08:30   notice : Server changed state: sdb-2[10.1.1.31:3306]: new_slave

Pacemaker resource and colocation

Disable maxscale.service in case you have enabled it. Pacemaker will handle it from now on.

systemctl disable maxscale.service

Add a cloned resource to the cluster

pcs resource create madmax systemd:maxscale clone

The output of pcs status should shortly show madmax running on both nodes:

 VIP    (ocf::heartbeat:IPaddr2):   Started scale-1
 Clone Set: madmax-clone [madmax]
      Started: [ scale-1 scale-2 ]

And we add a colocation constraint so the VIP only runs on a node where MaxScale is running.

pcs constraint colocation add VIP madmax-clone
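
You can confirm the constraint was registered:

pcs constraint show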

You can test the colocation by smudging MaxScale's configuration so it can't start and then killing the process on the active VIP host; the VIP should hop to the other node.
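
A rough version of that test, run on the node currently holding the VIP (remember to restore the config afterwards):

# Break the config so Pacemaker can't restart MaxScale on this node
mv /etc/maxscale.cnf /etc/maxscale.cnf.bak
pkill maxscale
# The VIP should fail over to the other node
pcs status
# Clean up
mv /etc/maxscale.cnf.bak /etc/maxscale.cnf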

SELinux for clients running in httpd_t

If you're using a policy or a boolean like httpd_can_network_connect_db to allow connections to the database, you need to add 3307 to the mysqld_port_t list

semanage port -a -t mysqld_port_t -p tcp 3307
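
You can verify the port was added:

semanage port -l | grep mysqld_port_t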

Otherwise you might see this in the audit log

type=AVC msg=audit(1470662180.054:16651): avc:  denied  { name_connect } for  pid=25134 comm="php-fpm" dest=3307 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
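
If in doubt, check the client host's audit log for fresh denials against port 3307:

ausearch -m avc -ts recent | grep dest=3307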
