Building a High-Availability MySQL Cluster
Configuring the SQL Nodes
Now you need to tell the MySQL SQL node that it can use the NDB engine. Modify the MySQL config file my.cnf in /etc/ and add the parameter ndbcluster to the [mysqld] section. After restarting mysqld, the output of ndb_mgm should show some additional lines:
[mysqld(API)] 3 node(s)
id=4 @172.16.32.7 (Version: 4.1.12)
id=5 @172.16.32.107 (Version: 4.1.12)
id=6 (not connected, accepting connect from any host)
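For reference, the my.cnf change itself is only a couple of lines. Here is a minimal sketch; the ndb-connectstring line and the mgm-host name are assumptions for setups where the SQL node does not yet know where the management node lives, so adjust them to your own environment:

[mysqld]
ndbcluster
# assumption: point the SQL node at the management node; replace mgm-host
# with the hostname or IP address of your own management server
ndb-connectstring=mgm-host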
Now test the actual cluster database by creating a table that uses the NDB engine (engine=NDB):
mysql> create table blah (i int, a char) engine NDB;
Query OK, 0 rows affected (0.46 sec)
Then on node DB-B, watch the table show up:
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| blah |
+----------------+
1 row in set (0.03 sec)
On node DB-A, insert a value into the table:
mysql> insert into blah values (1,"x");
Query OK, 1 row affected (0.01 sec)

mysql> select * from blah;
+------+------+
| i    | a    |
+------+------+
|    1 | x    |
+------+------+
1 row in set (0.01 sec)
... and watch it appear in the same table on the second node (DB-B):
mysql> select * from blah;
Empty set (0.08 sec)

mysql> select * from blah;
+------+------+
| i    | a    |
+------+------+
|    1 | x    |
+------+------+
1 row in set (0.00 sec)
As you can see, modifying a record on one node makes it automagically appear on the other node as well. This works in both directions, so it really doesn't matter which node you connect to.
When querying the database, the SQL node will contact the management node to see which engine it should query. If one of the NDB engine nodes disappears, the management node will direct the SQL node to another. That's why you need to run the management node on a separate machine; if the node where both your active NDB engine and your MGM node run crashes, your SQL engine has nowhere to go. You can restart or take down your management node for maintenance; as long as none of the other nodes goes down, your applications will continue to work. Of course, you can always configure two management nodes.
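If you do, the cluster's config.ini simply needs one management section per node. A minimal sketch, with placeholder hostnames (mgm-a and mgm-b are assumptions, not part of the setup described here):

[ndb_mgmd]
# first management node (placeholder hostname)
HostName=mgm-a

[ndb_mgmd]
# second management node (placeholder hostname)
HostName=mgm-b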
Adding Heartbeat and Monitoring to the Real DBs
MySQL Cluster makes sure your database stays up, but what if the mysqld you are talking to crashes? What if the physical machine you are talking to crashes? What if your network connection goes down? In these cases, your application, which used to talk to the mysql daemon on DB-A, doesn't know what to do anymore. That's why my team added the mon-plus-Heartbeat combination to our setup.
Heartbeat will take care of monitoring both nodes and making sure that the active node holds the service IP address your applications (for example, your Java applications) connect to.
Setting up Heartbeat on the cluster nodes means configuring three files in /etc/ha.d:
$ cat authkeys
auth 1
1 md5 WorldDominationFast!

$ more haresources
DB-A 172.17.0.103/24/eth0:1

DB-A:/etc/ha.d # more ha.cf
logfacility local0
keepalive 2
deadtime 10
warntime 5
initdead 120
nice_failback off
udpport 694
bcast eth0
node DB-A
node DB-B
Make sure that all three files exist and are identical on both nodes.
Heartbeat does not monitor the state of the application. You might still be able to ping a DB node while its mysqld is completely dysfunctional. That's why we added mon to our setup.
The next change is to mon.cf in /etc/mon/:
hostgroup mysql-node 127.0.0.1
#
# watch definitions
#
watch mysql-node
    service mysql
        interval 10s
        monitor msql-mysql.monitor --mode mysql --database=yourdatabase \
            --password=yourpasswordgoeshere
        period wd {Mon-Sun}
            alert stop-heartbeat.alert
As this config snippet shows, mon uses its msql-mysql.monitor script to check whether it can actually still connect to the database, periodically doing a show tables on the database on the node it is running on. The moment that fails, it will run stop-heartbeat.alert.
When mon decides it can't connect to MySQL anymore, it triggers Heartbeat to move the active IP to the second node, where another MySQL instance is happily running with access to the same data.
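The stop-heartbeat.alert script itself is not shown here; a minimal sketch of what such an alert could look like follows, assuming Heartbeat is controlled through its init script (the path is an assumption, so adjust it for your distribution):

#!/bin/sh
#
# stop-heartbeat.alert (hypothetical sketch)
# Called by mon when the MySQL check fails. Stopping Heartbeat on this node
# lets the peer node take over the service IP defined in haresources.
#
# Assumption: Heartbeat is managed via this init script path.
/etc/init.d/heartbeat stop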
With this setup, we created a highly available MySQL cluster with no single point of failure.
Kris Buytaert is a Linux and open source consultant operating in the Benelux. He currently maintains the openMosix HOWTO.