
How To Set Up A Load-Balanced MySQL Cluster
Version 1.0
Author: Falko Timme <ft [at] falkotimme [dot] com>
Last edited 03/27/2006
This tutorial shows how to configure a MySQL 5 cluster with three nodes: two storage nodes and one
management node. The cluster is fronted by a high-availability load balancer that itself consists of two
nodes and uses the Ultra Monkey package, which provides heartbeat (for checking whether the other node
is still alive) and ldirectord (to distribute the requests across the nodes of the MySQL cluster).
In this document I use Debian Sarge for all nodes. Therefore the setup might differ a bit for other
distributions. The MySQL version I use in this setup is 5.0.19. If you do not want to use MySQL 5, you
can use MySQL 4.1 as well, although I haven't tested it.
This howto is meant as a practical guide; it does not cover the theoretical backgrounds. They are treated in
a lot of other documents on the web.
This document comes without warranty of any kind! I want to say that this is not the only way of setting
up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any
guarantee that this will work for you!

1 My Servers
I use the following Debian servers that are all in the same network (192.168.0.x in this example):    

sql1.example.com: 192.168.0.101 MySQL cluster node 1
sql2.example.com: 192.168.0.102 MySQL cluster node 2
loadb1.example.com: 192.168.0.103 Load Balancer 1 / MySQL cluster management server
loadb2.example.com: 192.168.0.104 Load Balancer 2

In addition to that we need a virtual IP address: 192.168.0.105. It will be assigned to the MySQL cluster
by the load balancer so that applications have a single IP address through which to access the cluster.
Although we want to have two MySQL cluster nodes in our MySQL cluster, we still need a third node, the
MySQL cluster management server, mainly for one reason: if one of the two MySQL cluster nodes fails
and the management server is not running, then the data on the two cluster nodes will become inconsistent
("split brain"). We also need it for configuring the MySQL cluster.
So normally we would need five machines for our setup:
2 MySQL cluster nodes + 1 cluster management server + 2 Load Balancers = 5

As the MySQL cluster management server does not use many resources, and the system would just sit
there doing nothing, we can put our first load balancer on the same machine, which saves us one machine,
so we end up with four machines.

2 Set Up The MySQL Cluster Management Server
First we have to download MySQL 5.0.19 (the max version!) and install the cluster management server
(ndb_mgmd) and the cluster management client (ndb_mgm - it can be used to monitor what's going on in
the cluster). The following steps are carried out on loadb1.example.com (192.168.0.103):

loadb1.example.com:

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-\
glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
cd mysql-max-5.0.19-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm

Next, we must create the cluster configuration file, /var/lib/mysql-cluster/config.ini:

loadb1.example.com:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini

[NDBD DEFAULT]
NoOfReplicas=2

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.0.103

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node
HostName=192.168.0.102
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]

Please replace the IP addresses in the file appropriately.

Then we start the cluster management server:

loadb1.example.com:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini

It makes sense to automatically start the management server at system boot time, so we create a very
simple init script and the appropriate startup links:

loadb1.example.com:

echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults
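
As a quick sanity check (this extra step is not part of the original howto), you can verify that ndb_mgmd
is actually running and listening on its default port 1186; the exact output will vary on your system:

loadb1.example.com:

ps aux | grep ndb_mgmd | grep -v grep
netstat -tap | grep 1186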

3 Set Up The MySQL Cluster Nodes (Storage Nodes)

Now we install mysql-max-5.0.19 on both sql1.example.com and sql2.example.com:

sql1.example.com / sql2.example.com:

groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-\
glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
ln -s mysql-max-5.0.19-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
update-rc.d mysql.server defaults
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin

Then we create the MySQL configuration file /etc/my.cnf on both nodes:

sql1.example.com / sql2.example.com:

vi /etc/my.cnf

[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.0.103

[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.0.103

Make sure you fill in the correct IP address of the MySQL cluster management server.

Next we create the data directories and start the MySQL server on both cluster nodes:

sql1.example.com / sql2.example.com:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start

(Please note: we have to run ndbd --initial only when we start MySQL for the first time, and if
/var/lib/mysql-cluster/config.ini on loadb1.example.com changes.)

Now is a good time to set a password for the MySQL root user:

sql1.example.com / sql2.example.com:

mysqladmin -u root password yourrootsqlpassword

We want to start the cluster nodes at boot time, so we create an ndbd init script and the appropriate
system startup links:

sql1.example.com / sql2.example.com:

echo 'ndbd' > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
update-rc.d ndbd defaults
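
If you want to double-check that these MySQL binaries were built with cluster support (again just a hedged
extra check, not part of the original howto), you can list the available storage engines; NDBCLUSTER should
be reported as supported:

sql1.example.com / sql2.example.com:

mysql -u root -p -e "SHOW ENGINES;"

Look for an ndbcluster line with Support set to YES.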

4 Test The MySQL Cluster

Our MySQL cluster configuration is already finished, now it's time to test it. On the cluster management
server (loadb1.example.com), run the cluster management client ndb_mgm to check if the cluster nodes are
connected:

loadb1.example.com:

ndb_mgm

You should see this:

-- NDB Cluster -- Management Client --
ndb_mgm>

Now type show; at the command prompt:

show;

The output should be like this:

ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.101  (Version: 5.0.19, Nodegroup: 0, Master)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.103  (Version: 5.0.19)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.101  (Version: 5.0.19)
id=5    @192.168.0.102  (Version: 5.0.19)

ndb_mgm>

If you see that your nodes are connected, then everything's ok! Type quit; to leave the ndb_mgm client
console.

Now we create a test database with a test table and some data on sql1.example.com:

sql1.example.com:

mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable () VALUES (1);
SELECT * FROM testtable;
quit;

(Have a look at the CREATE statement: We must use ENGINE=NDBCLUSTER for all database tables that we want
to get clustered! If you use another engine, then clustering will not work!)

The result of the SELECT statement should be:

mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)
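
As noted above, only tables created with ENGINE=NDBCLUSTER are clustered. If you already have tables that
were created with another engine (e.g. MyISAM), they can usually be moved into the cluster with ALTER
TABLE - a minimal, hedged sketch in which the table name mytable is purely hypothetical:

sql1.example.com:

mysql -u root -p
USE mysqlclustertest;
ALTER TABLE mytable ENGINE=NDBCLUSTER;
quit;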

Now we create the same database on sql2.example.com (yes, we still have to create it, but afterwards
testtable and its data should be replicated to sql2.example.com because testtable uses
ENGINE=NDBCLUSTER):

sql2.example.com:

mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
SELECT * FROM testtable;

The SELECT statement should deliver you the same result as before on sql1.example.com:

mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.04 sec)

So the data was replicated from sql1.example.com to sql2.example.com. Now we insert another row into
testtable:

sql2.example.com:

INSERT INTO testtable () VALUES (2);
quit;

Now let's go back to sql1.example.com and check if we see the new row there:

sql1.example.com:

mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;

You should see something like this:

mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.05 sec)

So both MySQL cluster nodes always have the same data!

Now let's see what happens if we stop node 1 (sql1.example.com). Run:

sql1.example.com:

killall ndbd

and check with

ps aux | grep ndbd | grep -iv grep

that all ndbd processes have terminated. If you still see ndbd processes, run another killall ndbd until
all ndbd processes are gone.

Now let's check the cluster status on our management server (loadb1.example.com):

loadb1.example.com:

ndb_mgm

On the ndb_mgm console, issue show; and you should see this:

ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.0.101)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.103  (Version: 5.0.19)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.101  (Version: 5.0.19)
id=5    @192.168.0.102  (Version: 5.0.19)

ndb_mgm>

You see, sql1.example.com is not connected anymore.

Type quit; to leave the ndb_mgm console.

Let's check sql2.example.com:

sql2.example.com:

mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;

The result of the SELECT query should still be:

mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.17 sec)

Ok, all tests went fine, so let's start our sql1.example.com node again:

sql1.example.com:

ndbd

5 How To Restart The Cluster

Now let's assume you want to restart the MySQL cluster, for example because you have changed
/var/lib/mysql-cluster/config.ini on loadb1.example.com or for some other reason. To do this, you use the
ndb_mgm cluster management client on loadb1.example.com:

loadb1.example.com:

ndb_mgm

On the ndb_mgm console, you type shutdown; and you will then see something like this:

ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>

This means that the cluster nodes sql1.example.com and sql2.example.com and also the cluster management
server have shut down.

Run quit; to leave the ndb_mgm console.

To start the cluster management server, do this on loadb1.example.com:

loadb1.example.com:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini

and on sql1.example.com and sql2.example.com you run

sql1.example.com / sql2.example.com:

ndbd

or, if you have changed /var/lib/mysql-cluster/config.ini on loadb1.example.com:

ndbd --initial

Afterwards, you can check on loadb1.example.com if the cluster has restarted:

loadb1.example.com:

ndb_mgm

On the ndb_mgm console, type show; to see the current status of the cluster. It might take a few seconds
after a restart until all nodes are reported as connected.

Type quit; to leave the ndb_mgm console.

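If you prefer to script such checks and restarts instead of typing them into the interactive console,
ndb_mgm should also accept a single command on the command line (via -e / --execute; please verify with
ndb_mgm --help on your version) - a hedged sketch:

loadb1.example.com:

# print the cluster status non-interactively
ndb_mgm -e show
# shut down the storage nodes and the management server, like the interactive shutdown command
ndb_mgm -e shutdown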

6 Configure The Load Balancers

Our MySQL cluster is finished now, and you could start using it right away. However, we don't have a
single IP address that we can use to access the cluster, which means you must configure your applications
in a way that a part of them uses MySQL cluster node 1 (sql1.example.com), and the rest uses the other
node (sql2.example.com). Of course, all your applications could just use one node, but what's the point
then in having a cluster if you do not split up the load between the cluster nodes? Another problem is:
what happens if one of the cluster nodes fails? Then the applications that use this cluster node cannot
work anymore.

The solution is to have a load balancer in front of the MySQL cluster which (as its name suggests)
balances the load between the MySQL cluster nodes. The load balancer configures a virtual IP address that
is shared between the cluster nodes, and all your applications use this virtual IP address to access the
cluster. If one of the nodes fails, your applications will still work, because the load balancer redirects
the requests to the working node.

Now in this scenario the load balancer becomes the bottleneck. What happens if the load balancer fails?
Therefore we will configure two load balancers (loadb1.example.com and loadb2.example.com) in an
active/passive setup, which means we have one active load balancer, and the other one is a hot-standby
that becomes active if the active one fails. Both load balancers use heartbeat to check if the other load
balancer is still alive, and both load balancers also use ldirectord, the actual load balancer that splits
up the load onto the cluster nodes. heartbeat and ldirectord are provided by the Ultra Monkey package that
we will install.

It is important that loadb1.example.com and loadb2.example.com have support for IPVS (IP Virtual Server)
in their kernels. IPVS implements transport-layer load balancing inside the Linux kernel.

6.1 Install Ultra Monkey

Ok, let's start: first we enable IPVS on loadb1.example.com and loadb2.example.com:

loadb1.example.com / loadb2.example.com:

modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
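
To verify that the IPVS modules were actually loaded (a quick hedged check, not part of the original
steps):

loadb1.example.com / loadb2.example.com:

lsmod | grep ip_vs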

In order to load the IPVS kernel modules at boot time, we list the modules in /etc/modules:

loadb1.example.com / loadb2.example.com:

vi /etc/modules

ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr

Now we edit /etc/apt/sources.list and add the Ultra Monkey repositories (don't remove the other
repositories), and then we install Ultra Monkey:

loadb1.example.com / loadb2.example.com:

vi /etc/apt/sources.list

deb http://www.ultramonkey.org/download/3 sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main

apt-get update
apt-get install ultramonkey libdbi-perl libdbd-mysql-perl libmysqlclient14-dev

Now Ultra Monkey is being installed. If you see this warning:

libsensors3 not functional
It appears that your kernel is not compiled with sensors support. As a
result, libsensors3 will not be functional on your system.
If you want to enable it, have a look at "I2C Hardware Sensors Chip
support" in your kernel configuration.

you can ignore it.

Answer the following questions:

Do you want to automatically load IPVS rules on boot?
<-- No

Select a daemon method.
<-- none

The libdbd-mysql-perl package we've just installed does not work with MySQL 5 (we use MySQL 5 on our
MySQL cluster...), so we install the newest DBD::mysql Perl package:

loadb1.example.com / loadb2.example.com:

cd /tmp
wget http://search.cpan.org/CPAN/authors/id/C/CA/CAPTTOFU/DBD-mysql-3.0002.tar.gz
tar xvfz DBD-mysql-3.0002.tar.gz
cd DBD-mysql-3.0002
perl Makefile.PL
make
make install

We must enable packet forwarding:

loadb1.example.com / loadb2.example.com:

vi /etc/sysctl.conf

# Enables packet forwarding
net.ipv4.ip_forward = 1

sysctl -p

6.2 Configure heartbeat

Next we configure heartbeat by creating three files (all three files must be identical on
loadb1.example.com and loadb2.example.com):

loadb1.example.com / loadb2.example.com:

vi /etc/ha.d/ha.cf

logfacility        local0
bcast        eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node        loadb1
node        loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster

Please note: you must list the node names (in this case loadb1 and loadb2) as shown by

uname -n

Other than that, you don't have to change anything in the file.

vi /etc/ha.d/haresources

loadb1 \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
        IPaddr2::192.168.0.105/24/eth0/192.168.0.255

You must list one of the load balancer node names (here: loadb1) and list the virtual IP address
(192.168.0.105) together with the correct netmask (24) and broadcast address (192.168.0.255). If you are
unsure about the correct settings, http://www.subnetmask.info/ might help you.

vi /etc/ha.d/authkeys

auth 3
3 md5 somerandomstring

somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate
against each other. Use your own string here. You have the choice between three authentication mechanisms.
I use md5 as it is the most secure one.

/etc/ha.d/authkeys should be readable by root only, therefore we do this:

loadb1.example.com / loadb2.example.com:

chmod 600 /etc/ha.d/authkeys

6.3 Configure ldirectord

Now we create the configuration file for ldirectord, the load balancer:

loadb1.example.com / loadb2.example.com:

vi /etc/ha.d/ldirectord.cf

# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual = 192.168.0.105:3306
        service = mysql
        real = 192.168.0.101:3306 gate
        real = 192.168.0.102:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr

Please fill in the correct virtual IP address (192.168.0.105) and the correct IP addresses of your MySQL
cluster nodes (192.168.0.101 and 192.168.0.102). 3306 is the port that MySQL runs on by default. We also
specify a MySQL user (ldirector) and password (ldirectorpassword), a database (ldirectordb) and an SQL
query. ldirectord uses this information to make test requests to the MySQL cluster nodes to check if they
are still available. We are going to create the ldirector database with the ldirector user in the next
step.

Now we create the necessary system startup links for heartbeat and remove those of ldirectord (because
ldirectord will be started by heartbeat):

loadb1.example.com / loadb2.example.com:

update-rc.d -f heartbeat remove
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove

6.4 Create A Database Called ldirector

Next we create the ldirector database on our MySQL cluster nodes sql1.example.com and sql2.example.com.
This database will be used by our load balancers to check the availability of the MySQL cluster nodes.

sql1.example.com:

mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck () VALUES (1);
quit;

sql2.example.com:

mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
quit;

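Before moving on, you can imitate by hand the check that ldirectord will later perform (a hedged extra
verification, not part of the original howto; it assumes the mysql command line client is available on the
machine you run it from, and the query mirrors the request line in ldirectord.cf):

mysql -h 192.168.0.101 -u ldirector -pldirectorpassword -D ldirectordb -e "SELECT * FROM connectioncheck;"
mysql -h 192.168.0.102 -u ldirector -pldirectorpassword -D ldirectordb -e "SELECT * FROM connectioncheck;"

Both commands should return the single row with the value 1 that we just inserted.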

6.5 Prepare The MySQL Cluster Nodes For Load Balancing

Finally we must configure our MySQL cluster nodes sql1.example.com and sql2.example.com to accept requests
on the virtual IP address 192.168.0.105.

sql1.example.com / sql2.example.com:

apt-get install iproute

Add the following to /etc/sysctl.conf:

sql1.example.com / sql2.example.com:

vi /etc/sysctl.conf

# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1

# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1

# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0 Always use an address that
# is configured on eth0 as the source address of the ARP request.  If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address.  This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2

sysctl -p

Add this section for the virtual IP address to /etc/network/interfaces:

sql1.example.com / sql2.example.com:

vi /etc/network/interfaces

auto lo:0
iface lo:0 inet static
  address 192.168.0.105
  netmask 255.255.255.255
  pre-up sysctl -p > /dev/null

ifup lo:0
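
You can quickly verify that the virtual IP address is now configured on the loopback interface of each
cluster node (a hedged check, not in the original steps):

sql1.example.com / sql2.example.com:

ip addr sh lo

The output should contain a line showing 192.168.0.105 for lo:0.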

7 Start The Load Balancer And Do Some Testing

Now we can start our two load balancers for the first time:

loadb1.example.com / loadb2.example.com:

/etc/init.d/ldirectord stop
/etc/init.d/heartbeat start

If you don't see errors, you should now reboot both load balancers:

loadb1.example.com / loadb2.example.com:

shutdown -r now

After the reboot we can check if both load balancers work as expected:

loadb1.example.com / loadb2.example.com:

ip addr sh eth0

The active load balancer should list the virtual IP address (192.168.0.105):

2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:45:fc:f8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0

The hot-standby should show this:

2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:16:c1:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0

loadb1.example.com / loadb2.example.com:

ldirectord ldirectord.cf status

Output on the active load balancer:

ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1603

Output on the hot-standby:

ldirectord is stopped for /etc/ha.d/ldirectord.cf

loadb1.example.com / loadb2.example.com:

ipvsadm -L -n

Output on the active load balancer:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.105:3306 wrr
  -> 192.168.0.101:3306           Route   1      0          0
  -> 192.168.0.102:3306           Route   1      0          0

Output on the hot-standby:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

loadb1.example.com / loadb2.example.com:

/etc/ha.d/resource.d/LVSSyncDaemonSwap master status

Output on the active load balancer:

master running
(ipvs_syncmaster pid: 1766)

Output on the hot-standby:

master stopped
(ipvs_syncbackup pid: 1440)

If your tests went fine, you can now try to access the MySQL database from a totally different server in
the same network (192.168.0.x) using the virtual IP address 192.168.0.105:

mysql -h 192.168.0.105 -u ldirector -p

(Please note: your MySQL client must at least be of version 4.1; older versions do not work with MySQL 5.)

You can now switch off one of the MySQL cluster nodes for test purposes; you should then still be able to
connect to the MySQL database.

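To watch the failover behaviour while you switch a cluster node off, you can run a simple loop from
another machine in the network that keeps querying the cluster through the virtual IP address (a hedged
sketch, not part of the original howto; it reuses the ldirector credentials created earlier):

# keep probing the cluster through the virtual IP; stop with Ctrl-C
while true; do
  mysql -h 192.168.0.105 -u ldirector -pldirectorpassword -D ldirectordb \
    -e "SELECT * FROM connectioncheck;" > /dev/null 2>&1 \
    && echo "cluster reachable via 192.168.0.105" \
    || echo "connection FAILED"
  sleep 1
done

Ideally the loop keeps reporting that the cluster is reachable even while one of the MySQL cluster nodes
is down.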

8 Annotations

There are some important things to keep in mind when running a MySQL cluster:

- All data is stored in RAM! Therefore you need lots of RAM on your cluster nodes. The formula for how
  much RAM you need on each node goes like this:

  (SizeofDatabase × NumberOfReplicas × 1.1) / NumberOfDataNodes

  So if you have a database that is 1 GB of size, you would need 1.1 GB RAM on each node!

- The cluster management node listens on port 1186, and anyone can connect. So that's definitely not
  secure, and therefore you should run your cluster in an isolated private network!

It's a good idea to have a look at the MySQL Cluster FAQ:
http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html and also at the MySQL Cluster documentation:
http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html

Links

MySQL: http://www.mysql.com/
MySQL Cluster documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html
MySQL Cluster FAQ: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html
Ultra Monkey: http://www.ultramonkey.org/
The High-Availability Linux Project: http://www.linux-ha.org/