

MySQL High Availability: DRBD
Configuration and Deployment Guide

A MySQL® White Paper
October 2012

Copyright © 2012, Oracle and/or its affiliates. All rights reserved.

Table of Contents

Introduction
Approaches to High Availability with MySQL
Optimum Use Cases for DRBD
Introduction to MySQL on DRBD/Pacemaker/Corosync/Oracle Linux
Setting up MySQL with DRBD/Pacemaker/Corosync/Oracle Linux
    Target Configuration
    File Systems
    Pre-Requisites
    Setting up and testing your system
        Step 1. Check correct kernel is installed
        Step 2. Ensure that DRBD user-land tools are installed
        Step 3. Ensure cluster software is installed
        Step 4. Configure DRBD & create file system
        Step 5. Install & configure MySQL
        Step 6. Configure Pacemaker/Corosync resources
        Step 7. Stop service when isolated from the network
        Step 8. Ensure the correct daemons are started at system boot
        Step 9. Test the system
    Recovering from Split-Brain
Support for DRBD
    Oracle Linux Premier Support
    MySQL Enterprise Edition
Conclusion
Additional Resources



Introduction

As the world's leading open source database, MySQL is deployed in many of today's most demanding web, cloud, social and mobile applications. Ensuring service continuity is a critical attribute in any system serving these applications. As a result of its popularity, there are many different ways of achieving high availability for MySQL, with some of the most popular shown in Figure 1.

This Guide introduces DRBD (Distributed Replicated Block Device), one of the leading solutions for MySQL HA (High Availability), offering users:

- An end-to-end, integrated stack of mature and proven open source technologies, fully supported by Oracle, including:
    - MySQL Database
    - DRBD kernel module and userland utilities
    - Pacemaker and Corosync cluster messaging and management processes
    - Oracle Linux operating system
- Mirroring, via synchronous replication, to ensure failover between nodes without the risk of losing committed transactions
- Building of HA clusters from commodity hardware, without the requirement for shared storage
- Automatic failover and recovery for service continuity

The paper provides a step-by-step guide to installing, configuring, provisioning and testing the complete MySQL and DRBD stack. By reading this Guide, developers, architects and DBAs will be able to qualify ideal use-cases for DRBD and then quickly deploy new MySQL HA solutions with DRBD.

Approaches to High Availability with MySQL

Designing solutions for database high availability is not a "one size fits all" exercise. Applications have different availability requirements, while organizations have different IT skill sets, infrastructure and technology preferences. To recognize this diversity, there are a number of approaches to delivering HA for MySQL, requiring close consideration by developers and architects. You can learn more about each of these solutions, and a best-practices methodology to guide their selection, from the Guide to MySQL High Availability Solutions.1

Figure 1 MySQL HA Solutions – Covering the Spectrum of Requirements

1 http://www.mysql.com/why-mysql/white-papers/mysql_wp_ha_strategy_guide.php

! Page 4 .mysql.com/support/supportedplatforms/database.99% MySQL Cluster All supported by MySQL Cluster **** NDB (MySQL Cluster) Yes Yes Yes 5 seconds + InnoDB Recovery Time*** Synchronous No.6 + HA Utilities 5 seconds + WSFC* Windows Server 2008 InnoDB Yes Yes N/A – Shared Storage 5 seconds + InnoDB Recovery Time*** N/A – Shared Storage Yes Active / Passive Master + Multiple Slaves 99. with MySQL 5.html *** InnoDB recovery time dependent on cache and database size.9% Synchronous No.Optimum Use Cases for DRBD Figure 2 below provides a feature comparison of the leading HA solutions for MySQL.html Figure 2 Comparing MySQL HA Solutions The following sections of the whitepaper focus on the installation and configuration of the DRBD stack. Oracle and/or its affiliates. distributed across nodes 255 + Multiple Slaves 99. with MySQL 5.95% DRBD Oracle Linux InnoDB Yes.Guarantees of no data loss in the event of a node failover. HA Technology Platform Support Supported Storage Engine Auto IP Failover Auto Database Failover Auto Data Resynchronization Failover Time Replication Mode Shared Storage No.999% * Windows Server 2008R2 Failover Clustering ** http://www. database activity. .6 + HA Utilities Yes. **** http://www.mysql. . distributed across nodes Master & Multiple Slaves 99. Copyright © 2012.99% 1 Second or Less Asynchronous / SemiSynchronous No. As the figure demonstrates. All rights reserved. with Corosync + Pacemaker Yes Oracle VM Template Oracle Linux InnoDB Yes Yes N/A – Shared Storage 5 seconds + InnoDB Recovery Time*** N/A – Shared Storage Yes Active / Passive Master + Multiple Slaves 99.Generic HA solution supporting multiple application types.com/support/supportedplatforms/cluster.Local data storage.Linux-based environment. with Corosync + Pacemaker Yes. in addition to the database. the DRBD stack is best deployed if a user has the following requirements: . of Nodes Availability Design Level MySQL Replication All supported by MySQL Server ** All (InnoDB required for Auto-Failover) No Yes. etc. distributed across nodes Active / Passive Master + Multiple Slaves 99. .

Introduction to MySQL on DRBD/Pacemaker/Corosync/Oracle Linux

Figure 3 illustrates the stack that can be used to deliver a level of High Availability for the MySQL service.

Figure 3 MySQL/DRBD/Pacemaker/Corosync Stack

At the lowest level, 2 hosts are required in order to provide physical redundancy; if using a virtual environment, those 2 hosts should be on different physical machines. It is an important feature that no shared storage is required.

DRBD synchronizes data at the block device (typically a spinning or solid state disk) – transparent to the application, database and even the file system. DRBD requires the use of a journaling file system such as ext3 or ext4. For this solution it acts in an active-standby mode – this means that at any point in time the directories being managed by DRBD are accessible for reads and writes on exactly one of the two hosts and inaccessible (even for reads) on the other. Any changes made on the active host are synchronously replicated to the standby host by DRBD, delivering high availability and avoiding data corruption.

Pacemaker and Corosync combine to provide the clustering layer that sits between the services and the underlying hosts and operating systems. Pacemaker is responsible for starting and stopping services – ensuring that they're running on exactly one host. Corosync provides the underlying messaging infrastructure between the nodes that enables Pacemaker to do its job; it also handles the nodes' membership within the cluster and informs Pacemaker of any changes.

The core Pacemaker process does not have built-in knowledge of the specific services to be managed; instead, agents are used which provide a wrapper for the service-specific actions. For example, in this solution we use agents for Virtual IP Addresses, MySQL and DRBD – these are all existing agents and come packaged with Pacemaker. The essential services managed by Pacemaker in this configuration are DRBD, MySQL and the Virtual IP Address that applications use to connect to the active MySQL service. At any point in time, the services will be active on one host and in standby mode on the other. This white paper will demonstrate how to configure Pacemaker to use these agents to provide a High Availability stack for MySQL.
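If you would like to see which agents are available before configuring anything, the crm shell used later in this paper can list and describe them. A quick sketch, assuming the crm shell is installed alongside Pacemaker (output abbreviated):

[root@host1 ~]# crm ra list ocf heartbeat     # resource agents shipped in the "heartbeat" provider
Filesystem  IPaddr2  mysql  ...
[root@host1 ~]# crm ra meta ocf:heartbeat:mysql     # describe the parameters the MySQL agent accepts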

Setting up MySQL with DRBD/Pacemaker/Corosync/Oracle Linux

Target Configuration

Figure 4 shows the network configuration used in this paper – note that for simplicity a single network connection is used, but for maximum availability in a production environment you should consider redundant network connections.

Figure 4 Network configuration

The two physical hosts are "host1.localdomain" (192.168.5.19) and "host2.localdomain" (192.168.5.16). It is recommended that you don't rely on an external DNS service (as that is an additional point of failure) and so these mappings should be configured on each host in the /etc/hosts file:

127.0.0.1    localhost localhost.localdomain
::1          localhost localhost.localdomain
192.168.5.19 host1 host1.localdomain
192.168.5.16 host2 host2.localdomain

A single Virtual IP (VIP) is shown in the figure (192.168.5.102) and this is the address that the application will connect to when accessing the MySQL database. Pacemaker will be responsible for migrating this between the 2 physical IP addresses.

One of the final steps in configuring Pacemaker is to add network connectivity monitoring in order to have an isolated host stop its MySQL service and so avoid a "split-brain" scenario. This is achieved by having each host ping an external IP address (one that is not part of the cluster) – in this case the network router (192.168.5.1).

File Systems

Figure 5 shows where the MySQL files will be stored. The MySQL binaries as well as the socket (mysql.sock) and process-id (mysql.pid) files are stored in a regular partition, independent on each host (under /var/lib/mysql/). The MySQL Server configuration file (my.cnf) and the database files (data/*) are stored in a DRBD controlled file system that at any point in time is only available on one of the two hosts – this file system is controlled by DRBD and mounted under /var/lib/mysql_drbd/.

Figure 5 Distribution of MySQL file system

Pre-Requisites

2 servers, each with:

- MySQL 5.5 or later
- Oracle Linux 6.2 or later
- Unpartitioned space on the local disks to create a DRBD partition
- Network connectivity (ideally redundant)

It is recommended that you do not rely on DNS to resolve host names and so for the configuration shown in Figure 4 the following host configuration files are created:

/etc/hosts (host1):

127.0.0.1    localhost localhost.localdomain
::1          localhost localhost.localdomain
192.168.5.16 host2 host2.localdomain

/etc/hosts (host2):

127.0.0.1    localhost localhost.localdomain
::1          localhost localhost.localdomain
192.168.5.19 host1 host1.localdomain

Check that the same name is configured for the Network Interface Card on each of the 2 servers and change one if they don't match – in this case the NIC is called eth0 on both hosts. If the NIC names do not match then they can be changed by editing the /etc/udev/rules.d/30-net_persistent_names.rules file and then restarting the server.
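For reference, a persistent-name rule binds a MAC address to an interface name; editing the NAME value and rebooting renames the NIC. The exact file name and syntax vary between releases, and the MAC address below is purely illustrative:

# /etc/udev/rules.d/30-net_persistent_names.rules (illustrative entry)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:01:02:03", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"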

SELINUX can prevent the cluster stack from operating correctly, so at this point edit the /etc/selinux/config file to overwrite enforcing with permissive (on each host) and then restart each of the hosts.

Setting up and testing your system

Step 1. Check correct kernel is installed

First of all, check the version of the kernel you are using:

[root@host1]# uname -r
2.6.39-100.el6uek.x86_64

You need the version to be using Oracle Unbreakable Enterprise Kernel 2.6.39 or later; if that's the case then you can skip to Step 2. The instructions in this paper are based on Oracle's Unbreakable Enterprise Kernel Release 2 for Oracle Linux 6 and so, before going any further, install the latest version on each server:

[root@host1 yum.repos.d]# wget http://public-yum.oracle.com/public-yum-ol6.repo -P /etc/yum.repos.d/
Resolving public-yum.oracle.com... 141.146.44.34
Connecting to public-yum.oracle.com|141.146.44.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1461 (1.4K) [text/plain]
Saving to: "/etc/yum.repos.d/public-yum-ol6.repo"
100%[======================================>] 1,461 --.-K/s in 0s

Within the /etc/yum.repos.d/public-yum-ol6.repo file, enable the ol6_UEK_base repository by setting enabled=1:

[ol6_UEK_base]
name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1

The system can then be updated (this includes bringing the kernel up to UEK2 Release 2):

[root@host1]# yum update

Or, if you want to limit the update to the kernel:

[root@host1]# yum update kernel-uek

Restart the hosts to activate the new kernel:

[root@host1]# shutdown -r now

Step 2. Ensure that DRBD user-land tools are installed

DRBD is part of the Oracle Linux kernel but it may be necessary to install the user-land utilities. First use yum to check whether they are already there (if "Repo" is set to "installed" then they are):

[root@host1]# yum info drbd83-utils
Loaded plugins: refresh-packagekit, security
Error: No matching Packages to list

If they are there then you can jump to Step 3; otherwise you need to make sure that the system is registered with Oracle ULN (http://linux-update.oracle.com) – from the desktop, select "System/Administration/ULN Registration" (as shown in Figure 6) and then follow the steps, or run uln_register if you don't have a desktop environment.

Figure 6 Register server with Oracle ULN

Within the ULN web site (http://linux-update.oracle.com), you need to subscribe to the "HA Utilities for MySQL" channel for each of the two systems (Figure 7).

Figure 7 Subscribe to "HA Utilities for MySQL" channel

At this point, yum should be used to install the package on both hosts:

[root@host1]# yum install drbd83-utils
Loaded plugins: refresh-packagekit, rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package drbd83-utils.x86_64 0:8.3.11-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch     Version         Repository                    Size
================================================================================
Installing:
 drbd83-utils   x86_64   8.3.11-1.el6    ol6_x86_64_mysql-ha-utils    207 k

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 207 k
Installed size: 504 k
Is this ok [y/N]: y
Downloading Packages:
drbd83-utils-8.3.11-1.el6.x86_64.rpm                     | 207 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : drbd83-utils-8.3.11-1.el6.x86_64                          1/1
  Verifying  : drbd83-utils-8.3.11-1.el6.x86_64                          1/1

Installed:
  drbd83-utils.x86_64 0:8.3.11-1.el6

Complete!
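A quick way to double-check that the tools have landed on each host is to query the package directly (the version shown is the one used in this paper):

[root@host1]# rpm -q drbd83-utils
drbd83-utils-8.3.11-1.el6.x86_64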

Step 3. Ensure cluster software is installed

Corosync will likely already be available, but this can be confirmed using yum:

[root@host1]# yum info corosync
Loaded plugins: refresh-packagekit, security
Installed Packages
Name        : corosync
Arch        : x86_64
Version     : 1.4.1
Release     : 4.el6
Size        : 422 k
Repo        : installed
From repo   : HighAvailability
Summary     : The Corosync Cluster Engine and Application Programming Interfaces
URL         : http://ftp.corosync.org
License     : BSD
Description : This package contains the Corosync Cluster Engine Executive,
            : several default APIs and libraries, default configuration
            : files, and an init script.

If the Repo information is there but not set to "installed" then simply run the following on both hosts:

[root@host1]# yum install corosync

Pacemaker may need to be installed – again this can be checked using yum. If it is available in a repository then it is simple to install:

[root@host1]# yum info pacemaker
Loaded plugins: security
Available Packages
Name        : pacemaker
Arch        : x86_64
Version     : 1.1.6
Release     : 3.el6
Size        : 405 k
Repo        : ol6_latest
Summary     : Scalable High-Availability cluster resource manager
URL         : http://www.clusterlabs.org
License     : GPLv2+ and LGPLv2+
Description : Pacemaker is an advanced, scalable High-Availability cluster
            : resource manager for Linux-HA (Heartbeat) and/or Corosync. It
            : supports "n-node" clusters with significant capabilities for
            : managing resources and dependencies. It will run scripts at
            : initialization, when machines go up or down, when related
            : resources fail and can be configured to periodically check
            : resource health.

[root@host1]# yum install pacemaker

Step 4. Configure DRBD & create file system

If your hosts don't already have an empty partition that you plan to use for the DRBD-managed data then that must be created first – but even before that, confirm that you have a disk available which hasn't already been partitioned:

[root@host1]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c7583

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        5222    41430016   8e  Linux LVM

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/mapper/vg_host1-lv_root: 40.3 GB, 40307261440 bytes
255 heads, 63 sectors/track, 4900 cylinders

Disk /dev/mapper/vg_host1-lv_root doesn't contain a valid partition table

Disk /dev/mapper/vg_host1-lv_swap: 2113 MB, 2113929216 bytes
255 heads, 63 sectors/track, 257 cylinders

Disk /dev/mapper/vg_host1-lv_swap doesn't contain a valid partition table

In this case, disk sdb has no partitions and so we can safely create a new one:

[root@host1]# fdisk -cu /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xecef1a6a.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xecef1a6a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

This partition will be used as a resource, managed (and synchronized between hosts) by DRBD; in order for DRBD to be able to do this, a new configuration file (in this case called clusterdb_res.res) must be created in the /etc/drbd.d/ directory. The contents should look like this:

resource clusterdb_res {
  protocol C;
  handlers {
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
  }
  startup {
    degr-wfc-timeout 120;    # 2 minutes.
    outdated-wfc-timeout 2;  # 2 seconds.
  }
  disk {
    on-io-error detach;
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "clusterdb";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on host1.localdomain {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.5.19:7788;
    flexible-meta-disk internal;
  }
  on host2.localdomain {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.5.16:7788;
    meta-disk internal;
  }
}
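Before going further, it is worth letting drbdadm parse the new file; drbdadm dump reads the configuration and echoes it back, failing with an error if the syntax is wrong (a simple sanity check using the drbd83-utils tooling):

[root@host1]# drbdadm dump clusterdb_res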

Obviously, the IP addresses and disk locations should be specific to the hosts that the cluster will be using. In this example the device that DRBD will create will be located at /dev/drbd0 – it is this device that will be swapped back and forth between the hosts by DRBD. This resource configuration file should be copied over to the same location on the second host:

[root@host1 drbd.d]# scp clusterdb_res.res host2:/etc/drbd.d/

Before starting the DRBD daemon, meta data must be created for the new resource (clusterdb_res) on each host:

[root@host1]# drbdadm create-md clusterdb_res
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

[root@host2]# drbdadm create-md clusterdb_res
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

It is now possible to start the DRBD daemon on each host:

[root@host1]# /etc/init.d/drbd start
Starting DRBD resources: [ d(clusterdb_res) ].

[root@host2]# /etc/init.d/drbd start
Starting DRBD resources: [ d(clusterdb_res) ].

At this point the DRBD service is running on both hosts but neither host is the "primary" and so the resource (block device) cannot be accessed on either host; this can be confirmed by querying the status of the service:

[root@host1]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.11 (api:88/proto:86-96)
srcversion: DA5A13F16DE6553FC7CE9B2
m:res            cs         ro                   ds                         p  mounted  fstype
0:clusterdb_res  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

[root@host2]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.11 (api:88/proto:86-96)
srcversion: DA5A13F16DE6553FC7CE9B2
m:res            cs         ro                   ds                         p  mounted  fstype
0:clusterdb_res  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C
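The same information is exposed directly by the kernel module, which can be handy for watching state changes; the output below is illustrative of what you should see at this stage:

[root@host1]# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----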

In order to create the file system (and go on to store useful data in it), one of the hosts must be made primary for the clusterdb_res resource:

[root@host1]# drbdadm -- --overwrite-data-of-peer primary all
[root@host1]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.11 (api:88/proto:86-96)
srcversion: DA5A13F16DE6553FC7CE9B2
m:res            cs          ro                 ds                     p  mounted  fstype
...              sync'ed:    0.4%               (20404/20476)M  finish: 0:33:29
0:clusterdb_res  SyncSource  Primary/Secondary  UpToDate/Inconsistent  C

Note that the status output also shows the progress of the block-level syncing of the device from the new primary (host1) to the secondary (host2). This initial sync can take some time but it should not be necessary to wait for it to complete in order to carry on with Step 4 through Step 8.

Now that the device is available on host1, it is possible to create a file system on it (note that this does not need to be repeated on the second host as DRBD will handle the syncing of the raw disk data):

[root@host1]# mkfs -t ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242455 blocks
262122 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

Note that we do not mount this new file system on either host (though in Step 5 we do so temporarily in order to install MySQL on it) as this is something that will be handled by the clustering software – ensuring that the replicated file system is only mounted on the active server.

Step 5. Install & configure MySQL

The MySQL software should be installed on each of the servers. As previously shown in Figure 5, the data and configuration files will be stored in the file system provided by DRBD (where DRBD will ensure that the data is replicated between the hosts) and so that part of the installation only needs to be done once (on the active host in the cluster), whereas the binaries should be installed on both servers. Note that the exact name and source of the rpms will change over time – simply visit https://www.mysql.com/downloads/ for the latest (or https://edelivery.oracle.com/ for commercial versions):

[root@host1 ~]# yum erase mysql*
[root@host1 ~]# wget http://www.mysql.com/get/Downloads/MySQL-5.5/MySQL-5.5.21-1.el6.x86_64.tar/from/http://mirrors.ukfast.co.uk/sites/ftp.mysql.com/
Resolving mirrors.ukfast.co.uk... 78.109.175.117
Connecting to mirrors.ukfast.co.uk|78.109.175.117|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 147312640 (140M) [application/x-tar]
Saving to: "MySQL-5.5.21-1.el6.x86_64.tar"

[root@host1 ~]# tar xf MySQL-5.5.21-1.el6.x86_64.tar
[root@host1 ~]# rpm -ivh --force MySQL-server-5.5.21-1.el6.x86_64.rpm
[root@host1 ~]# yum install MySQL-client-5.5.21-1.el6.x86_64.rpm
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Examining MySQL-client-5.5.21-1.el6.x86_64.rpm: MySQL-client-5.5.21-1.el6.x86_64
Marking MySQL-client-5.5.21-1.el6.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package MySQL-client.x86_64 0:5.5.21-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch     Version        Repository                         Size
================================================================================
Installing:
 MySQL-client   x86_64   5.5.21-1.el6   /MySQL-client-5.5.21-1.el6.x86_64  63 M

Transaction Summary
================================================================================
Install       1 Package(s)

Total size: 63 M
Installed size: 63 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : MySQL-client-5.5.21-1.el6.x86_64                           1/1

Installed:
  MySQL-client.x86_64 0:5.5.21-1.el6

Complete!

Repeat the above installation for the second server.
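Before moving on, it does no harm to confirm that both packages are present on each host (the versions shown match those used in this paper):

[root@host1 ~]# rpm -q MySQL-server MySQL-client
MySQL-server-5.5.21-1.el6.x86_64
MySQL-client-5.5.21-1.el6.x86_64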

In order for the DRBD file system to be mounted, the /var/lib/mysql_drbd directory should be created on both hosts:

[root@host1 ~]# mkdir /var/lib/mysql_drbd
[root@host1 ~]# chown billy /var/lib/mysql_drbd
[root@host1 ~]# chgrp billy /var/lib/mysql_drbd
[root@host1 ~]# chown billy /var/lib/mysql
[root@host1 ~]# chgrp billy /var/lib/mysql

[root@host2 ~]# mkdir /var/lib/mysql_drbd
[root@host2 ~]# chown billy /var/lib/mysql_drbd
[root@host2 ~]# chgrp billy /var/lib/mysql_drbd
[root@host2 ~]# chown billy /var/lib/mysql
[root@host2 ~]# chgrp billy /var/lib/mysql

On just the one (DRBD active) host, the DRBD file system should be temporarily mounted so that the configuration file can be created and the default data files installed:

[root@host1 ~]# mount /dev/drbd0 /var/lib/mysql_drbd
[root@host1 ~]# mkdir /var/lib/mysql_drbd/data
[root@host1 ~]# cp /usr/share/mysql/my-small.cnf /var/lib/mysql_drbd/my.cnf

Edit the /var/lib/mysql_drbd/my.cnf file and set datadir=/var/lib/mysql_drbd/data in the [mysqld] section. Also confirm that the socket is configured to /var/lib/mysql/mysql.sock and the pid file to /var/lib/mysql/mysql.pid. The default database files can now be populated:

[root@host1 ~]# mysql_install_db --no-defaults --datadir=/var/lib/mysql_drbd/data --user=billy
Installing MySQL system tables... OK
Filling help tables... OK

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER!
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h host1.localdomain password 'new-password'

Alternatively you can run /usr/bin/mysql_secure_installation, which will also give you the option of removing the test databases and anonymous user created by default. This is strongly recommended for production servers.

Adjust the ownership and permissions of the newly created files:

[root@host1 mysql_drbd]# chmod -R uog+rw *
[root@host1 mysql_drbd]# chown billy my.cnf
[root@host1 mysql_drbd]# chmod 644 my.cnf
[root@host1 mysql_drbd]# chmod og-w my.cnf
[root@host1 mysql_drbd]# chown -R billy data

Now that this has been set up, the DRBD file system should be unmounted (and primary control of the DRBD resource surrendered); from this point onwards it will be managed by the clustering software:

[root@host1 ~]# umount /var/lib/mysql_drbd
[root@host1 ~]# drbdadm secondary clusterdb_res

Step 6. Configure Pacemaker/Corosync resources

At this point, the DRBD file system is configured and initialized, and MySQL has been installed with the required files set up on the replicated DRBD file system. Pacemaker and Corosync are installed but they are not yet managing the MySQL/DRBD resources to provide a clustered solution – the next step is to set that up.

Firstly, set up some network-specific parameters from the Linux command line and also in the Corosync configuration file:

[root@host1 ~]# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf

The totem section of the /etc/corosync/corosync.conf file should be updated as follows:

totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.5.0
        mcastaddr: 226.99.1.1
        mcastport: 4000
        ttl: 1
    }
}

The bindnetaddr should be based on the IP addresses being used by the servers and should take the form of XX.YY.ZZ.0. The multi-cast address2 should be unique in your network but the port can be left at 4000.

2 https://en.wikipedia.org/wiki/Multicast_address

Create /etc/corosync/service.d/pcmk:

service {
    # Load the Pacemaker Cluster Resource Manager
    name: pacemaker
    ver: 1
}

To avoid any mismatches, the configuration files can then be copied across to the second host:

[root@host1 ~]# scp /etc/corosync/corosync.conf host2:/etc/corosync/corosync.conf
[root@host1 ~]# scp /etc/corosync/service.d/pcmk host2:/etc/corosync/service.d/pcmk

Corosync can then be started on both of the hosts:

[root@host1 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@host2 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@host1 ~]# /etc/init.d/corosync status
corosync (pid 3023) is running...
[root@host2 ~]# /etc/init.d/corosync status
corosync (pid 3157) is running...

To confirm that there are no problems at this point, check /var/log/messages for errors before starting Pacemaker:

[root@host1 ~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager:                        [  OK  ]
[root@host2 ~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager:                        [  OK  ]
[root@host1 ~]# /etc/init.d/pacemaker status
pacemakerd (pid 3070) is running...
[root@host2 ~]# /etc/init.d/pacemaker status
pacemakerd (pid 3203) is running...
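Corosync itself can also report on the health of its ring; corosync-cfgtool ships with Corosync and prints the local ring status (the node ID below is illustrative):

[root@host1 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 338113728
RING ID 0
        id      = 192.168.5.19
        status  = ring 0 active with no faults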

Again, check /var/log/messages for errors, and also run Pacemaker's cluster resource monitoring command to view the status of the cluster. It's worth running it on both hosts to confirm that they share the same view of the world:

[root@host1 billy]# crm_mon -1
============
Last updated: Mon Feb 27 17:51:10 2012
Last change: Mon Feb 27 17:50:25 2012 via crmd on host1.localdomain
Stack: openais
Current DC: host1.localdomain - partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ host1.localdomain host2.localdomain ]

[root@host2 billy]# crm_mon -1
============
Last updated: Mon Feb 27 17:51:34 2012
Last change: Mon Feb 27 17:50:25 2012 via crmd on host1.localdomain
Stack: openais
Current DC: host1.localdomain - partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ host1.localdomain host2.localdomain ]

Above, crm_mon is run with the -1 option to indicate that it should report once and then return. A recommendation would be to also run it without the option (on both servers) so that you get a continually refreshed view of the state of the cluster – including any managed resources.

Pacemaker's resource management tool crm can then be used to configure the cluster resources. As we are configuring a cluster made up of just 2 hosts, when one host fails (or loses contact with the other) there is no node majority (quorum) left, and so by default the surviving node (or both, if they're still running but isolated from each other) would be shut down by Pacemaker. This isn't the desired behavior as it does not offer High Availability, and so that default should be overridden (we'll later add an extra behavior whereby each node will shut itself down if it cannot ping a 3rd node that is external to the cluster, preventing a split-brain situation):

[root@host1 ~]# crm configure property no-quorum-policy=ignore

Pacemaker uses "resource stickiness" parameters to determine when resources should be migrated between nodes – the absolute values are not important, rather how they compare with the values that will subsequently be configured against specific events. Here we set the stickiness to 100:

[root@host1 ~]# crm configure rsc_defaults resource-stickiness=100

STONITH (Shoot The Other Node In The Head) – otherwise known as fencing – refers to one node trying to kill another in the event that it believes the other has partially failed and should be stopped in order to avoid any risk of a split-brain scenario. We turn this off as this solution will rely on each node shutting itself down in the event that it loses connectivity with the independent host:

[root@host1 ~]# crm configure property stonith-enabled=false

The first resource to configure is DRBD – a primitive (p_drbd_mysql) is created, but before that we stop the DRBD service:

[root@host1 ~]# /etc/init.d/drbd stop
Stopping all DRBD resources: .

[root@host2 billy]# /etc/init.d/drbd stop
Stopping all DRBD resources: .

[root@host1 ~]# crm configure
crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd params drbd_resource="clusterdb_res" op monitor interval="15s"
WARNING: p_drbd_mysql: default timeout 20s for start is smaller than the advised 240
WARNING: p_drbd_mysql: default timeout 20s for stop is smaller than the advised 100
WARNING: p_drbd_mysql: action monitor not advertised in meta-data, it may not be supported by the RA

A master-slave relationship (called ms_drbd_mysql) is then set up for the p_drbd_mysql primitive, and it is configured to only allow a single master:

crm(live)configure# ms ms_drbd_mysql p_drbd_mysql meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

Next a primitive (p_fs_mysql) is created for the file system running on the DRBD device, and this is configured to mount it to the directory (/var/lib/mysql_drbd) where the MySQL service will expect to use it:

crm(live)configure# primitive p_fs_mysql ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/var/lib/mysql_drbd" fstype="ext4"
WARNING: p_fs_mysql: default timeout 20s for start is smaller than the advised 60
WARNING: p_fs_mysql: default timeout 20s for stop is smaller than the advised 60

As shown in Figure 4, the application will connect to MySQL through the Virtual IP Address 192.168.5.102. As a prerequisite check, you should have ensured that both hosts use the same name for their NIC – in this example eth0. Using this information,
the VIP can be created:

crm(live)configure# primitive p_ip_mysql ocf:heartbeat:IPaddr2 params ip="192.168.5.102" cidr_netmask="24" nic="eth0"

Now that the file system and the VIP to be used by MySQL have been defined in the cluster, the MySQL service itself can be configured – the primitive being labeled p_mysql. Note that Pacemaker will provide command-line arguments when running the mysqld process which will override options such as datadir set in the my.cnf file, and so it is important to override Pacemaker's defaults (actually the defaults set by the MySQL agent) by specifying the correct command-line options when defining the primitive here:

crm(live)configure# primitive p_mysql ocf:heartbeat:mysql params binary="/usr/sbin/mysqld" config="/var/lib/mysql_drbd/my.cnf" datadir="/var/lib/mysql_drbd/data" pid="/var/lib/mysql/mysql.pid" socket="/var/lib/mysql/mysql.sock" user="billy" group="billy" additional_parameters="--bind-address=192.168.5.102 --user=billy" op start timeout=120s op stop timeout=120s op monitor interval=20s timeout=30s

Rather than managing the individual resources/primitives required for the MySQL service, it makes sense for Pacemaker to manage them as a group (for example, migrating the VIP to the second host wouldn't allow applications to access the database unless the mysqld process is also started there). To that end, a group resource (g_mysql) is defined:

crm(live)configure# group g_mysql p_fs_mysql p_ip_mysql p_mysql

As the MySQL service (group) has a dependency on the host it is running on being the DRBD master, that relationship is added by defining a co-location and an ordering constraint to ensure that the MySQL group is co-located with the DRBD master and that the DRBD promotion of the host to the master must happen before the MySQL group can be started:

crm(live)configure# colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
crm(live)configure# order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start
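Everything entered so far in the crm shell is still pending; before applying it you can review (and sanity-check) the accumulated configuration from within the same session:

crm(live)configure# show      # print the pending cluster configuration
crm(live)configure# verify    # warn about any obvious configuration problems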

At this point, all of the configuration changes are defined but have not been applied. The commit command applies the changes, and then crm_mon can be used to check that the resources have been correctly defined and are actually active:

crm(live)configure# commit
WARNING: p_drbd_mysql: default timeout 20s for start is smaller than the advised 240
WARNING: p_drbd_mysql: default timeout 20s for stop is smaller than the advised 100
WARNING: p_drbd_mysql: action monitor not advertised in meta-data, it may not be supported by the RA
WARNING: p_fs_mysql: default timeout 20s for start is smaller than the advised 60
WARNING: p_fs_mysql: default timeout 20s for stop is smaller than the advised 60

[root@host1 ~]# crm_mon -1
============
Last updated: Tue Feb 28 10:22:32 2012
Last change: Tue Feb 28 10:20:47 2012 via cibadmin on host1.localdomain
Stack: openais
Current DC: host1.localdomain - partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ host1.localdomain host2.localdomain ]

 Master/Slave Set: ms_drbd_mysql [p_drbd_mysql]
     Masters: [ host1.localdomain ]
     Slaves: [ host2.localdomain ]
 Resource Group: g_mysql
     p_fs_mysql (ocf::heartbeat:Filesystem): Started host1.localdomain
     p_ip_mysql (ocf::heartbeat:IPaddr2): Started host1.localdomain
     p_mysql (ocf::heartbeat:mysql): Started host1.localdomain

Other names may be trademarks of their respective owners. The first step is to create the ping resource/primitive (p_ping).Figure 8 illustrates the various entities that have been created in Pacemaker and the relationships between them.(3). it is necessary to grant privileges to users from remote hosts to access the database. All rights reserved. Oracle and/or its affiliates. Figure 8 Clustered entities Just as with any MySQL installation. on the host where the MySQL Serrver process is currently running. Oracle and/or its affiliates. the host_list contains the list of IP addresses that the clustered host should ping in order to determine if it still has network connectivity. Stop service when isolated from the network In order to prevent a split-brain scenario in the event of network partitioning. the default stickiness for all resources was set to 100). Type 'help.21 MySQL Community Server (GPL) Copyright (c) 2000.* to 'root'@'%'" At this point it is possible to connect to the database (using the configured VIP) and store some data that we can then check is still there after later failing over to host2: [billy@host1 ~]$ mysql -h 192. Step 7. the number of addresses provided in host_list multiplied by the multiplier parameter should exceed the resourcestickiness parameter which was used when creating the cluster resources (in this paper.' or '\h' for help. [root@host1 ~]# crm configure Copyright © 2012. Commands end with . ! Page 21 . or \g. mysql> CREATE DATABASE clusterdb.168. Oracle is a registered trademark of Oracle Corporation and/or its affiliates.(2). Pacemaker can ping independent network resources (such as a network router) and then prevent the host from being the DRBD master in the event that it becomes isolated. execute the following: [billy@host1 ~]$ mysql -u root -e "GRANT ALL ON *. Type '\c' to clear the current input statement. Database changed mysql> CREATE TABLE simples (id INT NOT NULL PRIMARY KEY). Your MySQL connection id is 2 Server version: 5. 2011. All rights reserved.(4).5. mysql> INSERT INTO simples VALUES (1).102 -P3306 -u root Welcome to the MySQL monitor. Note that in order to overcome the ‘stickiness’ of the resource. USE clusterdb.5.

! Page 22 .6-3.localdomain ] Slaves: [ host1.localdomain Clone Set: cl_ping [p_ping] Started: [ host2. crm_mon can be used to confirm that this is running successfully: [root@host1 ~]# crm_mon -1 ============ Last updated: Wed Feb 29 11:27:46 2012 Last change: Wed Feb 29 10:35:23 2012 via crmd on host2. Oracle and/or its affiliates. In this example.localdomain p_ip_mysql (ocf::heartbeat:IPaddr2): Started host2.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558 2 Nodes configured.localdomain .partition with quorum Version: 1.localdomain host1. a clone (cl_ping) is created – this just causes the resource to be run on all hosts in the cluster: crm(live)configure# clone cl_ping p_ping meta interleave="true" Now that there is a ping resource defined for each host.localdomain host2.crm(live)configure# primitive p_ping ocf:pacemaker:ping params name="ping" multiplier="1000" host_list="192.168. use the chkconfig command on each host: [root@host1 [root@host1 [root@host1 [root@host1 [root@host2 [root@host2 [root@host2 [root@host2 ~]# ~]# ~]# ~]# ~]# ~]# ~]# ~]# chkconfig chkconfig chkconfig chkconfig chkconfig chkconfig chkconfig chkconfig drbd off corosync on mysql off pacemaker on drbd off corosync on mysql off pacemaker on Copyright © 2012. the new location constraint (l_drbd_master_on_ping) will control the location of the DRBD master (the Master role of the ms_drbd_mysql resource) by setting the preference score for the host to negative infinity (-inf) if there is no ping service defined on the host or that ping service is unable to successfully ping at least one node (<= 0 or in Pacemaker syntax number:lte 0) crm(live)configure# location l_drbd_master_on_ping ms_drbd_mysql rule $role="Master" inf: not_defined ping or ping number:lte 0 crm(live)configure# commit Again. To this end.localdomain ] Master/Slave Set: ms_drbd_mysql [p_drbd_mysql] Masters: [ host2.localdomain ] Resource Group: g_mysql p_fs_mysql (ocf::heartbeat:Filesystem): Started host2. a reliable MySQL service is in place but it is also important to check that the correct cluster services are started automatically as part of the servers’ system startup. Pacemaker needs telling how to handle the results of the pings.1.5. Ensure the correct daemons are started at system boot At this point.localdomain ] Step 8.localdomain p_mysql (ocf::heartbeat:mysql): Started host2. All rights reserved.1" op monitor interval="15s" timeout="60s" start timeout="60s" As both hosts in the cluster should be running ping to check on their connectivity. ============ Online: [ host1. 2 expected votes 7 Resources configured. It is necessary for the Linux startup to start the Corosync and Pacemaker services but not DRBD or MySQL as those services will be started on the correct server by Pacemake.localdomain Stack: openais Current DC: host1.

1. All rights reserved. Test the system The main test is to ensure that service can be transferred from the active host to the standby. The systems administrator may also request Pacemaker to migrate the service from a healthy master to what was the standby host.localdomain Stack: openais Current DC: host1.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558 2 Nodes configured. ! Page 23 .168.partition with quorum Version: 1.localdomain .localdomain . For this test.localdomain p_ip_mysql (ocf::heartbeat:IPaddr2): Started host1.localdomain ] [root@host1 ~]# mysql -h 192.localdomain ] Master/Slave Set: ms_drbd_mysql [p_drbd_mysql] Masters: [ host1.localdomain ] Slaves: [ host2.localdomain You should then check that Pacemaker believes that the resources have been migrated and most importantly that you can still access the database contents through the VIP: [root@host1 ~]# crm_mon -1 ============ Last updated: Thu Mar 1 10:01:59 2012 Last change: Thu Mar 1 10:01:35 2012 via crm_resource on host1.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558 2 Nodes configured.partition with quorum Version: 1. Oracle and/or its affiliates.Step 9.localdomain host1. ============ Copyright © 2012. ============ Online: [ host1.5.102 -P3306 -u root -e 'SELECT * FROM clusterdb.6-3.localdomain Stack: openais Current DC: host1. There are several unplanned scenarios that could trigger this including the complete failure of the active host or the loss of its network connectivity.localdomain p_mysql (ocf::heartbeat:mysql): Started host1.localdomain ] Resource Group: g_mysql p_fs_mysql (ocf::heartbeat:Filesystem): Started host1.simples.1.localdomain host2. first confirm which is the active host and that the sample data is still available in the database: [root@host1 ~]# crm_mon -1 ============ Last updated: Thu Mar 1 09:59:54 2012 Last change: Thu Mar 1 09:57:57 2012 via crmd on host1.localdomain Clone Set: cl_ping [p_ping] Started: [ host2. 2 expected votes 7 Resources configured. In the case of such failures crm_mon should report that the service has transitioned to the healthy host and an application should be able to reconnect to the MySQL database using the VIP.6-3.' +----+ | id | +----+ | 1 | | 2 | | 3 | | 4 | +----+ The cluster management tool can then be used to request the MySQL group g_mysql (and implicitly any colocated resources such as the Master in the ms_drbd_mysql resource): [root@host1 ~]# crm resource migrate g_mysql host2. 2 expected votes 7 Resources configured.

Indefinite sustaining support.Comprehensive legal indemnification In addition. DRBD. ! Page 24 .5.localdomain ] [root@host1 ~]# mysql -h 192. All rights reserved.html Copyright © 2012. for the steps below. it is assumed that host1 has the correct data (simply switch the hosts if the opposite is true): [root@host2 ~]# drbdadm secondary clusterdb_res [root@host2 ~]# drbdadm -. You can confirm that split-brain is the cause of the disconnection by checking the errors in /var/log/messages. Should this happen. Oracle and/or its affiliates.localdomain Clone Set: cl_ping [p_ping] Started: [ host2. .localdomain ] Slaves: [ host1. 3 4 http://www.168.localdomain p_ip_mysql (ocf::heartbeat:IPaddr2): Started host2. Oracle delivers highest quality.com/products/enterprise/ http://www. .oracle.localdomain p_mysql (ocf::heartbeat:mysql): Started host2. Oracle Linux Premier Support Oracle Linux leverages Oracle’s world-class support infrastructure to offer 24/7 Linux support in 145 countries worldwide. low-cost support for Linux. whether issues relate to the operating system. and commercial support is available as part of MySQL Enterprise Edition3 and Oracle Linux Premier Support4.localdomain host2.localdomain ] Resource Group: g_mysql p_fs_mysql (ocf::heartbeat:Filesystem): Started host2. including: .Access to the Oracle web-based customer support system to log Service Requests online or via the phone.102 -P3306 -u root -e 'SELECT * FROM clusterdb. enterprise-class. you need to identify which of the two hosts has the correct data and then have DRBD resynchronize the data.mysql.localdomain host1. .com/us/technologies/linux/OracleLinuxSupport/index.d/drbd status). providing a single point of contact for the entire stack. In the event that this happens DRBD will break the connection (you can confirm the status of the DRBD relationship by running /etc/init.d/drbd status.Ksplice for zero downtime updates.simples. Support for DRBD The complete DRBD stack for MySQL has been certified by Oracle. clustering software or MySQL.Online: [ host1.localdomain ] Master/Slave Set: ms_drbd_mysql [p_drbd_mysql] Masters: [ host2.--discard-my-data connect clusterdb_res [root@host1 ~]# drbdadm connect clusterdb_res You can then check on the state of the resynchronization using /etc/init. users get access to the Unbreakable Linux Network5 which provides on-going updates and patches to the components of DRBD via the “HA Utilities for MySQL” channel. .Backport of fixes.' +----+ | id | +----+ | 1 | | 2 | | 3 | | 4 | +----+ Recovering from Split-Brain The components of this stack are designed to cope with component failures but there may be cases where a sequence of multiple failures could result in DRBD not being confident that the data on the two hosts is consistent.

Note that these components can be downloaded from https://edelivery. and optimize performance. modeling. ! Page 25 . and administration tools so organizations can achieve the highest levels of performance. security and uptime. It’s like having a “virtual DBA” assistant at your side to recommend best practices and eliminate security vulnerabilities. Backed by support for the entire stack – from Operating System and DRBD to the clustering processes and MySQL itself. Oracle Premier Support for MySQL – MySQL Enterprise Edition provides 24x7x365 access to Oracle’s MySQL Support team. In conjunction with the MySQL binlog. and performing backups of subsets of InnoDB tables. All rights reserved. 5 https://linux. and data architects to design. MySQL Enterprise Backup supports creating compressed backup files. development. a flexible SQL editor. MySQL Enterprise Monitor and Query Analyzer – The MySQL Enterprise Monitor provides at-a-glance views of the health of your MySQL databases. with direct access to the MySQL development team. Rapid diagnosis and solution to complex issues Unlimited incidents Emergency hot fix builds Access to Oracle’s MySQL Knowledge Base Consultative support services Conclusion With synchronous replication and support for distributed storage. As a result.com/ where they can be evaluated for 30 days. DRBD represents one of the most popular HA solutions for MySQL.com/pls/apex/f?p=101:3 Copyright © 2012.oracle. which is staffed by seasoned database experts ready to help with the most complex technical issues. users can quickly deploy new services based on open source technology. and comprehensive administrative tools. with the backing of 24 x 7 global support from Oracle. MySQL Workbench – MySQL Workbench is a unified visual tool that enables developers. MySQL Workbench provides advanced data modeling. monitoring. users can perform point in time recovery. DBAs.MySQL Enterprise Edition MySQL Enterprise Edition delivers the most comprehensive set of MySQL production. improve replication. backup. Customers of MySQL Enterprise Edition have access to the following offerings that complement the DRBD stack in delivering high availability via both technology and operational processes: MySQL Database – The world most popular open source database for delivering high performance and scalable web based applications as well as custom enterprise and embedded applications. You get a consistent backup copy of your database to recover your data to a precise point in time. MySQL Enterprise Backup – MySQL Enterprise Backup performs online “Hot” backups of your MySQL databases. helping to reduce storage costs. DBAs and system administrators can manage more servers in less time and helps developers and DBAs improve application performance by monitoring queries and accurately pinpointing SQL code that is causing a slow down. Oracle’s Premier support provides you with: 24x7x365 phone and online support. Oracle and/or its affiliates. This Guide has been designed to enable you to get started today. develop and administer MySQL servers. Compression typically reduces backup size up to 90% when compared with the size of actual database files. It continuously monitors your MySQL servers and alerts you to potential problems before they impact your system.oracle. In addition.

! Page 26 .com/why-mysql/white-papers/mysql_wp_ha_strategy_guide.An Overview: http://www.com/us/technologies/027615.html Unbreakable Linux Network .com/why-mysql/white-papers/mysql_wp_enterprise_ready.mysql.oracle.oracle.com/us/technologies/linux/index.php Oracle Linux: http://www.php Copyright © 2012.pdf MySQL Enterprise Edition Product Guide: http://www.Additional Resources Guide to MySQL High Availability Solutions: http://www. All rights reserved. Oracle and/or its affiliates.mysql.