

Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris


2008/06/27

I've been wanting to have a play around with both the Solaris Cluster Express work that's coming out of OpenSolaris and also VirtualBox, a virtualization platform that Sun recently acquired and has moved under their xVM banner. So, wanting to kill two birds with one stone, I thought I'd try setting up a VirtualBox Solaris Express cluster. Here's a run through on how to get the same thing going if you'd like to try.

First, an overview of what I set out to achieve:

A clustered MySQL server in a zone
A clustered Apache zone providing a phpMyAdmin front end for MySQL
On a two node Solaris Express cluster
With nodes running in VirtualBox virtual machines on a single physical machine
Shared storage provided over iSCSI from the host machine

VirtualBox does not support sharing the same (VDI) disk image between multiple hosts unless the image is read-only. As such VirtualBox cannot natively provide storage appropriate for a clustered environment, so we're going to present storage over iSCSI to the virtual nodes.

At the time of writing Solaris Cluster Express 6/08 has just been released; this has been updated to run on Solaris Express Community Edition (SXCE) build 86, so we'll be using that for the guest OS in the virtual machines.

Initially I was hoping to use OS X as the host platform for VirtualBox, however the networking support is far from complete in the OS X release. Specifically, it does not support internal networking, and that's needed for the cluster interconnects. So instead I chose OpenSolaris 2008.05 for the host; this has the advantage that the host can be used to provide the iSCSI services along with a Quorum Server, and it has the required networking flexibility. If you don't have an OpenSolaris host, then any recent Solaris 10 or Solaris Express server on your network should work fine. Ensure that its ZFS supports the shareiscsi option and that you can share two 8GB volumes from it. You will also need to install a Quorum Server on it. If you don't have a suitable host on your network you could create a third VirtualBox machine to provide the required services.

To build this environment the following downloads were used:

Solaris Express Community Edition, build 86
VirtualBox 1.6.2
Solaris Cluster Express 6/08
OpenSolaris 2008.05
phpMyAdmin 2.11.6

You'll need a machine with at least 2GB of RAM and around 50GB of hard disk space to follow this guide. If you have only 2GB of RAM you'll spend a fair amount of time swapping and may experience the occasional cluster node panic, so more is recommended if you can get it.

Throughout this guide the prompt for each command indicates the server it should be typed on:
opensolhost#    The OpenSolaris host
cxnode1#        The first Cluster Express node
cxnode2#        The second Cluster Express node
bothnodes#      Repeat on both the first and second cluster nodes
apache-zone#    Within the Apache zone
mysql-zone#     Within the MySQL zone

The following IP addresses were used; modify these to match your local network environment:

192.168.0.111   The IPMP failover IP for cxnode1
192.168.0.112   The IPMP failover IP for cxnode2
192.168.0.118   The IP for the Apache zone
192.168.0.119   The IP for the MySQL zone
192.168.0.121   The first IPMP test address on cxnode1
192.168.0.122   The first IPMP test address on cxnode2
192.168.0.131   The second IPMP test address on cxnode1
192.168.0.132   The second IPMP test address on cxnode2

Sun provides extensive documentation for Sun Cluster 3.2, which for these purposes should match Cluster Express closely enough. You can find a good starting point at the Sun Cluster documentation centre page.

Installing and configuring VirtualBox


To begin, download VirtualBox from the links above. The steps below reference the 64 bit version but they should be the same for 32 bit users. Download the package and unzip/untar it; inside you'll find two packages, and they both need installing, ie:
opensolhost# pkgadd -d VirtualBoxKern-1.6.2-SunOS-r31466.pkg


opensolhost# pkgadd -d VirtualBox-1.6.2-SunOS-amd64-r31466.pkg

Before the virtual machines can be created some network configuration on the host is required. To enable the front interfaces of the cluster nodes to connect to the local network, some virtual interfaces on the host are required. These will have the desired effect of bridging the local VLAN with the virtual machines. The instructions in the VirtualBox documentation say to use /opt/VirtualBox/setup_vnic.sh to create these interfaces, however I had problems getting that to work. Not only does it need some tweaks to work with OpenSolaris 2008.05, I found I couldn't get the interfaces going even after they were created. Fortunately I came across a blog post which pointed me in the direction of the steps below.

Two public interfaces are required for each cluster node; these can then be configured with IPMP, just as you would do in a real cluster. Whilst this doesn't really provide more resiliency, it's a worthwhile exercise. To create these public interfaces a total of four virtual interfaces are required on the host. Each one needs a defined MAC address, which you can choose yourself or take from the suggestions here. To create the interfaces do this, replacing e1000g0 with the name of the physical interface on your host:
opensolhost# /usr/lib/vna e1000g0 c0:ff:ee:0:1:0
opensolhost# /usr/lib/vna e1000g0 c0:ff:ee:0:1:1
opensolhost# /usr/lib/vna e1000g0 c0:ff:ee:0:2:0
opensolhost# /usr/lib/vna e1000g0 c0:ff:ee:0:2:1

I've chosen these MAC addresses because c0:ff:ee is easy to remember, and the last two octets represent the cluster node number and then the interface number within that node. Now plumb these interfaces, but don't give them IP addresses.
opensolhost# ifconfig vnic0 plumb
opensolhost# ifconfig vnic1 plumb
opensolhost# ifconfig vnic2 plumb
opensolhost# ifconfig vnic3 plumb

These four vnic interfaces will form the public interfaces of our cluster nodes. However they will not persist over a reboot, so you may want to create a startup script to recreate them. There's a sample script in the linked blog post above, and a minimal sketch follows below.
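A minimal sketch of such a startup script, based purely on the commands above (it assumes e1000g0 is your physical interface and reuses the example MAC addresses; adapt both to your host):

#!/usr/bin/ksh
# Recreate the four VNICs used for the cluster nodes' public interfaces.
# Assumption: e1000g0 is the physical interface; MACs are the examples above.
PHYS=e1000g0
/usr/lib/vna $PHYS c0:ff:ee:0:1:0
/usr/lib/vna $PHYS c0:ff:ee:0:1:1
/usr/lib/vna $PHYS c0:ff:ee:0:2:0
/usr/lib/vna $PHYS c0:ff:ee:0:2:1
# Plumb the interfaces without assigning IP addresses.
for i in 0 1 2 3; do
    /sbin/ifconfig vnic$i plumb
done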

It's now time to create the virtual machines. As your normal user run /opt/VirtualBox/VirtualBox and the VirtualBox window should appear.

To start creating the virtual machine that will serve as the first node of the cluster, click New to fire up the new machine wizard. Enter cxnode1 for the name of the first node (or choose something more imaginative) and pick Solaris as the OS. Page through the rest of the wizard, choosing 1GB for the memory allocation (this can be reduced later; at least 700MB is recommended), and create the default 16GB disk image.

The network interfaces need to be configured to support the cluster. Click into the Network settings area, change the Adapter 0 type to Intel PRO/1000 MT Desktop and change Attached to to Host Interface. Then enter the first MAC address you configured for the VNICs (if you followed the example above this will be c0ffee000100) and enter an interface name of vnic0. This Intel PRO/1000 MT Desktop adapter will appear as e1000g0 within the virtual machine. Generally I've grown to like these adapters, not least because they use a GLDv3 driver. Now for Adapter 1, enable the adapter, change the type as before, set Attached to to Host Interface, the MAC to c0ffee000101 (or as appropriate) and the interface name to vnic1. Then enable Adapter 2, set Attached to to Internal Network and set the Network Name to Interconnect 1. Repeat for Adapter 3 but set the Network Name to Interconnect 2.

Finally point the CD/DVD-ROM at the ISO image you downloaded for Solaris Express build 86. You need to add the ISO with the Virtual Disk Manager to make it available to the machine. You can use the /net NFS automounter to point to an NFS share where this image resides if required. Finally change the boot order, in General / Advanced, so that Hard Disk comes before CD/DVD. This means that it will initially boot the install media, but once installed will boot from the installed drive.

Repeat the above steps to create a second cluster node. Ensure that Adapter 2 and Adapter 3 are connected to the same internal networks as for the first cluster node, and that Adapters 0 and 1 are attached to the third and fourth VNICs created previously (vnic2 and vnic3).

Installing Solaris
Solaris now needs to be installed on both the cluster nodes. Repeat the following steps for each node. To boot a virtual machine click Start and the machine should boot and display the GRUB menu. DON'T pick the default of Solaris Express Developer Edition but rather choose Solaris Express. If you choose the Developer Edition option you'll get the SXDE installer, which does not offer the flexibility required around partition layout. Pick one of the Solaris Interactive install options as per your personal preference. If you've ever installed one of the main line Solaris releases then you'll be at home here. Suggested settings for the system identification phase:

Networked
Configure e1000g0 (leave the others for the time being)
No DHCP

Hostname: cxnode1 (or choose something yourself)
IP: 192.168.0.121 (or something else as appropriate)
Netmask: 255.255.255.0 (or as appropriate)
No IPv6
Default Route: Specify, then enter the IP of your default router
Don't enable Kerberos
None for naming service
Default derived domain for the NFSv4 domain
Specify the time zone as required
Pick a root password

Then in the installation phase pick the following options:

Reboot automatically: no
Eject additional CDs: yes (not that we'll be using any)
Media: CD/DVD
Install Type: Custom
Add additional locales as required; we're using C as the default
Web Start Scan location: None
Software group: Entire Group Plus OEM / Default Packages

Leave the fdisk partition as the default, a single Solaris partition covering the whole disk. To support the cluster a specific filesystem layout is required. Click Modify then set it up like this (if space is needed for a Live Upgrade in the future then an additional virtual disk can be attached):

Slice 0: /                13735 MB
Slice 1: swap              2048 MB
Slice 6: /globaldevices     512 MB
Slice 7: (leave blank)       32 MB

Now just confirm and start the install. The installer should run through in due course; once complete, reboot the machine and check it boots up fine for the first time. By default Solaris will boot up into a GNOME desktop. If you want to stop the graphical login prompt from launching then do svcadm disable cde-login.

Before the cluster framework is installed the VirtualBox Guest Additions need to be installed; these serve a similar role to VMware Tools in that they provide better integration with the host environment. Specifically, the time synchronization facilities are required to assist with keeping the cluster nodes in sync. If you still have the SXCE DVD image mounted then eject it in the guest and unmount it from the Devices menu. Then choose Install Guest Additions from the VirtualBox menu. The Guest Additions ISO should mount; su to root and pkgadd the Guest Additions package from the CD. If you're running X you should log out and back in to activate the X11 features.

Repeat the above steps for both cluster nodes. It's worth considering taking a snapshot at this point so if you run into problems later you can just snap back to this post-install state.

Preparing for cluster install


Once the nodes are installed there are a few steps required to configure them before the cluster framework can be installed. Firstly the public network interfaces on the nodes need to be configured. To do this use the /etc/hostname.* files below, modifying where appropriate for your local network. cxnode1:/etc/hostname.e1000g0
192.168.0.121 netmask 255.255.255.0 broadcast 192.168.0.255 deprecated -failover group public up addif 192.168.0.111 up

cxnode1:/etc/hostname.e1000g1
192.168.0.131 netmask 255.255.255.0 broadcast 192.168.0.255 deprecated -failover group public up

cxnode2:/etc/hostname.e1000g0
192.168.0.122 netmask 255.255.255.0 broadcast 192.168.0.255 deprecated -failover group public up addif 192.168.0.112 up

cxnode2:/etc/hostname.e1000g1
192.168.0.132 netmask 255.255.255.0 broadcast 192.168.0.255 deprecated -failover group public up

Also check that /etc/defaultrouter is correct. RPC communication must be enabled for the cluster framework to function. To do this, on both nodes:
bothnodes# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
bothnodes# svcadm refresh network/rpc/bind:default
bothnodes# svcprop network/rpc/bind:default | grep local_only


The last command should return false. Modify your path to include
/usr/bin:/usr/cluster/bin:/usr/sbin:/usr/ccs/bin

Also check your umask is set to 0022 and change it if not. Finally we need to ensure the cluster nodes exist in /etc/inet/hosts on both hosts. For example
192.168.0.111   cxnode1
192.168.0.112   cxnode2

Once the above has been repeated on both cluster nodes, bounce the nodes to check that everything persists across a reboot; after that it is time to install the cluster framework.
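A node can be bounced and given a quick once-over along these lines (a sketch; the grep is just a convenient way to eyeball the IPMP group membership):

bothnodes# init 6
(wait for the node to come back up)
bothnodes# ifconfig -a | grep -i group     # e1000g0 and e1000g1 should show the "public" IPMP group
bothnodes# cat /etc/defaultrouter
bothnodes# umask                           # should print 0022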

Installing Cluster Express


Download and extract Solaris Cluster Express 6/08; inside the package cd into 'Solaris_x86' then run './installer'. If you are connected over ssh or are on the console then run './installer -nodisplay' instead. The steps listed here are for the GUI installer, but the text one is much the same.

Wait for the GUI to launch, click through and accept the license. Then from the list of available services choose Solaris (TM) Cluster Express 6/08 and Solaris (TM) Cluster Express Agents 6/08 (enter 4,6 if you're in the text installer). Leave the other options disabled, and clear Install multilingual packages unless you want them. Click Next to start the installer. You'll be informed that some packages are being upgraded from their existing versions (namely SUNWant, SUNWjaf and SUNWjmail). The installer will now perform some pre-installation checks, which should all pass OK; then proceed with the install. Choose Configure Later when prompted, as that will be done once all the installation steps are finished. Repeat the install process on the second node.

Now that the cluster software is installed it can be configured and established. This is started by running /usr/cluster/bin/scinstall. I prefer to do this on the second node of the cluster (cxnode2), as the installer configures the partner server first and that server will be assigned node id 1, while the server running the install will be assigned node id 2. It doesn't really matter; I just prefer it to follow the ordering of the hostnames.

Once the installer is running choose option 1 (Create a new cluster or add a cluster node), then option 1 (Create a new cluster), answer yes to continue, and choose 2 for custom configuration. Pick a cluster name, eg DEV1. Then, when prompted for the other nodes in the cluster, enter cxnode1 then ^D to complete. Confirm the list of nodes is correct; communication with the other node will now be tested and should complete fine. Answer no for DES authentication.

Now you'll be asked about the network configuration for the interconnect. Accept the default private network address and netmask, or change them if required. Answer yes to use at least two private networks, then yes to the question about switches; although there are no switches to configure, we're considering each private network configured in VirtualBox to be an unmanaged switch. Accept the default names for switch1 and switch2. You'll now be asked to configure the transport adapters (effectively the network interfaces). Pick 1 (e1000g2) for the first cluster transport adapter and answer yes to indicate it is a dedicated cluster transport adapter. Answer switch1 when asked which switch it is connected to and accept the default name. Then pick 2 (e1000g3) for the second adapter and again pick the default options. Answer yes for auto discovery.

You'll now be asked about quorum configuration; this will be addressed later, so choose yes to disable automatic selection. The next question is about the global devices file system. The default is /globaldevices; accept it, as this matches the layout created when initially installing Solaris. Accept this for cxnode2 as well. You'll be asked for final confirmation that it is ok to proceed, so answer yes when you're ready. You'll then be asked if you want the creation to be interrupted if sccheck fails; go for yes rather than the default of no. The cluster establishment will now start. You should see something similar to this when it discovers the cluster transports:
The following connections were discovered:

    cxnode2:e1000g2  switch1  cxnode1:e1000g2
    cxnode2:e1000g3  switch2  cxnode1:e1000g3

cxnode1 will reboot and establish the cluster with itself as the only node. Don't worry about any errors about /etc/cluster/ccr/did_instances; the DID database hasn't been created yet. The installer will then configure cxnode2 and reboot that. When it boots back up it will join the new cluster. We now have an established cluster! However it's in installmode until a quorum device is configured.

Configuring a Quorum Server


To finish the base cluster configuration a quorum device must be assigned. Initially I was planning to do this by presenting an iSCSI LUN from the OpenSolaris host into the guests, then using that for the quorum device. However I found that, although it could be added fine, it would show as offline and not function properly. As such a quorum server running on the OpenSolaris host will be used. Full documentation for this is available, but this overview should be enough to get you going.

Fortunately the quorum server can be installed on its own, without requiring a full cluster install. To install it, make the Cluster Express package available on the host and run the installer again; you'll need to use the -nodisplay option as some packages won't be available to the graphical installer. When running the installer choose No when asked if you want to install the full set of components, then choose option 2 for Quorum Server and install that. Choose Configure Later when asked. The default install creates a configuration for one quorum server, running on port 9000. You can see the configuration in /etc/scqsd/scqsd.conf. Start the quorum server by running
opensolhost# /usr/cluster/bin/clquorumserver start 9000

The quorum server can now be configured into the cluster to get it fully operational. On one of the cluster nodes run clsetup. Answer yes to confirm you have finished the initial cluster setup and yes to add a quorum device. Now pick option 3 (Quorum Server). Answer yes to continue, then give a name for the device, such as opensolhost. Enter the IP of your host when prompted and 9000 as the port number. Allow it to proceed then choose yes to reset installmode. The cluster is now properly established. You can check the quorum registration on the OpenSolaris host by running /usr/cluster/bin/clquorumserver show; you should see something like this:
opensolhost# /usr/cluster/bin/clquorumserver show
--- Cluster DEV1 (id 0x484E9DB9) Registrations ---
Node ID:            1
Registration key:   0x484e9db900000001

Node ID:            2
Registration key:   0x484e9db900000002

Provisioning storage
To enable the creation of the Apache and MySQL zones it's necessary to present some storage to the cluster that can be shared between them. As OpenSolaris is running on the host, we can take advantage of the built-in iSCSI support in ZFS. First create some ZFS volumes, one for each clustered service, eg.
opensolhost# zfs create -V 8g rpool/apache
opensolhost# zfs create -V 8g rpool/mysql

Now enable iSCSI export on them.


opensolhost# zfs set shareiscsi=on rpool/apache
opensolhost# zfs set shareiscsi=on rpool/mysql

And confirm this with iscsitadm list target -v:


opensolhost# iscsitadm list target -v
Target: rpool/apache
    iSCSI Name: iqn.1986-03.com.sun:02:29f26a9a-facb-4d29-adf8-adef67b21a00
    Alias: rpool/apache
    <snip>
    Size: 8.0G
    Backing store: /dev/zvol/rdsk/rpool/apache
    Status: online
Target: rpool/mysql
    iSCSI Name: iqn.1986-03.com.sun:02:1a782d05-c7c5-cd74-8ea3-b6c094cb3cf8
    Alias: rpool/mysql
    <snip>
    Size: 8.0G
    Backing store: /dev/zvol/rdsk/rpool/mysql
    Status: online

Now configure the nodes to see the presented storage. Do this on both of the nodes, replacing 192.168.0.104 with the IP of your host.
bothnodes# iscsiadm modify discovery --sendtargets enable
bothnodes# iscsiadm add discovery-address 192.168.0.104
bothnodes# svcadm enable network/iscsi_initiator

And then to confirm:


bothnodes# iscsiadm list target -S
Target: iqn.1986-03.com.sun:02:1a782d05-c7c5-cd74-8ea3-b6c094cb3cf8
    Alias: rpool/mysql
    <snip>
    OS Device Name: /dev/rdsk/c2t01000017F20264240000002A00484FCCC1d0s2
Target: iqn.1986-03.com.sun:02:29f26a9a-facb-4d29-adf8-adef67b21a00
    Alias: rpool/apache
    <snip>
    OS Device Name: /dev/rdsk/c2t01000017F20264240000002A00484FCCBFd0s2

Make a note of the OS Device Name to Alias matching as you need to put the right LUN into the correct resource group. You can also confirm the storage is available by running format, eg:
bothnodes# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
    0. c0d0
       /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
    1. c2t01000017F20264240000002A00484FCCBFd0
       /scsi_vhci/disk@g01000017f20264240000002a00484fccbf
    2. c2t01000017F20264240000002A00484FCCC1d0
       /scsi_vhci/disk@g01000017f20264240000002a00484fccc1

To make this storage available to the cluster you must populate the DID device database. This is performed via cldevice and only needs to be run on one of the nodes:
cxnode1# cldevice populate
Configuring DID devices
cldevice: (C507896) Inquiry on device /dev/rdsk/c0d0s2 failed.
did instance 5 created.
did subpath cxnode1:/dev/rdsk/c2t01000017F20264240000002A00484FCCC1d0 created for instance 5.
did instance 6 created.
did subpath cxnode1:/dev/rdsk/c2t01000017F20264240000002A00484FCCBFd0 created for instance 6.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks


Then list the device database to check the storage is available


cxnode1# cldevice list -v
DID Device    Full Device Path
d1            cxnode1:/dev/rdsk/c1t0d0
d2            cxnode1:/dev/rdsk/c0d0
d3            cxnode2:/dev/rdsk/c1t0d0
d4            cxnode2:/dev/rdsk/c0d0
d5            cxnode1:/dev/rdsk/c2t01000017F20264240000002A00484FCCC1d0
d5            cxnode2:/dev/rdsk/c2t01000017F20264240000002A00484FCCC1d0
d6            cxnode1:/dev/rdsk/c2t01000017F20264240000002A00484FCCBFd0
d6            cxnode2:/dev/rdsk/c2t01000017F20264240000002A00484FCCBFd0

Now the basic configuration of the cluster is fairly complete. It's a good opportunity to shut it down and take a snapshot. When shutting down the entire cluster you must use cluster shutdown rather than just shutting down the individual nodes; if you don't, then you must bring up the nodes in the reverse order of shutting them down. For an immediate shutdown do cluster shutdown -y -g0.
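If you prefer to script the snapshot step, VBoxManage on the host can take snapshots of the powered-off machines; the exact syntax here is an assumption for this VirtualBox release, and the snapshots can equally be taken from the GUI:

cxnode1# cluster shutdown -y -g0
opensolhost# VBoxManage snapshot cxnode1 take "post-cluster-config"    # run as the user that owns the VMs
opensolhost# VBoxManage snapshot cxnode2 take "post-cluster-config"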

Configuring the Storage


Originally I was planning to create some SVM metasets and add the disks to those, however this doesn't appear to be possible with iSCSI LUNs yet, not even if you use the VirtualBox built-in iSCSI initiator support, which results in the storage appearing as local disks. So instead I settled on using ZFS-HA to manage the disks. The process for this, as with most ZFS work, is fairly straightforward.

First create a ZFS storage pool for each clustered service. We'll use the DID device numbers here, so make sure you trace the mapping back via cldevice list -v and iscsiadm list target -S to ensure you put the correct target into the correct ZFS pool. In order for the storage to be added to ZFS it needs an fdisk partition. On one of the nodes run fdisk against the two devices, accepting the default partition layout:
cxnode1# fdisk /dev/rdsk/c2t01000017F20264240000002A00484B5467d0p0
No fdisk table exists. The default partition for the disk is:

    a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table.

The standard SMI label needs to be replaced with an EFI one. When I've worked with ZFS previously this has always happened automatically, but it didn't this time, possibly because we are going to add the DID device to the zpool rather than a traditional disk device. To change it manually run format with the -e option.
cxnode1# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
    0. c0d0
       /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
    1. c2t01000017F20264240000002A00484FCCBFd0
       /scsi_vhci/disk@g01000017f20264240000002a00484fccbf
    2. c2t01000017F20264240000002A00484FCCC1d0
       /scsi_vhci/disk@g01000017f20264240000002a00484fccc1
Specify disk (enter its number): 2
selecting c2t01000017F20264240000002A00484FCCC1d0
[disk formatted]
Error occurred with device in use checking: No such device
format> label
Error occurred with device in use checking: No such device
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 1
Warning: This disk has an SMI label. Changing to EFI label will erase all
current partitions.
Continue? y
format> p
partition> p
Current partition table (default):
Total disk sectors available: 16760798 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                34      7.99GB           16760798
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  7 unassigned    wm                 0           0                  0
  8   reserved    wm          16760799      8.00MB           16777182

If you'd like to suppress the "Error occurred with device in use checking: No such device" warning then set NOINUSE_CHECK as an environment variable (for example, export NOINUSE_CHECK=1 before running format). It seems to be a known bug that causes the warning. Now the disks can be added to a ZFS pool. Make sure you add the correct DID device here:
cxnode1# zpool create apache-pool /dev/did/dsk/d6s0
cxnode1# zpool create mysql-pool /dev/did/dsk/d5s0

Then confirm these are available as expected:



cxnode1# zpool list
NAME          SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
apache-pool   7.94G   111K   7.94G   0%    ONLINE   -
mysql-pool    7.94G   111K   7.94G   0%    ONLINE   -

Now create some file systems in the pools. For each service three file systems are required:

zone      This will become the zone root
data      This will become /data within the zone and be used to store application data
params    This will be used to store a cluster parameter file
cxnode1# zfs create apache-pool/zone
cxnode1# zfs create apache-pool/data
cxnode1# zfs create apache-pool/params
cxnode1# zfs create mysql-pool/zone
cxnode1# zfs create mysql-pool/data
cxnode1# zfs create mysql-pool/params

To make the storage usable with the cluster it needs to be configured into a resource group. Create two resource groups, one for each service
cxnode1# clresourcegroup create apache-rg
cxnode1# clresourcegroup create mysql-rg

SUNW.HAStoragePlus is the resource type that can manage ZFS storage, along with SVM and VxVM. It needs to be registered with the cluster using clresourcetype register; this only needs to be performed on one node.
cxnode1# clresourcetype register SUNW.HAStoragePlus

Create clustered resources to manage the ZFS pools:


cxnode1# clresource create -g apache-rg -t SUNW.HAStoragePlus -p Zpools=apache-pool apache-stor
cxnode2# clresource create -g mysql-rg -t SUNW.HAStoragePlus -p Zpools=mysql-pool mysql-stor

The resource groups will currently be in the Unmanaged state; you can confirm this with clresourcegroup status. To bring them under cluster management, bring them online on the first node:
cxnode1# clresourcegroup online -M -n cxnode1 apache-rg

Then check the file systems are available on the first node but not on the second.
cxnode1# zfs list
NAME                 USED   AVAIL   REFER   MOUNTPOINT
apache-pool          158K   7.81G     21K   /apache-pool
apache-pool/data      18K   7.81G     18K   /apache-pool/data
apache-pool/params    18K   7.81G     18K   /apache-pool/params
apache-pool/zone      18K   7.81G     18K   /apache-pool/zone

cxnode2# zfs list
no datasets available

Now move the pool to the other node and check that it becomes available on that node
cxnode1# clresourcegroup switch -n cxnode2 apache-rg
cxnode1# zfs list
no datasets available

cxnode2# zfs list
NAME                 USED   AVAIL   REFER   MOUNTPOINT
apache-pool          158K   7.81G     21K   /apache-pool
apache-pool/data      18K   7.81G     18K   /apache-pool/data
apache-pool/params    18K   7.81G     18K   /apache-pool/params
apache-pool/zone      18K   7.81G     18K   /apache-pool/zone

When you're happy, switch it back to the first node:


cxnode1# clresourcegroup switch -n cxnode1 apache-rg

Then repeat the above online and failover tests for mysql-rg and mysql-pool, as sketched below. As the ZFS pools have now been added as cluster-managed resources you must use the cluster to manage them from now on; don't perform zpool export/import operations manually.
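The equivalent tests for the MySQL storage would look something like this (a sketch mirroring the apache-rg commands above):

cxnode1# clresourcegroup online -M -n cxnode1 mysql-rg
cxnode1# zfs list                                       # mysql-pool file systems should be visible here
cxnode1# clresourcegroup switch -n cxnode2 mysql-rg
cxnode2# zfs list                                       # ...and now here instead
cxnode1# clresourcegroup switch -n cxnode1 mysql-rg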

Preparing Apache and MySQL zones for clustering


To provide a clustered Apache and MySQL service two zones are going to be created. Whilst these services could equally well be clustered without the use of zones, I decided to go down this path so that the benefits of zones could be enjoyed in tandem with the clustering. A full example run-through for configuring MySQL in a clustered zone is provided by Sun in the official documentation if you require further information.

It's worth pointing out that if you are following this plan then it will create zones on ZFS datasets, which is not currently supported by Live Upgrade, so you are restricting your upgrade paths in the future. If you do decide you need to Live Upgrade the cluster at some point then you could remove the zones, do the upgrade, and then recreate the zones. If you don't want to do this then consider using raw disk slices with the cluster rather than ZFS.

For each zone the pool/zone file system will be used for the zone root, and the pool/data file system will be delegated to the zone's control and used by the application to store its data, i.e. the MySQL databases or the Apache document root.

To provide IP addressing for the zones, SUNW.LogicalHostname resources will be used rather than directly configuring the zones with an IP address. First, entries need adding to /etc/hosts on both nodes, eg:
192.168.0.118   apache-zone
192.168.0.119   mysql-zone

Then create LogicalHostname resources for each address:


cxnode1# clreslogicalhostname create -g apache-rg -h apache-zone apache-addr
cxnode1# clreslogicalhostname create -g mysql-rg -h mysql-zone mysql-addr

Now it's time to create the zones. This is a really simple zone configuration; you could add resource controls or other features as desired. autoboot must be set to false as the cluster will be managing the starting and stopping of the zone.
cxnode1# zonecfg -z apache
apache: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:apache> create
zonecfg:apache> set zonepath=/apache-pool/zone
zonecfg:apache> set autoboot=false
zonecfg:apache> add dataset
zonecfg:apache:dataset> set name=apache-pool/data
zonecfg:apache:dataset> end
zonecfg:apache> verify
zonecfg:apache> commit
zonecfg:apache> exit

cxnode1# zonecfg -z mysql
mysql: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:mysql> create
zonecfg:mysql> set zonepath=/mysql-pool/zone
zonecfg:mysql> set autoboot=false
zonecfg:mysql> add dataset
zonecfg:mysql:dataset> set name=mysql-pool/data
zonecfg:mysql:dataset> end
zonecfg:mysql> verify
zonecfg:mysql> commit
zonecfg:mysql> exit

To enable the zones to be installed we must change the permissions on the zone roots:
cxnode1# chmod 700 /apache-pool/zone
cxnode1# chmod 700 /mysql-pool/zone

Now install the zones:


cxnode1# zoneadm -z apache install
Preparing to install zone <apache>.
Creating list of files to copy from the global zone.
Copying <9668> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1346> packages on the zone.
Initialized <1346> packages on zone.
Zone <apache> is initialized.
Installation of these packages generated warnings: <sunwvboxguest>
The file </apache-pool/zone/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

cxnode1# zoneadm -z mysql install
etc..

The documentation has this to say about configuring zones: "Caution: If the zone is to run in a failover configuration, each node being able to host that zone must have the exact same zone configuration for that zone. After installing the zone on the first node, the zone's zone path already exists on the zone's disk storage. Therefore it must get removed on the next node prior to successfully create and install the zone. [..] Only the zone's zone path created on the last node will be kept as the final zone path for the failover zone. For that reason any configuration and customization within the failover zone should get performed after the failover zone is known to all nodes that should be able to host it."

To achieve this the newly created zone must be destroyed and recreated on the second node. To my mind this is a really ugly way of going about it; the cluster should be able to manage this itself and make suitable configuration changes on a node when the zone is configured into the cluster. In later releases of Sun Cluster 3.1 the recommended way to manage this was to hack the /etc/zones files to replicate the configuration of the zones from one node to another, however that method is not supported any more, so the official instructions will be followed. To do this, migrate the storage for the zones to the other node
cxnode1# clresourcegroup switch -n cxnode2 apache-rg
cxnode1# clresourcegroup switch -n cxnode2 mysql-rg

Then delete the previously installed zone roots on the second node
cxnode2# rm -rf /apache-pool/zone/*
cxnode2# rm -rf /mysql-pool/zone/*

Now repeat the zonecfg and zoneadm steps above to recreate both of the zones.


When the zone installs have completed, move the storage back to the first node again.
cxnode1# clresourcegroup switch -n cxnode1 apache-rg
cxnode1# clresourcegroup switch -n cxnode1 mysql-rg

The zones can now be booted and configured. Repeat these steps for the Apache and the MySQL zone. Boot the zone:
cxnode1# zoneadm -z apache boot

I got this error when booting the zones: "Unable to set route for interface lo0 to *??9?x". I'm not sure what this means but it doesn't seem to impact anything. Log in to the zone's console to configure it:
cxnode1# zlogin -C apache

You'll be asked a few questions to configure the zone. Choose language, terminal type and time zone information as appropriate. Enter the same hostname as you used above, eg apache-zone or mysql-zone. I received some alerts about avahi-bridge-dsd failing to start when booting; as far as I can tell it's some sort of Bonjour networking thing which we don't need here, so it's ok to disable. You can also disable some other services that are not required, to free up some resources:
apache-zone# svcadm disable cde-login
apache-zone# svcadm disable sendmail
apache-zone# svcadm disable webconsole
apache-zone# svcadm disable avahi-bridge-dsd
apache-zone# svcadm disable ppd-cache-update

Now mount the ZFS file system that has been delegated to the zone in an appropriate place. To do this on the Apache zone:
apache-zone# zfs set mountpoint=/data apache-pool/data

And on the MySQL zone:


mysql-zone# zfs set mountpoint=/data mysql-pool/data

Wait for the zone to finish booting and check you don't have any failed services with svcs -xv. Then shut the zone down and repeat for the other zone; a sketch of those two steps follows.
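A minimal sketch of that final check and shutdown (the halt is run from the global zone):

apache-zone# svcs -xv                  # should report no failed services
cxnode1# zoneadm -z apache halt        # shut the zone down from the global zone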

Clustering the zones


Before proceeding further, ensure the storage is available on the first node (fail it over if necessary) and make sure the zones are shut down. To enable clustering for the zones they must be registered with the cluster. To do this a script called sczbt_register is provided. To use it, a configuration file must be completed and then registered. A sample configuration file is provided at /opt/SUNWsczone/sczbt/util/sczbt_config; this is also the file that sczbt_register reads by default. It is recommended to copy this file somewhere else for future reference, then run sczbt_register against that copy. Comments are included in the file to explain the options, or see the official docs for more info. Copy /opt/SUNWsczone/sczbt/util/sczbt_config to /etc/sczbt_config.apache and /etc/sczbt_config.mysql and edit as follows
/etc/sczbt_config.apache:

RS=apache-zone
RG=apache-rg
PARAMETERDIR=/apache-pool/params
SC_NETWORK=true
SC_LH=apache-addr
FAILOVER=true
HAS_RS=apache-stor
Zonename="apache"
Zonebrand="native"
Zonebootopt=""
Milestone="multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""

/etc/sczbt_config.mysql:

RS=mysql-zone
RG=mysql-rg
PARAMETERDIR=/mysql-pool/params
SC_NETWORK=true
SC_LH=mysql-addr
FAILOVER=true
HAS_RS=mysql-stor
Zonename="mysql"
Zonebrand="native"
Zonebootopt=""
Milestone="multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""

The zones can now be registered. First you need to register the SUNW.gds resource type. On one node do:
cxnode1# clresourcetype register SUNW.gds


Then register the two zones


cxnode1# /opt/SUNWsczone/sczbt/util/sczbt_register -f /etc/sczbt_config.apache
sourcing /etc/sczbt_config.apache
Registration of resource apache-zone succeeded.
Validation of resource apache-zone succeeded.
cxnode1# /opt/SUNWsczone/sczbt/util/sczbt_register -f /etc/sczbt_config.mysql
sourcing /etc/sczbt_config.mysql
Registration of resource mysql-zone succeeded.
Validation of resource mysql-zone succeeded.

Then enable the Apache zone and log in to it. You should see the LogicalHostname resource has been assigned to the zone
cxnode1# clresource enable apache-zone
cxnode1# zoneadm list -cv
  ID NAME     STATUS      PATH                 BRAND    IP
   0 global   running     /                    native   shared
   1 apache   running     /apache-pool/zone    native   shared
   - mysql    installed   /mysql-pool/zone     native   shared
cxnode1# zlogin apache
[Connected to zone 'apache' pts/2]
Last login: Tue Jun 17 20:23:08 on pts/2
Sun Microsystems Inc.   SunOS 5.11      snv_86  January 2008
apache-zone# ifconfig -a
lo0:1: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g1:1: flags=201040843 mtu 1500 index 3
        inet 192.168.0.118 netmask ffffff00 broadcast 192.168.0.255

Then test a failover and failback of the zone.


cxnode1# clresourcegroup switch -n cxnode2 apache-rg
cxnode1# clresourcegroup switch -n cxnode1 apache-rg

Repeat the same checks for the MySQL zone.
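For the MySQL zone the equivalent steps would be along these lines (a sketch mirroring the Apache commands above):

cxnode1# clresource enable mysql-zone
cxnode1# zoneadm list -cv                              # the mysql zone should now be running
cxnode1# clresourcegroup switch -n cxnode2 mysql-rg
cxnode1# clresourcegroup switch -n cxnode1 mysql-rg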

Installing and configuring MySQL


MySQL 5.0.45 is installed by default with the Solaris Express install we have performed. This consists of the SUNWmysql5r, SUNWmysql5u and SUNWmysql5test packages. The installation can be used pretty much unmodified; the only change needed is to repoint the MySQL data directory at the delegated ZFS file system. This is done by modifying the SMF properties for the service; you cannot make this change by modifying my.cnf. Make the change with svccfg:
mysql-zone# svccfg
svc:> select svc:/application/database/mysql:version_50
svc:/application/database/mysql:version_50> setprop mysql/data = /data/mysql
svc:/application/database/mysql:version_50> refresh
svc:/application/database/mysql:version_50> end
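To confirm the property change took effect, something like this should now report the new data directory (a sketch):

mysql-zone# svcprop -p mysql/data svc:/application/database/mysql:version_50
/data/mysql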

Then create the directory for the databases and one for some logs
mysql-zone# mkdir /data/mysql
mysql-zone# mkdir /data/logs
mysql-zone# chown mysql:mysql /data/mysql
mysql-zone# chown mysql:mysql /data/logs

Now start up the database and check that it starts ok


mysql-zone# svcadm enable mysql:version_50
mysql-zone# svcs mysql:version_50
STATE          STIME    FMRI
online         10:38:54 svc:/application/database/mysql:version_50

Now set a password for the root user of the database; it's set to "root" in this case.
mysql-zone# /usr/mysql/5.0/bin/mysqladmin -u root password root
mysql-zone# /usr/mysql/5.0/bin/mysqladmin -u root -h localhost -p password root
Enter password: root

The /etc/hosts file in the zone needs to be modified so that mysql-zone resolves to the clustered IP address for the zone rather than to localhost; the address for the apache-zone also needs to be added.
127.0.0.1       localhost
192.168.0.119   mysql-zone      loghost
192.168.0.118   apache-zone

Allow the root user to connect from the Apache zone.
mysql-zone# /usr/mysql/5.0/bin/mysql -p
Enter password: root
mysql> GRANT ALL ON *.* TO 'root'@'apache-zone' identified by 'root';

MySQL is now configured and ready to be clustered. We'll be using a process loosely based on the one documented by Sun. Alternatively it would be possible to use SMF to manage the service; you can see an example of that method in the Apache configuration later.

First a user for the fault monitor must be created, along with a test database for it to use. A script is provided with the agent to do this for you. It will grant the fault monitor user PROCESS, SELECT, RELOAD, SHUTDOWN and SUPER on all databases, then ALL privileges on the test database. To create the required users you need to provide a config file. Copy the supplied template into /etc and edit it there
mysql-zone# cp /opt/SUNWscmys/util/mysql_config /etc
mysql-zone# vi /etc/mysql_config

Use these values. Note that MYSQL_DATADIR is the location of the my.cnf file, not the directory containing the databases. The meaning of DATADIR changed in MySQL 5.0.3 to mean the location of the data rather than the config directory, but for this configuration it should point to the config directory.
MYSQL_BASE=/usr/mysql/5.0
MYSQL_USER=root
MYSQL_PASSWD=root
MYSQL_HOST=mysql-zone
FMUSER=fmuser
FMPASS=fmpass
MYSQL_SOCK=/tmp/mysql.sock
MYSQL_NIC_HOSTNAME="mysql-zone"
MYSQL_DATADIR=/etc/mysql/5.0

Then run the registration script

mysql-zone# /opt/SUNWscmys/util/mysql_register -f /etc/mysql_config
sourcing /etc/mysql_config and create a working copy under /opt/SUNWscmys/util/mysql_config.work
MySQL version 5 detected on 5.11/SC3.2
Check if the MySQL server is running and accepting connections
Add faultmonitor user (fmuser) with password (fmpass) with Process-, Select-, Reload- and Shutdown-privileges to user table for mysql database for host mysql-zone
Add SUPER privilege for fmuser@mysql-zone
Create test-database sc3_test_database
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host mysql-zone
Flush all privileges
Mysql configuration for HA is done

Now shut down the database so it can be brought online by the cluster.
mysql-zone# svcadm disable mysql:version_50

Drop back to the global zone and copy the MySQL agent configuration template to /etc
cxnode1# cp /opt/SUNWscmys/util/ha_mysql_config /etc/ha_mysql_config

Use these settings; this time DATADIR should point to the actual data location and not the location of the config. Descriptions of the configuration options are given in the file:
RS=mysql-server
RG=mysql-rg
PORT=3306
LH=mysql-addr
HAS_RS=mysql-stor
ZONE=mysql
ZONE_BT=mysql-zone
PROJECT=
BASEDIR=/usr/mysql/5.0
DATADIR=/data/mysql
MYSQLUSER=mysql
MYSQLHOST=mysql-zone
FMUSER=fmuser
FMPASS=fmpass
LOGDIR=/data/logs/
CHECK=NO

Now register this with the cluster:


cxnode1# /opt/SUNWscmys/util/ha_mysql_register -f /etc/ha_mysql_config
sourcing /etc/ha_mysql_config and create a working copy under /opt/SUNWscmys/util/ha_mysql_config.work
clean up the manifest / smf resource
sourcing /opt/SUNWscmys/util/ha_mysql_config
disabling the smf service svc:/application/sczone-agents:
removing the smf service svc:/application/sczone-agents:
removing the smf manifest /var/svc/manifest/application/sczone-agents/.xml
sourcing /tmp/ha_mysql_config.work
/var/svc/manifest/application/sczone-agents/mysql-server.xml successfully created
/var/svc/manifest/application/sczone-agents/mysql-server.xml successfully validated
/var/svc/manifest/application/sczone-agents/mysql-server.xml successfully imported
Manifest svc:/application/sczone-agents:mysql-server was created in zone mysql
Registering the zone smf resource
sourcing /opt/SUNWsczone/sczsmf/util/sczsmf_config
Registration of resource mysql-server succeeded.
Validation of resource mysql-server succeeded.
remove the working copy /opt/SUNWscmys/util/ha_mysql_config.work

Before bringing this online a tweak is needed to the supplied agent scripts. As mentioned briefly above, the handling of DATADIR is a bit broken: if you try to bring MySQL online now it will fail as it won't be able to find its configuration file. The agent scripts have this hard coded to ${MYSQL_DATADIR}/my.cnf, which is no use for our purposes. In the zone edit /opt/SUNWscmys/bin/functions and make this replacement; ensure you edit the copy in the MySQL zone and not the one in the global zone.
MYSQL_DEFAULT_FILE=${MYSQL_DATADIR}/my.cnf


with
MYSQL_DEFAULT_FILE=/etc/mysql/5.0/my.cnf
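If you prefer not to edit the file by hand, a sed substitution along these lines should achieve the same thing (a sketch; keep a backup of the original and check the resulting line):

mysql-zone# cp /opt/SUNWscmys/bin/functions /opt/SUNWscmys/bin/functions.orig
mysql-zone# sed 's|${MYSQL_DATADIR}/my.cnf|/etc/mysql/5.0/my.cnf|' \
    /opt/SUNWscmys/bin/functions.orig > /opt/SUNWscmys/bin/functions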

The mysql-server can now be enabled.


cxnode1# clresource enable mysql-server

Configuring Apache
Apache is going to be used to provide a web front end to the MySQL install, via the ubiquitous phpMyAdmin. The supplied Apache install (at /usr/apache2/2.2) is going to be used. As Apache will be running in a zone it can be used unmodified, keeping the configuration in /etc and not worrying about any potential conflicts with other Apache installs. At present the supplied Apache resource type does not directly support running the resource in a zone, or at least I couldn't figure it out. So instead some of the provided zone monitoring tools are going to be used to ensure Apache is up and running. This uses a combination of SMF and a shell script.

To begin, Apache must be configured. Bring the zone online on one of the nodes and log in to it. The configuration file for Apache is at /etc/apache2/2.2/httpd.conf. Only a small tweak is required to move the document root onto the ZFS file system we have prepared for it. You could, if desired, also move other parts of the configuration, such as the log location. For this example just change DocumentRoot to /data/htdocs and update the Directory stanza a page or so below it. Then do a mkdir of /data/htdocs. That completes our very simple Apache configuration, so start it up: svcadm enable apache22.

Download phpMyAdmin from the link above. Solaris now ships with p7zip to manage 7z files, so you could download that version to save a bit of bandwidth if you like; you can extract it with p7zip -d [filename]. Once extracted, move the extracted directory to /data/htdocs/phpmyadmin (a consolidated sketch of these steps follows the hosts entry below). Add mysql-zone to /etc/hosts, eg:
192.168.0.119    mysql-zone
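
For reference, the httpd.conf change described above ends up looking roughly like this (an excerpt only; the directive values shown simply mirror the stock Apache 2.2 defaults for the document root):

DocumentRoot "/data/htdocs"

<Directory "/data/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>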

Modify config.inc.php, setting these two values:


$cfg['Servers'][$i]['host'] = 'mysql-zone';
$cfg['blowfish_secret'] = 'enter a random value here';

To enable monitoring of the Apache instance we need a simple probe script. Make a directory /opt/probes in the zone and create a file called probe-apache.ksh with this content:
#!/usr/bin/ksh
if echo "GET ; exit" | mconnect -p 80 > /dev/null 2>&1
then
    exit 0
else
    exit 100
fi

Then chmod 755 /opt/probes/probe-apache.ksh. All this does is a simple connect on port 80; it could be replaced with something more complex if needed (there is an example after the next command). Finally disable Apache so that the cluster can start it:
apache-zone# svcadm disable apache22
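
If a plain TCP connect ever proves too forgiving, a stricter probe could check for a real HTTP response instead. For example, a sketch assuming wget is available at /usr/sfw/bin/wget inside the zone and that the URL points at a page Apache is expected to serve (phpMyAdmin in this case):

#!/usr/bin/ksh
# Stricter probe: only report healthy if Apache returns the page successfully.
if /usr/sfw/bin/wget -q -O /dev/null http://localhost/phpmyadmin/index.php
then
    exit 0
else
    exit 100
fi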

Drop back to the global zone, copy /opt/SUNWsczone/sczsmf/util/sczsmf_config to /etc/sczsmf_config.apache and set the following values:
RS=apache-server
RG=apache-rg
SCZBT_RS=apache-zone
ZONE=apache
SERVICE=apache22
RECURSIVE=true
STATE=true
SERVICE_PROBE="/opt/probes/probe-apache.ksh"

Now this can be registered with the cluster:


cxnode1# /opt/SUNWsczone/sczsmf/util/sczsmf_register -f /etc/sczsmf_config.apache
sourcing /etc/sczsmf_config.apache
Registration of resource apache-server succeeded.
Validation of resource apache-server succeeded.

Now enable Apache and check that it's functioning correctly:


clresource enable apache-server
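
It is also worth checking that the whole resource group is happy; the cluster can report this per node (shown here for apache-rg as an example, and the same check works for the MySQL resource group):

cxnode1# clresourcegroup status apache-rg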

You can now browse to /phpmyadmin and check that everything is working!

Conclusions
When I started this work I wasn't sure whether it was going to be possible or not, but despite a couple of bumps along the way I'm happy with the end result. Whilst it might not be a perfect match for a real cluster, it certainly provides enough opportunity for testing and for use in training.
14 Comments
1. Solaris Cluster testbed - [Link] says: 2008/08/05 at 15:43 [...] There is an interesting walkthrough for testing Solaris Cluster with iSCSI and Virtualbox at Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris. Whoever wants to play with a mature cluster framework should give this a try. Posted by Joerg [...]
2. Harald Dumdey Blog Archive links for 2008-08-07 [[Link]] says: 2008/08/07 at 10:28 [...] Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris (tags: zfs sun solaris cluster computer software) [...]

3. daniel.p says: 2008/08/29 at 14:57 Great work!! Thanks much. I have spent a lot of time playing with cluster in ESX VMware Infrastructure server (on x86) to make it fully working.. break point was for me to configure the shared device (because everything else had worked for me).. now, I am encouraged by this article. So, anyway my conditions changed, because our company has bought a T5240 SPARC server (and got as a gift from [Link] two additional fairly historic Netra servers).. but with no diskpool, so shared storage in cluster still pains me.. thanks much for this article.. tomorrow I'll begin with *second service* with VirtualBox instead of ESX VMware.. Very Best Regards by daniel

4. Gideon says: 2008/09/04 at 21:52 I've set up something similar, but I've found that only one of my two virtual nodes can access the iSCSI device. The other node sees it, but if I try to run, say, format on it, I get an error about a reservation conflict. Any ideas?

5. Franky G says: 2008/09/12 at 23:39 This is fantastic. I'm sure people will agree that the shared storage has been a stickler for most. I will attempt this with the host OS on Linux CentOS 5.1. Once I'm done, I'll post my results. Franky G

6. A trip down memory lane Dominiks Weblog says: 2008/12/28 at 23:22 [...] use the Christmas break to build a Solaris Cluster out of VirtualBox machines on scaleo (see here and here). But two unforeseen events [...]
7. system-log:tyr Blog Archive Linux zones on Solaris Express X86 says: 2009/01/28 at 13:39 [...] takes a look at the interesting world of the Linux branded zone. I've posted about VirtualBox before and I hope to take a look at xVM Server (Xen) in a future post. Read on for my first steps with [...]
8. So viel Interessantes und so wenig Zeit Dominiks Weblog says: 2009/01/31 at 22:16 [...] virtual Sun Cluster with [...]

9. fii says: 2009/03/02 at 03:49 Thanks for this info. Tried it and it works perfectly. I now have a Sun Cluster lab and I'm about to throw in Oracle RAC and stuff in there. Can I buy you a beer the next time I'm in Leeds?


10. Aero says: 2009/07/09 at 23:20 Hello, thanks for posting the procedure. Can you tell me what this error is: cldevice: (C507896) Inquiry on device /dev/rdsk/c0d0s2 failed. You got this output after cldevice populate? Does cldev status show FAIL for that disk?

11. Upendra says: 2009/10/16 at 20:52 svcadm enable iscsi/initiator

12. Koko says: 2010/08/04 at 10:18 I found this page quite interesting. I want to try it, but I cannot find the Solaris Cluster Express 12/08; it says that the product is no longer available. Any idea where I can find another download location? Thank you.

13. Chris says: 2010/09/14 at 19:41 Thanks so much for taking the time to do this! This is EXACTLY what I needed and couldn't find anywhere!

14. Baban says: 2011/01/20 at 07:52 Hi, I am new to clustering, and I find the IP addresses assigned to the multipathing group, public interfaces and cluster interconnects quite confusing. Can anyone clarify which and how many IP addresses are assigned to IPMP and the cluster interconnect?
