Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris
system-log:tyr
The following prompts are used to indicate where each command should be run:

opensolhost#   The OpenSolaris host
cxnode1#       The first Cluster Express node
cxnode2#       The second Cluster Express node
bothnodes#     Repeat on both the first and second cluster nodes
apache-zone#   Within the Apache zone
mysql-zone#    Within the MySQL zone

The following IP addresses were used; modify these to match your local network environment:

192.168.0.111  The IPMP failover IP for cxnode1
192.168.0.112  The IPMP failover IP for cxnode2
192.168.0.118  The IP for the Apache zone
192.168.0.119  The IP for the MySQL zone
[Link]         The first IPMP test address on cxnode1
192.168.0.122  The first IPMP test address on cxnode2
192.168.0.131  The second IPMP test address on cxnode1
192.168.0.132  The second IPMP test address on cxnode2

Sun provide extensive documentation for Sun Cluster 3.2, which for these purposes should match Cluster Express closely enough. You can find a good starting point at this documentation centre page.
opensolhost# pkgadd -d VirtualBox-1.6.2-SunOS-amd64-r31466.pkg
Before the virtual machines can be created some network configuration on the host is required. To enable the front interfaces of the cluster nodes to connect to the local network some virtual interfaces are needed on the host. These will have the desired effect of bridging the local VLAN with the virtual machines. The instructions in the VirtualBox documentation say to use /opt/VirtualBox/setup_vnic.sh to create these interfaces, however I had problems getting that to work. Not only does it need some tweaks to work with OpenSolaris 2008.05, I found I couldn't get the interfaces going even after they were created. Fortunately I came across this blog post, which pointed me in the direction of the steps below.

Two public interfaces are required for each cluster node; these can then be configured with IPMP, just as you would do in a real cluster. Whilst this doesn't really provide more resilience it's a worthwhile exercise. To create these public interfaces a total of four virtual interfaces are required on the host. Each one needs a defined MAC address that you can choose yourself, or you can use the ones suggested here. To create the interfaces do this, replacing e1000g0 with the name of the physical interface on your host:
opensolhost# /usr/lib/vna e1000g0 c0:ff:ee:0:1:0
opensolhost# /usr/lib/vna e1000g0 c0:ff:ee:0:1:1
opensolhost# /usr/lib/vna e1000g0 c0:ff:ee:0:2:0
opensolhost# /usr/lib/vna e1000g0 c0:ff:ee:0:2:1
I've chosen these MAC addresses because c0:ff:ee is easy to remember, and the last two octets represent the cluster node number and then the interface number within that node. Now plumb these interfaces, but don't give them IP addresses:
opensolhost# ifconfig vnic0 plumb
opensolhost# ifconfig vnic1 plumb
opensolhost# ifconfig vnic2 plumb
opensolhost# ifconfig vnic3 plumb
These four vnic interfaces will form the public interfaces of our cluster nodes. However they will not persist over a reboot, so you may want to create a start up script to recreate them; there's a sample script in the linked blog post above. It's now time to create the virtual machines: as your normal user run /opt/VirtualBox/VirtualBox and the VirtualBox window should appear.
To start creating the virtual machine that will serve as the first node of the cluster, click New to fire up the new server wizard. Enter cxnode1 for the name of the first node (or choose something more imaginative) and pick Solaris as the OS. Page through the rest of the wizard choosing 1GB for the memory allocation (this can be reduced later; at least 700MB is recommended) and create the default 16GB disk image.

The network interfaces need to be configured to support the cluster. Click into the Network settings area, change the Adapter 0 type to Intel PRO/1000 MT Desktop and change Attached to to Host Interface. Then enter the first MAC address you configured for the VNICs, which if you followed the example above will be c0ffee000100, and enter an interface name of vnic0. This Intel PRO/1000 MT Desktop adapter will appear as e1000g0 within the virtual machine. Generally I've grown to like these adapters, not least because they use a GLDv3 driver. Now for Adapter 1, enable the adapter, change the type as before, set Attached to to Host Interface, the MAC to c0ffee000101 (or as appropriate) and the interface name to vnic1. Then enable Adapter 2, set Attached to to Internal Network and set the Network Name to Interconnect 1. Repeat for Adapter 3 but set the Network Name to Interconnect 2.

Finally point the CD/DVD-ROM at the ISO image you downloaded for Solaris Express Build 86. You need to add the ISO with the Virtual Disk Manager to make it available to the machine. You can use the /net NFS automounter to point to an NFS share where this image resides if required. Finally change the boot order, in General / Advanced, so that Hard Disk comes before CD/DVD. This means that it will initially boot the install media, but once installed will boot from the installed drive.

Repeat the above steps to create a second cluster node. Ensure that Adapter 2 and Adapter 3 are connected to the same internal networks as for the first cluster node. Adapters 0 and 1 should be attached to the third and fourth VNICs (vnic2 and vnic3) created previously.
Installing Solaris
Solaris now needs to be installed on both cluster nodes; repeat the following steps for each node. To boot a virtual machine click Start and the machine should boot and display the Grub menu. DON'T pick the default of Solaris Express Developer Edition but rather choose Solaris Express. If you choose the Developer Edition option you'll get the SXDE installer, which does not offer the flexibility required around partition layout. Pick one of the Solaris Interactive install options as per your personal preference. If you've ever installed one of the mainline Solaris releases then you'll be at home here. Suggested settings for the system identification phase:
Networked
Configure e1000g0 (leave the others for the time being)
No DHCP
Hostname: cxnode1 (or choose something yourself)
IP: [Link] (or something else as appropriate)
Netmask: 255.255.255.0 (or as appropriate)
No IPv6
Default Route: Specify [Link] (the IP of your default router)
Don't enable Kerberos
None for naming service
Default derived domain for NFSv4 domain
Specify time zone as required
Pick a root password

Then in the installation phase pick the following options:
Reboot automatically: no
Eject additional CDs: yes (not that we'll be using any)
Media: CD/DVD
Install Type: Custom
Add additional locales as required; we're using C as the default
Web Start Scan location: None
Software group: Entire Group Plus OEM / Default Packages
Leave the fdisk partition as the default, one single Solaris partition covering the whole disk

To support the cluster a specific filesystem layout is required. Click Modify then set it as follows (if space is needed for a Live Upgrade in the future then an additional virtual disk can be attached):
Slice 0: / 13735 MB
Slice 1: swap 2048 MB
Slice 6: /globaldevices 512 MB
Slice 7: (leave blank) 32 MB

Now just confirm and start the install. The installer should run through in due course; once complete reboot the machine and check it boots up fine for the first time. By default Solaris will boot up into a GNOME desktop. If you want to disable the graphical login prompt from launching then do svcadm disable cde-login.

Before the cluster framework is installed the VirtualBox Guest Additions need to be installed; these serve a similar role to VMware Tools in that they provide better integration with the host environment. Specifically the time synchronization facilities are required to assist with keeping the cluster nodes in sync. If you still have the SXDE DVD image mounted then eject it in the guest and unmount it from the Devices menu. Then choose Install Guest Additions from the VirtualBox menu. The Guest Additions ISO should mount; su to root and pkgadd the Guest Additions package from the CD. If you're running X you should log out and back in to activate the X11 features. Repeat the above steps for both cluster nodes. It's worth taking a snapshot at this point so if you run into problems later you can just snap it back to this post-install state.

Next the public interfaces can be configured into IPMP groups via the /etc/hostname.* files, for example:
cxnode1:/etc/hostname.e1000g1
192.168.0.131 netmask 255.255.255.0 broadcast 192.168.0.255 deprecated -failover group public up
cxnode2:/etc/hostname.e1000g0
192.168.0.122 netmask 255.255.255.0 broadcast 192.168.0.255 deprecated -failover group public up
addif 192.168.0.112 up
cxnode2:/etc/hostname.e1000g1
192.168.0.132 netmask 255.255.255.0 broadcast 192.168.0.255 deprecated -failover group public up
Also check that /etc/defaultrouter is correct. RPC communication must be enabled for the cluster framework to function. To do this:
bothnodes# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
bothnodes# svcadm refresh network/rpc/bind:default
bothnodes# svcprop network/rpc/bind:default | grep local_only
The last command should return false. Modify your path to include
/usr/bin
/usr/cluster/bin
/usr/sbin
/usr/ccs/bin
Also check your umask is set to 0022 and change it if not. Finally we need to ensure the cluster nodes exist in /etc/inet/hosts on both hosts. For example
192.168.0.111 cxnode1
192.168.0.112 cxnode2
After making the above changes bounce the nodes to check it all persists across a reboot. Once the above has been repeated on both cluster nodes it is time to install the cluster framework.
When the first node reboots it will establish the cluster with itself as the only node. Don't worry about any errors about /etc/cluster/ccr/did_instances; the DID database hasn't been created yet. The installer will then configure cxnode2 and reboot it. When it boots back up it will join the new cluster. We now have an established cluster! However it is in installmode until a quorum device is configured.
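This assumes a quorum server is already running on the OpenSolaris host. If it isn't, a hedged sketch of starting one (assuming the Cluster Express quorum server packages are installed on the host and the default port 9000 is configured in /etc/scqsd/scqsd.conf, matching the port used below):

opensolhost# /usr/cluster/bin/clquorumserver start 9000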
The quorum server can now be configured into the cluster to get it fully operational. On one of the cluster nodes run clsetup. Answer yes to confirm you have finished the initial cluster setup and yes to add a quorum device. Now pick option 3 (Quorum Server).
Answer yes to continue, then give a name for the device, such as opensolhost. Enter the IP of your host when prompted and 9000 as the port number. Allow it to proceed then choose yes to reset installmode. The cluster is now properly established. You can check the quorum registration on the OpenSolaris host by running /usr/cluster/bin/clquorumserver show; you should see something like this:
opensolhost# /usr/cluster/bin/clquorumserver show

--- Cluster DEV1 (id 0x484E9DB9) Registrations ---

Node ID:            1
Registration key:   0x484e9db900000001

Node ID:            2
Registration key:   0x484e9db900000002
Provisioning storage
To enable the creation of the Apache and MySQL zones it's necessary to present some storage to the cluster that can be shared between them. As OpenSolaris is running on the host, take advantage of the built-in iSCSI support in ZFS. First create some ZFS volumes, one for each clustered service, e.g.:
opensolhost# zfs create -V 8g rpool/apache
opensolhost# zfs create -V 8g rpool/mysql
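The volumes also need to be exported over iSCSI so the nodes can see them. With the ZFS iSCSI support of this OpenSolaris vintage that is a single property per volume; a hedged sketch assuming the pre-COMSTAR shareiscsi property:

opensolhost# zfs set shareiscsi=on rpool/apache
opensolhost# zfs set shareiscsi=on rpool/mysql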
Now configure the nodes to see the presented storage. Do this on both of the nodes, replacing 192.168.0.104 with the IP of your host.
bothnodes# iscsiadm modify discovery --sendtargets enable
bothnodes# iscsiadm add discovery-address 192.168.0.104
bothnodes# svcadm enable network/iscsi_initiator
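To see which iSCSI target maps to which OS device you can list the discovered targets with their LUN details; a hedged example:

bothnodes# iscsiadm list target -S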
Make a note of the OS Device Name to Alias mapping as you need to put the right LUN into the correct resource group. You can also confirm the storage is available by running format, e.g.:
bothnodes# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t01000017F202642400002A00484FCCBFd0
          /scsi_vhci/disk@g01000017f202642400002a00484fccbf
       2. c2t01000017F202642400002A00484FCCC1d0
          /scsi_vhci/disk@g01000017f202642400002a00484fccc1
To make this storage available to the cluster you must populate the DID device database. This is performed via cldevice and only needs to be run on one of the nodes:
cxnode1# cldevice populate
Configuring DID devices
cldevice: (C507896) Inquiry on device /dev/rdsk/c0d0s2 failed.
did instance 5 created.
did subpath cxnode1:/dev/rdsk/c2t01000017F202642400002A00484FCCC1d0 created for instance 5.
did instance 6 created.
did subpath cxnode1:/dev/rdsk/c2t01000017F202642400002A00484FCCBFd0 created for instance 6.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
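You can then check the mapping of DID instances to the underlying devices on either node, for example:

cxnode1# cldevice list -v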
Now the basic configuration of the cluster is fairly complete. It's a good opportunity to shut it down and take a snapshot. When shutting down the entire cluster you must use cluster shutdown rather than just shutting down the individual nodes. If you don't then you must bring up the nodes in the reverse order of shutting them down. For an immediate shutdown do cluster shutdown -y -g0.
The standard SMI label needs to be replaced with an EFI one. When I've been working with ZFS previously this has always happened automatically, but it didn't this time, possibly because we are going to add the DID device to the zpool rather than a traditional disk device. To change it manually run format with the -e option.
cxnode1# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t01000017F202642400002A00484FCCBFd0
          /scsi_vhci/disk@g01000017f202642400002a00484fccbf
       2. c2t01000017F202642400002A00484FCCC1d0
          /scsi_vhci/disk@g01000017f202642400002a00484fccc1
Specify disk (enter its number): 2
selecting c2t01000017F202642400002A00484FCCC1d0
[disk formatted]
Error occurred with device in use checking: No such device
format> label
Error occurred with device in use checking: No such device
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 1
Warning: This disk has an SMI label. Changing to EFI label will erase all
current partitions.
Continue? y
format> p
partition> p
Current partition table (default):
Total disk sectors available: 16760798 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector       Size       Last Sector
  0        usr    wm              34        7.99GB        16760798
  1 unassigned    wm               0            0                0
  2 unassigned    wm               0            0                0
  3 unassigned    wm               0            0                0
  4 unassigned    wm               0            0                0
  5 unassigned    wm               0            0                0
  6 unassigned    wm               0            0                0
  7 unassigned    wm               0            0                0
  8   reserved    wm        16760799        8.00MB        16777182
If you'd like to suppress the "Error occurred with device in use checking: No such device" warning then set NOINUSE_CHECK as an environment variable. It seems to be this bug that's causing the warning. Now the disks can be added to a ZFS pool. Make sure you add the correct DID device here:
cxnode1# zpool create apache-pool /dev/did/dsk/d6s0
cxnode1# zpool create mysql-pool /dev/did/dsk/d5s0
cxnode1# zpool list
NAME          SIZE   USED   AVAIL
apache-pool   7.94G  111K   7.94G
mysql-pool    7.94G  111K   7.94G
Now create some filesystems in the pools. For each service three file systems are required:
zone - This will become the zone root
data - This will become /data within the zone and be used to store application data
params - This will be used to store a cluster parameter file
cxnode1# zfs create apache-pool/zone
cxnode1# zfs create apache-pool/data
cxnode1# zfs create apache-pool/params
cxnode1# zfs create mysql-pool/zone
cxnode1# zfs create mysql-pool/data
cxnode1# zfs create mysql-pool/params
To make the storage usable with the cluster it needs to be configured into a resource group. Create two resource groups, one for each service
cxnode1# clresourcegroup create apache-rg
cxnode1# clresourcegroup create mysql-rg
SUNW.HAStoragePlus is the resource type that can manage ZFS storage, along with SVM and VxVM. It needs to be registered with the cluster with clresourcetype register; this only needs to be performed on one node.
cxnode1# clresourcetype register SUNW.HAStoragePlus
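Each ZFS pool is then placed under cluster control as a SUNW.HAStoragePlus resource in its resource group. A hedged sketch (mysql-stor matches the HAS_RS value used in the MySQL agent configuration later; apache-stor is an assumed equivalent):

cxnode1# clresource create -g apache-rg -t SUNW.HAStoragePlus -p Zpools=apache-pool apache-stor
cxnode1# clresource create -g mysql-rg -t SUNW.HAStoragePlus -p Zpools=mysql-pool mysql-stor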
The resource groups will currently be in the Unmanaged state; you can confirm this with clresourcegroup status. To bring them under cluster management bring them online on the first node:
cxnode1# clresourcegroup online -M -n cxnode1 apache-rg
Then check the file systems are available on the first node but not on the second.
cxnode1# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
apache-pool          158K  7.81G    21K  /apache-pool
apache-pool/data      18K  7.81G    18K  /apache-pool/data
apache-pool/params    18K  7.81G    18K  /apache-pool/params
apache-pool/zone      18K  7.81G    18K  /apache-pool/zone

cxnode2# zfs list
no datasets available
Now move the pool to the other node and check that it becomes available on that node
cxnode1# clresourcegroup switch -n cxnode2 apache-rg
cxnode1# zfs list
no datasets available

cxnode2# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
apache-pool          158K  7.81G    21K  /apache-pool
apache-pool/data      18K  7.81G    18K  /apache-pool/data
apache-pool/params    18K  7.81G    18K  /apache-pool/params
apache-pool/zone      18K  7.81G    18K  /apache-pool/zone
Then repeat the above online and failover tests for mysql-rg and mysql-pool. As the ZFS pools have now been added as cluster managed resources you must use the cluster to manage them; don't perform export/import operations manually.
To provide IP addressing for the zones, SUNW.LogicalHostname resources will be used rather than directly configuring the zones with an IP address. First, entries need adding to /etc/hosts on both nodes, e.g.:
192.168.0.118 apache-zone
192.168.0.119 mysql-zone
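A SUNW.LogicalHostname resource in each resource group then provides the zone's IP. A hedged sketch of creating these (mysql-addr matches the LH value used in the MySQL agent configuration later; apache-addr is an assumed equivalent):

cxnode1# clreslogicalhostname create -g apache-rg -h apache-zone apache-addr
cxnode1# clreslogicalhostname create -g mysql-rg -h mysql-zone mysql-addr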
Now it's time to create the zones. This is a really simple zone configuration; you could add resource controls or other features as desired. autoboot must be set to false as the cluster will be managing the starting and stopping of the zone.
cxnode1# zonecfg -z apache
apache: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:apache> create
zonecfg:apache> set zonepath=/apache-pool/zone
zonecfg:apache> set autoboot=false
zonecfg:apache> add dataset
zonecfg:apache:dataset> set name=apache-pool/data
zonecfg:apache:dataset> end
zonecfg:apache> verify
zonecfg:apache> commit
zonecfg:apache> exit

cxnode1# zonecfg -z mysql
mysql: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:mysql> create
zonecfg:mysql> set zonepath=/mysql-pool/zone
zonecfg:mysql> set autoboot=false
zonecfg:mysql> add dataset
zonecfg:mysql:dataset> set name=mysql-pool/data
zonecfg:mysql:dataset> end
zonecfg:mysql> verify
zonecfg:mysql> commit
zonecfg:mysql> exit
To enable the zones to be installed we must change the permissions on the zone roots:
cxnode1# chmod 700 /apache-pool/zone
cxnode1# chmod 700 /mysql-pool/zone
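The zones can then be installed with zoneadm (these are the zoneadm steps referred to again below); a minimal sketch:

cxnode1# zoneadm -z apache install
cxnode1# zoneadm -z mysql install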
The documentation has this to say about configuring zones:

Caution: If the zone is to run in a failover configuration, each node being able to host that zone must have the exact same zone configuration for that zone. After installing the zone on the first node, the zone's zone path already exists on the zone's disk storage. Therefore it must get removed on the next node prior to successfully create and install the zone. [...] Only the zone's zone path created on the last node will be kept as the final zone path for the failover zone. For that reason any configuration and customization within the failover zone should get performed after the failover zone is known to all nodes that should be able to host it.

To achieve this the newly created zone must be destroyed and recreated on the second node. To my mind this is a really ugly way of achieving it; the cluster should be able to manage this itself and make suitable configuration changes on a node when the zone is configured into the cluster. In later releases of Sun Cluster 3.1 the recommended way to manage this was to hack the /etc/zones files to replicate the configuration of the zones from one node to another, however that method is no longer supported, so the official instructions will be followed. To do this migrate the storage for the zones to the other node:
cxnode1# clresourcegroup switch -n cxnode2 apache-rg
cxnode1# clresourcegroup switch -n cxnode2 mysql-rg
Then delete the previously installed zone roots on the second node
cxnode2# rm -rf /apache-pool/zone/*
cxnode2# rm -rf /mysql-pool/zone/*
Now repeat the zonecfg and zoneadm steps above to recreate both of the zones.
When the zone installs have completed, move the storage back to the first node.
cxnode1# clresourcegroup switch -n cxnode1 apache-rg
cxnode1# clresourcegroup switch -n cxnode1 mysql-rg
The zones can now be booted and configured. Repeat these steps for the Apache and the MySQL zones. Boot the zone:
cxnode1# zoneadm -z apache boot
I got this error when booting the zones: "Unable to set route for interface lo0 to *??9?x". I'm not sure what this means but it doesn't seem to impact anything. Log in to the zone's console to configure it:
cxnode1# zlogin -C apache
You'll be asked a few questions to configure the zone. Choose language, terminal type and time zone information as appropriate. Enter the same hostname as you used above, e.g. apache-zone or mysql-zone. I received some alerts about avahi-bridge-dsd failing to start when booting; as far as I can tell it's some sort of Bonjour networking thing, we don't need it here so it's OK to disable. You can also disable some other services that are not required to free up some resources:
apache-zone# svcadm disable cde-login
apache-zone# svcadm disable sendmail
apache-zone# svcadm disable webconsole
apache-zone# svcadm disable avahi-bridge-dsd
apache-zone# svcadm disable ppd-cache-update
Now mount the ZFS file systems that have been delegated to the zone at an appropriate place. To do this on the Apache zone:
apache-zone# zfs set mountpoint=/data apache-pool/data
Wait for the zone to finish booting and check you don't have any failed services with svcs -xv. Then shut the zone down and repeat for the other zone.
The zones can now be registered. First you need to register the SUNW.gds resource type. On one node do:
cxnode1# clresourcetype register SUNW.gds
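The zones themselves are controlled by the zone boot (sczbt) component of the SUNWsczone agent; the ZONE_BT and SCZBT_RS values in the configs further down refer to these resources, and the params filesystems created earlier hold their parameter files. A hedged sketch for the Apache zone follows (the field names are from the sczbt_config template as I remember it, and apache-addr and apache-stor are assumed resource names, so check against the supplied template); repeat with the equivalent values for the MySQL zone:

cxnode1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /etc/sczbt_config.apache
cxnode1# vi /etc/sczbt_config.apache

RS=apache-zone
RG=apache-rg
PARAMETERDIR=/apache-pool/params
SC_NETWORK=true
SC_LH=apache-addr
FAILOVER=true
HAS_RS=apache-stor
Zonename=apache
Zonebootopt=
Milestone=multi-user-server

cxnode1# /opt/SUNWsczone/sczbt/util/sczbt_register -f /etc/sczbt_config.apache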
Then enable the Apache zone and log in to it. You should see the LogicalHostname resource has been assigned to the zone
cxnode1# clresource enable apache-zone
cxnode1# zoneadm list -cv
  ID NAME     STATUS     PATH                BRAND    IP
   0 global   running    /                   native   shared
   1 apache   running    /apache-pool/zone   native   shared
   - mysql    installed  /mysql-pool/zone    native   shared
cxnode1# zlogin apache
[Connected to zone 'apache' pts/2]
Last login: Tue Jun 17 20:23:08 on pts/2
Sun Microsystems Inc.   SunOS 5.11      snv_86  January 2008
apache-zone# ifconfig -a
lo0:1: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g1:1: flags=201040843 mtu 1500 index 3
        inet 192.168.0.118 netmask ffffff00 broadcast 192.168.0.255
Then create the directory for the databases and one for some logs
mysql-zone# mkdir /data/mysql
mysql-zone# mkdir /data/logs
mysql-zone# chown mysql:mysql /data/mysql
mysql-zone# chown mysql:mysql /data/logs
Now set a password for the root user of the database; it's set to root in this case.
mysql-zone# /usr/mysql/5.0/bin/mysqladmin -u root password root
mysql-zone# /usr/mysql/5.0/bin/mysqladmin -u root -h localhost -p password root
Enter password: root
The /etc/hosts file in the zone needs to be modified so that mysql-zone is the name for the clustered IP address of the zone rather than for localhost; the address for apache-zone also needs to be added.
127.0.0.1      localhost
192.168.0.119  mysql-zone   loghost
192.168.0.118  apache-zone
Allow the root user to connect from the Apache zone.
mysql-zone# /usr/mysql/5.0/bin/mysql -p
Enter password: root
mysql> GRANT ALL ON *.* TO 'root'@'apache-zone' identified by 'root';
MySQL is now configured and ready to be clustered. We'll be using a process loosely based on the one documented here. Alternatively it would be possible to use SMF to manage the service; you can see an example of that method in the Apache configuration later.
First a user for the fault monitor must be created, along with a test database for it to use. A script is provided with the agent to do this for you. It will grant the fault monitor user PROCESS, SELECT, RELOAD, SHUTDOWN and SUPER on all databases, then ALL privileges on the test database. To create the required users you need to provide a config file. Copy the supplied template into /etc and edit it there:
mysql-zone# cp /opt/SUNWscmys/util/mysql_config /etc
mysql-zone# vi /etc/mysql_config
Use these values. Note that MYSQL_DATADIR is the location of the my.cnf file, not the directory containing the databases. The meaning of DATADIR changed in 5.0.3 to mean the location of the data and not the config directory, but for this configuration it should point to the config directory.
MYSQL_BASE=/usr/mysql/5.0
MYSQL_USER=root
MYSQL_PASSWD=root
MYSQL_HOST=mysql-zone
FMUSER=fmuser
FMPASS=fmpass
MYSQL_SOCK=/tmp/mysql.sock
MYSQL_NIC_HOSTNAME="mysql-zone"
MYSQL_DATADIR=/etc/mysql/5.0
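With the config in place the supplied registration script creates the fault monitor user and test database. A hedged sketch, assuming the script in the standard SUNWscmys location accepts the config file with -f and that MySQL is still running in the zone:

mysql-zone# /opt/SUNWscmys/util/mysql_register -f /etc/mysql_config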
Now shut down the database so it can be brought online by the cluster.
mysql-zone# svcadm disable mysql:version_50
Drop back to the global zone and copy the MySQL agent configuration template to /etc
cxnode1# cp /opt/SUNWscmys/util/ha_mysql_config /etc/ha_mysql_config
Use these settings; this time DATADIR should be set to point to the actual data location and not the location of the config. Descriptions of the configuration options are given in the file:
RS=mysql-server
RG=mysql-rg
PORT=3306
LH=mysql-addr
HAS_RS=mysql-stor
ZONE=mysql
ZONE_BT=mysql-zone
PROJECT=
BASEDIR=/usr/mysql/5.0
DATADIR=/data/mysql
MYSQLUSER=mysql
MYSQLHOST=mysql-zone
FMUSER=fmuser
FMPASS=fmpass
LOGDIR=/data/logs/
CHECK=NO
Before bringing this online a tweak is needed to the supplied agent scripts. As mentioned briefly above, the use of DATADIR is a bit broken. If you try to bring MySQL online now it will fail as it won't be able to find its configuration file. The agent scripts have this hard coded to ${MYSQL_DATADIR}/my.cnf, which is no use for our purposes. In the zone edit /opt/SUNWscmys/bin/functions and make this replacement; ensure you edit the copy in the MySQL zone and not the one in the global zone. Replace
MYSQL_DEFAULT_FILE=${MYSQL_DATADIR}/my.cnf
with
MYSQL_DEFAULT_FILE=/etc/mysql/5.0/my.cnf
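With the functions script fixed, the MySQL resource can be registered from the global zone and enabled. A hedged sketch, assuming the standard SUNWscmys registration script accepts the config file with -f and using the RS name from the config above:

cxnode1# /opt/SUNWscmys/util/ha_mysql_register -f /etc/ha_mysql_config
cxnode1# clresource enable mysql-server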
Configuring Apache
Apache is going to be used to provide a web front end to the MySQL install, via the ubiquitous phpMyAdmin. The supplied Apache install (at /usr/apache2/2.2) is going to be used. As Apache will be running in a zone it can be used unmodified, keeping the configuration in /etc and not worrying about any potential conflicts with other Apache installs. At present the supplied Apache resource type does not directly support running the resource in a zone, or at least I couldn't figure it out. So instead some of the provided zone monitoring tools are going to be used to ensure Apache is up and running. This uses a combination of SMF and a shell script.

To begin, Apache must be configured. Bring the zone online on one of the nodes and log in to it. The configuration file for Apache is at /etc/apache2/2.2/httpd.conf. Only a small tweak is required to move the document root onto the ZFS file system we have prepared for it. You could, if desired, also move other parts of the configuration, such as the log location. For this example just change DocumentRoot to /data/htdocs and update the Directory stanza a page or so below it. Then mkdir /data/htdocs. That completes our very simple Apache configuration, so start it up: svcadm enable apache22.

Download phpMyAdmin from here. Solaris now ships with p7zip to manage 7z files, so you could download that version to save a bit of bandwidth if you like; you can extract it with p7zip -d [filename]. Once extracted move the extracted directory to /data/htdocs/phpmyadmin. Add mysql-zone to /etc/hosts, e.g.:
192.168.0.119 mysql-zone
To enable monitoring of the Apache instance we need a simple probe script. Make a directory /opt/probes in the zone and create a file called probe-apache.ksh with this content:
#!/usr/bin/ksh
if echo "GET; exit" | mconnect -p 80 > /dev/null 2>&1
then
    exit 0
else
    exit 100
fi
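As a quick sanity check you can run the probe by hand while Apache is still enabled; it should exit 0 when the connect succeeds (invoking it via ksh avoids needing the execute bit set yet):

apache-zone# ksh /opt/probes/probe-apache.ksh; echo $?
0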
Then chmod 755 /opt/probes/probe-apache.ksh. All this does is a simple connect on port 80; it could be replaced with something more complex if needed. Finally disable Apache so that the cluster can start it:
apache-zone# svcadm disable apache22
Drop back to the global zone, copy /opt/SUNWsczone/sczsmf/util/sczsmf_config to /etc/sczsmf_config.apache and set the following:
RS=apache-server
RG=apache-rg
SCZBT_RS=apache-zone
ZONE=apache
SERVICE=apache22
RECURSIVE=true
STATE=true
SERVICE_PROBE="/opt/probes/probe-apache.ksh"
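The SMF proxy resource is then registered and enabled from the global zone. A hedged sketch, assuming the standard SUNWsczone registration script and the RS name above:

cxnode1# /opt/SUNWsczone/sczsmf/util/sczsmf_register -f /etc/sczsmf_config.apache
cxnode1# clresource enable apache-server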
Conclusions
When I started this work I wasn't sure whether it was going to be possible or not, but despite a couple of bumps along the way I'm happy with the end result. Whilst it might not be a perfect match for a real cluster it certainly provides enough opportunity for testing and for use in training.

Categorised as: Solaris
14 Comments
1. Solaris Cluster testbed - [Link] says: 2008/08/05 at 15:43
[...] There is an interesting walkthrough for testing Solaris Cluster with iSCSI and Virtualbox at Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris. Whoever wants to play with a mature cluster framework should give this a try. Posted by Joerg [...]
2. Harald Dumdey Blog Archive links for 2008-08-07 [[Link]] says: 2008/08/07 at 10:28
[...] Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris (tags: zfs sun solaris cluster computer software) [...]
3. daniel.p says: 2008/08/29 at 14:57
great work !! thanks much. i have spent a lot of time playing with cluster in ESX vmware infrastructure server (on x86) to make it fully working .. break point was for me to configure shared device (because everything else had worked for me).. now, i am encouraged by this article. so, anyway my conditions changed, because our company has bought t5240 sparc server (and got as a gift from [Link] two additional fairly historic netra servers) .. but with no diskpool, so shared storage in cluster still pains me .. thanks much for this article .. tomorrow i'll begin with *second service* with virtual box instead of ESX vmware .. Very Best Regards by daniel
4. Gideon says: 2008/09/04 at 21:52
I've set up something similar, but I've found that only one of my two virtual nodes can access the iSCSI device. The other node sees it, but if I try to run, say, format on it, I get an error about a reservation conflict. Any ideas?
5. Franky G says: 2008/09/12 at 23:39
This is fantastic. I'm sure people will agree that the shared storage has been a stickler for most. I will attempt this with the host OS on Linux CentOS 5.1. Once I'm done, I'll post my results. Franky G
6. A trip down memory lane Dominiks Weblog says: 2008/12/28 at 23:22
[...] use the Christmas period to build a Solaris Cluster out of VirtualBox machines on scaleo (see here and here). But two unforeseen events struck [...]
7. system-log:tyr Blog Archive Linux zones on Solaris Express X86 says: 2009/01/28 at 13:39
[...] takes a look at the interesting world of the Linux branded zone. I've posted about VirtualBox before and I hope to take a look at xVM Server (Xen) in a future post. Read on for my first steps with [...]
8. So viel Interessantes und so wenig Zeit Dominiks Weblog says: 2009/01/31 at 22:16
[...] virtual Sun Cluster with [...]
9. fii says: 2009/03/02 at 03:49
Thanks for this info. Tried it and it works perfect. I now have a sun cluster lab and I'm about to throw in oracle rac and stuff in there. Can I buy you a beer the next time I'm in Leeds?
10. Aero says: 2009/07/09 at 23:20
Hello, thanks for posting the procedure. Can you tell me what is this error: cldevice: (C507896) Inquiry on device /dev/rdsk/c0d0s2 failed. You got this output after cldevice populate? Does cldev status show FAIL for that disk?
11.
12. Koko says: 2010/08/04 at 10:18
I found this page quite interesting. I want to try it. But I cannot find the Solaris Cluster Express 12/08, it says that the product is no longer available. Any idea where can I find another location? Thank You
13. Chris says: 2010/09/14 at 19:41
Thanks so much for taking the time to do this! This is EXACTLY what I needed and couldn't find anywhere!
14. Baban says: 2011/01/20 at 07:52
Hi, I am new to clustering. And I do find the IP addresses assigned to the multipathing group, public interfaces & cluster interconnects to be quite confusing. Can anyone clarify which and how many IP addresses are assigned to IPMP and the cluster interconnect?