How to Setup/Configure Root Filesystem for a Failover Guest LDom Controlled by a SUNW.ldom Resource (Doc ID 1366967.1)

APPLIES TO:

Solaris Cluster - Version 3.3 and later
Oracle Solaris on SPARC (64-bit)

GOAL:

This document explains how to configure a SUNW.ldom resource in a Solaris Cluster in order to be able to perform live migration. It also explains why a specific disk configuration is required for the guest logical domain controlled by the SUNW.ldom resource.

SOLUTION:

Two servers with logical domain (LDom) capability have been configured as a 2-node Solaris Cluster: node1 and node2. A guest logical domain (ld1) has been created, and a SUNW.ldom resource has been configured in a failover resource group to control this domain. This SUNW.ldom resource has been created with migration_type set to MIGRATE, which means that it will do live migration.

When the SUNW.ldom resource is switched from node1 to node2, the stop method runs on node1 and performs the following steps to achieve live migration:

- it copies memory pages from the source node to the target node (up to 3 retries),
- then it suspends the activity on the source node,
- copies the last changed memory blocks to the target node,
- and then resumes activity on the target node.

For example, you could have started the following process in ld1 running on node1:

node1# nohup sleep 3600 &
node1# ps -ef
[... process listing showing the sleep 3600 process running alongside system daemons such as smcboot, cron, the NFS and crypto services, and in.routed ...]

Then, once the migration is done and the resource has been stopped on node1 and started on node2, ld1 is running on node2 with the same processes:

node2# ps -ef
[... the same process listing, including the still-running sleep 3600 ...]

The stop method of the SUNW.ldom resource performs the shutdown of the guest LDom on node1. Furthermore, the stop method of SUNW.ldom also performs the startup of the guest LDom on node2, which means that when the stop method of SUNW.ldom completes, the guest LDom (with all its processes) has already been started on node2. The start method of SUNW.ldom runs but is NOT used to start the guest LDom. This behavior of the stop method of SUNW.ldom is unusual, because normally a stop method does not start the resource on the other node.

Only for live migration of LDoms:

* the root filesystem for the failover guest domain must be accessible from both nodes without a storage failover.

The fact that stopping, migrating and starting of a guest LDom is all performed by the stop method makes it obvious that a failover filesystem, configured as an HAStoragePlus resource, cannot be used as the root file system for a guest LDom that shall be live migrated. A failover file system would only be available after its start method has run, which is too late, as the LDom has already been resumed by the stop method.

This means ZFS cannot be used for the LDom's root file system. In other words, the location of the image file that will be used as a backend to create a vdisk CANNOT be on ZFS. ZFS is only available as a failover file system, not as a global (or shared) file system. Possible options for the root file system of a domain with live migration are: pxfs (UFS/SVM), NFS, iSCSI and SAN LUNs, because all of them are accessible at the same time from both nodes.
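As an illustration of the behavior described above, a live migration can be triggered simply by switching the resource group. The following is a minimal sketch, assuming the resource group and resource are named ldom-rg and ldom-rs (the names used in the examples later in this note) and that the ld1# prompt denotes a shell inside the guest domain; the exact output differs between cluster versions.

Check that the resource is configured for live migration (the property must report MIGRATE):

node1# clrs show -p Migration_type ldom-rs

Switch the resource group; the stop method running on node1 live-migrates ld1 to node2:

node1# clrg switch -n node2 ldom-rg

Verify that ld1 is now active on node2 and, from a shell inside the guest, that the process started before the switch is still running:

node2# ldm list ld1
ld1# ps -ef | grep "sleep 3600"

Because the guest is resumed by the stop method, the domain and its processes are already running on node2 by the time the switch command returns.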
Further clarification on when ZFS can be used:

- ZFS can always be used as the root filesystem in the guest LDom itself. In other words, the ZFS root filesystem of the guest LDom is located on the vdisk of the LDom.
- For cold migration (which is migration_type=NORMAL), ZFS can also be used for the LDom's root file system. In other words, the location of the image file that will be used as a backend to create a vdisk can be on ZFS, but only for cold migration.

1.) One way to achieve this is to use a global filesystem mounted on node1 and node2, which contains the root filesystem for the guest logical domain.

a) Mount the global filesystem

# mount -g /dev/global/dsk/d1s0 /global/root_guest

b) Create a 30g file in the global filesystem

# mkfile 30g /global/root_guest/lgd1.vdisk

c) Add this 30g file as a virtual disk for the guest ld1 domain

# ldm add-vdsdev /global/root_guest/lgd1.vdisk rootlgd1@primary-vds0
# ldm add-vdisk diskRoot rootlgd1@primary-vds0 ld1

Now the ld1 domain is bound, started and Solaris is installed on diskRoot in the guest ld1 LDom.

d) Create the resources in the cluster:

# clrs create -g ldom-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/global/root_guest ldom-hasp-rs
# clrs create -g ldom-rg -t SUNW.ldom -p Domain_name=ld1 -p Password_file=/passwd.file -p Resource_dependencies=ldom-hasp-rs ldom-rs

With such a configuration, when an I/O is performed to / in the guest domain, it ends up as a write to the lgd1.vdisk 30g file on a global filesystem. The I/O path is:

write => ufs in guest LDom => virtual disk client (vdc) => virtual disk server (vds) => pxfs client => pxfs server

2.) Another way to achieve this is to use a raw LUN. In this case, a shared LUN is added to the guest domain:

# ldm add-vdsdev /dev/global/rdsk/d10s2 didd10@primary-vds0
# ldm add-vdisk rootd10 didd10@primary-vds0 ld1

Then the domain is bound, started and Solaris is installed on the rootd10 disk. Now you create the resources in the cluster.

For an SMI labeled disk which is used as root disk for the failover guest domain:

# clrs create -g ldom-rg -t SUNW.HAStoragePlus -p GlobalDevicePaths=/dev/global/dsk/d10s2 ldom-hasp-rs

For an EFI labeled disk which is used as root disk for the failover guest domain:

# clrs create -g ldom-rg -t SUNW.HAStoragePlus -p GlobalDevicePaths=/dev/global/dsk/d10s0 ldom-hasp-rs

With Solaris 11.1 and higher, boot from an EFI labeled disk is supported in specific circumstances. An EFI labeled disk does not have a slice 2, therefore the monitoring for HAStoragePlus must be placed on slice 0.

Create the LDom resource by following the documentation specific to the cluster version being run:

- SC 3.3 - How to Configure HA for Oracle VM Server
- SC 4.2 - How to Configure HA for Oracle VM Server
- SC 4.3 - How to Configure HA for Oracle VM Server

This configuration has been qualified recently but is not yet described in the Oracle Solaris Cluster Data Service for Oracle VM Server for SPARC Guide documentation.

Bug 15687408 SUNBT7007367: HA LDom guide must describe placement of guest root FS on raw LUNs

Still have questions? Consider posting them in the Oracle Solaris Cluster Community.
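As a closing note on the raw LUN configuration above, the disk label determines which slice the HAStoragePlus resource should monitor. The following is a minimal sketch of how this could be checked, assuming the same global device d10 used in the example; the exact prtvtoc output depends on the disk and Solaris release.

# prtvtoc /dev/global/rdsk/d10s0

If the partition map contains a backup slice 2 covering the whole disk, the label is SMI and GlobalDevicePaths=/dev/global/dsk/d10s2 is the appropriate setting. If slice 2 is absent and a reserved slice 8 is listed instead, the label is EFI and GlobalDevicePaths=/dev/global/dsk/d10s0 must be used. Once both the HAStoragePlus and SUNW.ldom resources exist, the resource group can be brought online with, for example, clrg online -eM ldom-rg (again assuming the example resource group name used above).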
