Customer:
TASK Number:
Technician:
Version EIS-DVD:
Date:
• It is recommended that the EIS web pages are checked for the latest version of this
checklist prior to commencing the installation.
• It is assumed that the installation is carried out with the help of the current EIS-DVD.
• The idea behind this checklist is to help the installer achieve a "good" installation.
• It is assumed that the installer has attended the appropriate training classes.
• The installation should be prepared using EISdoc V4.
• It is not intended that this checklist be handed over to the customer.
• This installation checklist for Solaris Cluster is to be used together with the
installation checklist(s) as appropriate for Solaris 11, server, storage etc.
• This checklist is for Solaris Cluster 4.3 in combination with Solaris 11.3 (SPARC & x86).
SC4.3 can also run on Solaris 11.2 SRU13.6 or higher but is not supported with
earlier Solaris 11 releases or Solaris 10.
• Supported configurations are detailed in the official Solaris Cluster configuration guide:
http://www.oracle.com/technetwork/server-storage/solaris-cluster/overview/solariscluster4-compatibilityguide-1429037.pdf
• A series of Assessment Tests for the completed cluster are available as described in
the EIS Test Procedures Plan. This document should be available during the
installation.
• The entire documentation for Solaris Cluster 4.3 can be found at:
http://docs.oracle.com/cd/E56676_01/
• Feedback on issues with EIS content or product quality is welcome – refer to the last
page of this checklist.
PREPARATION
If non-Oracle/Sun storage is involved, ensure that the information on the Open Storage Program was consulted:
http://www.oracle.com/technetwork/server-storage/solaris-cluster/partnerprogram-cluster-168135.pdf
Currently no third-party storage is qualified!
EIS Installation Configuration Plan & Test Procedures Plan complete?
EISdoc V4: Use the appropriate BUILD & TPP templates, inserted into the “Master” files (Chapter Oracle Solaris Cluster):
EIS-BUILD-Cluster.odt
EIS-TPP-Cluster.odt
Ensure that the intended Solaris Cluster configuration is supported. Consult the Configuration Guide for Solaris Cluster for details:
http://www.oracle.com/technetwork/server-storage/solaris-cluster/overview/solariscluster4-compatibilityguide-1429037.pdf
FABs/EIS-ALERTs reviewed?
MANDATORY: Ensure that the licenses are available for Solaris Cluster
Framework & Agents.
Only x86: Run fdisk on each disk and create a Solaris partition if not already done!
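A minimal sketch, assuming the device name c0t0d0 as a placeholder (adjust to your disks); -B creates a single Solaris partition spanning the whole disk:
# fdisk -B /dev/rdsk/c0t0d0p0
# fdisk -W - /dev/rdsk/c0t0d0p0 (prints the resulting partition table for verification)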
Notice: MPxIO is enabled by default for fp.conf, iscsi.conf and mpt_sas.conf. Do not run the stmsboot command if SC is already installed; refer to the stmsboot man page if necessary.
If not enabled for your controller, run:
# stmsboot -D fp -e
# stmsboot -D iscsi -e
# stmsboot -D mpt -e
# stmsboot -D mpt_sas -e
For safety reasons clean the devices on all nodes with:
# devfsadm -C
You may experience problems within the SC installation if the devices are not clean.
Ensure that the local_only property of rpcbind is set to false:
# svcprop network/rpc/bind:default | grep local_only
If not false, run:
# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
# svcadm refresh network/rpc/bind:default
Note: With the “netservices open” command you can reverse the “secure
by default” feature.
Disable NWAM from the console (all network connections will be lost):
# netadm enable -p ncp defaultfixed
# netadm list -p ncp defaultfixed
NWAM cannot coexist with SC. Therefore enable the defaultfixed NCP to disable NWAM.
Public Network Management – Each public-network adapter that is used
for data-service traffic must belong to a PNM object that includes IPMP
groups, link aggregations, and VNICs that are directly backed by link
aggregations. If a public-network adapter is not used for data-service
traffic, you do not have to configure it in a PNM object.
Optional: Configure IPMP (probe-based or link-based)
Configure the public network for net0 or follow the next examples to
configure IPMP groups.
Example for net0:
# ipadm create-ip net0
# ipadm create-addr -T static -a local=10.xxx.xxx.xx/24 net0/pri
# route -p add default 10.xxx.xxx.1
# edit /etc/netmasks (optional, but required for logical host resource)
# configure name services
Example probe-based IPMP group active-active with interfaces net0 and
net1 with one production IP:
# ipadm create-ipmp ipmp0
# ipadm create-ip net0
# ipadm create-ip net1
# ipadm add-ipmp -i net0 -i net1 ipmp0
# ipadm create-addr -T static -a ipmp0-host ipmp0/data1
# ipadm create-addr -T static -a net0-test1 net0/test1
# ipadm create-addr -T static -a net1-test1 net1/test1
If the defaultrouter is NOT 100% available, see MOS Document ID 1010640.1
• Do not use test IP for normal applications.
• Test IP for all adapters in the same IPMP group must belong to a single
IP subnet.
To enable transitive probing for probe-based IPMP only if no test address
is configured:
# svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
# svcadm refresh svc:/network/ipmp:default
OPTIONAL: CONFIGURE LINK AGGREGATION
Since SC 4.3, link aggregation can be used instead of IPMP. Examples are available at:
https://blogs.oracle.com/SC/entry/new_choices_for_oracle_solaris
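A minimal trunk-aggregation sketch for the public network, assuming interfaces net0/net1 and a placeholder address (the blog above covers DLMP and further variants):
# dladm create-aggr -l net0 -l net1 aggr1
# ipadm create-ip aggr1
# ipadm create-addr -T static -a local=10.xxx.xxx.xx/24 aggr1/pub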
INSTALLING SOLARIS CLUSTER 4.3
Two options to install Solaris Cluster 4.3:
A) If the cluster nodes have direct access to the Internet and customer has
valid support contract for the system.
B) Using ISO image of SC 4.3 (included on EIS-DVD ≥02DEC15).
If there is a repository server available at the customer's site, it is also possible to add the ISO image to this repository server (not covered in this checklist).
A: Installation if the Cluster has Direct Access to the Internet
Go to http://pkg-register.oracle.com (a valid MOS Account is required!)
Make ISO image of release available, e.g:
# mkdir /screpo
# mount -F hsfs {full_path_to}/osc-4_3-repo-full.iso /screpo
Set the publisher to use the local 'main repo' repository in place of the original:
# pkg set-publisher -G '*' -g file:///screpo/repo ha-cluster
Notice: If you change your publisher back to the Internet, please be aware
that the newest versions are available at:
solaris https://pkg.oracle.com/solaris/support/
ha-cluster https://pkg.oracle.com/ha-cluster/support/
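A sketch of pointing the ha-cluster publisher back at the support repository, assuming the SSL key/certificate obtained via http://pkg-register.oracle.com (the paths are placeholders):
# pkg set-publisher -k /var/pkg/ssl/Oracle_key.pem -c /var/pkg/ssl/Oracle_cert.pem \
-G '*' -g https://pkg.oracle.com/ha-cluster/support/ ha-cluster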
WHEN PUBLISHER IS CONFIGURED CONTINUE TO INSTALL
OSC 4.3
Install the necessary packages:
# pkg install --accept ha-cluster-framework-full
This command creates a 'XXXXX-backup-1' boot environment before the SC packages are installed. Furthermore it creates a 'XXXXX-backup-2' boot environment before it changes the default Java version from 8 to 7.
Optional: Install the Solaris Cluster Manager GUI:
# pkg install --accept ha-cluster/system/manager
Optional: List the installed packages:
# pkg list -af 'pkg://ha-cluster/*'
(an i in the I column shows that the package is installed; without -af only the installed packages are listed)
# pkg info 'pkg://ha-cluster/*'
(use -r to match packages with the publisher)
Configure PATH: SC 4.x: /usr/cluster/bin /usr/cluster/lib/sc /usr/sbin
Configure MANPATH: SC 4.x: /usr/cluster/man /usr/man
Normally provided from the EIS-DVD by .profile-EIS. Just log out & back in to set the environment correctly. /usr/cluster/bin must be the first entry in PATH!
Note: If you plan to enable TCP wrappers after installation, add all
clprivnet0 IP addresses to file /etc/hosts.allow. Use ipadm show-addr to
identify clprivnet0 interfaces. Without this addition TCP wrappers prevent
internode communication over RPC.
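A sketch of this preparation, assuming illustrative private-network addresses (the actual clprivnet addresses come from ipadm show-addr on your nodes, and the hosts.allow daemon list depends on which services you wrap):
# ipadm show-addr | grep clprivnet
Then add all reported clprivnet0 addresses of all nodes to /etc/hosts.allow, e.g.:
rpcbind: 172.16.4.1, 172.16.4.2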
If using switches for the private interconnect, ensure that Neighbour Discovery Protocol (NDP) is disabled. Follow the switch documentation. Reason: no traffic is allowed on the private interconnect during SC installation. After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.
If the public network has only IPv6 addresses then scinstall will fail (Bug 16355496). Configure IPv4 addresses so that scinstall will work. When the cluster is formed, the IPv4 addresses can be removed.
Authorize acceptance of cluster configuration commands by the control node (the control-node is the system which is used to issue the cluster creation command scinstall). Run on all systems that you will configure in the cluster, other than the control node:
# clauth enable -n control-node
Establish SC on node1 (control-node) using scinstall. Select menu 1) then 2) 'Create just the first node of a new cluster on this machine'. Interactive Q+A.
The sponsor node (node1) must be rebooted and a member of the cluster before starting scinstall on the other nodes. Run cluster check within scinstall and correct problems if necessary. In case of Quorum Server: you must disable quorum auto-configuration.
Alternative: Select menu 1) then 1), the “Create a new cluster” option, to configure all nodes at once. Beware that the node which runs the scinstall command gets the highest nodeid but is the control-node (Bug 18245335). The last node in the order list within scinstall will be nodeid 1. The autodiscovery feature does not work if using VNICs for the private interconnect; therefore select the transport adapters manually for the private interconnect.
If you selected menu 1) then 2) for the first node, configure Solaris Cluster 4.3 on all additional nodes via scinstall by selecting menu 1) then 3). Interactive Q+A. Run cluster check within scinstall and correct problems if necessary.
Configure Quorum via clsetup if not already done! Only on 1 node. The rule is: number of nodes minus 1 (min. 1 quorum device). Optional for 3 nodes or more, but it also depends on the topology!
Take the cluster out of installmode if not already done:
# clq reset
If installed, ensure that Cluster Manager is working. Access via:
https://<nodename_or_IP>:8998/scm
and accept the security certificate!
Ensure that the browser's disk and memory cache sizes are set to a value that is greater than 0. Verify that Java and Javascript are enabled in the browser.
Verify that the network-bind-address is 0.0.0.0:
# cacaoadm list-params | grep network
If not 0.0.0.0 do:
# cacaoadm stop
# cacaoadm set-param network-bind-address=0.0.0.0
# cacaoadm start
If NTP is not already enabled, run:
# svcadm enable svc:/network/ntp:default
Optional: configure IP Filter. Refer to 'How to Configure IP Filter' in:
http://docs.oracle.com/cd/E56676_01/html/E56678/gftcs.html#scrolltoc
Only use IP Filter with failover data services; the use of IP Filter with scalable data services is not supported. When IPMP is used for the public network, note that SC relies on IPMP for public network monitoring: any IP Filter configuration must be made in accordance with the IPMP configuration guidelines and restrictions concerning IP Filter.
IPv6 scalable service support is NOT enabled by default (Bug 15290321). If required, add to /etc/system:
set cl_comm:ifk_disable_v6=0
Reboot if possible; if not, run:
# /usr/cluster/lib/sc/config_ipv6
This creates an IPv6 interface with a link-local address on all the cluster interconnect adapters.
Java version 1.7 is required for SC4.3 (more details in Alert 2055578.1). Java 1.8 and versions earlier than 1.7 are NOT supported!
If necessary set to Java 1.7 with:
# pkg set-mediator -V 1.7 java
Verify with:
# java -version | grep version
Check/Modify svc:/system/name-service/switch SMF service
(formerly: /etc/nsswitch.conf) if applicable:
hosts: cluster files <any other hosts database>
netmasks: cluster files <any other netmasks database>
Note:
• scinstall enters cluster as the first entry for hosts & netmasks.
• files should always be in the second place following cluster.
• Enter dns when node is dns client.
• All further databases can be added to the end of the line.
• The ipnodes and hosts entries are the same in Solaris 11 and will be set with the hosts property (see the svccfg sketch after the HA-NFS entries below).
Additional requirements which depend on the dataservice:
HA-NFS:
hosts: cluster files [SUCCESS=return]
rpc: files
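The switch databases live as properties of the SMF service; a sketch of setting the hosts and netmasks databases via svccfg, assuming the config/host and config/netmask property names of the Solaris 11 name-service/switch service (adjust the value lists to your name services):
# svccfg -s svc:/system/name-service/switch setprop config/host = astring: '"cluster files dns"'
# svccfg -s svc:/system/name-service/switch setprop config/netmask = astring: '"cluster files"'
# svcadm refresh svc:/system/name-service/switch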
The RPC (remote procedure call) program numbers 100141, 100142 and 100248 are reserved for the daemons rgmd_receptionist, fed and pmfd, respectively. If an RPC service that you install also uses one of these program numbers, you must change that RPC service to use a different program number.
Beware: the following scheduling classes are not supported with SC:
• time-sharing with high-priority processes,
• real-time processes.
Do not use Solaris Cluster configuration to provide an rarpd service.
Do not use Solaris Cluster configuration to provide a highly available
installation service on client systems.
When using supported network adapters which use the ce driver, REMOVE from /etc/system:
set ce:ce_taskq_disable=1
(MOS Document ID 1017839.1: SC automatically chooses the best configuration for the ce driver; RFEs 6281341 & 6487117.)
When using supported network adapters which use the ixge or ipge driver for the private transport, uncomment in /etc/system:
set ixge:ixge_taskq_disable=1
set ipge:ipge_taskq_disable=1
(If using ixge/ipge only for the public network, the default of 0 is OK for the mentioned variable.)
When not using the ge driver, REMOVE the following line from /etc/system:
set ge:ge_intr_mode=0x833
Consider enabling coredumps for setid programs in /var/cores:
# coreadm -g /var/cores/%f.%n.%p.core -e global -e process -e \
global-setid -e proc-setid -e log
Manually configure routing information about QS on all cluster nodes:
• Add the QS host name to /etc/inet/hosts
• Add the quorum server host netmask to /etc/inet/netmasks.
On one cluster node add the QS by using clsetup or, e.g.:
# clq add -t quorum_server -p qshost=<IP_of_QS> -p port=<port> <QSname>
USING A ZFS STORAGE APPLIANCE AS ISCSI DEVICE
Using the BUI at https://<appliance_ip>:215, create the iSCSI target group on the Appliance (per Appliance).
Click → Configuration → SAN → Targets → iSCSI Targets → + iSCSI Targets
• Target IQN: use 'Auto-assign',
• Alias: use e.g. clustername_networkinterfacename,
• Initiator authentication mode: use 'None',
• and select a Network interface (if you want more than one network interface in the target group then create an iSCSI Target for each network interface).
Move the iSCSI Targets with the mouse into a new iSCSI Target Group.
Add iSCSI initiators and group on Appliance.
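A minimal node-side discovery sketch, assuming sendtargets discovery against the appliance (the IP is a placeholder; 3260 is the default iSCSI port):
# iscsiadm modify discovery --sendtargets enable
# iscsiadm add discovery-address <appliance_ip>:3260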
Verify with:
# iscsiadm list discovery
If the devices are not already visible run:
# devfsadm -i iscsi
# cldevice refresh
Reconfigure the global device space:
# cldev populate
Identify the new did device with scdidadm -L if necessary.
Set the Share Mode for the project to None or Read only, depending on the desired access rights for non-clustered systems (per Appliance).
Click → Shares → Projects → Pencil to edit your created Project →
Protocols
Note: The Share Mode can be set to Read/Write if it is required to make
the project world-writable, but it is not recommended.
Note: Read-only can be tested by booting a node in non-cluster-mode and trying to touch a file on the NAS filesystem.
Add a read/write NFS Exception for each cluster node.
Check with:
# clnasdevice show -v
# clnasdevice show -v -d all
If not using the automounter, create a mountpoint for each Appliance filesystem:
# mkdir -p /NAS/FS1
On each node add an entry to the /etc/vfstab file for the mountpoint:
appliancename:/export/FS1 - /NAS/FS1 nfs no yes -
Refer to MOS Document ID: 359515.1 for current list of supported files
and mount options for Oracle RAC or HA Oracle.
Or, to enable file system monitoring, configure a resource of type SUNW.ScalMountPoint for the file systems.
E.g:
# clrg create -S -p Desired_primaries=2 -p \
Maximum_primaries=2 NASmp-rg
# clrt register SUNW.ScalMountPoint
# clrs create -g NASmp-rg -t SUNW.ScalMountPoint -p \
TargetFileSystem=appliancename:path -p \
FileSystemType=nas -p MountPointDir=fs_mountpoint NASmp-rs
E.g:
# clrs create -g NASmp-rg -t SUNW.ScalMountPoint -p \
TargetFileSystem=filer1:/export/FS1 -p FileSystemType=nas -p \
MountPointDir=/NAS/FS1 NASmp-rs
# clrg online -eM NASmp-rg
Note: No need to add the FS1 into /etc/vfstab for this setup.
If creating an application resource group then set the following positive
affinity and dependencies e.g:
For resource group:
# clrg create -p rg_affinities=++scalmp-rg app-rg
For resource:
# clrs create -g app-rg -t app_resource_type -p \
Resource_dependencies_offline_restart=scalmp-rs app-rs
Create replicas on the local disks:
# metadb -afc 3 c0t0d0s7 c1t0d0s7 c2t0d0s7
Always put the replicas on different physical drives and, if possible, on 3 different controllers. Recommendation: place the replicas on slice 7. If the ZFS root is on a partition you can use a slice on the same disk; otherwise use a different local disk.
When using OBAN (Multiowner diskset) jump to RAC install section.
Set up the metasets using the did devices (e.g. /dev/did/dsk/d0):
# metaset -s <setname> -a -h <node1> <node2>
# metaset -s <setname> -a <drivename1> <drivename2> ...
NOTE: Do NOT name a disk set admin or shared. The slice 7 (VTOC) or slice 6 (EFI) size for the metadb is fixed when the drive is added to the metaset.
Optional: Check the replicas of the disksets!
Remove the one copy if necessary:
# metadb -s <setname> -d -f <drivename>
Add back two copies (you may need to expand slice 7 first):
# metadb -s <setname> -afc 2 <drivename>
Set up the Mediator Host for each diskset when the cluster matches the two-hosts and two-strings criteria (see also: man mediator):
# metaset -s <setname> -a -m <node1> <node2>
Jump to the common actions for filesystems and boot devices later in this checklist.
If you have issues with cacao then copy the security files
# cacaoadm stop; cacaoadm disable (on all nodes)
On one node:
# cd /etc/cacao/instances/default
# tar cf /tmp/security.tar security
Copy /tmp/security.tar to all other nodes into the same directory and unpack it.
# cacaoadm enable; cacaoadm start (on all nodes)
Test switch of the disk groups with:
# cldg switch -n <phy.host> <devicegroup>
Check/Set the localonly flag for all local did devices (i.e. all disks which are managed by only one node). At least necessary for the root & root-mirror disks:
• Find the matching between physical and did device
# cldev list -v <physical device or did device> (scdidadm -l)
• Make sure only local node in node list:
# cldg show dsk/d<N>
• If other nodes in node list, remove them:
# cldg remove-node -n <other_phy_host> dsk/d<N>
• Set localonly & autogen to true if not already done:
# cldg set -p localonly=true dsk/d<N>
# cldg set -p autogen=true dsk/d<N>
NOTE: To get the did device use:
cldev list -n <phy_host> -v (scdidadm -l)
Only Solaris 11.3 x86 using GRUB2 (everything is written to /rpool/boot/grub/grub.cfg):
List GRUB2 entries:
# bootadm list-menu
Add a menu entry (-i <new-menu-entry-number>; in this example menu entry 2):
# bootadm add-entry -i 2 "OSC43-non-cluster-mode"
# bootadm change-entry -i 2 kargs="-x"
For kargs: -x boot in non-cluster mode, -s single user boot, -v verbose, -k boot with kernel debugger enabled.
Verify the new entry:
# bootadm list-menu -i 2
This new entry uses the current active boot environment (BE), which can be listed with 'beadm list'. More details in the GRUB2 config guide:
http://docs.oracle.com/cd/E53394_01/html/E54742/gkvhz.html#scrolltoc
Test booting using the new aliases/menu entries. If 'Fast Reboot' is enabled (the default for x86) then use reboot -p to bypass 'Fast Reboot'.
Optional: consider setting the failmode:
# zpool set failmode=panic <data>
When all connections to the zpool are lost, this failmode will panic the node and speed up the failover time. The ZFS pool failmode property is set to wait by default. This setting can result in the HAStoragePlus resource blocking, which might prevent a failover of the resource group. See the zpool(1M) man page to understand the possible values of the failmode property and decide which value fits your requirements.
Register HAStoragePlus:
# clrt register SUNW.HAStoragePlus
Create a failover resource group:
# clrg create zfs1-rg
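A minimal sketch of the corresponding HAStoragePlus resource, assuming a pool named zfs1pool (cf. the ZFS HA-NFS example later in this checklist):
# clrs create -g zfs1-rg -t SUNW.HAStoragePlus -p \
Zpools=zfs1pool zfs1-hastp-rs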
Switch the resource group online:
# clrg online -M zfs1-rg
Test switching the resource group:
# clrg switch -n <other_node> zfs1-rg
CONFIGURATION STEPS FOR ZFS WITHOUT HASTORAGEPLUS
Setup of a local zpool (only available on one node) can be done with local devices (only connected to one node) or shared devices (accessible from all nodes in the cluster via SAN). However, in the case of a shared device it would be better to set up a zone in the SAN switch to make the device available to only one host.
Identify the did devices for the physical devices used in the zpool.
Example for local device:
# scdidadm -l c1t3d0
49 node0:/dev/rdsk/c1t3d0 /dev/did/rdsk/d49
Check the settings of the used did device (only one node should be in the node list):
# cldg show dsk/d11
In case of a shared device remove one node from the node list:
# cldg remove-node -n <node1> dsk/d11
Set the localonly flag for the did device (note: autogen must be false to set localonly):
# cldg set -p localonly=true dsk/d11
Optional: Set the autogen flag, but only after localonly:
# cldg set -p autogen=true dsk/d11
Verify with:
# cldev show d11
To disable fencing for the did device:
# cldev set -p default_fencing=nofencing d11
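A minimal sketch of creating the local zpool, assuming the local device c1t3d0 from the scdidadm example above (the pool name localpool is a placeholder):
# zpool create localpool c1t3d0
# zpool status localpool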
Continuation of the clzc node definition for node2:
set physical-host=<physical-hostname-node2>
set hostname=<zone-hostname-node2>
add net
set address=<ip-zone-hostname-node2>
set physical=<network_or_ipmpgroup-node2>
end
end
commit
exit
Notes:
* ip-type=exclusive: requires its own NICs (VNICs).
* enable_priv_net=true: separate clprivnet interface in the ZC. If ip-type=exclusive then separate NICs are required.
* enable_priv_net=false: no separate clprivnet in the ZC. The clprivnet of the GZ will be used.
Example for one node if ip-type=exclusive is used (public and private):
add node
set physical-host=<physical-hostname-node1>
set hostname=<zone-hostname-node1>
add net
set physical=<NIC_or_VNIC_public>
end
add privnet
set physical=<NIC_or_VNIC_private>
end
add privnet
set physical=<NIC_or_VNIC_private>
end
end
Re-login or source /.profile-EIS and configure non-global-zones as
required.
If using an additional name service, configure /etc/nsswitch.conf of the zone cluster non-global zones. Refer to the name-service switch setup earlier in this checklist for details.
hosts: cluster files <any other hosts database>
netmasks: cluster files <any other netmasks database>
Set up the logical host resource for zone cluster. In the global zone do:
# clzc configure zc1
clzc:zc1> add net
clzc:zc1:net> set address=<zone-logicalhost-ip>
clzc:zc1:net> end
clzc:zc1> commit
clzc:zc1> exit
In the zone cluster do:
# clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs
Note: Ensure that the logical host is in the /etc/hosts file on all zone cluster
nodes.
The following types of file systems can be configured with the clzc command:
• Direct mount and loopback mount:
UFS local file system
• Direct mount:
Oracle Solaris ZFS (exported as a data set), NFS from supported NAS devices, QFS,
ACFS
• Loopback mount:
UFS cluster file system (global file system)
Different examples are also available at:
http://docs.oracle.com/cd/E56676_01/html/E56678/gmfka.html#scrolltoc
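A short sketch of delegating a ZFS dataset to the zone cluster (the pool/dataset name tank/data is a placeholder):
# clzc configure zc1
clzc:zc1> add dataset
clzc:zc1:dataset> set name=tank/data
clzc:zc1:dataset> end
clzc:zc1> commit
clzc:zc1> exit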
Register HAStoragePlus in the zone cluster (a SUNW.ScalMountPoint resource can also be used):
# clrt register SUNW.HAStoragePlus
clzc:zc1> verify
clzc:zc1> commit
clzc:zc1> exit
B) HA filesystem
In the global zone configure SVM diskset and SVM devices. Run
newfs and make the metadevice available via HAStoragePlus in zc1:
# clzc configure zc1
clzc:zc1> add fs
clzc:zc1:fs> set dir=/data
clzc:zc1:fs> set special=/dev/md/datads/dsk/d0
clzc:zc1:fs> set raw=/dev/md/datads/rdsk/d0
clzc:zc1:fs> set type=ufs
clzc:zc1:fs> add options [logging]
clzc:zc1:fs> end
clzc:zc1> verify
clzc:zc1> commit
clzc:zc1> exit
Check setup with:
# clzc show -v zc1
Add supported dataservice to
zone cluster.
Optional: NFS security features:
• SC does NOT support the following options of the share_nfs
command:
secure
sec=dh
• SC does support secure ports for NFS. If you wish to use them, add the following to the file /etc/system:
set nfssrv:nfs_portmon=1
• The use of Kerberos with NFS. Details:
http://docs.oracle.com/cd/E56676_01/html/E57645/fdkyv.html#scrolltoc
Optional: Customise the nfsd or lockd
startup options.
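On Solaris 11 these daemon options are SMF/sharectl properties rather than rc-script flags; a sketch of raising the NFS server thread count (the value 512 is an arbitrary example):
# sharectl set -p servers=512 nfs
# sharectl get nfs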
If you use ZFS as the exported file system you must set the sharenfs property to off, e.g.:
# zfs set sharenfs=off <zfs>
Verify with:
# zfs get sharenfs <zfs>
Edit /etc/hosts (Problem Resolution in MOS Document ID 1012164.1; Bug 15232742):
• In case of Mx000 add the internal sppp0, e.g.:
192.168.224.2 sppp0
• If the IPMP group uses 0.0.0.0 add the following dummy entry:
0.0.0.0 dummy
If you do not add these entries HA-NFS will not fail over correctly. Note: HA-NFS resolves all configured network addresses.
Ensure that the solaris and ha-cluster publishers are valid. Refer to the publisher configuration earlier in this checklist to configure the SC publisher.
Install HA-NFS if not already done
# pkg info ha-cluster/data-service/nfs
# pkg install ha-cluster/data-service/nfs
Set up the resource group (the name of the resource group can be freely chosen):
# clrg create -n phy-host0,phy-host1 \
-p Pathprefix=/global/nfs1 nfs1-rg
If not already done, run:
# mkdir -p /global/nfs1
NOTE: If using ZFS the Pathprefix changes to the zpool name, e.g. /nfs1pool
Set up the logical host resource and bind it to the resource group. Use the <logical_host> for the resource name and add -rs at the end. Naming the IPMP groups with the -N option is optional:
# clrslh create -g nfs1-rg -h \
<logical_host> -N \
ipmp1@phy-host0,ipmp1@phy-host1 \
<logical_host>-rs
Optional: Set up the HAStoragePlus resource:
# clrt register SUNW.HAStoragePlus (check with clrt list)
Modify /etc/vfstab
E.g: of /etc/vfstab for failover filesystem:
/dev/md/nfsset/dsk/d10 /dev/md/nfsset/rdsk/d10 /global/nfs1 ufs 2 no logging
In case of ZFS:
# clrs create -g nfs1-rg -t SUNW.HAStoragePlus -p \
Zpools=nfs1pool -p AffinityOn=True nfs1-hastp-rs
Set up the directory for NFS state information in an online shared filesystem. The SUNW.nfs directory must be in the devicegroup. Recommendation: the SUNW.nfs directory should NOT be visible to the NFS clients!
# cd /global/nfs1
# mkdir SUNW.nfs
# cd SUNW.nfs
# vi dfstab.nfs1-server-rs
(The dfstab file is named after the NFS resource; the resource name can be freely chosen.)
share -F nfs -o rw /global/nfs1/data
# cd; mkdir /global/nfs1/data
In case of ZFS:
# cd /nfs1pool
# mkdir SUNW.nfs
# cd SUNW.nfs
# vi dfstab.nfs1-server-rs
(The dfstab file is named after the NFS resource; the resource name can be freely chosen.)
share -F nfs -o rw /nfs1pool/data
# zfs create nfs1pool/data
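A minimal sketch of creating the SUNW.nfs resource, assuming the names used above (resource nfs1-server-rs matching dfstab.nfs1-server-rs, and a dependency on the HAStoragePlus resource nfs1-hastp-rs):
# clrt register SUNW.nfs
# clrs create -g nfs1-rg -t SUNW.nfs -p \
Resource_dependencies=nfs1-hastp-rs nfs1-server-rs
# clrg online -eM nfs1-rg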
Select the boot device for failover guest domain fgd0. The recommendation is to use a full raw disk because “live” migration is expected; the full raw disk can be provided via SAN or iSCSI. Using PXFS as the root disk has performance issues (MOS Document ID 1366967.1). Remember: ZFS as the root filesystem can ONLY be used if doing 'cold' migration.
Boot drive on a raw device. Prepare the partition table and add the boot device to fgd0:
# ldm add-vdsdev /dev/did/rdsk/d7s2 boot_fgd0@primary-vds0
# ldm add-vdisk root_fgd0 boot_fgd0@primary-vds0 fgd0
Optional: Configure MAC addresses of LDom. The LDom Manager
assigns MAC automatically but the following issues can occur:
• Duplicate MAC address if other guest LDoms are down when creating a new LDom.
• The MAC address can change after failover of an LDom.
Assign your own MAC address; this example uses the suggested range between 00:14:4F:FC:00:00 & 00:14:4F:FF:FF:FF as described in 'Assigning MAC Addresses Automatically or Manually' within the Oracle VM Server for SPARC 3.0 Administration Guide.
Example:
Identify the current automatically assigned MAC address:
# ldm list -l fgd0
To see the HOSTID, which is derived from the MAC, an 'ldm bind fgd0' is necessary; unbind fgd0 afterwards with 'ldm unbind fgd0':
MAC: 00:14:4f:fb:50:dc → change to 00:14:4f:fc:50:dc
HOSTID: 0x84fb50dc → change to 0x84fc50dc
public-net: 00:14:4f:fa:01:49 → change to 00:14:4f:fc:01:49
# ldm set-domain mac-addr=00:14:4f:fc:50:dc fgd0
# ldm set-domain hostid=0x84fc50dc fgd0
# ldm set-vnet mac-addr=00:14:4f:fc:01:49 public-net0 fgd0
# ldm list-constraints fgd0 (this shows assigned MAC now)
If HA-LDom is already running refer to MOS Document ID 1559415.1.
Bind and start fgd0:
# ldm bind fgd0
# ldm start fgd0
Install Solaris10 or Solaris 11 on LDom by using install server.
To identify the MAC address of a LDom do in the console of fgd0:
{0} ok devalias net
{0} ok cd /virtual-devices@100/channel-devices@200/network@0
{0} ok .properties
local-mac-address 00 14 4f fc 01 49
…
If you want to use a different method, refer to the EIS checklist for LDoms.
Install HA-LDom (HA for Oracle VM Server Package) on all primary
domain nodes if not already done:
# pkg info ha-cluster/data-service/ha-ldom
# pkg install ha-cluster/data-service/ha-ldom
Set up the password file for non-interactive “live” migration on one primary:
# clpstring create -b fgd0-rs fldom-rg_fgd0-rs_ldompasswd
Enter string value:
Enter string value again:
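A minimal sketch of the corresponding SUNW.ldom resource setup, assuming resource group fldom-rg and resource fgd0-rs as implied by the pstring name above (property names per the SUNW.ldom agent):
# clrt register SUNW.ldom
# clrg create fldom-rg
# clrs create -g fldom-rg -t SUNW.ldom -p Domain_name=fgd0 -p \
Migration_type=MIGRATE fgd0-rs
# clrg online -eM fldom-rg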
CONFIGURATION STEPS FOR HA-ORACLE WITH SUNW.HAStoragePlus
Not yet finished for SC4.3
Refer to the Oracle Solaris Cluster Data Service for Oracle Guide:
http://docs.oracle.com/cd/E56676_01/html/E56737/index.html
SYSTEM COMPLETION
Make a copy of the cluster configuration:
# cluster export -o /var/cluster/Config_after_EIS
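A final health check can be run with the framework's own commands, e.g.:
# cluster status
# cluster check -v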
HANDOVER
MANDATORY: Ensure that the licenses are available for Solaris Cluster
Framework & Agents.
Perform Installation Assessment tests as described in the EIS Test Procedures Plan. (EISdoc V4 – completed during preparation of the installation.)
Complete the documentation and hand it over to the customer. EISdoc V4: file EIS-DOCS-Operational Handover-Document.odt
Short briefing on the configuration.
Copies of the checklists are available on the EIS web pages or on the EIS-DVD. We recommend that
you always check the web pages for the latest version.