
EIS Installation Checklist for Solaris Cluster 4.3 Systems

Customer:
TASK Number:
Technician:
Version EIS-DVD:
Date:

• It is recommended that the EIS web pages are checked for the latest version of this
checklist prior to commencing the installation.
• It is assumed that the installation is carried out with the help of the current EIS-DVD.
• The idea behind this checklist is to help the installer achieve a "good" installation.
• It is assumed that the installer has attended the appropriate training classes.
• The installation should be prepared using EISdoc V4.
• It is not intended that this checklist be handed over to the customer.
• This installation checklist for Solaris Cluster is to be used together with the
installation checklist(s) as appropriate for Solaris 11, server, storage etc.
• This checklist is for Solaris Cluster 4.3 in combination with Solaris 11.3 (SPARC & x86).
SC4.3 can also run on Solaris 11.2 SRU13.6 or higher but is not supported with
earlier Solaris 11 releases or Solaris 10.
• Supported configurations are detailed in the official Solaris Cluster configuration
guide: http://www.oracle.com/technetwork/server-storage/solaris-
cluster/overview/solariscluster4-compatibilityguide-1429037.pdf
• A series of Assessment Tests for the completed cluster are available as described in
the EIS Test Procedures Plan. This document should be available during the
installation.
• The entire documentation for Solaris Cluster 4.3 can be found at:
http://docs.oracle.com/cd/E56676_01/
• Feedback on issues with EIS content or product quality is welcome – refer to the last
page of this checklist.

Serial# hostid hostname


Admin Workstation
Node1
Node2
Node3
Node4
Node5
Node6
Node7
Node8


PREPARATION
If non-Oracle/Sun storage is involved, ensure that the information on the Open Storage Program was consulted:
http://www.oracle.com/technetwork/server-storage/solaris-cluster/partnerprogram-cluster-168135.pdf
Currently no third-party storage is qualified!
EIS Installation Configuration Plan & Test Procedures Plan complete?
EISdoc V4: Use the appropriate BUILD & TPP templates, inserted into the “Master” files (Chapter Oracle Solaris Cluster):
EIS-BUILD-Cluster.odt
EIS-TPP-Cluster.odt
Ensure that the intended Solaris Cluster configuration is supported. Consult the
Configuration Guide for Solaris Cluster for details:
http://www.oracle.com/technetwork/server-storage/solaris-
cluster/overview/solariscluster4-compatibilityguide-1429037.pdf
FABs/EIS-ALERTs reviewed?
MANDATORY: Ensure that the licenses are available for Solaris Cluster
Framework & Agents.


INSTALLATION OF THE ADMINISTRATION WORKSTATION


Note – You are not required to use an administrative console. If you do not use
an administrative console, perform administrative tasks from one designated
node in the cluster.
You cannot use this software to connect to Oracle VM Server for SPARC guest
domains.
Solaris-Ready Installation as for server. Installation according to the appropriate
checklists.
Ensure that the solaris and ha-cluster publishers are valid. Refer to page 7 of
this checklist to configure the SC publisher.
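For example, a quick check of the configured publishers (output varies per site):
# pkg publisher
# pkg publisher ha-cluster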
Install Parallel Console Access (pconsole) software (pconsole is part of Solaris 11):
# pkg info terminal/pconsole
# pkg install terminal/pconsole
Ensure that xterm is installed:
# pkg info terminal/xterm
# pkg install terminal/xterm
# pkg info x11/library/libxmuu
# pkg install x11/library/libxmuu
Optional: install man page packages (they are part of SC4.3):
# pkg install ha-cluster/system/manual
# pkg install ha-cluster/system/manual/data-services
# pkg install ha-cluster/service/quorum-server/manual
# pkg install ha-cluster/geo/manual
Configure MANPATH:
SC 4.x: /usr/cluster/man /usr/man
Provided from the EIS-DVD by .profile-EIS. Just log out & back in to set the environment correctly.
If you are on a remote desktop you need to set the DISPLAY variable.
Start the pconsole utility:
# pconsole host1[:port] host2[:port]
If you have a terminal server that allows you to connect to specific port numbers on the IP address of the server, you can specify the port number in addition to the hostname or IP address as: terminal-server:portnumber
See also man pconsole.


BASIC CLUSTER CONFIGURATION


Solaris-Ready Installation. Installation according to the appropriate server checklist(s).
SC4.3 requires at least the solaris-small-server package of Solaris 11.
Please note:
• All cluster nodes must run the same version of Solaris.
• A dedicated partition for /globaldevices is no longer used in SC4.x.
• If using SVM for shared storage then reserve slice 7 (VTOC) or slice 6 (EFI) with 32MB (EIS recommendation) on a local disk for metaDB/replicas; the absolute minimum for the metaDB is 20MB. If ZFS root is on a partition you can use a slice on the same disk. If ZFS uses the whole disk, use a different local disk.
• Internal disks can be mirrored with the raidctl utility.
Information Concerning public and private networks:
Separation of public and private network – Public networks and the
private network (cluster interconnect) must use separate adapters, or you
must configure tagged VLAN on tagged-VLAN capable adapters and
VLAN-capable switches to use the same adapter for both the private
interconnect and the public network. Alternatively, create virtual NICs on
the same physical interface and assign different virtual NICs to the private
and public networks.
VLANs shared by multiple clusters – SC configurations support the
sharing of the same private-interconnect VLAN among multiple clusters.
You do not have to configure a separate VLAN for each cluster. However,
for the highest level of fault isolation and interconnect resilience, limit the
use of a VLAN to a single cluster.
Tagged VLAN adapters – SC supports tagged VLANs to share an adapter
between the private cluster interconnect and the public network. You must
use the dladm create-vlan command to configure the adapter as a tagged
VLAN adapter before you configure it with the cluster.
To configure a tagged VLAN adapter for the cluster interconnect, specify
the adapter by its VLAN virtual device name. This name is composed of
the adapter name plus the VLAN instance number. The VLAN instance
number is derived from the formula (1000*V)+N, where V is the VID
number and N is the PPA.
As an example, for VID 73 on adapter net2, the VLAN instance number
would be calculated as (1000*73)+2. You would therefore specify the
adapter name as net73002 to indicate that it is part of a shared virtual LAN.
When configuring Ethernet switches for your cluster private interconnect,
disable the spanning tree algorithm on ports that are used for the interconnect.
http://docs.oracle.com/cd/E56676_01/html/E56926/feaad.html#CLHAMgcrz
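For example, the tagged VLAN adapter from the example above (VID 73 on net2) could be created with dladm before it is configured with the cluster; adapter name and VID are illustrative:
# dladm create-vlan -l net2 -v 73 net73002
# dladm show-vlan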

Only x86: Run fdisk on each disk and create Solaris partition if not already
done!
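A possible non-interactive way to create one Solaris partition that uses the whole disk (the device name is only an example, adjust to your configuration):
# fdisk -B /dev/rdsk/c1t0d0p0
# fdisk -W - /dev/rdsk/c1t0d0p0   (prints the partition table to verify)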
Notice: MPxIO is enabled by default for fp.conf, iscsi.conf and mpt_sas.conf.
Do not run the stmsboot command if SC is already installed; refer to the stmsboot man page if necessary.
If not enabled for your controller:
# stmsboot -D fp -e
# stmsboot -D iscsi -e
# stmsboot -D mpt -e
# stmsboot -D mpt_sas -e
For safety reasons clean the devices with:
# devfsadm -C
on all nodes. You may experience problems within the SC installation if the devices are not clean.
Ensure that the local_only property of rpcbind is set to false:
# svcprop network/rpc/bind:default | grep local_only
if not false run:
# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
# svcadm refresh network/rpc/bind:default

Note: With the “netservices open” command you can reverse the “secure
by default” feature.
Disable NWAM (do this on the console, because all network connections are lost):
# netadm enable -p ncp defaultfixed
# netadm list -p ncp defaultfixed

NWAM cannot coexist with SC. Therefore enable the defaultfixed NCP to disable NWAM.
Public Network Management – Each public-network adapter that is used
for data-service traffic must belong to a PNM object that includes IPMP
groups, link aggregations, and VNICs that are directly backed by link
aggregations. If a public-network adapter is not used for data-service
traffic, you do not have to configure it in a PNM object.
Optional: Configure IPMP (probe-based or link-based)
Configure the public network for net0 or follow the next examples to
configure IPMP groups.
Example for net0:
# ipadm create-ip net0
# ipadm create-addr -T static -a local=10.xxx.xxx.xx/24 net0/pri
# route -p add default 10.xxx.xxx.1
# edit /etc/netmasks (optional, but required for logical host resource)
# configure name services

On unused physical adapters the scinstall command automatically configures a multiple-adapter IPMP group for each set of public network adapters in the cluster that uses the same subnet. These groups are link-based with transitive probes. The scinstall utility ignores adapters that are already configured in an IPMP group.
Make sure the used hosts are already in the /etc/hosts file!
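Illustrative /etc/hosts entries (addresses and names are examples only):
10.xxx.xxx.11   node1
10.xxx.xxx.12   node2
10.xxx.xxx.21   lh-app1    # logical host used by a data service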

Example probe-based IPMP group active-active with interfaces net0 and
net1 with one production IP:
# ipadm create-ipmp ipmp0
# ipadm create-ip net0
# ipadm create-ip net1
# ipadm add-ipmp -i net0 -i net1 ipmp0
# ipadm create-addr -T static -a ipmp0-host ipmp0/data1
# ipadm create-addr -T static -a net0-test1 net0/test1
# ipadm create-addr -T static -a net1-test1 net1/test1
If the defaultrouter is NOT 100% available see MOS Document ID 1010640.1
• Do not use test IP for normal applications.
• Test IP for all adapters in the same IPMP group must belong to a single
IP subnet.
To enable transitive probing for probe-based IPMP only if no test address
is configured:
# svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
# svcadm refresh svc:/network/ipmp:default

To check the value:


# svcprop -p config/transitive-probing svc:/network/ipmp
Example link-based IPMP group active-active with interfaces net0 and
net1 with one production IP:
# ipadm create-ipmp ipmp0
# ipadm create-ip net0
# ipadm create-ip net1
# ipadm add-ipmp -i net0 -i net1 ipmp0
# ipadm create-addr -T static -a ipmp0-host ipmp0/data1
To check setup use ipmpstat with -i, -an, -tn or -g option.
More examples available in MOS Document ID 1382335.1.
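For example, to verify the IPMP group, addresses and probe targets (sample commands, output depends on the configuration):
# ipmpstat -g
# ipmpstat -an
# ipmpstat -i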
Hints / Checkpoints for all configurations:
• You need an additional IP for each logical host.
• If there is a firewall being used between clients and a HA service
running on this cluster and if this HA service is using UDP and does not
bind to a specific address, the IP stack chooses the source address for all
outgoing packages from the routing table. So, as there is no guarantee
that the same source address is chosen for all packages – the routing
table might change – it is necessary to configure all addresses
available on a network interface as valid source addresses in the
firewall.
• An IPMP group in active-standby configuration is also possible.
• Use only one IPMP group per subnet; using multiple IPMP groups in the same subnet is not supported.
• If desired remove the IPMP configuration for network adapters that will
NOT be used for HA dataservices.
• All public-network adapters must use network interface cards (NICs)
that support local MAC address assignment. Local MAC address
assignment is a requirement of IPMP.

OPTIONAL: CONFIGURE LINK AGGREGATION
Since SC4.3 link aggregation can be used instead of IPMP. Examples are
available at: https://blogs.oracle.com/SC/entry/new_choices_for_oracle_solaris
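A minimal sketch of a link aggregation for the public network, assuming interfaces net0/net1 and an example address; check the blog entry and the dladm(1M)/ipadm(1M) man pages for the supported modes:
# dladm create-aggr -L active -l net0 -l net1 aggr1
# ipadm create-ip aggr1
# ipadm create-addr -T static -a local=10.xxx.xxx.xx/24 aggr1/v4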
INSTALLING SOLARIS CLUSTER 4.3
Two options to install Solaris Cluster 4.3:
A) If the cluster nodes have direct access to the Internet and customer has
valid support contract for the system.
B) Using ISO image of SC 4.3 (included on EIS-DVD ≥02DEC15).
If there is a repository server available at the customer's site, it is also possible
to add the ISO image to this repository server (not covered in this
checklist).
A: Installation if the Cluster has Direct Access to the Internet
Go to http://pkg-register.oracle.com (a valid MOS account is required!).
Choose Oracle Solaris Cluster 4 Support and click submit. The support repository includes the initial release + SRU.
Accept the license.
Download 'Key' and 'Certificate'.
Install the OSC 4 certificate as displayed on this certification page.
For the OSC 4 Support certificate (includes latest SRU):
Copy key and certificate to /var/pkg/ssl
# mkdir -m 0755 -p /var/pkg/ssl
# cp -i /download/pkg.oracle.com.key.pem /var/pkg/ssl
# cp -i /download/pkg.oracle.com.certificate.pem /var/pkg/ssl

Configure the publisher with the key and certificate:


# pkg set-publisher \
-k /var/pkg/ssl/pkg.oracle.com.key.pem \
-c /var/pkg/ssl/pkg.oracle.com.certificate.pem \
-G '*' -g https://pkg.oracle.com/ha-cluster/support ha-cluster

For OSC 4 Release certificate (without any SRU):


Copy key and certificate to /var/pkg/ssl
# mkdir -m 0755 -p /var/pkg/ssl
# cp -i /download/pkg.oracle.com.key.pem /var/pkg/ssl
# cp -i /download/pkg.oracle.com.certificate.pem /var/pkg/ssl

Configure the publisher with the key and certificate:


# pkg set-publisher \
-k /var/pkg/ssl/pkg.oracle.com.key.pem \
-c /var/pkg/ssl/pkg.oracle.com.certificate.pem \
-G '*' -g https://pkg.oracle.com/ha-cluster/release ha-cluster

Check publisher settings and remove unrelated mirrors if necessary:


# pkg publisher ha-cluster | grep Mirror
If the output is not empty then remove unrelated mirrors with:
# pkg set-publisher -M http://mirror1.x.com ha-cluster

B: Installation if using ISO image of OSC 4.3 (included on EIS-DVD)
Download the ISO images or access them from EIS-DVD-TWO (≥02DEC15).

Make ISO image of release available, e.g:
# mkdir /screpo
# mount -F hsfs {full_path_to}/osc-4_3-repo-full.iso /screpo

Make ISO image of SRU available, e.g:


# mkdir /screpoSRU
# mount -F hsfs {full_path_to}/osc-4_3_1_2_0-repo-incr.iso /screpoSRU

Consider adding an entry to /etc/vfstab so that the mount survives a


reboot. The /etc/vfstab entry would be something like:
{full_path_to}/osc-4_3-repo-full.iso - /screpo hsfs - yes -
{full_path_to}/osc-4_3_1_2_0-repo-incr.iso - /screpoSRU hsfs - yes -

Set the publisher to use the local 'main repo' repository in place of original:
# pkg set-publisher -G '*' -g file:///screpo/repo ha-cluster

Add Solaris Cluster SRU publisher:


# pkg set-publisher -g file:///screpoSRU/repo ha-cluster

The publisher should now look like this


# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F https://pkg.oracle.com/solaris/support/
ha-cluster origin online F file:///screpo/repo/
ha-cluster origin online F file:///screpoSRU/repo/

Notice: If you change your publisher back to the Internet, please be aware
that the newest versions are available at:
solaris https://pkg.oracle.com/solaris/support/
ha-cluster https://pkg.oracle.com/ha-cluster/support/
WHEN PUBLISHER IS CONFIGURED CONTINUE TO INSTALL
OSC 4.3
Install the necessary packages:
# pkg install --accept ha-cluster-framework-full
This command creates a 'XXXXX-backup-1' boot environment before the SC packages are installed. Furthermore it creates a 'XXXXX-backup-2' boot environment before it changes the default Java version from 8 to 7.

Optional: Install the Solaris Cluster Manager GUI:
# pkg install --accept ha-cluster/system/manager

Optional: Install data-service packages:
# pkg install ha-cluster-data-services-full

Alternative: install ha-cluster-full:
# pkg install --accept ha-cluster-full

Note: You can also install only the required data-services, which is recommended!

Available package groups:
ha-cluster-full (incl. Geo Edition and GUI)
ha-cluster-framework-full
ha-cluster-data-services-full
ha-cluster-geo-full
ha-cluster-minimal
ha-cluster-framework-minimal
Refer to SC4.3 Installation Guide chapter 2 for details of the package groups if more details are necessary.

Optional: List the installed packages:
# pkg list -af 'pkg://ha-cluster/*'
(the i in the I column shows if package is installed)
Note: without -af only the installed packages will be listed:
# pkg info 'pkg://ha-cluster/*'
(use -r to match packages with publisher)
Configure PATH:
SC 4.x: /usr/cluster/bin /usr/cluster/lib/sc /usr/sbin
/usr/cluster/bin must be the first entry in PATH!

Configure MANPATH:
SC 4.x: /usr/cluster/man /usr/man

Normally provided from the EIS-DVD by .profile-EIS. Just log out & back in to set the environment correctly.

For ease of administration, set the same root password on each node.
Verify that TCP wrappers are disabled:
# svccfg -s rpc/bind listprop config/enable_tcpwrappers

If NOT false do:


# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm refresh rpc/bind
# svcadm restart rpc/bind

Note: If you plan to enable TCP wrappers after installation, add all
clprivnet0 IP addresses to file /etc/hosts.allow. Use ipadm show-addr to
identify clprivnet0 interfaces. Without this addition TCP wrappers prevent
internode communication over RPC.
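For example, to list the clprivnet0 addresses that would need to be added to /etc/hosts.allow (the addresses shown below are placeholders only):
# ipadm show-addr | grep clprivnet0
Illustrative /etc/hosts.allow line:
ALL: 172.16.4.1 172.16.4.2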
If using switches for the private interconnect, ensure that Neighbour Discovery Protocol (NDP) is disabled. Follow the switch documentation.
Reason: No traffic is allowed on the private interconnect during SC installation.
After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.
If the public network has only IPv6 addresses then scinstall will fail (Bug 16355496). Configure IPv4 addresses so that scinstall will work. When the cluster is formed the IPv4 addresses can be removed.
Authorize acceptance of cluster configuration commands by the control node (control-node = the system used to issue the cluster creation command scinstall). Run on all systems that you will configure in the cluster, other than the control node:
# clauth enable -n control-node

Establish SC on node1 (control-node) using scinstall.
Select menu 1) then 2) 'Create just the first node of a new cluster on this machine'.
Interactive Q+A.
The sponsor node (node1) must be rebooted and a member of the cluster before starting scinstall on the other nodes.
Run cluster check within scinstall and correct problems if necessary.
In case of Quorum Server: you must disable quorum auto-configuration.
Alternative: Select menu 1) then 1), the “Create a new cluster” option, to configure all nodes at once. Beware that the node which runs the scinstall command gets the highest nodeid but is the control-node. The last node in the order list within scinstall will be nodeid 1 (Bug 18245335).
The autodiscovery feature does not work if using VNICs for the private interconnect. Therefore select the transport adapters manually for the private interconnect.
If you selected menu 1) then 2) for the first node, then configure Solaris Cluster 4.3 on all additional nodes via scinstall by selecting menu 1) then 3).
Interactive Q+A. Run cluster check within scinstall and correct problems if necessary.
Configure Quorum via clsetup if not already done! Only on 1 node (min. 1 quorum device). The rule is number of nodes minus 1. Optional for 3 nodes or more, but it also depends on the topology!
Take the cluster out of installmode if not already done:
# clq reset
If installed, ensure that Cluster Manager is working. Access via:
https://<nodename_or_IP>:8998/scm
and accept the security certificate!
Ensure that the browser's disk and memory cache sizes are set to a value that is greater than 0. Verify that Java and Javascript are enabled in the browser.
Verify that the network-bind-address is 0.0.0.0:
# cacaoadm list-params | grep network
If not 0.0.0.0 do:
# cacaoadm stop
# cacaoadm set-param network-bind-address=0.0.0.0
# cacaoadm start

Optional: Enable automatic node reboot if ALL monitored disk paths fail:
# clnode set -p reboot_on_path_failure=enabled +
To verify run clnode show or scdpm -p all:all
If desired disable the monitoring on all local disks:
# cldev status   (to look for single attached disks)
# cldev unmonitor <did_device>


POST INSTALLATION TASKS AND CHECK POINTS


Check/Configure /etc/inet/hosts. The scinstall utility automatically adds the public IP address of each node!
Enter all the logical hosts, also the logical hosts of non-global zones.
Bug 15768225: scinstall wrongly removes a cluster node in special circumstances. Workaround: Re-add the cluster node to /etc/hosts.
Check if the NTP service is online: All nodes must be synchronized to
# svcs svc:/network/ntp:default the same time!

If not run:
# svcadm enable svc:/network/ntp:default

You are free to configure NTP as best meets your individual needs.
If you do not have your own /etc/inet/ntp.conf file, the /etc/inet/ntp.conf and ntp.conf.sc files will be used as your NTP configuration automatically. The private hostname configuration is stored in ntp.conf.sc.
More details in MOS Document ID 1006115.1.
Check that ip_strict_dst_multihoming is 0. If the value is not 0 then it is not possible to create a logical host resource. See also MOS Document ID 1018856.1.
Verify with:
# ndd /dev/ip ip_strict_dst_multihoming
Configure if not 0:
# ndd -set /dev/ip ip_strict_dst_multihoming 0

Optional: configure IP Filter. Refer to 'How to Configure IP Filter' in:
http://docs.oracle.com/cd/E56676_01/html/E56678/gftcs.html#scrolltoc
When IPMP is used for the public network: SC relies on IPMP for public network monitoring. Any IP Filter configuration must be made in accordance with the IPMP configuration guidelines and restrictions concerning IP Filter.
Only use IP Filter with failover data services. The use of IP Filter with scalable data services is not supported.
IPv6 scalable service support is NOT enabled by default (Bug 15290321); if required do:
Add to /etc/system:
set cl_comm:ifk_disable_v6=0
Reboot if possible; if not, run:
# /usr/cluster/lib/sc/config_ipv6
This creates an IPv6 interface on all the cluster interconnect adapters with a link-local address.
Java version 1.7 is required for SC4.3. Java 1.8 and versions earlier than 1.7 are NOT supported! More details in Alert 2055578.1.
If necessary set to Java 1.7 with:
# pkg set-mediator -V 1.7 java
Verify with:
# java -version | grep version

Check/Modify svc:/system/name-service/switch SMF service
(formerly: /etc/nsswitch.conf) if applicable:
hosts: cluster files <any other hosts database>
netmasks: cluster files <any other netmasks database>

Note:
• scinstall enters cluster as the first entry for hosts & netmasks.
• files should always be in the second place following cluster.
• Enter dns when node is dns client.
• All further databases can be added to the end of the line.
• The ipnodes and hosts entries are the same in Solaris 11 and will be set with the host property.
Additional requirements which depend on the dataservice:
HA-NFS:
hosts: cluster files [SUCCESS=return]
rpc: files

Oracle RAC & HA-Oracle:


passwd: files
group: files
publickey: files
project: files

How to check property:


# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/host
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop
config/netmask
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/rpc

How to modify if necessary:


# svccfg -s svc:/system/name-service/switch \
setprop config/host = astring: \"cluster files [SUCCESS=return] dns\"
# /usr/sbin/svccfg -s svc:/system/name-service/switch \
setprop config/netmask = astring: \"cluster files ldap\"
# /usr/sbin/svcadm refresh svc:/system/name-service/switch

Do not configure cluster nodes as routers! scinstall touches the file /etc/notrouter per default.
If necessary use:
# route -p add default 10.xxx.xxx.1

NIS/NIS+ configuration OK? Only NIS/NIS+ client is supported!

The RPC (remote procedure call) program numbers 100141, 100142 and 100248 are reserved for the daemons rgmd_receptionist, fed and pmfd, respectively. If an RPC service that you install also uses one of these program numbers, you must change that RPC service to use a different program number.
Beware that the following scheduling classes are not supported on SC:
• time-sharing with high-priority processes,
• real-time processes.
Do not use Solaris Cluster configuration to provide an rarpd service.
Do not use Solaris Cluster configuration to provide a highly available
installation service on client systems.

When using supported network adapters which use the *ce* driver, REMOVE in /etc/system:
set ce:ce_taskq_disable=1
SC automatically chooses the best configuration for the ce driver (RFEs 6281341 & 6487117). MOS Document ID 1017839.1.
When using supported network adapters which use the *ixge* or *ipge* driver for private transport, uncomment in /etc/system:
set ixge:ixge_taskq_disable=1
set ipge:ipge_taskq_disable=1
If using ixge or ipge only for the public network, the default of 0 is OK for the mentioned variable.
When not using the ge driver, then REMOVE the following line from /etc/system:
set ge:ge_intr_mode=0x833
Consider enabling coredump for setid program in /var/cores:
# coreadm -g /var/cores/%f.%n.%p.core -e global -e process -e \
global-setid -e proc-setid -e log

This helps to diagnose a hang or timeout of a SC resource.


Details in MOS Document ID 1381317.1.


CONFIGURATION STEPS FOR QUORUM SERVER


You can install the quorum server software on any system connected to your cluster through the public network. The quorum server software is only required on the quorum server and NOT on the cluster nodes which are using the quorum server!
• The Quorum server must connect to your cluster through the public network on the same subnet that is used by the cluster nodes it serves.
• The Quorum server can be a different HW platform but needs to be on the supported HW list of SC4.3.
• The Quorum server can be on a cluster node, but the cluster node must be from a different cluster. However, a quorum server that is configured on a cluster node is not highly available.
• The Quorum server can NOT be installed in a non-global zone.
• A machine that runs the Solaris 10 OS can be configured as a quorum server for a cluster that runs the Solaris 11 OS. Refer to the SC3.3 checklist for quorum server installation on Solaris 10.
Ensure that the network switches that are directly connected to the cluster nodes either have 'Fast port mode' enabled on the switch or support the Rapid Spanning Tree Protocol (RSTP). This is required for immediate communication between the quorum server and the cluster nodes.
Disable the spanning tree algorithm on the Ethernet switches for the ports
that are connected to the cluster public network where the quorum server
will run.
Ensure that the solaris and ha-cluster publishers are valid. Refer to page 7
of this checklist to configure the SC publisher.
Install the Quorum Server (only required on the quorum server!):
# pkg install ha-cluster-quorum-server-full
Configure PATH and MANPATH, add:
Quorum server PATH: /usr/cluster/bin
Quorum server MANPATH: /usr/cluster/man
Normally provided from the EIS-DVD by .profile-EIS. Just log out & back in to set the environment correctly.
Configure the Quorum Server.
Add to /etc/scqsd/scqsd.conf (on one line):
/usr/cluster/lib/sc/scqsd -i <QSname> -p 9000 -d /var/scqsd/<QSname>
Explanation:
-i instancename. Unique name for your quorum server instance. Can freely be chosen (optional).
-p port. The port number on which the quorum server listens (required).
-d quorumdirectory. Must be unique for each quorum server (required).
To serve more than one cluster but use a different port number or instance, configure an additional entry for each additional instance of the quorum server that you need.
Start the Quorum Server: # clqs start <QSname>
or
# clqs start +
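To check that the quorum server instance is running, the show subcommand can be used (see the clquorumserver(1CL) man page):
# clqs show +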

Manually configure routing information about QS on all cluster nodes:
• Add the QS host name to /etc/inet/hosts
• Add the quorum server host netmask to /etc/inet/netmasks.
On one cluster node add the QS by using clsetup or: e.g:
# clq add -t quorum_server -p qshost=<IP_of_QS> -p port=<port> <QSname>

Add the cluster nodes netmasks to file /etc/inet/netmasks on the QS host


machine.


CONFIGURATION STEPS FOR SUN ZFS STORAGE APPLIANCE


Requirements on Oracle ZFS Storage Appliance Filer:
• Do not use the default project. File systems created in the default project will not be
fenced from failing cluster nodes.
• A Sun ZFS Storage Appliance NAS device must be directly connected (through the
same subnet) to all nodes of the cluster.
• Ensure that the ZFS Storage Appliance is running a qualified firmware release;
Minimum for SC4.3 is:
ZFS Storage Software 2013.1.4.x (Details in MOS Document ID 2021771.1).
• For all details of Requirements and Restrictions refer to
http://docs.oracle.com/cd/E56676_01/html/E39824/ggggg.html#CLNASggggh
Using Sun ZFS Storage Appliance as FCAL device
Use the BUI at https://<appliance_ip>:215 to create an FCAL target group on the Appliance. (Per-Appliance)
Click → Configuration → SAN → Fibre Channel Ports → Targets
Move the Ports with the mouse into a new Fibre Channel Target Group.
Add FCAL initiators and an initiator group on the Appliance.
In Configuration → SAN → Fibre Channel Ports → Initiators → +Fibre Channel Initiators
Add each controller of each cluster host e.g:
HBA World Wide Name: 21:00:00:e0:8b:1e:a0:1e
Alias e.g: phy-host0_c1
To identify the HBA WWN you can use:
# cfgadm -al -o show_FCP_dev
# luxadm -e dump_map /dev/cfg/cX
Also create an FCAL initiator group in the same view by moving the initiators into a new Fibre Channel Initiator Group.
Create a new project on Appliance: Per-Appliance
Click → Shares → Projects → + Projects
Give it a name, recommendation is the name of the cluster which can be
identified via:
# cluster show | grep 'Cluster Name'
Create shares on the Appliance in the already created project: Per-Appliance
Click → Shares → Shares → + LUN
Select created project and fill the template (select created FC target and
initiator group).
If the devices are not already visible run (use also the cfgadm command if necessary):
# devfsadm
# cldevice refresh
Reconfigure the global device space (identify the new did device with scdidadm -L if necessary):
# cldev populate

Such FCAL devices can be used as quorum device. To add as quorum:
# clquorum add <did_device>
USING A ZFS STORAGE APPLIANCE AS ISCSI DEVICE
Use the BUI at https://<appliance_ip>:215 to create an iSCSI target group on the Appliance. (Per-Appliance)
Click → Configuration → SAN → Targets → iSCSI Targets → + iSCSI
Targets
• Target IQN use 'Auto-assign',
• Alias use e.g: clustername_networkinterfacename,
• Initiator authentication mode use 'None',
• and select a Network interface (if you want more than one network interface in the
target group then create an iSCSI Target for each network interface).
Move the iSCSI Targets with the mouse into a new iSCSI Target Group.
Add iSCSI initiators and an initiator group on the Appliance.

In Configuration → SAN → Initiators → iSCSI Initiators → +iSCSI Initiators


on each cluster host run
phys_host0: # iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:e00000000000.4f3bbd17
Copy this iqn to 'Initiator IQN' field
Alias e.g. phy-host0_iqn
Also create an iSCSI Initiator group in the same view by moving the initiators into a new iSCSI Initiator Group.
Create a new project on Appliance Per-Appliance
Click → Shares → Projects → + Projects
Give it a name, recommendation is the name of the cluster which can be
identified via
# cluster show | grep 'Cluster Name'

Create shares on the Appliance in the already created project Per-Appliance


Click → Shares → Shares → + LUN
Select created project, and fill the template (select created iSCSI target and
iSCSI initiator group)
Enable static iSCSI configuration (on all nodes):
# iscsiadm modify discovery -s enable
Verify with:
# iscsiadm list discovery

Add static-config (on all nodes):
# iscsiadm add static-config \
iqn.iSCSItarget_NAS,IPAddress_NAS
iqn.iSCSItarget_NAS = the iqn of the iSCSI target network interface created above.
IPAddress_NAS = the IP of the iSCSI target created above.
Check the configuration:
# iscsiadm list target -vS
If your appliance has more configured network ports, you can do this for more network ports for network path redundancy.

If the devices are not already visible run:
# devfsadm -i iscsi
# cldevice refresh
Reconfigure the global device space (identify the new did device with scdidadm -L if necessary):
# cldev populate

Such iSCSI devices can be used as quorum device. To add as quorum:
# clquorum add <did_device>

USING A ZFS STORAGE APPLIANCE AS NAS DEVICE


Add the appliance name to /etc/inet/hosts
file.
Use the BUI at https://<appliance_ip>:215 to configure the workflow for Oracle Solaris Cluster NFS. (Per-Appliance)
Click → Maintenance → Workflows → Configure for Oracle Solaris Cluster NFS
and provide a password.
Note: Perform it only on one head in a dual-head configuration. If the workflow of the specified name is not present, it is likely that the appliance is not running the correct software release.
Install the zfssa-client package if not already done (this package is provided with the SC4.3 release):
# pkg list -af zfssa-client
# pkg install zfssa-client
Configure device support to access the appliance on one node (necessary for fencing support):
# clnasdevice add -t sun_uss -p \
userid=osc_agent -p \
"nodeIPs{node1}"=10.111.11.111 -p \
"nodeIPs{node2}"=10.111.11.112 \
appliancename

At the prompt, type the same password as used two steps before in the workflow.
Verify the configuration:
# clnasdevice show
If done in a zone cluster:
# clnasdevice show -Z zcname
Create a new project on Appliance Per-Appliance
Click → Shares → Projects → + Projects
Give it a name, recommendation is the name of the cluster which can be
identified via
# cluster show | grep 'Cluster Name'

Set the Share Mode for the project to None or Read only, depending on the Per-Appliance
desired access rights for non-clustered systems.
Click → Shares → Projects → Pencil to edit your created Project →
Protocols
Note: The Share Mode can be set to Read/Write if it is required to make
the project world-writable, but it is not recommended.
Note: Read-only can be tested by booting a node in non-cluster mode and trying to touch a file on the NAS filesystem.
Add a read/write NFS Exception for each cluster node.

Click → Shares → Projects → Pencil to edit your created Project →


Protocols → + NFS Exceptions
• Type: select network,
• Entity: IP address of node e.g: 192.168.254.254/32 (use a CIDR mask of /32),
• Access Mode: select Read/Write,
• If desired, select Root Access. Root Access is required when
configuring applications, such as Oracle RAC or HA Oracle.
• Add exceptions for all cluster nodes.
Click apply!
Create Filesystem shares on the Appliance in the already created project: Per-Appliance
Click → Shares → Projects → Pencil to edit your created Project → +
Filesystems
The Project is already selected. Fill the template:
Data migration source: None
Inherit mountpoint must be selected
Note: If you are adding multiple directories within the same project, verify
that each directory that needs to be protected by cluster fencing has the
Inherit from project property set.
Add the project to the cluster:
# clnasdevice add-dir -d project appliancename
Example: Search for projects and add:
# clnasdevice find-dir -v
# clnasdevice add-dir -d pool-0/local/projecta appliancename

In case for zone cluster:


# clnasdevice add-dir -d project1,project2 -Z zcname appliancename

Check with:
# clnasdevice show -v
# clnasdevice show -v -d all
If not using the automounter, create a mountpoint for each Appliance filesystem, e.g:
# mkdir -p /NAS/FS1

On each node add an entry to the /etc/vfstab file for the mountpoint:
appliancename:/export/FS1 - /NAS/FS1 nfs no yes -

Refer to MOS Document ID: 359515.1 for current list of supported files
and mount options for Oracle RAC or HA Oracle.
Or, to enable file system monitoring, configure a resource of type SUNW.ScalMountPoint for the file systems.
E.g:
# clrg create -S -p Desired_primaries=2 -p \
Maximum_primaries=2 NASmp-rg
# clrt register SUNW.ScalMountPoint
# clrs create -g NASmp-rg -t SUNW.ScalMountPoint -p \
TargetFileSystem=appliancename:path -p \
FileSystemType=nas -p MountPointDir=fs_mountpoint NASmp-rs

E.g:
# clrs create -g NASmp-rg -t SUNW.ScalMountPoint -p \
TargetFileSystem=filer1:/export/FS1 -p FileSystemType=nas -p \
MountPointDir=/NAS/FS1 NASmp-rs
# clrg online -eM NASmp-rg

Note: No need to add the FS1 into /etc/vfstab for this setup.
If creating an application resource group then set the following positive
affinity and dependencies e.g:
For resource group:
# clrg create -p rg_affinities=++scalmp-rg app-rg

For resource:
# clrs create -g app-rg -t app_resource_type -p \
Resource_dependencies_offline_restart=scalmp-rs app-rs


CONFIGURATION STEPS FOR USING SVM


Campus clusters using SDS/SVM for managing shared data: it is recommended to use a quorum server & mediator server instead of a traditional quorum device. In case of a quorum disk you should consider creating a preferred site by putting an additional statedb in each of the metasets. The preferred site should be the same site as that which contains the quorum disk. Doing this overcomes the problem of instantaneous loss of half your storage and servers in the preferred site. If the remaining site still happens to panic, the preferred site can still boot (more than half of the statedbs are available).
Adding an extra statedb to a metaset requires slice 7 (VTOC) or slice 6 (EFI) to be more than the 4MB created by the standard metainit command. Hence, prior to using metainit, format the disk with a 20MB slice 7 (VTOC) or slice 6 (EFI) and mark the slice with the wu flags. Then run the metadb commands in the next step.
Ensure that the SVM software is installed:
# pkg list svm
If not installed run:
# pkg install svm
# init 6

Create replicas on the local disks:
# metadb -afc 3 c0t0d0s7 c1t0d0s7 c2t0d0s7
Always put the replicas on different physical drives and if possible on 3 different controllers.
Recommendation: Place the replicas on slice 7. If ZFS root is on a partition you can use a slice on the same disk. Otherwise use a different local disk.
When using OBAN (Multiowner diskset) jump to RAC install section.
Set up the metasets. Use the did devices, e.g: /dev/did/dsk/d0
metaset -s <setname> -a -h <node1> <node2>
metaset -s <setname> -a <drivename1> <drivename2> ...
NOTE: Do NOT name a disk set admin or shared.
The slice 7 (VTOC) or slice 6 (EFI) size for the metadb is fixed when the drive is added to the metaset.
Optional: Check the replicas of the disksets!
# removes the one copy if necessary
metadb -s <setname> -d -f <drivename>
# Adds back two copies (maybe you must expand slice 7)
metadb -s <setname> -afc 2 <drivename>
Set up the Mediator Host for each diskset when the cluster matches the two-hosts and two-strings criteria (see also man mediator):
metaset -s <setname> -a -m <node1> <node2>

Create metadevices. Replicas are pre-configured on slice 7 (VTOC) or slice 6 (EFI); if necessary slice the disk up!
Recommendation for slices (SPARC SMI label):
Slice 7 - Replica (starts at cylinder 0)
Slice 0 - Metadevice
md.tab is in /etc/lvm.
Set NOINUSE_CHECK=1 in your shell environment to prevent warning/error messages from the filesystem utility commands.
Consider installing the metacheck script & activating it via CRON (on the EIS-DVD under ...sun/tools/MISC/SVM).

Jump to common actions for filesystems and boot devices on page 23.


CONFIGURATION STEPS FOR 3rd MEDIATOR HOST


This feature should be used in campus cluster configurations when using Solaris
Volume Manager for the shared devices.
• A diskset can have up to three mediator hosts (not more),
• The mediator host must NOT be part of the cluster.
Configuration steps on 3rd mediator host:
A) Ensure that the SVM software is installed on the mediator host:
# pkg list svm (if not installed, run # pkg install svm and init 6)
B) Add root to sysadmin in file /etc/group
sysadmin::14:root
C) Create metadb and dummy diskset:
# metadb -afc 3 c0t0d0s7 c1t0d0s7
# metaset -s <dummyds> -a -h <3rd_mediator_host>
Note: Maybe use <clustername_disksetname> as dummyds to have a
reminder for which cluster/diskset this mediator host is used. You can
create a dummyds for each <clustername_disksetname>.
Configuration steps on the cluster nodes:
A) Add the 3rd mediator host to /etc/hosts.
B) Add the 3rd mediator host to the existing diskset on one cluster node:
# metaset -s <setname> -a -m <3rd_mediator_host>
ATTENTION: If using the 3rd mediator host for more than one cluster, each cluster node and disk set must have a unique name throughout all clusters, and a disk set cannot be named shared or admin. (Bug 6805425)

Consider installing the metacheck script & activating it via CRON (on the EIS-DVD under ...sun/tools/MISC/SVM).


COMMON ACTIONS FOR FILESYSTEMS & BOOT DEVICES


In case of UFS run newfs on all newly created metadevices if not already done.
Enter global filesystems into /etc/vfstab on all nodes. Add the global and logging options in /etc/vfstab if you use UFS. Recommendation: mount all FS under /global!
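An illustrative /etc/vfstab entry for a global UFS file system on an SVM metadevice (device and mount point names are examples only):
/dev/md/datads/dsk/d100 /dev/md/datads/rdsk/d100 /global/data ufs 2 yes global,logging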
Create all mountpoints on all nodes.
Check global mount point entries of vfstab:
# cluster check -v -C M6336822

If you have issues with cacao then copy the security files
# cacaoadm stop; cacaoadm disable (on all nodes)
On one node:
# cd /etc/cacao/instances/default
# tar cf /tmp/security.tar security
# Copy the security.tar file to all other nodes into the same directory and unpack it there.
# cacaoadm enable; cacaoadm start (on all nodes)
Test switch of the disk groups with:
# cldg switch -n <phy.host> <devicegroup>

Check/Set localonly flag for all local did devices (which means all disks
which are managed only on one node). At least necessary for the root &
root mirror disks:
• Find the matching between physical and did device
# cldev list -v <physical device or did device> (scdidadm -l)
• Make sure only local node in node list:
# cldg show dsk/d<N>
• If other nodes in node list, remove them:
# cldg remove-node -n <other_phy_host> dsk/d<N>
• Set localonly & autogen to true if not already done:
# cldg set -p localonly=true dsk/d<N>
# cldg set -p autogen=true dsk/d<N>
NOTE: To get the did device use:
cldev list -n <phy_host> -v (scdidadm -l)

SPARC: Modify NVRAM parameters: set boot-device & diag-device to both sides of the mirror.
Suggested naming convention (SPARC):
• Primary: rootdisk
• Secondary: rootmirror
The diag-device is not required for T-series servers, MOS Document ID 1489871.1.
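For example, assuming the devaliases rootdisk and rootmirror have been created in OBP, the parameters could be set from Solaris with (illustrative):
# eeprom boot-device="rootdisk rootmirror"
# eeprom diag-device="rootdisk rootmirror"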

Only Solaris 11.3 x86 using GRUB2 (all is written to /rpool/boot/grub/grub.cfg):
List GRUB2 entries:
# bootadm list-menu
Add a menu entry (-i <new-menu-entry-number>, in this example menu entry 2):
# bootadm add-entry -i 2 "OSC43-non-cluster-mode"
# bootadm change-entry -i 2 kargs="-x"
Other values for kargs:
-s single user boot
-v verbose
-k to boot with kernel debugger enabled
Verify the new entry:
# bootadm list-menu -i 2
This new entry uses the current active boot environment (BE), which can be listed with 'beadm list'.
More details in the GRUB2 config guide:
http://docs.oracle.com/cd/E53394_01/html/E54742/gkvhz.html#scrolltoc
Test booting using the new aliases. If 'Fast Reboot' is enabled (default
for x86) then use reboot -p to
disable 'Fast Reboot'


CONFIGURATION NOTICE: ALL HASTORAGEPLUS FILESYSTEMS


Configure the fsck pass option in /etc/vfstab for HAStoragePlus
filesystems:
• if 'fsck pass' is set to 1 --> fsck will run sequentially
• if 'fsck pass' is set to 2 or greater –> fsck will run in parallel
More details in MOS Document ID 1005373.1.
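An illustrative /etc/vfstab entry for an HAStoragePlus failover UFS file system with parallel fsck (fsck pass 2, mount at boot no; device and mount point names are examples only):
/dev/md/datads/dsk/d100 /dev/md/datads/rdsk/d100 /failover/data ufs 2 no logging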
CONFIGURATION STEPS FOR FAILOVER ZFS WITH HASTORAGEPLUS
Notes:
• Do not add a configured quorum device to ZFS because ZFS will
relabel the disk. You can add the quorum after creation of ZFS.
• It's recommended to use full disk instead of a disk slice.
• It's recommended to use cXtXdX devices for ZFS.
• If creating zpool on 'did' device a slice must be used. Using whole
disk /dev/did/dsk/dX is corrupting the disk label (Bug 18433476).
• HAStoragePlus does not support file systems created on ZFS file
system volumes (zvol).
• You cannot use the values 'legacy' or 'none' for the zfs mountpoint property.
• It is possible to encrypt ZFS when you create it. Details in:
http://docs.oracle.com/cd/E56676_01/html/E56683/gbspx.html#scrolltoc
Create a ZFS storage pool, e.g. (use cldev list -v to find out the shared devices):
Stripe setup:
# zpool create <data> cXtXdX cXtXdX
Mirror setup:
# zpool create <data> mirror cXtXdX cXtXdX mirror cXtXdX cXtXdX
raidz / raidz2 setup:
# zpool create <data> raidz2 cXtXdX cXtXdX cXtXdX
Example with a different mountpoint:
# zpool create -m <mountpoint> <poolname> ...
Refer to the ZFS documentation for more details.

Create ZFS filesystems in the pool:
# zfs create data/home
# zfs create data/home/user1

Optional: consider setting the failmode:
# zpool set failmode=panic <data>
When all connections to the zpool are lost, the failmode will panic the node and speed up the failover time.
The ZFS pool failmode property is set to wait by default. This setting can result in the HAStoragePlus resource blocking, which might prevent a failover of the resource group. See the zpool(1M) man page to understand the possible values for the failmode property and decide which value fits your requirements.
Register HAStoragePlus:
# clrt register SUNW.HAStoragePlus
Create a failover resource group:
# clrg create zfs1-rg
Create the HAStoragePlus resource (do NOT use the FilesystemMountPoints property for ZFS!):
# clrs create -g zfs1-rg -t \
SUNW.HAStoragePlus -p \
Zpools=<data> zfs1-rs

Switch the resource group online:
# clrg online -M zfs1-rg
Test switching the resource group:
# clrg switch -n <other_node> zfs1-rg

CONFIGURATION STEPS FOR ZFS WITHOUT HASTORAGEPLUS
Setup of a local zpool (only available on one node) can be done with local devices (only connected to one node) or shared devices (accessible from all nodes in the cluster via SAN). However, in case of a shared device it would be better to set up a zone in the SAN switch to make the device only available to one host.
Identify the did devices for used physical device in zpool.
Example for local device:
# scdidadm -l c1t3d0
49 node0:/dev/rdsk/c1t3d0 /dev/did/rdsk/d49

Example for shared device:


# scdidadm -L c6t600C0FF00000000007BA1F1023AE1711d0
11 node0:/dev/rdsk/c6t600C0FF00000000007BA1F1023AE1710d0 /dev/did/rdsk/d11
11 node1:/dev/rdsk/c6t600C0FF00000000007BA1F1023AE1710d0 /dev/did/rdsk/d11

Check the settings of the used did device (only one node should be in the node list):
# cldg show dsk/d11
In case of a shared device remove one node from the node list:
# cldg remove-node -n <node1> dsk/d11
Set the localonly flag for the did device (note: autogen must be false to set localonly):
# cldg set -p localonly=true dsk/d11
Optional: Set the autogen flag, but only after localonly!
# cldg set -p autogen=true dsk/d11
To disable fencing for the did device:
# cldev show d11
# cldev set -p default_fencing=nofencing d11
Verify the settings:
# cldg show dsk/d11

Create zpool by using the physical device:


# zpool create localpool c6t600C0FF00000000007BA1F1023AE1711d0
# zfs create localpool/data


CONFIGURATION STEPS FOR ZONE CLUSTER


• The zone cluster must be configured on a global Solaris Cluster.
• The name of the zone cluster must be unique throughout the global
Solaris Cluster and can NOT be changed. The names 'all' and 'global'
are not permitted!
• By default, whole-root zones are created.
• Supported brands for zone clusters are solaris, solaris10, and labeled.
Shared-IP and Exclusive-IP zone clusters work with brand solaris or
solaris10. If not specified then Shared-IP is the default.
• For a solaris10 branded exclusive-IP zone cluster, only an IPMP group is allowed as the public network management object.
• Switching ip-type between shared and exclusive is NOT supported.
• If using VNICs in exclusive-IP zone clusters, then the name of the VNIC must be less than 16 characters long (Bug 17362337).
• To import existing non-global zones into Zone Cluster refer to
http://docs.oracle.com/cd/E56676_01/html/E56678/gpjbu.html#scrolltoc
• The labeled brand is exclusively for Trusted Extensions. For
Guidelines:
http://docs.oracle.com/cd/E56676_01/html/E56678/gmefz.html#CLISTgjumw
• Optionally assign a specific public-network IP address to each zone
cluster node. However if you do not configure public-network IP
addresses for each zone cluster node the following will occur:
a) That specific zone cluster will not be able to configure NAS devices for use in
the zone cluster. The cluster uses the IP address of the zone cluster node when
communicating with the NAS device, so not having an IP address prevents
cluster support for fencing NAS devices.
b) The cluster software will activate any Logical Host IP address on any NIC.
Check if a zone cluster can be created:
# cluster show-netprops
To change the number of zone clusters use:
# cluster set-netprops -p num_zoneclusters=12
(12 zone clusters is the default, values can be customized! num_zoneclusters can only be set in cluster mode, Bug 18528191.)
To change the number of zone clusters with exclusive-IP use:
# cluster set-netprops -p num_xip_zoneclusters=4
(3 exclusive-IP zone clusters is the default; num_xip_zoneclusters can only be customized in non-cluster mode!)
Create a config file (zc1config) for the zone cluster setup, e.g:
create
set zonepath=/zones/zc1
add node
set physical-host=<physical-hostname-node1>
set hostname=<zone-hostname-node1>
add net
set address=<ip-zone-hostname-node1>
set physical=<network_or_ipmpgroup-node1>
end
end
add node
set physical-host=<physical-hostname-node2>
set hostname=<zone-hostname-node2>
add net
set address=<ip-zone-hostname-node2>
set physical=<network_or_ipmpgroup-node2>
end
end
commit
exit

SC4.3: It's possible to use 'clsetup' and the menu item 'Zone Cluster' to create a Zone Cluster.
The clzc command selects automatically:
set brand=solaris
set autoboot=true
set enable_priv_net=true
set ip-type=shared
* ip-type=shared: the (public/private) NICs will be shared with the GZ.
* ip-type=exclusive: requires its own NICs (VNICs).
* enable_priv_net=true: separate clprivnet interface in the ZC. If ip-type=exclusive then separate NICs are required.
* enable_priv_net=false: no separate clprivnet in the ZC. The clprivnet of the GZ will be used.

Example for one node if ip-type exclusive (for public and private) is used:
add node
set physical-host=<physical-hostname-node1>
set hostname=<zone-hostname-node1>
add net
set physical=<NIC_or_VNIC_public>
end
add privnet
set physical=<NIC_or_VNIC_private>
end
add privnet
set physical=<NIC_or_VNIC_private>
end
end

Configure the zone cluster:
# clzc configure -f zc1config zc1
If not using the config file, the configuration can also be done manually:
# clzc configure zc1

Optional: Configure zone system/CPU resource control or capped-memory. Details:
http://docs.oracle.com/cd/E56676_01/html/E56678/gmefz.html#CLISTgmegc
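A minimal sketch of adding a memory cap to the zone cluster, assuming a 4G limit (see the documentation above for the supported resource controls):
# clzc configure zc1
clzc:zc1> add capped-memory
clzc:zc1:capped-memory> set physical=4G
clzc:zc1:capped-memory> end
clzc:zc1> commit
clzc:zc1> exit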

Check the zone configuration:
# clzc export zc1
Verify the zone cluster:
# clzc verify zc1
NOTE: The following message is a notice and comes up on several clzc
commands:
Waiting for zone verify commands to complete on all the
nodes of the zone cluster "zc1"...
Install the zone cluster (monitor the console of the global cluster to see what's going on...):
# clzc install zc1

Boot the zone cluster: # clzc boot zc1

Log in to the non-global zones of zone cluster zc1 on all nodes and finish the Solaris installation:
# zlogin -C zc1
Check the status of the zone cluster:
# clzc status zc1
To delete a zone cluster do:
# clzc halt zc1
# clzc uninstall zc1
# clzc delete zc1
Note: Zone cluster uninstall can only be done if all resource groups are removed in the zone cluster.
Run setup-standard from the EIS-DVD to configure ssh for the non-
global-zone.

Re-login or source /.profile-EIS and configure non-global-zones as
required.
If using an additional name service, configure /etc/nsswitch.conf of the zone cluster non-global zones. Refer to page 12 for setup details.
hosts: cluster files <any other hosts database>
netmasks: cluster files <any other netmasks database>

Configure /etc/inet/hosts: enter all the logical hosts of the non-global zones.
Example of creating an application in a zone cluster (for a zone cluster overview use the command # cluster status):
Create a resource group (if using a global FS in the zone cluster a scalable rg is necessary!):
# clrg create -Z zc1 -n \
zonehost1,zonehost2 app-rg
If running the command in zc1 then:
# clrg create -n \
zonehost1,zonehost2 app-rg

Set up the logical host resource for zone cluster. In the global zone do:
# clzc configure zc1
# clzc:zc1> add net
# clzc:zc1:net> set address=<zone-logicalhost-ip>
# clzc:zc1:net> end
# clzc:zc1> commit
# clzc:zc1> exit

In zonecluster do:
# clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs

Note: Ensure that the logical host is in the /etc/hosts file on all zone cluster
nodes.
Following types of file systems can configured with clzc command:
• Direct mount and loopback mount:
UFS local file system
• Direct mount:
Oracle Solaris ZFS (exported as a data set), NFS from supported NAS devices, QFS,
ACFS
• Loopback mount:
UFS cluster file system (global file system)
Different examples are also available at:
http://docs.oracle.com/cd/E56676_01/html/E56678/gmfka.html#scrolltoc
Register HAStoragePlus in the zone cluster (a SUNW.ScalMountPoint resource can also be used):
# clrt register SUNW.HAStoragePlus

Examples adding storage to the zone cluster:


A) ZFS storage pool
In the global zone configure a zpool (if necessary see the details on page 25, Configuration steps for failover ZFS with HAStoragePlus), then make the zpool available via HAStoragePlus in zc1:
# clzc configure zc1
clzc:zc1> add dataset
clzc:zc1:dataset> set name=zdata
clzc:zc1:dataset> end

clzc:zc1> verify
clzc:zc1> commit
clzc:zc1> exit

Check setup with:


# clzc show -v zc1

In the zonecluster do:


# clrs create -g app-rg -t SUNW.HAStoragePlus -p \
zpools=zdata app-hasp-rs

B) HA filesystem
In the global zone configure SVM diskset and SVM devices. Run
newfs and make the metadevice available via HAStoragePlus in zc1:
# clzc configure zc1
clzc:zc1> add fs
clzc:zc1:fs> set dir=/data
clzc:zc1:fs> set special=/dev/md/datads/dsk/d0
clzc:zc1:fs> set raw=/dev/md/datads/rdsk/d0
clzc:zc1:fs> set type=ufs
clzc:zc1:fs> add options [logging]
clzc:zc1:fs> end
clzc:zc1> verify
clzc:zc1> commit
clzc:zc1> exit
Check setup with:
# clzc show -v zc1

In the zone cluster do:


# clrs create -g app-rg -t SUNW.HAStoragePlus -p \
FilesystemMountPoints=/data app-hasp-rs

C) Global filesystem as loopback file system
In the global zone configure the global filesystem and add it to /etc/vfstab on all global nodes, e.g.:
/dev/md/datads/dsk/d0 /dev/md/datads/rdsk/d0 /global/fs ufs 2 yes global,logging
and
# clzc configure zc1
clzc:zc1> add fs
clzc:zc1:fs> set dir=/zone/fs (zc-lofs-mountpoint)
clzc:zc1:fs> set special=/global/fs (globalcluster-mountpoint)
clzc:zc1:fs> set type=lofs
clzc:zc1:fs> end
clzc:zc1> verify
clzc:zc1> commit
clzc:zc1> exit
Check setup with:
# clzc show -v zc1

In the zone cluster do: (Create scalable rg if not already done)


# clrg create -p desired_primaries=2 -p maximum_primaries=2 \
app-scal-rg
# clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p \
FilesystemMountPoints=/zone/fs hasp-rs

Switch the resource group online:
# clrg online -eM app-rg
For the global filesystem:
# clrg online -eM app-scal-rg

Test: Switch the resource group in the zone cluster:
# clrg switch -n zonehost2 app-rg

Add the supported data service to the zone cluster.


CONFIGURATION STEPS FOR DATA SERVICE HA-NFS


NFS client – No cluster node can be an NFS client of an HA for NFS exported file system that is being mastered on a node in the same cluster. Such cross-mounting of HA for NFS is prohibited. Use the cluster file system to share files among global-cluster nodes.
IF
• HA-NFS is configured with HAStoragePlus FFS (failover file system) AND
• automountd is running/used
THEN add
exclude: lofs
to the file /etc/system.
If both of these conditions are met, LOFS must be disabled to avoid switchover problems or other failures. If only one of these conditions is met, it is safe to enable LOFS. If Solaris Zones require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by HA-NFS.
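For clarity, a minimal sketch of applying and verifying the exclusion on a node where both conditions hold (the grep is just one way to check the entry; a reboot is required for /etc/system changes to take effect):
# echo 'exclude: lofs' >> /etc/system
# grep lofs /etc/system
exclude: lofs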
HA-NFS requires that all NFS client mounts be 'hard' mounts.
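A minimal sketch of a hard NFS mount from an external (non-cluster) Solaris client; the logical hostname and share path are placeholders matching the examples later in this section:
# mount -F nfs -o hard <logical_host>:/global/nfs1/data /mnt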
Consider if using NFSv3: If you are mounting file systems on the cluster
nodes from external NFS servers, such as NAS filers and you are using the
NFSv3 protocol, you cannot run NFS client mounts and the HA for NFS
data service on the same cluster node. If you do, certain HA for NFS data-
service activities might cause the NFS daemons to stop and restart,
interrupting NFS services. However, you can safely run the HA for NFS
data service if you use the NFSv4 protocol to mount external NFS file
systems on the cluster nodes. NFSv4 is the default in /etc/default/nfs for
NFS clients.
Locking – Applications that run locally on the cluster must not lock files
on a file system that is exported through NFS. Otherwise, local blocking
(for example, flock(3UCB) or fcntl(2)) might interfere with the ability to
restart the lock manager (lockd(1M)). During restart, a blocked local
process might be granted a lock which might be intended to be reclaimed
by a remote client. This situation would cause unpredictable behaviour.
The HA-NFS data service sets the property application/auto_enable to FALSE and the property startd/duration to transient for the following services:
/network/nfs/server
/network/nfs/status
/network/nfs/nlockmgr
Consequences of these settings:
• When services that depend on these services are enabled, these services are not automatically enabled.
• In the event of any failure, SMF does not restart the daemons that are associated with these services.
• In the event of any failure, SMF does not restart these services.
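A minimal sketch, assuming you want to confirm these property values once the data service is configured (svcprop is the standard SMF query tool; the property names are the ones listed above):
# svcprop -p application/auto_enable svc:/network/nfs/server:default
# svcprop -p startd/duration svc:/network/nfs/server:default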

Optional: NFS security features:
• SC does NOT support the following options of the share_nfs
command:
secure
sec=dh
• SC does support secure ports for NFS. If you wish to use it add the
following to the file /etc/system:
set nfssrv:nfs_portmon=1
• The use of Kerberos with NFS. Details:
http://docs.oracle.com/cd/E56676_01/html/E57645/fdkyv.html#scrolltoc
Optional: Customise the nfsd or lockd
startup options.
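On Solaris 11 these daemon options are SMF-managed; a minimal sketch using sharectl, assuming the property names servers (nfsd threads) and lockd_servers are available in your release – check with the get command before setting anything:
# sharectl get nfs
# sharectl set -p servers=64 nfs
# sharectl set -p lockd_servers=64 nfs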
If you use ZFS as the exported file system, you must set the sharenfs property to off, e.g.:
# zfs set sharenfs=off <zfs>
Verify with:
# zfs get sharenfs <zfs>
Edit /etc/hosts (problem resolution in MOS Document ID 1012164.1):
• In the case of Mx000 add the internal sppp0, e.g.:
192.168.224.2 sppp0 (Bug 15232742)
• If an IPMP group uses 0.0.0.0 add the following dummy entry:
0.0.0.0 dummy
If you do not add these entries, HA-NFS will not fail over correctly.
Note: HA-NFS resolves all configured network addresses.
Ensure that the solaris and ha-cluster publishers are valid. Refer to page 7
of this checklist to configure the SC publisher.
Install HA-NFS if not already done
# pkg info ha-cluster/data-service/nfs
# pkg install ha-cluster/data-service/nfs

Check file /etc/nsswitch.conf Refer to page 12 for setup details.


hosts: cluster files [SUCCESS=return]
rpc: files

Register resource type nfs: # clrt register SUNW.nfs

Set up the resource group (the name of the resource group can be freely chosen):
# clrg create -n phy-host0,phy-host1 -p Pathprefix=/global/nfs1 nfs1-rg
If not already done, run:
# mkdir -p /global/nfs1
NOTE: If using ZFS, the Pathprefix changes to the zpool name, e.g. /nfs1pool.
Set up the logical host resource and bind it to the resource group:
# clrslh create -g nfs1-rg -h <logical_host> -N \
ipmp1@phy-host0,ipmp1@phy-host1 <logical_host>-rs
Use the <logical_host> for the resource name and add -rs at the end. The -N option (IPMP groups) is optional.

Optional: Set up the HAStoragePlus resource:
# clrt register SUNW.HAStoragePlus (check with clrt list)
Modify /etc/vfstab.
E.g: of /etc/vfstab for failover filesystem:
/dev/md/nfsset/dsk/d10 /dev/md/nfsset/rdsk/d10 /global/nfs1 ufs 2 no logging

E.g: of /etc/vfstab for global filesystem:


/dev/md/nfsset/dsk/d10 /dev/md/nfsset/rdsk/d10 /global/nfs1 ufs 2 no global,logging

Create HAStoragePlus resource:


# clrs create -g nfs1-rg -t SUNW.HAStoragePlus -p \
FilesystemMountPoints=/global/nfs1 -p AffinityOn=True \
nfs1-hastp-rs

In case of ZFS:
# clrs create -g nfs1-rg -t SUNW.HAStoragePlus -p \
Zpools=nfs1pool -p AffinityOn=True nfs1-hastp-rs

Bring the resource group online: # clrg online -M nfs1-rg

Set up the directory for NFS state information in an online shared filesystem. The SUNW.nfs directory must be in the device group. Recommendation: the SUNW.nfs directory should NOT be visible to the NFS clients!
# cd /global/nfs1
# mkdir SUNW.nfs
# cd SUNW.nfs
# vi dfstab.nfs1-server-rs
(The name for the resource can be freely chosen.)
share -F nfs -o rw /global/nfs1/data
# cd; mkdir /global/nfs1/data

In case of ZFS:
# cd /nfs1pool
# mkdir SUNW.nfs
# cd SUNW.nfs
# vi dfstab.nfs1-server-rs
(The name for the resource can be freely chosen.)
share -F nfs -o rw /nfs1pool/data
# zfs create nfs1pool/data

Create the resource of type SUNW.nfs in the resource group (its name must match the dfstab.nfs1-server-rs file created above):
# clrs create -g nfs1-rg -t SUNW.nfs nfs1-server-rs
Or, if HAStoragePlus is used, set up the required dependency:
# clrs create -g nfs1-rg -t SUNW.nfs -p \
Resource_dependencies=nfs1-hastp-rs nfs1-server-rs

It is recommended to use Failover_mode SOFT for the NFS server resource:
# clrs set -p Failover_mode=SOFT nfs1-server-rs

Test by switching the resource group:


# clrg switch -n phy-host nfs1-rg


CONFIGURATION STEPS FOR HA-LDom (Oracle VM Server for SPARC) when using "live" migration
• Minimum requirement is Oracle VM Server for SPARC 3.3 (delivered
with Solaris 11.3).
• It is not supported to run Solaris Cluster within a failover LDom.
• Migration “cold” and “live” (previously called “warm”) are possible.
• Possible options for the root file system of a domain with "live" migration are pxfs (UFS/SVM), NFS, iSCSI, and SAN LUNs, because they are all accessible at the same time from both nodes. ZFS as the underlying filesystem for a ZFS root in the LDom can only be used for "cold" migration (for details see MOS Document ID 1366967.1).
• The domain configuration is retrieved by Solaris Cluster with the "ldm list-constraints -x ldom" command and stored in the CCR. This information is used to create or destroy the domain on the node where the resource group is brought online or offline (see the sketch after this list).
• When the SR-IOV device is identical on the potential primaries, "cold" migration can be used. "Live" migration with an SR-IOV device is not supported.
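A minimal illustration of the command the framework relies on; running it manually (here against the example domain fgd0 used below) simply dumps the domain constraints as XML and makes no changes:
# ldm list-constraints -x fgd0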
Prepare all control domains (primary domains) which should manage the
failover LDom with the necessary services. They must be identical on all
the potential primary nodes.
# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
# svcadm enable svc:/ldoms/vntsd:default
# ldm add-vswitch net-dev=net0 public-vsw1 primary
# ldm add-vdiskserver primary-vds0 primary
To verify:
# ldm list-bindings primary
For more details on installing/configuring the LDom software refer to the EIS checklist for LDoms.
Set failure-policy on all primary domains:
# ldm set-domain failure-policy=reset primary
To verify:
# ldm list -o domain primary
Create the failover guest domain (fgd0) on one primary domain. Simple example (for more details on setting up LDoms refer to the EIS checklist for LDoms):
# ldm add-domain fgd0
# ldm set-vcpu 16 fgd0
# ldm set-mem 8G fgd0
Add the public network:
# ldm add-vnet public-net0 public-vsw1 fgd0
To verify:
# ldm list-bindings fgd0

Set the necessary values on fgd0:
# ldm set-domain master=primary fgd0
# ldm set-var auto-boot?=false fgd0
auto-boot?=false is a "must have" to prevent data corruption – see MOS Document ID 1585422.1.
To verify run:
# ldm list -o domain fgd0

Select the boot device for the failover guest domain fgd0. The recommendation is to use a full raw disk because "live" migration is expected; the full raw disk can be provided via SAN or iSCSI. Using PXFS as the root disk has performance issues (MOS Document ID 1366967.1).
Remember: ZFS as the root filesystem can ONLY be used when doing "cold" migration.
Boot drive on a raw device: prepare the partition table and add the boot device to fgd0:
# ldm add-vdsdev /dev/did/rdsk/d7s2 boot_fgd0@primary-vds0
# ldm add-vdisk root_fgd0 boot_fgd0@primary-vds0 fgd0
Optional: Configure MAC addresses of LDom. The LDom Manager
assigns MAC automatically but the following issues can occur:
• Duplicate MAC address if other guest LDoms are down when creating
a new LDom.
• MAC address can change after failover of an LDom.
Assign your own MAC address. This example uses the suggested range
between 00:14:4F:FC:00:00 & 00:14:4F:FF:FF:FF as described
in Assigning MAC Addresses Automatically or Manually within the Oracle
VM Server for SPARC 3.0 Administration Guide.
Example:
Identify current automatically assigned MAC address:
# ldm list -l fgd0
To see the HOSTID (which is derived from the MAC address) an 'ldm bind fgd0' is necessary; unbind fgd0 afterwards with 'ldm unbind fgd0':
MAC: 00:14:4f:fb:50:dc → change to 00:14:4f:fc:50:dc
HOSTID: 0x84fb50dc → change to 0x84fc50dc
public-net: 00:14:4f:fa:01:49 → change to 00:14:4f:fc:01:49
# ldm set-domain mac-addr=00:14:4f:fc:50:dc fgd0
# ldm set-domain hostid=0x84fc50dc fgd0
# ldm set-vnet mac-addr=00:14:4f:fc:01:49 public-net0 fgd0
# ldm list-constraints fgd0 (this shows assigned MAC now)
If HA-LDom is already running refer to MOS Document ID 1559415.1.
Bind and start the fgd0: # ldm bind fgd0
# ldm start fgd0

Login to LDom using console: # telnet localhost 5000

Install Solaris 10 or Solaris 11 in the LDom by using an install server.
To identify the MAC address of an LDom, do the following in the console of fgd0:
{0} ok devalias net
{0} ok cd /virtual-devices@100/channel-devices@200/network@0
{0} ok .properties
local-mac-address 00 14 4f fc 01 49

If you would like to use a different method, refer to the EIS checklist for LDoms.
Install HA-LDom (HA for Oracle VM Server Package) on all primary
domain nodes if not already done:
# pkg info ha-cluster/data-service/ha-ldom
# pkg install ha-cluster/data-service/ha-ldom

Check that 'cluster' is the first entry in /etc/nsswitch.conf on all primary domains:


# svccfg -s name-service/switch listprop config/host
config/host astring "files dns"
# svccfg -s name-service/switch listprop config/ipnodes
config/ipnodes astring "files dns"
# svccfg -s name-service/switch listprop config/netmask
config/netmask astring files

If not (as in the above examples) add it:


# svccfg -s name-service/switch \
setprop config/host = astring: '("cluster files dns")'
# svccfg -s name-service/switch \
setprop config/ipnodes = astring: '("cluster files dns")'
# svccfg -s name-service/switch \
setprop config/netmask = astring: '("cluster files")'
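Assuming the standard Solaris 11 behaviour, a refresh of the switch service is typically needed afterwards so that /etc/nsswitch.conf is regenerated from the new properties; the listprop command re-checks the result:
# svcadm refresh name-service/switch
# svccfg -s name-service/switch listprop config/host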

Details in MOS Document ID 1554887.1.


Create resource group for fgd0 for primary domains:
# clrg create -n phys-host1,phys-host2 fldom-rg

Register SUNW.HAStoragePlus if not already done:


# clrt register SUNW.HAStoragePlus

Create HAStoragePlus resource for boot device if SMI labeled:


# clrs create -g fldom-rg -t SUNW.HAStoragePlus -p \
GlobalDevicePaths=/dev/global/dsk/d7s2 fgd0-has-rs
It is a requirement to use d7s2!!!
Create HAStoragePlus resource for boot device if EFI labeled:
# clrs create -g fldom-rg -t SUNW.HAStoragePlus -p \
GlobalDevicePaths=/dev/global/dsk/d7s0 fgd0-has-rs
Use d7s0 because EFI labeled disk does not have s2!!!
Enable the LDom resource group on the current node:
# clrg online -M -n <currentnode> fldom-rg
Register SUNW.ldom: # clrt register SUNW.ldom
SC4.3 delivers RT version 7

Set up the private password string for non-interactive "live" migration on one primary:
# clpstring create -b fgd0-rs fldom-rg_fgd0-rs_ldompasswd
Enter string value:
Enter string value again:
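To confirm the private string was stored, a read-only check (assuming the list subcommand is available in this release; the string value itself is not displayed):
# clpstring list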

Create SUNW.ldom resource:


# clrs create -g fldom-rg -t SUNW.ldom -p Domain_name=fgd0 \
-p resource_dependencies=fgd0-has-rs fgd0-rs
Check the Migration_type property:
# clrs show -v fgd0-rs | grep Migration_type
If not MIGRATE then set it:
# clrs set -p Migration_type=MIGRATE fgd0-rs

To stop/start the SUNW.ldom resource:


# clrs disable fgd0-rs
# clrs enable fgd0-rs
Verify the setup by switching the failover LDom to the other node and back. The first failover of the LDom is always "cold".
# clrg switch -n <other_node> fldom-rg
# clrg switch -n <this_node> fldom-rg

Tune your timeout values depending on your system. See MOS Document ID 1423937.1.
# clrs set -p STOP_TIMEOUT=1200 fgd0-rs

Consider further tuning of timeout values.


For less frequent probing the following setting can be used:
# clrs set -p Thorough_probe_interval=180 -p Probe_timeout=90 fgd0-rs

Details are described in SPARC: Tuning the HA for Oracle VM Server Fault Monitor within the Oracle Solaris Cluster Data Service for Oracle VM Server for SPARC Guide.
http://docs.oracle.com/cd/E56676_01/html/E56924/fumux.html#scrolltoc

CONFIGURATION STEPS FOR HA-ORACLE WITH SUNW.HAStoragePlus
Not yet finished for SC4.3
Refer to the Oracle Solaris Cluster Data Service for Oracle Guide:
http://docs.oracle.com/cd/E56676_01/html/E56737/index.html


CONFIGURATION STEPS FOR RAC CLUSTER


Not yet finished for SC4.3
Refer to the Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide:
http://docs.oracle.com/cd/E56676_01/html/E57757/index.html


VALIDATE BASIC CLUSTER CONFIGURATION


In addition to the EIS validation, the cluster check command can be used to check the SC configuration. Perform all checks in the global cluster.
List all checks:
# cluster list-checks
Details for a specific check:
# cluster list-checks -v -C checkID

Run basic validation checks:
# cluster check -v
Reports are in /var/cluster/logs/cluster_check/<timestamp>.
Run basic checks against a node:
# cluster check -v -n phys-host-1

List functional validation checks:
# cluster list-checks -k functional
If the functional check that you want to perform might interrupt cluster functioning, ensure that the cluster is not in production.
Run a functional check:
# cluster check -v -C <FcheckID>
You cannot run F6984120 with option 2). Refer to Bug 15907467.
No valid interactive validation checks are available at the moment.


SYSTEM COMPLETION
Make copy of the cluster configuration:
# cluster export -o /var/cluster/Config_after_EIS

Reboot => everything OK? Test Cluster Manager.
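After the reboot, a quick health check with the standard status commands (a minimal sketch; adapt to the configured groups and resources) helps confirm that everything came back as expected:
# cluster status
# clnode status
# clrg status
# clrs status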

EXPLORER / INSTINFO & ANALYSIS


Run the Explorer Data Collector: explorer

Run ORAS/CLI (from the EIS-DVD) locally to analyse the Explorer output files:
cd /cdrom/...sun/tools
cd ORAS
sh run-oras.sh
Examine the resulting report:
cd /var/tmp/ORAS
more *EIS.Report.txt
If necessary, repair and repeat the Explorer/ORAS sequence.

Upload the Explorer file to the Oracle proactive directory.


If the newly-installed system has connectivity to the Internet:
# curl -T <filename to upload> -o <logfile> -u <SSO login> \
https://transport.oracle.com/upload/proactive/

Note the trailing / is required or the upload will fail.


Check the <logfile> after the upload completes to see any errors recorded.
Alternatively, to upload the Explorer file from a non-Solaris system
(Windows, Linux or Mac OS X) use the Filezilla Client.
1. Required settings (under File then Site Manager):
Host: transport.oracle.com
Port: (leave empty)
Protocol: FTP - File Transfer Protocol
Encryption: Require implicit FTP over TLS
Logon Type: Ask for password (best option)
User: <SSO login> (usually your personal Oracle email address)
Transfer Mode: Passive (In the transfer settings tab)
2. Select the proactive directory on transport, note that you will not see
anything in this directory.
3. Upload the local files to the proactive directory.
The Explorer output file is normally in directory:
• IPS (Solaris 11): /var/explorer/output
The filename is of the form:
explorer.<hostid>.<hostname>-<date>.tar.gz


HANDOVER
MANDATORY: Ensure that the licenses are available for Solaris Cluster
Framework & Agents.
Perform the Installation Assessment tests as described in the EIS Test Procedures Plan. (EISdoc V4 – completed during preparation of the installation.)
Complete the documentation and hand it over to the customer. EISdoc V4: file EIS-DOCS-Operational Handover-Document.odt
Give the customer a short briefing on the configuration.

Copies of the checklists are available on the EIS web pages or on the EIS-DVD. We recommend that
you always check the web pages for the latest version.

Comments & RFEs welcome. Oracle staff should mail to EIS-SUPPORT_WW_GRP@oracle.com .


Partners should feedback via their partner manager at Oracle.

Thanks are due to Jürgen Schleich, TSC Munich, Germany.
