
Veritas Cluster Cheat sheet

LLT and GAB Commands | Port Membership | Daemons | Log Files | Dynamic Configuration | Users | Resources | Resource Agents | Service Groups | Clusters | Cluster Status | System Operations | Service Group Operations | Resource Operations | Agent Operations | Starting and Stopping LLT and GAB

VCS uses two components, LLT and GAB, to share data over the private networks among systems. These components provide the performance and reliability required by VCS.


LLT (Low Latency Transport) provides fast, kernel-to-kernel communications and monitors network connections. The system administrator configures LLT by creating a configuration file (llttab) that describes the systems in the cluster and the private network links among them. LLT runs in layer 2 of the network stack.

GAB (Group Membership and Atomic Broadcast) provides the global message order required to maintain a synchronised state among the systems, such as that required by the VCS heartbeat utility. The system administrator configures the GAB driver by creating a configuration file (gabtab).

LLT and GAB files

/etc/llthosts                a database containing one entry per system, linking the LLT system ID with the host's name; the file is identical on every node in the cluster
/etc/llttab                  contains information derived during installation and used by the lltconfig utility
/etc/gabtab                  contains the information needed to configure the GAB driver; used by the gabconfig utility
/etc/VRTSvcs/conf/config/    directory holding the VCS configuration file, which defines the cluster and its systems
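As a sketch, the three LLT/GAB files for a two-node cluster might look like the following. The node names (sun1, sun2) and the qfe network interfaces are assumptions for illustration, not values taken from this document:

```
# /etc/llthosts -- LLT node ID to host name map (identical on both nodes)
0 sun1
1 sun2

# /etc/llttab -- this node's LLT identity and private links
set-node sun1
set-cluster 2
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -

# /etc/gabtab -- start GAB once 2 nodes have seeded
/sbin/gabconfig -c -n2
```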

Gabtab Entries

/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 16 -S 1123
/sbin/gabdiskconf -i /dev/… -S 1124
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -s 1123
/sbin/gabdiskhb -a … -s 1124
/sbin/gabconfig -c -n2

gabdiskconf                   -i   Initialises the disk region
                              -s   Start block
                              -S   Signature
gabdiskhb (heartbeat disks)   -a   Add a gab disk heartbeat resource
                              -s   Start block
                              -p   Port
gabconfig                     -c   Configure the driver for use
                              -n   Number of systems

LLT and GAB Commands

lltstat -n                          verify that links are active for LLT
lltstat -nvv | more                 verbose output of the lltstat command
lltstat -p                          open ports for LLT
lltstat -c                          display the values of LLT configuration directives
lltstat -l                          list information about each configured LLT link
lltconfig -a list                   list all MAC addresses in the cluster
lltconfig -U                        stop LLT
lltconfig -c                        start LLT
gabconfig -a                        verify that GAB is operating
gabconfig -U                        stop GAB
gabconfig -c -n <number of nodes>   start GAB
gabconfig -c -x                     override the seed values in the gabtab file

Note: port a indicates that GAB is communicating, port h indicates that VCS is started.

GAB Port Membership

gabconfig -a                          List membership
/opt/VRTS/bin/fsclustadm cfsdeinit    Unregister port f

Port   Function
a      GAB driver
b      I/O fencing (designed to guarantee data integrity)
d      ODM (Oracle Disk Manager)
f      CFS (Cluster File System)
h      VCS (VERITAS Cluster Server: high availability daemon)
o      VCSMM driver (kernel module needed for Oracle and VCS interface)
q      QuickLog daemon
v      CVM (Cluster Volume Manager)
w      vxconfigd (module for cvm)
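The port letters above can be checked mechanically against `gabconfig -a` output. A minimal sketch, using a canned output string because a live cluster is not assumed here (the generation numbers and membership values are invented for illustration):

```shell
# Simulated output of `gabconfig -a` on a healthy two-node cluster
out='GAB Port Memberships
===============================================================
Port a gen   a36e0003 membership 01
Port h gen   fd570002 membership 01'

# Port a present => GAB is communicating; Port h present => had (VCS) is running
echo "$out" | grep -q '^Port a' && echo "GAB seeded"
echo "$out" | grep -q '^Port h' && echo "VCS started"
```

Against a real cluster you would pipe `gabconfig -a` itself instead of the canned string.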

Cluster daemons

had               High Availability Daemon
hashadow          Companion Daemon
<resource>Agent   Resource Agent daemon
CmdServer         Web Console cluster management daemon

Cluster Log Files

Log directory                        /var/VRTSvcs/log
Primary log file (engine log file)   /var/VRTSvcs/log/engine_A.log
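When troubleshooting, the engine log is usually the first stop. A hedged sketch of filtering it for errors, using a fabricated two-line excerpt written to /tmp (the timestamps, message IDs, and names below are invented, not taken from this document):

```shell
# Build a sample engine log excerpt (the real file is /var/VRTSvcs/log/engine_A.log)
cat > /tmp/engine_A.log <<'EOF'
2024/01/01 10:00:00 VCS NOTICE V-16-1-10322 System sun1 changed state from RUNNING to LEAVING
2024/01/01 10:00:05 VCS ERROR V-16-1-10205 Group groupw is faulted on system sun1
EOF

# Show only ERROR lines, as you would against the real engine log
grep ' ERROR ' /tmp/engine_A.log
```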

Starting and Stopping the cluster

hastart [-stale|-force]    "-stale" instructs the engine to treat the local config as stale; "-force" instructs the engine to treat a stale config as a valid one
hasys -force <server>      bring the cluster into running mode from a stale state using the configuration file from a particular server
hastop -local              stop the cluster on the local server but leave the application/s running; do not fail over the application/s
hastop -local -evacuate    stop the cluster on the local server but evacuate (fail over) the application/s to another node within the cluster
hastop -all -force         stop the cluster on all nodes but leave the application/s running

Cluster Status
hastatus -summary   display cluster summary
hastatus            continually monitor cluster
hasys -display      verify the cluster is operating
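The summary output lends itself to quick scripted health checks. A sketch against a canned `hastatus -summary` output (the system and group names are invented for illustration):

```shell
# Simulated `hastatus -summary` output for a two-node cluster
sum='-- SYSTEM STATE
-- System               State          Frozen

A  sun1                 RUNNING        0
A  sun2                 RUNNING        0

-- GROUP STATE
-- Group     System     Probed   AutoDisabled  State

B  groupw    sun1       Y        N             ONLINE
B  groupw    sun2       Y        N             OFFLINE'

# Any systems not RUNNING?  (prints nothing on a healthy cluster)
echo "$sum" | awk '$1 == "A" && $3 != "RUNNING" { print $2 }'

# Where is each group online?
echo "$sum" | awk '$1 == "B" && $6 == "ONLINE" { print $2, "on", $3 }'
```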

Cluster Details
haclus -display                      information about a cluster
haclus -value <attribute>            value of a specific cluster attribute
haclus -modify <attribute> <value>   modify a cluster attribute
haclus -enable LinkMonitoring        enable LinkMonitoring
haclus -disable LinkMonitoring       disable LinkMonitoring

Users

hauser -add <username>      add a user
hauser -update <username>   modify a user
hauser -delete <username>   delete a user
hauser -display             display all users

System Operations
hasys -add <sys>                         add a system to the cluster
hasys -delete <sys>                      delete a system from the cluster
hasys -modify <sys> <modify options>     modify a system's attributes
hasys -state                             list a system's state
hasys -force                             force a system to start
hasys -display [-sys]                    display a system's attributes
hasys -list                              list all the systems in the cluster
hasys -load <system> <value>             change the load attribute of a system
hasys -nodeid                            display the value of a system's nodeid (/etc/llthosts)
hasys -freeze [-persistent][-evacuate]   freeze a system (no offlining system, no groups onlining); note: must be in write mode
hasys -unfreeze [-persistent]            unfreeze a system (re-enable groups and bring resources back online); note: must be in write mode

Dynamic Configuration

The VCS configuration must be in read/write mode in order to make changes. While in read/write mode the configuration becomes stale, and a .stale file is created in $VCS_CONF/conf/config. When the configuration is put back into read-only mode the .stale file is removed.
haconf -makerw                                      change configuration to read/write mode
haconf -dump -makero                                change configuration to read-only mode
haclus -display | grep -i 'readonly'                check what mode the cluster is running in (0 = write mode, 1 = read-only mode)
hacf -verify /etc/VRTSvcs/conf/config               check the configuration file
hacf -cftocmd /etc/VRTSvcs/conf/config -dest /tmp   convert a configuration file into cluster commands
hacf -cmdtocf /tmp -dest /etc/VRTSvcs/conf/config   convert a command file into a configuration file

Note: you can point to any directory as long as it has main.cf and types.cf.

Service Groups

add a service group       haconf -makerw
                          hagrp -add groupw
                          hagrp -modify groupw SystemList sun1 1 sun2 2
                          hagrp -autoenable groupw -sys sun1
                          haconf -dump -makero

delete a service group    haconf -makerw
                          hagrp -delete groupw
                          haconf -dump -makero

change a service group    haconf -makerw
                          hagrp -modify groupw SystemList sun1 1 sun2 2
                          haconf -dump -makero

Note: use "hagrp -display <group>" to list a group's attributes.
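The hagrp commands above are persisted into the VCS configuration file. As a hedged sketch, a group defined this way might appear in main.cf roughly as follows (the group and system names follow the example above; the exact attribute set depends on your configuration):

```
group groupw (
    SystemList = { sun1 = 1, sun2 = 2 }
    AutoStartList = { sun1 }
    )
```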

hagrp -list                          list the service groups
hagrp -dep <group>                   list a group's dependencies
hagrp -display <group>               list the parameters of a group
hagrp -resources <group>             display a service group's resources
hagrp -state <group>                 display the current state of the service group
hagrp -clear <group> [-sys] <host>   clear a faulted non-persistent resource in a specific group

Change the system list in a cluster:

# remove the host
hagrp -modify grp_zlnrssd SystemList -delete <hostname>
# add the new host (don't forget to state its position)
hagrp -modify grp_zlnrssd SystemList -add <hostname> 1
# update the autostart list
hagrp -modify grp_zlnrssd AutoStartList <host> <host>

Service Group Operations

hagrp -online <group> -sys <sys>        start a service group and bring its resources online
hagrp -offline <group> -sys <sys>       stop a service group and take its resources offline
hagrp -switch <group> to <sys>          switch a service group from one system to another
hagrp -enableresources <group>          enable all the resources in a group
hagrp -disableresources <group>         disable all the resources in a group
hagrp -freeze <group> [-persistent]     freeze a service group (disable onlining and offlining); check with "hagrp -display <group>"
hagrp -unfreeze <group> [-persistent]   unfreeze a service group (enable onlining and offlining); check with "hagrp -display <group>"
hagrp -flush <group> -sys <system>      flush a service group and enable corrective action

enable a service group (enabled groups can only be brought online):
haconf -makerw
hagrp -enable <group> [-sys]
haconf -dump -makero

disable a service group (stop it from being brought online):
haconf -makerw
hagrp -disable <group> [-sys]
haconf -dump -makero

Note: check enable/disable with "hagrp -display <group>".

Resources

add a resource       haconf -makerw
                     hares -add appDG DiskGroup groupw
                     hares -modify appDG Enabled 1
                     hares -modify appDG DiskGroup appdg
                     hares -modify appDG StartVolumes 0
                     haconf -dump -makero

delete a resource    haconf -makerw
                     hares -delete <resource>
                     haconf -dump -makero

change a resource    haconf -makerw
                     hares -modify appDG Enabled 1
                     haconf -dump -makero
                     Note: list parameters with "hares -display <resource>"

hares -global <resource> <attribute> <value>   change a resource attribute to be globally wide
hares -local <resource> <attribute> <value>    change a resource attribute to be locally wide
hares -display <resource>                      list the parameters of a resource
hares -list                                    list the resources
hares -dep                                     list the resource dependencies
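The hares sequence above corresponds to a resource block inside the service group in main.cf. A sketch of what the appDG resource might look like there (the attribute set shown is illustrative, not exhaustive):

```
DiskGroup appDG (
    Enabled = 1
    DiskGroup = appdg
    StartVolumes = 0
    )
```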

Resource Operations
hares -online <resource> [-sys]        online a resource
hares -offline <resource> [-sys]       offline a resource
hares -state                           display the state of a resource (offline, online, etc.)
hares -display <resource>              display the parameters of a resource
hares -offprop <resource> -sys <sys>   offline a resource and propagate the command to its children
hares -probe <resource> -sys <sys>     cause a resource agent to immediately monitor the resource
hares -clear <resource> [-sys]         clear a faulted resource (automatically initiates onlining)

Resource Types
hatype -add <type>            add a resource type
hatype -delete <type>         remove a resource type
hatype -list                  list all resource types
hatype -display <type>        display a resource type
hatype -resources <type>      list the resources of a particular resource type
hatype -value <type> <attr>   display the value of a particular resource type attribute

Resource Agents
pkgadd -d . <agent package>      add an agent
pkgrm <agent package>            remove an agent
n/a                              change an agent
haagent -list                    list all HA agents
haagent -display <agent_name>    display an agent's run-time information, i.e. has it started, is it running?
haagent -display | grep Faults   display agent faults

Resource Agent Operations

haagent -start <agent_name> [-sys]   start an agent
haagent -stop <agent_name> [-sys]    stop an agent