Introduction
This hands-on lab contains exercises that allow the student to explore new and existing
features in VCS 6.0 for Linux and Windows.
At the end of these labs, participants should be able to do the following:
- Explain the basic building blocks needed for clustering
- Describe how to install VCS
- Demonstrate how to configure an application within VCS
- Understand the benefits of high availability
Lab Agenda
The lab is separated into Linux and Windows sections. Please follow one path.
Pre-Lab
Presentation: Understand the building blocks of VCS 5 minutes
Features to review:
Linux VM 2
Hostname rhel-mq-sys2
Physical IP 192.168.1.152
Red Hat Release 5.9
Administrator password password
Database Information
SID orcl
Virtual IP 192.168.1.160 (orcl_inst1)
Network Interface eth0
Listener Name LISTENER
Oracle Version 11gR2 (11.2.0.2)
$ORACLE_HOME /oraclebin/product/11.2.0/dbhome_2
$ORACLE_BASE /oraclebin
SPFile Location /oraclebin/product/11.2.0/dbhome_2/dbs/spfileorcl.ora
Oracle Data Files Mount /oracle
Volume Name oradata_vol
Disk Group Name oradata_dg
Cluster Settings
LLT Links eth1 & eth2
Cluster IP 192.168.1.161
VCS Version 6.0.2
Windows VM 2
Hostname W2K8-SQL-SYS3
Physical IP 192.168.1.177
Windows Release 2008 R2
Administrator password password
Cluster Settings
Cluster Name VCS101_SQL_LAB
Cluster ID 1000
LLT Links:
W2K8-SQL-SYS2: HB1 00-50-56-AA-06-8C, HB2 00-0C-29-A6-33-19
W2K8-SQL-SYS3: HB1 00-50-56-AA-06-98, HB2 00-0C-29-15-84-40
Cluster IP N/A
Service Group Names SQL_PROD_SG SQL_UAT_SG
VCS Version 6.0.1
HAD Helper Account windom.local\administrator (password: password)
Admin Password N/A (Single Sign On)
Lab Exercise 1L
Lab: Configure VCS (Linux) 10 minutes
Lab Description
This section works through the process to install and configure VCS. In our lab environment, we have already
installed the binaries, so this part is provided for review only and does not need to be performed. To skip past this
section, go to the bottom of page 5 or click here. For the Windows lab exercise you can click here. When installing
the binaries, we used the CPI installation utility that comes with the DVD image of the software. This is initiated by
issuing the following CLI command from the base directory of the image:
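A sketch of a typical invocation (the mount point /mnt/dvd is an assumption; substitute the base directory of your media):
# cd /mnt/dvd
# ./installer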
Installation Process
When the installer program is started, the initial screen provides several choices for how to proceed:
For this lab, we chose to install SFHA (option 4). After accepting the terms and conditions, we are asked what level of
rpms to install:
We chose to install the recommended set of rpms. After this decision, we are asked to input the systems we would like
to install on. We typed in our two cluster nodes (rhel-mq-sys1 and rhel-mq-sys2), separated by a space rather than a
comma or the word "and".
Both systems go through a precheck process to verify that the required patches and prerequisites are met.
After this check succeeds, the installation utility installs each individual rpm on each system defined during the
installer process. When the rpm installation is complete, you are asked to configure the cluster. We chose not to do
this, in order to provide you that experience in this lesson.
To determine if a node meets the prerequisites needed for installation, checklists and utilities are also provided. These
can be accessed by going to https://sort.symantec.com/checklist/install
Lab Exercise
This section goes through a series of exercises to demonstrate the process to configure VCS. Commands are provided
to accomplish this. Along the way, we will explain what the cluster is looking for whenever a decision must be made
during the configuration process.
Configure VCS
This exercise will be performed against our two virtual machines: rhel-mq-sys1 and rhel-mq-sys2. They already have the
SFHA binaries installed. To validate this, run the command:
# rpm -qa | grep VRTS
Notice the packages VRTSvcs and VRTSvxvm/VRTSvxfs. These are the base packages for VCS and Storage Foundation
(Volume Manager and File System).
In our exercise, let’s now begin to configure VCS by running the command:
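A sketch of the invocation, assuming the version-specific script name for the 6.0.2 release installed in this lab (the exact name varies by version):
# /opt/VRTS/install/installsfha602 -configure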
The command name is specific to the installed version, though this is not always the case. When ready to configure the cluster,
Storage Foundation, or Cluster File System for the first time, go to the /opt/VRTS/install directory and determine which utility
you would like to use. Each performs functions specific to the chosen command, but they all call the same backend code for
common functions.
The first thing prompted for is the names of the systems in the cluster. This process will attempt to communicate with
each node through trusted ssh, which in our lab is already set up for you.
As you can see, the next step in the process is to validate that the systems can talk to each other, that the same product is
installed on all nodes, and that the other prechecks have passed.
The first question we are asked is whether we would like to enable I/O fencing. I/O Fencing is a VCS feature that protects against
split-brain by providing data protection and membership arbitration. To set up fencing, we would need 3 SCSI-3
Coordinator Disks or 3 Coordination Point Server instances. Neither would fit in the limited space we have in our
virtual machine environment, so we will answer No to this question even though Yes is the default.
Now we are ready to configure the cluster. We will press Enter to continue past this process.
Each cluster needs to have a unique name. This helps identify which cluster to work on when managing multiple clusters. In our
environment, we suggest you call the cluster oracluster, but you can make up a unique name of your own.
Next we need to determine which heartbeat links connect the cluster systems. You can ask the configuration utility to
automatically detect the links if they have IPs on them, or you can choose option 1 and fill in eth1 and eth2 for both systems
as the heartbeat links.
In our example, we walked through the auto-detect process, but it could not detect the links, so we assigned each interface
individually. We also chose to implement a low-priority heartbeat link. This is a heartbeat link over a public network
that is only used in case both of the private heartbeats are disabled or lose connection. Having multiple independent
heartbeat networks, each running on its own VLAN, prevents a Single Point of Failure, where the loss of a single
component takes down the environment. In our environment, all links are consistent, but the configuration
process can be run across systems where the interfaces do not line up. Finally in this section, the process asks for a
unique cluster ID. It is essential to have an ID no other cluster uses, to avoid conflicts between multiple clusters. If the
auto-generated ID number does not work for you, one can be entered manually as well. The configuration utility will also
check to ensure there is not a conflict if you would like, as seen in the next command:
With this info, the heartbeat configuration process is over. From here we move on to configuring other communication
and notification features such as: Cluster VIP, Secure Mode, VCS users and passwords, SMTP Notification, and SNMP
Notification.
- The Cluster VIP is useful for having an IP associated with the Cluster Service SG. This is where the notification
resource runs if SNMP and/or SMTP notification is configured.
- Secure Mode enables all communication to be encrypted, and NIS/system users are utilized for cluster
administration as well.
- If Secure Mode is enabled, then VCS users are not needed. If not, then the default user is admin with the
default password "password". Symantec recommends changing this for security reasons, but it is not
required. You can also add additional users with different levels of authority (Admin, Operator, Guest) and
restricted levels of control (Group, Cluster). You can configure extra users to see the effect of user access; a
command-line sketch follows.
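For example (a sketch; the user name operator1 is hypothetical), an operator-level user can be added from the command line once the configuration is writable:
# haconf -makerw
# hauser -add operator1 -priv Operator
# haconf -dump -makero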
We have now completed the configuration process and can start working with VCS.
In our test environment, we do not have space for a VOM server, so we will use the Java GUI installed on the
Windows node. To start the GUI, double-click the Java GUI icon on the desktop.
Click on File, and then choose New Cluster to connect to the cluster that was just built. When it asks for a hostname,
you can enter a cluster system's IP address (192.168.1.151 or 192.168.1.152). Finally, to connect, it will ask for your
username/password. Please enter what was set up in the previous cluster configuration exercise.
Windows Cluster Note: The VCS Java GUI is installed as part of the Storage Foundation HA for Windows deployment. You
can also access a standalone instance of the Java GUI from the W2k8-Console Virtual Machine.
To continue the lab on the Linux VMs, move to Lab Exercise 2 or use this LINK to continue.
Lab Exercise 1W
Lab: Configure VCS (Windows) 10 minutes
Lab Description
This section will work through the process of configuring VCS cluster for Windows 2008 R2
1. From the Start Menu, launch the VCS “Cluster Configuration Wizard” and click Next.
3. Select the cluster nodes (W2K8-SQL-SYS2 and W2K8-SQL-SYS3) and add them to the list.
4. Verify that the installation pre-checks have passed and click “Next”
6. Enter the information as shown below (or from the supplemental Appendix), select both available systems and
click “Next”
7. Verify that the nodes have been validated and click “Next”
8. Select “Configure LLT over Ethernet” and then choose the corresponding NICs labeled “HB1” and “HB2” for each
host (as defined in Section 1). Click “Next”
9. Choose “Existing User” and select Administrator from the drop-down menu. When prompted to authenticate the
user, enter password and click “OK”, then click “Next”
10. Choose the “Use Single Sign On” option for OS cluster authentication. Selecting “Use VCS User Privileges” will
require separate authentication each time you access the cluster. Click “Next”
11. The configuration wizard will now establish the SSO authentication for the cluster
12. The VCS Configuration Wizard will connect to the cluster using the SSO credential
13. For the purposes of this lab leave the Notifier and GCO options unchecked. Click “Finish”
The VCS configuration is now complete. You may verify cluster membership by running the following command:
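A sketch, assuming the VCS binaries are in the PATH; either command below reflects cluster membership:
C:\> lltstat -nvv active
C:\> hasys -state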
14. Log into the cluster using the VCS Cluster Manager Java Console icon on the Desktop. Select “Connect to
localhost”, observe the authentication window, and click “Ok”
If the Connect to localhost option is not available, simply click the plus sign at the top of the Cluster Monitor and
enter localhost in the Host name field and click Ok.
15. The Java Console will prompt you to create a new Service Group; choose “No”
16. You can exit the Java GUI by selecting File > Logout. For the purposes of this lab, however, leave the Java GUI
open in the background so that you can see the Service Groups being auto-generated in the next section.
Lab Exercise 2
Lab: Review the currently installed application (Oracle on Linux, SQL on Windows) 5 minutes
Feature Description
This section describes the process to ensure your application is online.
Run the following commands to begin the validation process:
# /var/tmp/oracle_setup.sh
# su - oracle
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup
SQL> exit
Once your Oracle instance is active, you may proceed with the lab.
Oracle
As we have said previously, the exercises performed in the examples are based on a customer with a pre-existing
database wanting to cluster a database instance. We will run through a series of commands to validate that your operating
system is correctly set up, that your storage is configured, and that your database is online and working before we begin to
cluster the application. Let's begin by logging in as the Oracle user; from the command line, run:
# su - oracle
This treats us as if we logged into the system as the oracle user. Now we can validate whether the database is online.
Since the select statement worked, the database is online and active.
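A minimal check of this kind (a sketch; the exact query in the lab's screenshot may differ) is:
$ sqlplus / as sysdba
SQL> select status from v$instance;
A STATUS of OPEN confirms the instance is up.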
Network
Once we have validated that the Oracle database and the listener are online, we can ensure that the network is set up
correctly. We will run the ifconfig -a command to see what is currently configured:
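Abbreviated output should resemble the following sketch, with the database virtual IP plumbed as an alias (addresses taken from the Pre-Lab tables):
eth0    inet addr:192.168.1.152  Mask:255.255.255.0
eth0:0  inet addr:192.168.1.160  Mask:255.255.255.0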
We can see both the system IP and the Oracle database virtual IP are listed.
Filesystems
To determine if the file systems are in place, we will run the # df -k command. This command displays the currently mounted
filesystems on the operating system. Running the command should look like the example below:
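A sketch of the relevant line (device and mount point taken from the Pre-Lab tables; sizes will vary):
Filesystem                            Mounted on
/dev/vx/dsk/oradata_dg/oradata_vol    /oracle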
SQL Server
With the release of SFWHA 6.0, Symantec introduced the concept of Multi-Node Disk Group Access. This
feature allows the concurrent import of a Cluster Dynamic Disk Group, with a single node retaining Read/Write access
while all remaining cluster nodes maintain Read-Only access. When a failover is initiated, the VMDg agent simply alters the
Read/Write and Read-Only flags, as opposed to having to fully deport and import the disk group. This substantially reduces
the failover time for VMDg resources. VCS for Windows 6.0 has also added support for IMF (the Intelligent Monitoring Framework).
The following steps ensure that the environment you are using is properly set up to cluster SQL Server:
1. Open the W2K8-SQL-SYS2 Console and double-click on “My Computer.” Review and explore the volumes
available on W2K8-SQL-SYS2
2. Double click on PROD_MNT (P:) to explore the sub directories and mounts.
If the drive partitions are not available, please execute the following commands:
C:\> vxdg -g PROD import -s (PROD Instance)
C:\> vxdg -g UAT import -s (UAT Instance)
3. Switch to the W2K8-SQL-SYS3 console and double-click “My Computer.” Review and explore the volumes
available on W2K8-SQL-SYS3
Note the differences in the partition layouts. The PROD instance of SQL is using empty folder paths (mount
points), while the UAT instance has individual drive-letter mounts. This is important to understand, as it is
required when deploying active/active or multi-instance SQL clusters.
4. SQL Configuration
View the installed instances of SQL by clicking on SQL Server Configuration Manager from the Start Menu on
each node. Do not, however, start the services.
Please note the services are all set to Manual start. This is required for VCS to manage the SQL instances.
(This was established as part of the initial SQL Server installation.)
5. Review the network configuration by opening a DOS prompt on either node and executing the following
command:
c:\> getmac -v
Note the names of the interfaces have been labeled “Public” and “HBn” (Heartbeat). All VCS internode
communication will be handled via the HB interfaces, while the SQL Server configuration will use the Public
interface.
To continue with Linux VM testing, go to page 26 (next page) or click on this LINK (Linux)
To continue with Windows VM testing, go to page 37 or click on this LINK (Windows)
Lab Exercise 3L
Lab: Configure the application within the cluster (Linux) 20 minutes
Feature Description
This section will describe what is needed to configure the application within the cluster. We will provide the commands
needed to implement it, as well as a description of each step along the way. For those students following the Windows
track, you may click here to skip to that section of the document.
In a cluster environment there are two options for installing application binaries: local binaries and shared binaries.
With local binaries, every cluster node has its own copy of the binaries. This is good for availability,
since a binary corruption on one node would not prevent the application from running on the separate binaries of
another cluster node. The negative is that all maintenance to the binaries (patching) must be performed
multiple times, since there are multiple copies of the binaries.
With shared binaries, every cluster node uses the same binaries. They are installed on shared storage, which
requires the cluster to migrate the storage before attempting to start the application. Having to install patches
only once is a positive in this configuration. The flip side is that a binary issue would leave all
nodes unable to bring the application online.
3) A database was created with the data placed in the /oracle file system. It was configured with the failover
virtual IP address (VIP) for communication. For instances that do not have this, depending on the app, the fix could
be as easy as changing the listener configuration file to point to the new IP address or as difficult as some
app reconfiguration. Most enterprise applications can easily be adjusted for this change; it just needs to be
noted and properly configured to ensure successful failover. If the current node's IP address is being used, it
would work just fine until the cluster attempts to start the application on the failover node. At that point, the app may
start, but other applications could not communicate with it.
Cluster – A collection of systems working together for increased application availability. There are other
computer functional areas that use the name cluster. They have to do with Availability,
Performance and Multiprocessing. VCS uses the clustering term in the context of availability.
System or Node – The term for a cluster member. It could be running one or multiple applications, or it could serve
as a standby location for an application to fail over to. There are multiple configuration types that
we will not discuss in this lab.
Service Group – This is a collection of services or resources, which will be defined next. The Service Group
contains all the components an application needs in order to function independently. These
components fall into different categories, such as infrastructure (network, storage), depending on
the application. Each Service Group will have multiple resources to make an application work.
For example, in our Oracle Database configuration we will configure 7 resources. Additionally,
to ensure the application starts up correctly, each resource is started in a specific order.
Resource – This is the term for a hardware or software service, or a scripted function, needed to start up an
application. It is a component of the application that can be uniquely controlled. For each
resource we need a way to start, stop, and monitor the component. Resources are
treated as individual objects, with the entire application being controlled by the service group.
When VCS detects a fault in a resource, the action taken is individual and unique to that
configured resource. Depending on defined attributes, we can choose to restart, to fault the
resource, or to do nothing when a resource fails.
Attribute – This is the term for the settings associated with an object to affect behavior. An attribute can
be global, as in settings for the cluster; local, as in settings for a resource; or localized, as in
settings for a system. There are attributes for service groups, resources, systems, the cluster,
and agents. Most attributes have default values. Where there is no default value, if the attribute
is not supplied then the cluster will not work properly and may not start up. In our configuration
we need to configure the device to be used for the Oracle NIC. Device is an attribute of the
oranic resource. Without it, that resource would not know which system device to monitor.
Agents – An agent is a script or binary used to control a specific service. VCS comes with hundreds of
agents, included in the product or downloadable from the Internet. The Oracle agent, for example,
is installed by default. It is comprised of scripts that can control the database and
the database listener. Agents are what bring the database online and offline when the
cluster takes action.
Dependencies – Dependencies are the way to link two resources together within a service group. This is necessary
to ensure that one resource starts before another. In our terminology, the object on top is the
parent and the object on the bottom is the child; a dependency links the parent to the child.
Service groups can be linked in the same parent/child fashion.
Configuration files – There are multiple files needed to run VCS. The cluster configuration files live in the directory:
/opt/VRTSvcs/conf/config
The primary VCS configuration files are main.cf and types.cf. When service groups or resources
are created through the GUI, that information is captured and stored within the VCS
configuration files. This is also the case when attributes are changed.
Lab Exercises
This section will go through a series of exercises to demonstrate how to take an Oracle database that is
already running and configure it into a VCS cluster environment. Commands are provided to implement the feature.
We will also expand on this feature.
This information is needed to configure our Oracle service group. Each piece of information corresponds to an attribute
of one of the resources we will configure. This information can be gathered from the tables on page 2; refer back to
them as we explain how to configure each resource.
Generate an Oracle service group for VCS from the command line:
The easiest way to configure a VCS cluster is through the GUI. In our lab we have the Java GUI loaded on both Red Hat
virtual machines. In the next section we will discuss how to configure an Oracle service group through the GUI, but this
section contains a brief description of how to do so from the command line. Those familiar with the command line
can be very functional within VCS. The commands are consistent and have been the same since VCS was introduced. A
cheat sheet is available as an appendix to this document. Please also note that although this example speaks to Linux,
the syntax, caveats, and procedures are the same for a Windows VCS environment.
There are two ways to generate a new configuration from the command line: edit the main.cf, or run commands that
edit the configuration. The first way is to copy a sample main.cf file, edit it to include the specifics for your resources,
and then copy it into place as the main.cf. The issue with this is that in order to edit the main.cf file, the cluster has to be
down. If the main.cf file is edited while the cluster is online, those changes do not take effect until the cluster is started
again, and if someone in the meantime runs commands to edit an attribute, add a new resource, or otherwise modify the
configuration, then when those changes are saved they will overwrite the current main.cf file and erase any edits made
there. The cluster can be brought offline while applications continue to run, but starting VCS with a new configuration is
generally more risk than most enterprises are willing to take. This works just fine for the first service group to be
created, but for subsequent service groups it puts all online service groups at risk.
The second way to generate a new configuration is to run commands to build it while the cluster is online. This can be
complicated and is not something for new users, but there are methods to help in that regard. With the first option,
we edited the main.cf to include the configuration. We can take that configuration and generate usable commands that
add the additional resources and service groups to the current online cluster. This way we don't have to worry about
syntax and can validate formatting before putting it in place. The methodology is relatively straightforward. In
the /opt/VRTSvcs/conf/ directory, a set of sample configuration files have their own directories. Simply cd into the
directory associated with your application, copy the main.cf file as a backup, and edit the main.cf file to
include the changes to be made. The next step is to copy all of the files ending in .cf from the config directory
into the sample directory. When finished editing the new service group and resources, run the following command to
validate that they are in the correct format and nothing is missing:
# /opt/VRTSvcs/bin/hacf -verify .
If the previous command comes back without a response, then all syntax is in order. To generate a script file from the
configuration file run the command:
# /opt/VRTSvcs/bin/hacf -cftocmd .
Now you should see not only a main.cf file but also a main.cmd file. The main.cmd file contains all commands necessary
to form a cluster and includes cluster information, service group information, resource information and dependencies.
The very beginning of the file contains information on each type of resource the cluster can configure. All of the
information on service groups, resources, and dependencies is located at the end of the file. To get to the commands
you are interested in, go to the very last page of the file and scroll back until you see the systems defined. You can
take the commands directly from the end of the file and run them on the command line to introduce new
resources, service groups, and dependencies to an already running cluster.
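As a sketch of what such commands look like (values taken from the Oracle service group main.cf in the appendix; the exact set of hares -modify calls depends on the resource type):
# haconf -makerw
# hagrp -add orasg
# hagrp -modify orasg SystemList rhel-mq-sys2 0 rhel-mq-sys1 1
# hagrp -modify orasg AutoStartList rhel-mq-sys2 rhel-mq-sys1
# hares -add oradg DiskGroup orasg
# hares -modify oradg DiskGroup oradata_dg
# hares -modify oradg Enabled 1
# haconf -dump -makero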
The first thing we need to do is open the configuration. If the configuration is not open, we are unable to change
the currently running cluster configuration. To open the configuration, we first click on the File menu at the top left
of the window. We then select the first option, Open Configuration.
Once the configuration is open, we can proceed with adding a service group and resources. To add a service group,
we select the Edit drop-down menu and choose Add. We then have the choice of Service Group, Resource,
or System. Since there is no service group defined, we will add a service group.
After selecting service group an additional window pops up to configure the service group.
In this window we type in the service group name, select the systems the service group can run on, and designate which
systems it starts on. We also choose whether the service group is failover or parallel. Once complete, press OK.
At this point our service group is created and we can begin the creation process for our resources. We go through the
same process as before: in the Edit drop-down menu, select Add, but this time choose Resource.
The Add Resource menu is completely different from the Add Service Group menu. This menu does not ask for systems; it
has all of the possible attributes that can be set on a resource, depending on the type. For our example, let's create the
Oracle disk group (oradg) resource. We will first type oradg in the resource name field, then select DiskGroup as
the resource type.
Since the DiskGroup resource only requires that we set the DiskGroup attribute, we will click on the icon under the edit
column next to that attribute. This will generate a pop-up that allows us to edit the attribute.
When finished, press the OK button and the Edit Attribute screen will be closed. As you can see, there are other
attributes that can be modified for this resource, and they are filled in with default values. You can modify these values if
you'd like, but we would advise against it.
There is one more thing to do before we finish with this resource. Before clicking OK, we need to check the Enabled
box on the resource. The Enabled flag needs to be set or VCS will not be able to monitor the resource. For resources
that are pre-created but not yet available, the Enabled flag can be set when the resources become available.
After creating the disk group resource we will now create the volume resource. This resource requires two attributes: the
disk group name and the volume name. For brevity's sake, we have filled that information in already.
Now that we have two resources, the next step is to link them with resource dependencies. We will start
by clicking the Link button above the resources. After clicking the Link button, click on the resource that
will be the parent resource; another way to think about this is the resource that will be on top in the tree, or that gets
started second.
Next, drag your mouse over to the resource that will be the child resource, that is, the resource that starts up first. While you
are dragging, you will notice that a yellow line follows from the first resource clicked to where your mouse is. To
complete the link, click on the child resource.
A confirmation dialog box will appear. Click Yes to complete the link.
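The command-line equivalent is a single command (see the cheat sheet in the appendix). For the two resources above, with oravol as parent and oradg as child:
# hares -link oravol oradg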
You will now see the two resources linked together in a dependency tree. We will show one more challenging resource
and then allow you to work at your own pace to complete the resource tree.
The Mount resource requires multiple attributes to be functional. You'll notice bolded print for each item that is
required for the resource to be added. For each attribute, click on the edit button to the right, open the dialog,
enter the attribute, and press OK. An additional thing to note: if you press Show Command at the bottom left, you
can see all of the commands used to generate that resource, which may be useful if you're going to do this multiple
times, as it is faster on the command line. When finished, press OK. Continue on through the rest of the resources, as
well as linking the resources together so they properly start up and shut down.
Here is the final resource view that your service group should have when you’re complete:
Also, do not forget to close the configuration when you are done. Closing the configuration both saves the
configuration files on all nodes in the cluster and stops further edits from happening.
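The command-line equivalent of closing and saving the configuration (a sketch, using the same commands as earlier) is:
# haconf -dump -makero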
To continue with Linux VM testing, go to page 51 or click on this LINK (Linux)
Lab Exercise 3W
Lab: Configure the application within the cluster (Windows) 20 minutes
Feature Description
This section describes what is needed to configure SQL Server instance failover within a VCS cluster. The lab
consists of using the SQL Server 2008 Configuration Wizard. CLI syntax, however, can be extracted from the sample main.cf
at the end of this document.
1) The Symantec SFWHA software was installed. This step was completed ahead of time to remove the SFWHA
binary installation and deployment from the lab, reducing the time needed to go through these exercises.
2) The SQL Server software was also installed and 2 named instances were created. This means that Storage
Foundation was configured prior to the lab setup to enable the file systems needed for failover within the cluster.
Depending on which instance you are referencing, a number of partitions were created to support the various
user and system database objects. In a cluster environment there are two options for installing software: local
binaries and shared binaries.
1. From the Start Menu, launch the VCS “Cluster Configuration Wizard” and click Next.
2. The Wizard will guide you through the steps necessary to configure the service group and the resource dependencies for
supporting SQL Server 2008. Read through the prerequisites prior to proceeding. Note the requirement for the
SQL Server system data files and SCSI-3 reservations and click “Next”
3. With a new VCS installation there will be only one option available, “Create service group”
4. Enter the Service Group name as shown below. Add both systems from the available list and leave the
AutoStartList attribute checked then click “Next”
5. The wizard automatically detects all defined SQL Instances. Select only the “INST_PROD” instance and expand
the list so that you can select the “SQLAgent” option. Click “Next”
6. Review the locations for the master, model, msdb and temp system components then click “Next”
7. Leave the “Configure SQL Server Cluster Account” unchecked as the SQL Service account already possesses the
requisite permissions.
8. Leave INST_PROD unchecked. Detail Monitoring is not in scope for the purposes of this lab.
9. Select P:\PROD_REG from the drop down menu to establish this volume as the location for all SQL Registry
updates necessary for instance failover.
10. Enter the network information as shown below or from the VCS appendix. At this point you can choose to
assign the Virtual Computer Object to a specific location within Active Directory by clicking “Advanced Settings.”
Note the Adapter Display Name as defined from the previous section.
12. You can optionally rename the Service Group and its constituent resources. For the purposes of the lab leave all
resource names as shown. Check the box for “Enable Fast Failover” and Click “Next.”
13. You will then be prompted as to whether you want the wizard to modify the configuration. Click “Yes”
14. At this point you can use the VCS Java GUI to view the updates being made to the cluster. Click on the
“SQL_PROD_SG” in the left panel and then select the “Resources” tab in the center.
15. Switch back to the configuration wizard to activate all SQL resources by checking “Bring the service group online”
and clicking “Finish”
16. The Java GUI will now reflect that all the necessary SQL resources have been brought online.
17. Verify you can successfully failover the instance by right-clicking on the “SQL_PROD_SG” and selecting “Switch
To” w2k8-sql-sys3. Note the down arrows indicating the resource is being taken offline. Repeat this step and
bring the service back to w2k8-sql-sys2.
1. From the w2k8-sql-sys2 console launch the SQL Server Management Studio shortcut on the desktop.
2. If not already populated, enter the SQL connection details as shown below and click “Connect”. Take note of
the Server name “w2k8-sql-vsys\inst_prod”, the virtual computer object configured previously by the
configuration wizard.
3. Exit the SQL Management Studio by selecting File > Exit. You can optionally repeat this step after switching the
service group to w2k8-sql-sys3 as previously shown.
One of the additional features added to VCS for Windows in version 6.0 was the incorporation of IMF, the
Intelligent Monitoring Framework. Conventional VCS monitoring on Windows had previously been exclusively
poll-based intervals that executed in user space. The IMF component moves the agent framework
monitoring into the kernel, and as such reduces the aggregate resources consumed by VCS but, more importantly,
enables instantaneous or real-time fault detection. This, combined with the Multi-Node Disk Group Access
outlined in section 3, serves to create an extremely robust and efficient HA/DR architecture for SQL Server.
2. Located on the Desktop, access the folder titled Scripts. Contained in the folder are two batch command files.
Ensure that you have the Java GUI open and are viewing the resources for the SQL Service Group
SQL_SERVER_SG.
3. While watching the Java GUI double click the kill_inst_prod script to simulate a SQL crash.
4. Note the time it takes for VCS to detect the SQL crash and failover the service group. You can optionally use the
included StopWatch Application if you wish to record the failover time exactly.
5. Note the faulted resource and automatic failover of the remaining active resources.
6. You must now clear the resource fault before returning the service to w2k8-sql-sys2. Right-click on the
SQL_SERVER_SG service group in the Java GUI and select Clear Fault > W2K8-SQL-SYS2
7. You may now switch the service group back to w2k8-sql-sys2 by right-clicking on the service group and
selecting Switch To > W2K8-SQL-SYS2
3. Follow the same procedure as for the INST_PROD configuration, replacing the relevant information with that of
the INST_UAT instance. Begin by launching the SQL Configuration Wizard from W2K8-SQL-SYS3. All
configuration details are outlined in the cluster and host settings table.
4. All available test procedures can be executed against the UAT instance.
Lab Exercise 4L
Lab: Control the application 20 minutes
Feature Description
This section will assist in helping users understand how to control the service group and resources.
cluster oracluster (
UserNames = { admin = aPQiPKpMQlQQoYQkPN }
Administrators = { admin }
)
system rhel-mq-sys1 (
)
system rhel-mq-sys2 (
)
group orasg (
SystemList = { rhel-mq-sys2 = 0, rhel-mq-sys1 = 1 }
AutoStartList = { rhel-mq-sys2, rhel-mq-sys1 }
)
DiskGroup oradg (
DiskGroup = oradata_dg
)
IP oraip (
Device = eth0
Address = "192.168.1.160"
NetMask = "255.255.255.0"
)
Mount oramount (
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/oradata_dg/oradata_vol"
FSType = vxfs
FsckOpt = "-y"
)
NIC oranic (
Device = eth0
)
Netlsnr oralsnr (
Owner = oracle
Home = "/oraclebin/product/11.2.0/dbhome_2"
TnsAdmin = "/oraclebin/product/11.2.0/dbhome_2/network/admin"
)
Oracle oradb (
Sid = orcl
Owner = oracle
Home = "/oraclebin/product/11.2.0/dbhome_2"
Pfile = "/oraclebin/product/11.2.0/dbhome_2/dbs/spfileorcl.ora"
)
Volume oravol (
DiskGroup = oradata_dg
Volume = oradata_vol
)
cluster VCS101_SQL_LAB (
SecureClus = 1
)
system W2K8-SQL-SYS2 (
)
system W2K8-SQL-SYS3 (
)
group SQL_SERVER_SG (
SystemList = { W2K8-SQL-SYS2 = 0, W2K8-SQL-SYS3 = 1 }
AutoStartList = { W2K8-SQL-SYS2, W2K8-SQL-SYS3 }
)
GenericService SQLServerAgent-INST_PROD (
Critical = 0
ServiceName = "SQLAgent$INST_PROD"
UseVirtualName = 1
LanmanResName = SQL_SERVER_SG-Lanman
)
IP SQL_SERVER_SG-IP (
Address = "192.168.1.178"
SubNetMask = "255.255.255.0"
MACAddress @W2K8-SQL-SYS2 = 00-50-56-AA-06-8B
MACAddress @W2K8-SQL-SYS3 = 00-50-56-AA-06-97
)
Lanman SQL_SERVER_SG-Lanman (
VirtualName = W2K8-SQL-VSYS
IPResName = SQL_SERVER_SG-IP
DNSUpdateRequired = 1
ADUpdateRequired = 1
DNSCriticalForOnline = 1
ADCriticalForOnline = 1
)
MountV SQL_SERVER_SG-MountV (
MountPath = "P:"
VolumeName = PROD_MNT
VMDGResName = SQL_SERVER_SG-VMDg
)
MountV SQL_SERVER_SG-MountV-1 (
MountPath = "P:\\PROD_DB"
VolumeName = PROD_DB
VMDGResName = SQL_SERVER_SG-VMDg
)
MountV SQL_SERVER_SG-MountV-2 (
MountPath = "P:\\PROD_Data"
VolumeName = PROD_DATA
VMDGResName = SQL_SERVER_SG-VMDg
)
MountV SQL_SERVER_SG-MountV-3 (
MountPath = "P:\\PROD_LOG"
VolumeName = PROD_LOG
VMDGResName = SQL_SERVER_SG-VMDg
)
MountV SQL_SERVER_SG-MountV-4 (
MountPath = "P:\\PROD_REG"
VolumeName = PROD_REG
VMDGResName = SQL_SERVER_SG-VMDg
)
NIC SQL_SERVER_SG-NIC (
MACAddress @W2K8-SQL-SYS2 = 00-50-56-AA-06-8B
MACAddress @W2K8-SQL-SYS3 = 00-50-56-AA-06-97
)
RegRep SQL_SERVER_SG-RegRep-MSSQL (
MountResName = SQL_SERVER_SG-MountV-4
ReplicationDirectory = "\\RegRep\\SQL_SERVER_SG-RegRep-MSSQL"
Keys = {
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_PROD\\MSSQLServer" = "SaveRestoreFile:SQL_SERVER_SG-RegRep-
MSSQL_MSSQLServer.reg",
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_PROD\\PROVIDERS" = "SaveRestoreFile:SQL_SERVER_SG-RegRep-
MSSQL_PROVIDERS.reg",
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_PROD\\Replication" = "SaveRestoreFile:SQL_SERVER_SG-RegRep-
MSSQL_Replication.reg",
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_PROD\\SQLServerAgent" = "SaveRestoreFile:SQL_SERVER_SG-RegRep-
MSSQL_SQLServerAgent.reg",
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_PROD\\SQLServerSCP" = "SaveRestoreFile:SQL_SERVER_SG-RegRep-
MSSQL_SQLServerSCP.reg" }
ExcludeKeys = {
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_PROD\\MSSQLServer\\CurrentVersion" }
)
SQLServer2008 SQLServer2008-INST_PROD (
Instance = INST_PROD
LanmanResName = SQL_SERVER_SG-Lanman
)
VMDg SQL_SERVER_SG-VMDg (
DiskGroupName = PROD
DGGuid = 36df99ee-39c1-4b89-9547-b60d67a34c19
FastFailOver = 1
)
SQLServer2008-INST_PROD requires SQL_SERVER_SG-MountV-1
SQLServer2008-INST_PROD requires SQL_SERVER_SG-MountV
SQLServer2008-INST_PROD requires SQL_SERVER_SG-MountV-3
SQLServer2008-INST_PROD requires SQL_SERVER_SG-RegRep-MSSQL
group SQL_SERVER_UAT_SG (
SystemList = { W2K8-SQL-SYS2 = 0, W2K8-SQL-SYS3 = 1 }
AutoStartList = { W2K8-SQL-SYS2, W2K8-SQL-SYS3 }
)
GenericService SQLServerAgent-INST_UAT (
ServiceName = "SQLAgent$INST_UAT"
UseVirtualName = 1
LanmanResName = SQL_SERVER_UAT_SG-Lanman
)
IP SQL_SERVER_UAT_SG-IP (
Address = "192.168.1.179"
SubNetMask = "255.255.255.0"
MACAddress @W2K8-SQL-SYS2 = 00-50-56-AA-06-8B
MACAddress @W2K8-SQL-SYS3 = 00-50-56-AA-06-97
)
Lanman SQL_SERVER_UAT_SG-Lanman (
VirtualName = W2K8_SQL_UAT_VS
IPResName = SQL_SERVER_UAT_SG-IP
DNSUpdateRequired = 1
ADUpdateRequired = 1
DNSCriticalForOnline = 1
ADCriticalForOnline = 1
)
MountV SQL_SERVER_UAT_SG-MountV (
MountPath = "E:"
VolumeName = Volume1
VMDGResName = SQL_SERVER_UAT_SG-VMDg
)
MountV SQL_SERVER_UAT_SG-MountV-1 (
MountPath = "F:"
VolumeName = Volume3
VMDGResName = SQL_SERVER_UAT_SG-VMDg
)
MountV SQL_SERVER_UAT_SG-MountV-2 (
MountPath = "G:"
VolumeName = Volume4
VMDGResName = SQL_SERVER_UAT_SG-VMDg
)
MountV SQL_SERVER_UAT_SG-MountV-3 (
MountPath = "R:"
VolumeName = Volume2
VMDGResName = SQL_SERVER_UAT_SG-VMDg
)
NIC SQL_SERVER_UAT_SG-NIC (
MACAddress @W2K8-SQL-SYS2 = 00-50-56-AA-06-8B
MACAddress @W2K8-SQL-SYS3 = 00-50-56-AA-06-97
)
RegRep SQL_SERVER_UAT_SG-RegRep-MSSQL (
MountResName = SQL_SERVER_UAT_SG-MountV-3
ReplicationDirectory = "\\RegRep\\SQL_SERVER_UAT_SG-RegRep-MSSQL"
Keys = {
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_UAT\\MSSQLServer" = "SaveRestoreFile:SQL_SERVER_UAT_SG-RegRep-
MSSQL_MSSQLServer.reg",
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_UAT\\PROVIDERS" = "SaveRestoreFile:SQL_SERVER_UAT_SG-RegRep-
MSSQL_PROVIDERS.reg",
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_UAT\\Replication" = "SaveRestoreFile:SQL_SERVER_UAT_SG-RegRep-
MSSQL_Replication.reg",
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_UAT\\SQLServerAgent" = "SaveRestoreFile:SQL_SERVER_UAT_SG-RegRep-
MSSQL_SQLServerAgent.reg",
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_UAT\\SQLServerSCP" = "SaveRestoreFile:SQL_SERVER_UAT_SG-RegRep-
MSSQL_SQLServerSCP.reg" }
ExcludeKeys = {
"HKLM\\SOFTWARE\\Microsoft\\Microsoft SQL
Server\\MSSQL10.INST_UAT\\MSSQLServer\\CurrentVersion" }
)
SQLServer2008 SQLServer2008-INST_UAT (
Instance = INST_UAT
LanmanResName = SQL_SERVER_UAT_SG-Lanman
)
VMDg SQL_SERVER_UAT_SG-VMDg (
DiskGroupName = UAT
DGGuid = 4eceffc6-4b82-4506-9cdf-95a722156097
FastFailOver = 1
)
System Operations
List systems in the cluster. # hasys -list
Get detailed information about each system. # hasys -display [system_name]
Add a system (also increase the system count in the GAB startup script). # hasys -add system_name
Delete a system. # hasys -delete system_name
Resource Types
List resource types. # hatype -list
Get detailed information about a resource type # hatype -display [type_name]
List all resources of a particular Type. # hatype -resources type_name
Add a resource type. # hatype -add resource_type
Set the value of static attributes. # hatype -modify ...
Delete a resource type. # hatype -delete resource_type
Resource Operations
List all resources. # hares -list
List a resource’s dependencies. # hares -dep [resource_name]
Get detailed information about a resource. # hares -display [resource_name]
Add a resource. # hares -add resource_name resource_type service_group
Modify the attributes of a resource. # hares -modify resource_name attribute_name value
Delete a resource. # hares -delete resource_name
Online a resource. # hares -online resource_name -sys system_name
Offline a resource. # hares -offline resource_name -sys system_name
Cause a resource’s agent to immediately monitor the resource on a specific system. # hares -probe resource_name -sys system_name
Clear a faulted resource. # hares -clear resource_name [-sys system_name]
Make a resource’s attribute value local. # hares -local resource_name attribute_name
Make a resource’s attribute value global. # hares -global resource_name attribute_name value
Make a dependency between two resources. # hares -link parent_res child_res
Remove the dependency relationship between two resources. # hares -unlink parent_res child_res
VCS Procedures
VCS Directory Structure
                 UNIX/Linux                   Windows
Binaries         /opt/VRTSvcs/bin             C:\Program Files\Veritas\cluster server\bin
Configuration    /etc/VRTSvcs/conf/config     C:\Program Files\Veritas\cluster server\conf\config
Logs             /var/VRTSvcs/log             C:\Program Files\Veritas\cluster server\log
To check cluster status, run:
# hastatus -sum
# hastatus
or check the /var/VRTSvcs/log/engine_A.log file.
To clear a faulted resource, first determine the reason for the fault from the log files and messages files, then run the command:
# hares -clear <RESOURCE>
hastart/hastop options
If the cluster goes down, hastart has to be run on each node. If you reboot a node, VCS will be started upon boot.
hastop has two primary options (-local or -all).
When stopping the cluster, you have to consider whether just the local system within
the cluster or the entire cluster needs to have VCS stopped. The "hastop -all -force"
command will stop VCS on all nodes in the cluster but will not stop the resources. This
allows VCS to be shut down without affecting the applications that VCS is configured
to manage.
During a reboot, the application will be brought offline and the system rebooted. The rebooting
system executes the K10vcs rc script, which contains:
$HASTOP -sysoffline
The "evacuate" option initiates the service group failover. When the system comes back online,
the service group should be located on a different system in the cluster.
The system will prompt you for a password. If none is entered, then the user has read-only
permissions. If the added user needs more than guest permissions, run the commands (using
Administrators or Operators as appropriate):
# haclus -modify Administrators -add <username>
# hagrp -modify <grpname> Administrators -add <username>