Front cover

PowerHA SystemMirror 6.1


Planning, Implementation and
Administration

(Course code AN41)

Lab Setup Guide


ERC 1.3

Trademarks
IBM® and the IBM logo are registered trademarks of International Business Machines
Corporation.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide:
DB2®, HACMP™, System i™, System p™, System x™, System z™
Windows is a trademark of Microsoft Corporation in the United States, other countries, or
both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.

February 2012 edition


The information contained in this document has not been submitted to any formal IBM test and is distributed on an “as is” basis without
any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer
responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While
each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will
result elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

© Copyright International Business Machines Corporation 2009, 2012.


This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions
set forth in GSA ADP Schedule Contract with IBM Corp.

Contents
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Setup instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5


Purpose
This Lab Setup Guide provides directions for installing, preparing, and verifying the lab
hardware and software in preparation for conducting a class of course AN41 PowerHA
SystemMirror 6.1 Planning, Implementation and Administration.
The Requirements section of this document may also be used to determine the specific
hardware and software needed to conduct a class.


Requirements
The following tables list the hardware, software, and other materials needed to set up a lab
to conduct a class of course AN41 PowerHA SystemMirror 6.1 Planning, Implementation
and Administration.

Hardware requirements
Table 1 lists the hardware needed for one student lab set. When preparing for a class, you
must determine the number of systems needed, which depends on how many teams will
share the same managed system. A ratio of two students per team is acceptable.

Table 1: Hardware for one student lab set


Platform use: MOP lab site
Machine type: POWER5 or POWER6 processor-based system, firmware compatible with HMC 7.3.5
Machine model: p5-520, p550, or p570
Features: Each system must have:
• Minimum of 1.6 processors. Each additional team on the same managed system requires
  1.2 processors. (Three teams on a managed system require 4 processors.)
• Minimum of 3 GB memory. Each additional team on the same managed system requires
  2.25 GB. (Three teams on a managed system require 7680 MB, or 7.5 GB.)
• One real 70 GB hdisk (or two if the LPAR rootvgs are placed on a second hdisk).
• Three additional 10 GB (or larger) real shared disks for each team on the same managed
  system.
• One real Ethernet adapter.
• PowerVM feature activated for VIO usage.

Platform use: MOP lab site
Machine type: HMC for the POWER5 or POWER6 managed system
Machine model: 7310-CR3 or 7310-C04
Features: Tested with Version 7.3.5 + MH01195, MH01197

Platform use: MOP lab site
Machine type: VIO Server
Features: VIOS 1.5 or higher


Software requirements
Table 2 lists the software needed to prepare the student and/or instructor lab set(s). When
preparing for a class, be sure you have the correct number of licensed copies of any
non-IBM software.

Table 2: Software for one student lab set


Platform use: MOP
Hardware: POWER5/6/7 system
Operating system: AIX
Version: mksysb equivalent to mk_an41_erc13_lpar_61tl7sp2 in the CLP. The image has
AIX 6.1 TL7 SP2 installed.

Image customization
Post-restore tasks required to customize the image in Table 2 for this version of the course
are listed in "AIX partition setup instructions" on page 15. All content necessary to conduct
the course is in the image noted in the table above.

IP address requirements
Each student lab setup requires the use of five IP addresses on the external network: HMC,
VIOS, LPAR1, LPAR2, and LPAR3. A sixth address may be required if the connection from
the HMC to the managed system is on a public network. All IP addresses and hostnames
should be available for resolution from a DNS server, since the HMC does not permit the
use of a local /etc/hosts file for name resolution.
• One IP address/hostname is required for the HMC system (not needed if
• One IP address/hostname is required for the Virtual I/O Server (VIOS) partition.
• Three IP addresses/hostnames are required, one for each of the three AIX partitions:
node1, node2, and node3.
• The /etc/hosts file in the image must not be overwritten.
• Add the hostname/IP address resolution for the en0 (login) interface of both nodes in a
student team to the /etc/hosts file, and create an alias called node1 for the login address
of node1 and an alias called node2 for the login address of node2 (see the example below).
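The following sketch shows what those /etc/hosts additions might look like; the addresses
and fully qualified hostnames are placeholders, not values from this guide:

# Added on both nodes of one student team (example values only)
10.31.88.141   team1lpar1.example.com   node1    # en0 login address of node1
10.31.88.142   team1lpar2.example.com   node2    # en0 login address of node2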


Instructor information


The instructor for the course must be provided with the following information:
• IP address, netmask, and hostname information for all the student AIX LPARs,
associated VIOS LPARs, and associated HMCs.
• Gateway IP address to access the Power systems.
• Passwords that are configured for:
- hscroot userid on each HMC
(suggest abc1234)
- padmin userid on each Virtual I/O Server partition
(suggest viosrv)
- root userid on the AIX partitions
(suggest ibmaix)

Skills required to set up the lab


The following specialized skills are required to set up the lab:
Advanced AIX and POWER system administration skills are needed, in particular partition
configuration and activation with an HMC, and the use and setup of a NIM environment.
This lab setup guide does not explain how to install or configure a NIM server; however, it
provides hints that may be of use when using a NIM server to install the lab systems.


Setup instructions

Configuration information

AN41 lab environment for one PowerHA cluster


[Figure: One IBM Power Systems managed system hosts the VIOS partition and the AIX
LPARs node1, node2, and node3. The VIOS owns two hdisks (36 GB minimum) holding the
VIOS rootvg and the AIX LPAR rootvgs, plus three hdisks that are presented to the AIX
LPARs as shared disks; the node1/node2 LPARs share the entire disks. Each AIX LPAR
has en0 (external interface for the AIX login session, VLAN ID 1, external through the
VIOS) and en1/en2 (cluster interfaces, VLAN ID 2, internal only). The managed system is
controlled by an HMC, and students reach the environment from laptops (Windows XP,
browser, remote gateway client such as Citrix) through a remote gateway. Each additional
cluster on the same managed system must use a different internal VLAN ID and different
AIX LPAR names (c#node1, c#node2, c#node3).]

Notes:
• For the nodes, the lowest virtual Ethernet adapter (en0) is the external virtual adapter and
has VLAN ID = 1. This is the interface students will use to access the LPAR from the
Internet. It should be in the same VLAN as the SEA in the VIO server.
• The other virtual Ethernet adapters (en1 and en2) have a VLAN ID > 1. If more than one
cluster is placed on the same managed system, the VLAN IDs for en1 and en2 must have
a different value for each cluster.
• The following sections describe the configuration of the student lab systems.


Hardware setup instructions


Use the following information in addition to the normal hardware installation procedures to
set up the lab hardware.
• HMC information
__ a. The HMC can be connected to the managed system on either a private or a public
network.
__ b. The firewall for the Ethernet port on the HMC must allow the following ports from
any IP address: 9090, 22, and 4412. If the HMC is set up both on a private network
to the managed system and on a public network, the Ethernet port that needs its
firewall opened for these ports is the one on the public network. The firewall is
changed on the HMC under HMC Management -> HMC Configuration ->
Customize Network Settings.
__ c. The HMC must have the following two functions enabled: Remote Command
Execution and Remote Virtual Terminal. These can be found under HMC
Management -> HMC Configuration on the HMC (a quick check follows).
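A quick way to confirm that SSH (port 22) and Remote Command Execution are working is
to run an HMC command remotely; the HMC hostname below is only a placeholder:

# From any workstation on the public network; expects the hscroot password set above.
# Lists the managed system names and states if remote command execution is enabled.
ssh hscroot@hmc1.example.com "lssyscfg -r sys -F name,state"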

Partition and profile defaults


This course requires several partitions and profiles to be defined on each HMC/managed
system. All of the partitions and profiles have some default values where applicable, as
described in Table 3 and Table 4 below.

Table 3: Partition and profile defaults


Partition/Profile attribute                    Value
Boot mode                                      Normal
Auto start                                     No
Shared processing pool utilization authority   Yes
Connection monitoring                          Yes
Power controlling LPAR IDs                     None
LPAR I/O Pool IDs                              None

Table 4: Profile virtual device defaults


Virtual slot number Device type
0 Serial
1 Serial


HMC and managed system setup instructions


Use the following information in addition to the normal software installation procedures to
set up the HMC and managed system partition configuration.
In the tables below, a Mode value of uncapped means the partition is an uncapped shared
processor partition, a Mode value of capped means the partition is a capped shared
processor partition, and a Mode value of dedicated means the partition uses dedicated
processors. Memory amounts are in MB unless otherwise specified as GB.
__ 1. On the HMC GUI, delete any current partition and profile definitions. You can delete
all partitions and profiles using the following steps in the HMC GUI.
__ 2. Select the managed system and right-click it. A popup menu appears.
__ 3. Select Profile Data > Initialize from the menu. A popup window appears, asking you
to confirm the removal of all partition and profile information from the managed
system.
__ 4. Click OK to continue.
You can create the partitions and profiles described below either with the HMC GUI or with
scripts adjusted to match the I/O slot names on the hardware being used. Since you will
likely have to configure multiple machines, it is strongly recommended that you use a script
to reduce the chance of errors in the configuration (a sketch follows).
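As an illustration of the scripted approach, the following sketch creates the VIOS partition
and its Normal profile from Table 5 and Table 6 with the HMC mksyscfg command (run on
the HMC, or remotely through ssh). The managed system name is a placeholder, and the
virtual adapter attribute strings should be checked against the mksyscfg man page for your
HMC level before use; physical I/O slots must still be added to match the actual hardware.

# Placeholder managed system name; adjust to your environment.
MSYS=p550-lab1

# virtual_eth_adapters:  slot/IEEE/PVID/additional-VLANs/trunk(bridged)/required
# virtual_scsi_adapters: slot/client-or-server/remote-LPAR-ID/remote-LPAR-name/remote-slot/required
ATTR='name=VIOS,profile_name=Normal,lpar_env=vioserver,lpar_id=1,'
ATTR=$ATTR'min_mem=768,desired_mem=768,max_mem=768,'
ATTR=$ATTR'proc_mode=shared,min_proc_units=0.2,desired_proc_units=0.4,max_proc_units=0.4,'
ATTR=$ATTR'min_procs=1,desired_procs=1,max_procs=1,sharing_mode=uncap,'
ATTR=$ATTR'boot_mode=norm,auto_start=0,max_virtual_slots=30,'
ATTR=$ATTR'"virtual_eth_adapters=11/0/1//1/1",'
ATTR=$ATTR'"virtual_scsi_adapters=12/server/2/node1/5/0,13/server/2/node1/6/0,14/server/3/node2/5/0,15/server/3/node2/6/0,16/server/4/node3/5/0,17/server/4/node3/6/0"'

mksyscfg -r lpar -m $MSYS -i "$ATTR"

The node1, node2, and node3 partitions and their profiles (Tables 8 through 16) can be
created the same way with lpar_env=aixlinux and the corresponding slot, PVID, and
LPAR ID values.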


__ 5. Define the VIOS partition and its profile as described in Table 5, Table 6 and
Table 7.
Table 5: VIOS partition information
Partition Name LPAR ID Environment
VIOS 1 Virtual I/O Server

Table 6: VIOS profile processor and memory definitions


Profile name: Normal
Memory (MB), Min/Des/Max: 768 / 768 / 768
Processing units, Min/Des/Max: 0.2 / 0.4 / 0.4
Processors, Min/Des/Max: 1 / 1 / 1
Mode: Shared
Sharing mode: Uncapped

Table 7: VIOS I/O definition for normal profile


SCSI, slot location: Any
  The VIOS LPAR must have at least one physical SCSI adapter with at least four disks
  connected, minimum of 36 GB each. The remainder of this guide assumes that four disks
  are attached, as per the standard setup. Three additional disks are required for each
  additional team configured on the same managed system.
Ethernet, slot location: Any
  The VIOS LPAR must have at least one physical Ethernet adapter.
Virtual Ethernet, virtual slot 11
  PVID = 1, Bridged = Yes, IEEE 802.1Q Compat = No
Virtual SCSI server, virtual slot 12
  Connect Any (from node1 slot 5): rootvg for node1
Virtual SCSI server, virtual slot 13
  Connect Any (from node1 slot 6): shared disks for node1
Virtual SCSI server, virtual slot 14
  Connect Any (from node2 slot 5): rootvg for node2
Virtual SCSI server, virtual slot 15
  Connect Any (from node2 slot 6): shared disks for node2
Virtual SCSI server, virtual slot 16
  Connect Any (from node3 slot 5): rootvg for node3
Virtual SCSI server, virtual slot 17
  Connect Any (from node3 slot 6): shared disks for node3

Note: One disk is the VIOS rootvg, one disk holds the LPAR rootvgs, and three disks are
shared.
Note: It is preferred to arrange the slots as shown above; it makes it much easier to add
additional LPARs, especially when putting more than one team on the same managed
system.


Note: Four additional virtual SCSI server slots are needed for each additional team
sharing the same managed system, unless an additional VIO server is implemented.
__ 6. Create the node1 partition and its profile as described in Table 8, Table 9 and
Table 10 below.
Table 8: node1 partition information
Partition Name LPAR ID Environment
node1 2 AIX/Linux

Table 9: node1 profile processor and memory definitions


Profile name: Normal
Memory (MB), Min/Des/Max: 768 / 768 / 768
Processing units, Min/Des/Max: 0.2 / 0.4 / 0.4
Processors, Min/Des/Max: 1 / 1 / 1
Mode: Shared
Sharing mode: Uncapped

Table 10: node1 I/O definition for normal profile


Virtual Ethernet, virtual slot 2:   PVID = 1, Bridged = No, IEEE 802.1Q Compat = No
Virtual Ethernet, virtual slot 3:   PVID = 2, Bridged = No, IEEE 802.1Q Compat = No
Virtual Ethernet, virtual slot 4:   PVID = 2, Bridged = No, IEEE 802.1Q Compat = No
Virtual SCSI client, virtual slot 5:   Connects to slot 12 on the VIOS LPAR
Virtual SCSI client, virtual slot 6:   Connects to slot 13 on the VIOS LPAR

Important notes:
• Partition name: If this partition is part of a second cluster (or more) sharing the same
managed system, preface the names with c# (for example, c1node1, c1node2, c1node3,
c2node1, c2node2, c2node3, and so on).
• LPAR ID: If this node is part of a second cluster (or more) sharing the same managed
system, the chart above needs to be adjusted for this value.
• Virtual slots 3 and 4 PVID: If this node is part of a second cluster (or more) sharing the
same managed system, the PVID value for slots 3 and 4 must be set one higher for each
additional cluster sharing the same managed system (for example, 3 for cluster two, 4 for
cluster three, and so on).
• Virtual slots 5 and 6 SCSI client: If this node is part of a second cluster (or more) sharing
the same managed system, the VIOS slot numbers shown in the chart above need to be
adjusted accordingly.


__ 7. Create the node2 partition and its profile as described in Table 11, Table 12 and
Table 13 below.
Table 11: node2 partition information
Partition name LPAR ID Environment
node2 3 AIX/Linux

Table 12: node2 profile processor and memory definitions


Profile name: Normal
Memory (MB), Min/Des/Max: 768 / 768 / 768
Processing units, Min/Des/Max: 0.2 / 0.4 / 0.4
Processors, Min/Des/Max: 1 / 1 / 1
Mode: Shared
Sharing mode: Uncapped

Table 13: node2 I/O definition for normal profile


Virtual Ethernet, virtual slot 2:   PVID = 1, Bridged = No, IEEE 802.1Q Compat = No
Virtual Ethernet, virtual slot 3:   PVID = 2, Bridged = No, IEEE 802.1Q Compat = No
Virtual Ethernet, virtual slot 4:   PVID = 2, Bridged = No, IEEE 802.1Q Compat = No
Virtual SCSI client, virtual slot 5:   Connects to slot 14 on the VIOS LPAR
Virtual SCSI client, virtual slot 6:   Connects to slot 15 on the VIOS LPAR

• Partition name: If this partition is part of a second cluster (or more) sharing the same
managed system, preface the names with "c#" (for example, c1node1, c1node2, c1node3,
c2node1, c2node2, c2node3, and so on).
• LPAR ID: If this node is part of a second cluster (or more) sharing the same managed
system, the chart above needs to be adjusted for this value.
• Virtual slots 3 and 4 PVID: If this node is part of a second cluster (or more) sharing the
same managed system, the PVID value for slots 3 and 4 must be set one higher for each
additional cluster sharing the same managed system (for example, 3 for cluster two, 4 for
cluster three, and so on).
• Virtual slots 5 and 6 SCSI client: If this node is part of a second cluster (or more) sharing
the same managed system, the VIOS slot numbers shown in the chart above need to be
adjusted accordingly (the same applies to any additional clusters sharing the same
managed system).


__ 8. Create the node3 partition and its profile as described in Table 14, Table 15 and
Table 16 below.
Table 14: node3 partition information
Partition Name LPAR ID Environment
node3 4 AIX/Linux

Table 15: node3 profile processor and memory definitions


Profile name: Normal
Memory (MB), Min/Des/Max: 768 / 768 / 768
Processing units, Min/Des/Max: 0.2 / 0.4 / 0.4
Processors, Min/Des/Max: 1 / 1 / 1
Mode: Shared
Sharing mode: Uncapped

Table 16: node3 I/O definition for normal profile


Virtual Ethernet, virtual slot 2:   PVID = 1, Bridged = No, IEEE 802.1Q Compat = No
Virtual Ethernet, virtual slot 3:   PVID = 2, Bridged = No, IEEE 802.1Q Compat = No
Virtual Ethernet, virtual slot 4:   PVID = 2, Bridged = No, IEEE 802.1Q Compat = No
Virtual SCSI client, virtual slot 5:   Connects to slot 16 on the VIOS LPAR
Virtual SCSI client, virtual slot 6:   Connects to slot 17 on the VIOS LPAR

Important notes:
• Partition name: If this partition is part of a second cluster (or more) sharing the same
managed system, preface the names with "c#" (for example, c1node1, c1node2, c1node3,
c2node1, c2node2, c2node3, and so on).
• LPAR ID: If this node is part of a second cluster (or more) sharing the same managed
system, the chart above needs to be adjusted for this value.
• Virtual slots 3 and 4 PVID: If this node is part of a second cluster (or more) sharing the
same managed system, the PVID value for slots 3 and 4 must be set one higher for each
additional cluster sharing the same managed system (for example, 3 for cluster two, 4 for
cluster three, and so on).
• Virtual slots 5 and 6 SCSI client: If this node is part of a second cluster (or more) sharing
the same managed system, the VIOS slot numbers shown in the chart above need to be
adjusted accordingly (the same applies to any additional clusters sharing the same
managed system).


Virtual I/O server setup instructions


__ 9. Install the VIOS LPAR at the level identified in Table 1.
This must be done before you can install the AIX LPARs.
__ a. Make sure you install on hdisk0.
__ b. Configure a shared Ethernet adapter (SEA) for the VLAN 1 virtual Ethernet adapter,
with the appropriate hostname/IP address.
__ 10. Configure virtual devices on the VIOS LPAR according to Table 7.
This must be done before you can install the AIX LPARs.
A sample list of commands is shown below. If you type the commands manually, you
do not need to prefix each command with ioscli. The vhosts are assigned to each
LPAR in pairs: the first is for the LPAR rootvg, the second is for the shared disks.


#!/bin/ksh
#
# set PATH so we can use some non-vio commands
export PATH=/usr/ios/cli:/usr/ios/utils:/usr/ios/lpm/bin:/usr/ios/oem:/home/padmin:/usr/bin:/usr/sbin

# make sure rootvg is on hdisk0


ioscli lspv | grep hdisk0 | grep rootvg
if [ $? -ne 0 ]
then
echo "rootvg doesn't seem to be on hdisk0"
echo "This script can't handle that situation - sorry"
exit 1
fi

## Make LVM components for AIX lpar rootvgs (per team on same system)
ioscli mklv -lv rvgnode1 rootvg 8G
ioscli mklv -lv rvgnode2 rootvg 8G
ioscli mklv -lv rvgnode3 rootvg 8G

## Make vtscsi devices for AIX LPARs 1, 2, and 3 (per team on same system)
## hdisk1, hdisk2, and hdisk3 are different for each team
ioscli mkvdev -vdev rvgnode1 -vadapter vhost0
ioscli mkvdev -f -vdev hdisk1 -vadapter vhost1
ioscli mkvdev -f -vdev hdisk2 -vadapter vhost1
ioscli mkvdev -f -vdev hdisk3 -vadapter vhost1
#
ioscli mkvdev -vdev rvgnode2 -vadapter vhost2
ioscli mkvdev -f -vdev hdisk1 -vadapter vhost3
ioscli mkvdev -f -vdev hdisk2 -vadapter vhost3
ioscli mkvdev -f -vdev hdisk3 -vadapter vhost3

ioscli mkvdev -vdev rvgnode3 -vadapter vhost4


ioscli mkvdev -f -vdev hdisk1 -vadapter vhost5
ioscli mkvdev -f -vdev hdisk2 -vadapter vhost5
ioscli mkvdev -f -vdev hdisk3 -vadapter vhost5

## Setup SEA and network for this VIO server


## bring down current interface
ioscli rmdev -dev en0


## make SEA for VLAN 1


ioscli mkvdev -sea ent0 -vadapter ent?? -default ent?? -defaultid 1

## Configure tcpip for VIOS


ioscli mktcpip -interface en?? -hostname ??? -inetaddr ??? \
-netmask 255.255.???.0 -gateway ???
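
## Optional check (not part of the original sample): after the mappings and the
## SEA are created, list them to confirm the configuration before installing AIX.
ioscli lsmap -all
ioscli lsmap -all -net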


AIX partition setup instructions


__ 1. Install AIX into all AIX LPARs using the mksysb identified above.
__ a. Make sure you install on hdisk0.
__ b. Use the IP addresses discussed above. See "IP address requirements" on
page 2.
__ 2. After the AIX install, log in and configure interface et1 for VNC in all the AIX LPARs
(an example follows):
• IP address 192.192.10.xxx, where xxx is the last octet of the en0 IP address
• subnet mask 255.255.255.0
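A minimal sketch of that VNC interface configuration, following the same chdev pattern
used in the next step and assuming an LPAR whose en0 address ends in .141 (placeholder
value):

# On the LPAR whose en0 address ends in .141 (example only)
chdev -l et1 -a netaddr='192.192.10.141' -a netmask='255.255.255.0' -a state='up'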
__ 3. Configure a volume group called vgA on node1 and import it on node2.
__ a. On node1: /usr/sbin/mkvg -y vgA -s 32 -f -n -V 101 -C hdisk1
__ b. On node2: importvg -y vgA -V 101 hdisk1
__ 4. Configure cluster IP addresses on en1 and en2 in each LPAR:
__ a. On node1: chdev -l en1 -a netaddr='192.168.1.1' -a netmask='255.255.255.0' -a state='up'
__ b. On node1: chdev -l en2 -a netaddr='192.168.2.1' -a netmask='255.255.255.0' -a state='up'
__ c. On node2: chdev -l en1 -a netaddr='192.168.1.2' -a netmask='255.255.255.0' -a state='up'
__ d. On node2: chdev -l en2 -a netaddr='192.168.2.2' -a netmask='255.255.255.0' -a state='up'
__ e. On node3: chdev -l en1 -a netaddr='192.168.1.3' -a netmask='255.255.255.0' -a state='up'
__ f. On node3: chdev -l en2 -a netaddr='192.168.2.3' -a netmask='255.255.255.0' -a state='up'
__ 5. Install the PowerHA 6.1 SP5 update (a sketch follows).
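One way to apply the update, assuming the SP5 filesets have been downloaded to a
directory whose path here is only a placeholder:

# Apply every update image found in the (placeholder) directory and accept the licenses.
install_all_updates -d /tmp/powerha61sp5 -Y
# Confirm the resulting cluster.* fileset levels (see the verification section).
lslpp -L cluster\*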
__ 6. CLP only: On node1 and node2, copy the HA ODM from /lab/an41/objrepos_erc1.2
and add an entry in the inittab for harc.net, as follows:
__ a. On node1 and node2: cp /lab/an41/objrepos_erc1.2/* /etc/es/objrepos
__ b. On node1 and node2: mkitab "harc:2:wait:/usr/es/sbin/cluster/etc/harc.net"


__ 7. Non-CLP only: Create a cluster on node1 and node2 that has the following
characteristics:
Cluster: an41cluster
Cluster services: inactive

#############
APPLICATIONS
#############
Cluster an41cluster provides the following applications:
appA
Application: appA
appA is started by /lab/config/startA
appA is stopped by /lab/config/stopA
No application monitors are configured for appA.
This application is part of resource group
'appAgroup'.
Resource group policies:
Startup: on home node only
Fallover: to next priority node in the list
Fallback: never
Nodes configured to provide appA: node1 node2
Resources associated with appA:
Service Labels
appAsvc(192.168.3.10) {}
Interfaces configured to provide appAsvc:
node1boot1 {}
with IP address: 192.168.1.1
on interface: en1
on node: node1 {}
on network: net_ether_01 {}
node1boot2 {}
with IP address: 192.168.2.1
on interface: en2
on node: node1 {}
on network: net_ether_01 {}
node2boot1 {}
with IP address: 192.168.1.2
on interface: en1
on node: node2 {}
on network: net_ether_01 {}
node2boot2 {}
with IP address: 192.168.2.2
on interface: en2
on node: node2 {}


on network: net_ether_01 {}


Shared Volume Groups:
vgA

#############
TOPOLOGY
#############
an41cluster consists of the following nodes: node1 node2
node1
Network interfaces:
node1_hdisk1_01 {}
device: /dev/hdisk1
on network: net_diskhb_01 {}
node1boot1 {}
with IP address: 192.168.1.1
on interface: en1
on network: net_ether_01 {}
node1boot2 {}
with IP address: 192.168.2.1
on interface: en2
on network: net_ether_01 {}
node2
Network interfaces:
node2_hdisk1_01 {}
device: /dev/hdisk1
on network: net_diskhb_01 {}
node2boot1 {}
with IP address: 192.168.1.2
on interface: en1
on network: net_ether_01 {}
node2boot2 {}
with IP address: 192.168.2.2
on interface: en2
on network: net_ether_01 {}


Verification
__ 8. Check that you can telnet to all AIX LPARs using the addresses assigned.
For example, if the AIX LPARs are:
node1: 10.31.88.141
node2: 10.31.88.142
node3: 10.31.88.143

use these commands:

telnet 10.31.88.141
telnet 10.31.88.142
telnet 10.31.88.143

__ 9. Check that all nodes have four disks:
• hdisk0, which is rootvg
• hdisk1, which is vgA on node1 and node2, and unassigned on node3
• Two unassigned disks
Note that hdisk2 and hdisk3 may or may not have a PVID. They should not be
assigned to a volume group.
For example, on node1 and node2:
# lspv
hdisk0 00025346192eed54 rootvg active
hdisk1 00c35b902a9a08c1 vgA
hdisk2 ?????? None
hdisk3 ?????? None
__ 10. Verify that each AIX LPAR sees Ethernet adapters ent0, ent1, and ent2
(lscfg | grep Eth), but that only en0 and et1 are configured (netstat -i).
__ 11. Verify that a VNC client can access each AIX LPAR at <en0 address>:1, as in the
example below.
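A minimal check, assuming a command-line VNC client such as vncviewer on the
workstation and reusing the example node1 address from the telnet step:

# Connect to VNC display 1 on node1 (example address only)
vncviewer 10.31.88.141:1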


__ 12. Verify that HACMP is installed and the fileset levels are as shown in the example
output below (if using the supplied mksysb, this is already done).
For example:
# lslpp -L cluster\*
Fileset Level State Type Description (Uninstaller)
----------------------------------------------------------------------------
cluster.adt.es.client.include
6.1.0.2 C F ES Client Include Files
cluster.adt.es.client.samples.clinfo
6.1.0.0 C F ES Client CLINFO Samples
cluster.adt.es.client.samples.clstat
6.1.0.0 C F ES Client Clstat Samples
cluster.adt.es.client.samples.libcl
6.1.0.0 C F ES Client LIBCL Samples
cluster.adt.es.java.demo.monitor
6.1.0.0 C F ES Web Based Monitor Demo
cluster.es.client.clcomd 6.1.0.4 C F ES Cluster Communication
Infrastructure
cluster.es.client.lib 6.1.0.3 C F ES Client Libraries
cluster.es.client.rte 6.1.0.4 C F ES Client Runtime
cluster.es.client.utils 6.1.0.2 C F ES Client Utilities
cluster.es.client.wsm 6.1.0.3 C F Web based Smit
cluster.es.cspoc.cmds 6.1.0.5 C F ES CSPOC Commands
cluster.es.cspoc.dsh 6.1.0.0 C F ES CSPOC dsh
cluster.es.cspoc.rte 6.1.0.5 C F ES CSPOC Runtime Commands
cluster.es.server.cfgast 6.1.0.0 C F ES Two-Node Configuration
Assistant
cluster.es.server.diag 6.1.0.4 C F ES Server Diags
cluster.es.server.events 6.1.0.5 C F ES Server Events
cluster.es.server.rte 6.1.0.5 C F ES Base Server Runtime
cluster.es.server.testtool
6.1.0.1 C F ES Cluster Test Tool
cluster.es.server.utils 6.1.0.5 C F ES Server Utilities
cluster.es.worksheets 6.1.0.1 C F Online Planning Worksheets
cluster.license 6.1.0.0 C F HACMP Electronic License
cluster.man.en_US.es.data 6.1.0.2 C F ES Man Pages - U.S. English
cluster.msg.En_US.cspoc 6.1.0.0 C F HACMP CSPOC Messages - U.S.
English IBM-850
cluster.msg.En_US.es.client
6.1.0.0 C F ES Client Messages - U.S.
English IBM-850
cluster.msg.En_US.es.server
6.1.0.0 C F ES Server Messages - U.S.
English IBM-850


__ 13. On node1, execute: cltopinfo | grep -i name
You should see the name an41cluster in the response. If not, you may not have run
the customization on the nodes as described above.
__ 14. Verify that IBM HTTP Server is installed (if using the supplied mksysb, this is already
done):
# lslpp -L | grep IHS
IHS6 6.0.2.0 C P IBM HTTP Server
__ 15. Verify that Mozilla is installed (if using the supplied mksysb, this is already done):
# lslpp -L | grep -i moz
Mozilla.base.rte 1.7.3.0 C F Mozilla Web Browser
