HACMP Clustering Configuration Step by Step
1. Convert the 32-bit kernel to the 64-bit kernel with JFS as the default file system
Steps: -
Creating link in root
#ln -fs /usr/lib/boot/unix_64 /unix
Put the boot image on the device from which the system boots, i.e. at cris it is hdisk3.
#bosboot -ad /dev/ipldevice
bosboot creates the boot image:
-d device Specifies the boot device.
-a Creates a complete boot image and device.
#shutdown -Fr or reboot
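After the reboot, the switch can be verified with standard AIX commands (a quick check sketch; the expected values assume the conversion succeeded):

```shell
# Running kernel width: prints 64 after a successful conversion
bootinfo -K

# Hardware capability: prints 64 on 64-bit capable machines
bootinfo -y

# /unix should now be a link to the 64-bit kernel
ls -l /unix
```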
Steps: -
#smit lvm -> Paging Space -> Change / Show Characteristics of a Paging Space -> hd6 -> just enter
the number of additional LPs.
Example: if the current paging space size is 512 MB and you want to increase it to 32 GB, you have
to calculate how many additional PPs are required to reach 32 GB.
The PP size for rootvg is 128 MB; 1 LP equals 1 PP, therefore each LP is 128 MB.
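The LP arithmetic from the example above can be checked in plain shell (512 MB current size, 32 GB target and 128 MB PP size are the example values; substitute your own):

```shell
# Paging-space sizing: how many extra LPs are needed to grow
# hd6 from 512 MB to 32 GB when each 128 MB PP maps to one LP?
current_mb=512                      # current size of hd6
target_mb=$((32 * 1024))            # desired size: 32 GB in MB
pp_mb=128                           # rootvg PP size; 1 LP = 1 PP

additional_lps=$(( (target_mb - current_mb) / pp_mb ))
echo "$additional_lps"              # → 252
```

With the number in hand, the same change can also be made non-interactively: on AIX, `chps -s 252 hd6` adds that many LPs without going through smit.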
For this we have to create an EtherChannel and add 2 network interfaces to it; an EtherChannel
groups multiple network adapters into a single logical interface with a single IP.
Step a: -
Change the setting of the network adapters to 10/100/1000 Mbps Full/Half duplex. Verify with the
network administrator which setting should be kept.
The steps below convert from Auto-Negotiation mode to 100 Mbps Full Duplex; you can use your
own setting as well.
If the network adapter you have chosen for the EtherChannel is up and running, bring it down and
remove it from the system, or else use adapters that are not currently in use.
#ifconfig -a
en4:
flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPR
T,64BIT,CHECKSUM_OFFLOAD,PSEG,CHAIN>
inet 192.168.100.12 netmask 0xffffff00 broadcast 192.168.100.255
inet 10.129.1.12 netmask 0xffffff00 broadcast 10.129.1.255
Step b: -
Select ent0 and ent2, i.e. 1 port from the network PCI card and 1 port from the onboard network
interface. If you are currently working over ent0, you cannot use it for the EtherChannel since it is
in use; in that case take the second port from the PCI card and any port from the onboard
interface.
Step c: -
Now create the EtherChannel; it will be named ent4 if available, otherwise ent5 or the next free name.
Our EtherChannel logical name is ent4.
From the above output you can see that your EtherChannel is ent4, which consists of the 2
adapters ent0 and ent2.
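Assuming the EtherChannel came up as ent4 as in this example, its member adapters can be confirmed before putting an IP on it:

```shell
# The EtherChannel appears as an additional entN adapter
lsdev -Cc adapter | grep ent

# Show which physical adapters belong to ent4
lsattr -El ent4 -a adapter_names
```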
Now configure the newly created EtherChannel adapter ent4.
#smit mktcpip -> select en4 -> enter the information below, as given by your network
administrator.
* HOSTNAME [utscstm2]
* Internet ADDRESS (dotted decimal) [192.168.100.12]
Network MASK (dotted decimal) [255.255.255.0]
* Network INTERFACE en4
NAMESERVER
Internet ADDRESS (dotted decimal) []
DOMAIN Name []
Default Gateway
Address (dotted decimal or symbolic name) []
Cost [0]
Do Active Dead Gateway Detection? no
Your CABLE Type N/A
START Now Yes
Step d: -
Go to /etc/hosts
An entry will be there: 192.168.100.11 utscstm1; change utscstm1 to utscstm1_boot
Internet Address Hostname # Comments
192.168.100.11 utscstm1_boot
192.168.100.12 utscstm2_boot
10.129.1.11 utscstm1_svc utscstm1
10.129.1.12 utscstm2_svc utscstm2
Test both servers by removing the cables one by one to verify redundancy.
The EtherChannel is created; now move on to the installation and configuration of the cluster software.
CLUSTER PREREQUISITES
Before installing the cluster software, use installp to preview the contents of the cluster CD; if any
prerequisite filesets are required, install them first from the IBM AIX OS CD.
Install the HACMP software on both servers; this cluster requires the following filesets:
*.adt.libm 5.1.0.0, *.adt.syscalls 5.1.0.0, *.adt.data 5.1.0.0, *.rsct.compat.client.hacmp 2.3.1.0,
*.rsct.compat.basic.hacmp 2.3.1.0, *.rsct.compat.client.hacmp 2.2.1.30
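A preview run of installp (apply mode with the -p flag, so nothing is actually installed) is one way to list missing prerequisites before the real install; /dev/cd0 is assumed to be the CD drive:

```shell
# -a apply, -p preview only, -g pull in prerequisites, -d install source
installp -apg -d /dev/cd0 all
```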
STEP I
Ensure that the disks to be used for disk heartbeating are assigned and configured to
each cluster node.
Enter: -
lspv -> ensure that a PVID is assigned to the disk on each cluster node
If a PVID is not assigned, run the following command on each node (hdisk8 is the example disk):
chdev -l hdisk8 -a pv=yes
STEP II
OR
Create an enhanced concurrent mode volume group on the disk or disks in question using
SMIT. Enter: smitty hacmp System Management (C-SPOC) HACMP Concurrent
Logical Volume Management Concurrent Volume Groups Create a Concurrent Volume
Group (with Datapath Devices, if applicable)
Press F7 to select each cluster node. Select the PVID of the disk to be added to the Volume
Group. Enter the Volume Group Name, Desired Physical Partition Size, and major number.
Enhanced Concurrent Mode should be set to True.
Put the entries for the boot IPs and service IPs and their labels in /etc/hosts
192.168.100.11 utscstm1_boot
192.168.100.12 utscstm2_boot
10.129.1.11 utscstm1_svc utscstm1
10.129.1.12 utscstm2_svc utscstm2
1) Configure Cluster
# smit hacmp
Extended Configuration
Extended Topology Configuration
Configure an HACMP Cluster
Add/Change/Show an HACMP Cluster
Assign a unique Cluster Name (< 32 characters)
NOTE: HACMP must be RESTARTED on all nodes in order for change to take effect
2) Configure Nodes
# smit hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Nodes
Add a Node to the HACMP Cluster
Give Node Name, each node has a unique name (<32 characters)
Communication Path to Node: Press F4 and select an IP label (e.g.: boot address)
Node Name utscstm1
Communication Path to Node [192.168.100.11]
After this, create the other node as well from the same primary server; we have utscstm2 as the
secondary cluster node.
* IP Label/Address [utscstm1_boot]
* Network Type ether
* Network Name net_ether_01
* Node Name [utscstm1]
Network Interface []
* IP Label/Address [utscstm2_boot]
* Network Type ether
* Network Name net_ether_01
* Node Name [utscstm2]
Network Interface []
Repeat the same steps above for the secondary server as well.
Repeat this step for all “boot and standby” addresses of all IP networks and of all nodes. E.g.:
node1_boot, node1_stby, node2_boot, node2_stby
5) Configure persistent IP addresses for network(s) where an initial “boot” address will be
replaced by a “service” address
# smit hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Persistent Node IP Label / Addresses
Add a Persistent Node IP Label / Address
Select a Node and give:
OR
# smit hacmp
Extended Configuration
Extended Verification and Synchronization
* IP Label/Address [utscstm1_svc]
* Network Name net_ether_01
Alternate HW Address to accompany IP Label/Address []
Press enter
Give interface service IP label or address (on the same subnet as “boot” address)
You shouldn’t specify an Alternate HW Address to accompany IP Label/Address
because AIX 5L does “gratuitous ARP” update.
Don’t specify an Alternate HW Address for Ethernet Gigabit adapters
Create an application server named app_server1, which will start the script route_add_def.sh.
This script contains:
bash-3.00# vi /usr/es/sbin/cluster/route_add_def.sh
route delete default
route add default 10.129.1.5
save it
Give:
- Server name (symbolic name for the resource)
- Start and Stop scripts full pathnames (must exist on ALL NODES, in non-shared filesystems).
Select the Resource Group Management Policy: cascading, rotating, concurrent or custom
Give Resource Group Name,
Inter-Site Management Policy (leave default ignore)
Give the list of Participating Node Names: for cascading the order defines the priority.
Note: -
Toggle between Ignore/Cascading/Concurrent/Rotating.
CASCADING resources are resources which may be taken over by multiple sites in a prioritized
manner. When a site fails, the active site with the highest priority acquires the resource. When the
failed site rejoins, the site with the highest priority acquires the resource again.
ROTATING resources are resources which may be acquired by any site in their resource chain. When a
site fails, the resource is acquired by the highest-priority standby site. When the failed node
rejoins, the resource remains with its new owner. Ignore should be used if sites and replicated
resources are not defined or not being used.
Select a Resource Group name, e.g. rsg1, rsg2 or rsg3 as created above.
Define the resources belonging to the resource group (separated with a space):
Inactive Takeover Applied (true if you want first starting node to take all resources)
Cascading Without Fallback Enabled (true to decide when fallback will occur recommended if
HACMP Cluster Services started in /etc/inittab)
Application Servers
Service IP Labels / Addresses (Give ALL Service IP labels separated by space, in case of several
networks) Volume Groups give the name(s) separated by space Use forced varyon of volume groups, if
necessary (true for AIX mirrored VGs)
Example of rsg1: -
Tape Resources []
Raw Disk PVIDs []
Miscellaneous Data []
Example of rsg2 : -
Tape Resources []
Raw Disk PVIDs []
Miscellaneous Data []
Application Servers []
Tape Resources []
Raw Disk PVIDs []
Disk Fencing Activated false
Miscellaneous Data []
5. Synchronize Cluster Resources (every time you change the configuration)
# smit hacmp
Extended Configuration
Extended Verification and Synchronization
which allows you to:
* Verify, Synchronize or Both [Both]
Force synchronization if verification fails? [No]
* Verify changes only? [No]
* Logging [Standard / Verbose]
In case of a problem, select Verbose logging and look at the log files:
/var/hacmp/clverify/clverify.log or /var/hacmp/clverify/…
4. Now you can start HACMP on all nodes (several nodes at the same time)
# smit clstart
* Start now, on system restart or both now
Start Cluster Services on these nodes [node1] you can specify several nodes
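Once started, the state of the cluster services can be checked from any node with the standard HACMP/RSCT status commands:

```shell
# Cluster subsystems should report "active"
lssrc -g cluster

# Detailed cluster manager state (node names, resource group states)
lssrc -ls clstrmgrES
```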
Use the exportvg and importvg commands to re-create the VG and file systems on the other
server; first unmount all the file systems currently created in the VG.
Exit.
#umount
#varyoffvg internalvg
#exportvg internalvg
# ls -l /dev/inter*
crw-rw---- 1 root system 49, 0 Aug 06 12:28 /dev/internalvg
Now this 49 is your major number; you can create the same VG with the same
characteristics on the other server by using the command below together with this
major number.
On Another server
#importvg -y internalvg -V 49 hdisk0
This imports internalvg from hdisk0 on the m2 server with the same major number.
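On the second server the import can then be verified; internalvg and major number 49 are the example values from above:

```shell
# Major number should again be 49, matching the first server
ls -l /dev/internalvg

# The VG and its logical volumes should now be visible
lsvg internalvg
lsvg -l internalvg
```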