
AIX Boot Process

1. When the server is powered on, the power-on self test (POST) runs and checks the hardware.
2. On successful completion of POST, the boot logical volume is located using the bootlist.
3. The boot logical volume contains the AIX kernel, rc.boot, a reduced ODM, and the boot commands. The AIX kernel is loaded into RAM.
4. The kernel takes control and creates a RAM file system.
5. The kernel starts /etc/init from the RAM file system.
6. init runs rc.boot 1 (rc.boot phase one), which configures the base devices.
7. rc.boot 1 calls the restbase command, which copies the ODM files from the boot logical volume to the RAM file system.
8. rc.boot 1 calls cfgmgr -f to configure the base devices.
9. rc.boot 1 calls bootinfo -b to determine the last boot device.
10. init then starts rc.boot 2, which activates rootvg.
11. rc.boot 2 calls ipl_varyon to activate rootvg.
12. rc.boot 2 runs fsck -f /dev/hd4 and mounts the partition on / of the RAM file system.
13. rc.boot 2 runs fsck -f /dev/hd2 and mounts the /usr file system.
14. rc.boot 2 runs fsck -f /dev/hd9var, mounts the /var file system, runs the copycore command to copy any core dump from /dev/hd6 to /var/adm/ras/vmcore.0, and then unmounts /var.
15. rc.boot 2 runs swapon /dev/hd6 to activate the paging space.
16. rc.boot 2 runs migratedev and copies the device files from the RAM file system to the / file system.
17. rc.boot 2 runs cp /../etc/objrepos/Cu* /etc/objrepos and copies the ODM files from the RAM file system to the / file system.
18. rc.boot 2 runs mount /dev/hd9var and mounts the /var file system.
19. rc.boot 2 copies the boot log messages to alog.
20. rc.boot 2 removes the RAM file system.
21. The kernel starts /etc/init from the / file system.
22. /etc/init reads the /etc/inittab file and rc.boot 3 is started. rc.boot 3 configures the rest of the devices.
23. rc.boot 3 runs fsck -f /dev/hd3 and mounts the /tmp file system.
24. rc.boot 3 runs syncvg rootvg &.
25. rc.boot 3 runs cfgmgr -p2 or cfgmgr -p3 to configure the remaining devices. cfgmgr -p2 is used when the physical key on MCA architecture is in normal mode; cfgmgr -p3 is used when it is in service mode.
26. rc.boot 3 runs the cfgcon command to configure the console.
27. rc.boot 3 runs the savebase command to copy the ODM files from /dev/hd4 to /dev/hd5.
28. rc.boot 3 starts syncd 60 and errdemon.
29. rc.boot 3 turns off the LEDs.
30. rc.boot 3 removes the /etc/nologin file.
31. rc.boot 3 checks CuDv for chgstatus=3 and displays the missing devices on the console.
32. The next line of /etc/inittab is executed.
/etc/inittab file format: identifier:runlevel:action:command
mkitab - add records to the /etc/inittab file
lsitab - list records in the /etc/inittab file
chitab - change records in the /etc/inittab file
rmitab - remove records from the /etc/inittab file
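Each /etc/inittab entry follows the identifier:runlevel:action:command format described above. A minimal sketch of splitting such an entry with plain POSIX parameter expansion (the sample tty entry is invented for illustration, not read from a live system):

```shell
# Split an /etc/inittab entry into its four colon-separated fields.
# The sample entry below is illustrative only.
entry='tty0:2:respawn:/usr/sbin/getty /dev/tty0'

identifier=${entry%%:*}; rest=${entry#*:}
runlevel=${rest%%:*};    rest=${rest#*:}
action=${rest%%:*}
command=${rest#*:}

echo "id=$identifier runlevel=$runlevel action=$action"
echo "command=$command"
```

Only the first three colons delimit fields; any further colons belong to the command, which is why the last field keeps the remainder.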
What is ODM?
ODM:
- Maintains system configuration, device, and vital product data
- Provides a more robust, secure, and sharable resource
- Provides a reliable object-oriented database facility

Data managed by ODM:
- Device configuration information
- Software vital product data
- SRC information
- Communications configuration data
- Menus and commands for SMIT

ODM has three components:
- Object classes - these are the data files
- Objects - records within the data files
- Descriptors - fields within a record

Where are the ODM object class files stored?

This can be defined in the /etc/environment file (the ODMDIR variable). The ODM object classes are held in three repositories:
1. /etc/objrepos
2. /usr/lib/objrepos
3. /usr/share/lib/objrepos

Some important ODM database files:
- Supported devices, attributes, and connection information: PdDv, PdAt, PdCn, etc.
- Customized devices, attributes, and VPD: CuDv, CuAt, CuDep, Config_Rules, CuVPD, etc.
- Software information: lpp, history, product, inventory, etc.
- SMIT menus, commands, and options: sm_cmd_hdr, sm_cmd_opt, sm_menu_opt
- NIM resource and configuration information: nim_object, nim_pdattr, nim_attr
- Error log, alog, and dump file information: SWservAt

Useful ODM commands:
- odmget - retrieves objects from an object class in stanza format
- odmdelete - deletes objects that meet a specific criterion; if no criterion is specified, all objects are deleted
- odmadd - adds a new object to an object class
- odmchange - changes all objects within an object class that meet a specific criterion
- odmshow - displays an object class definition
- odmcreate - creates object classes for applications that will use the ODM database
- odmdrop - removes an object class

Some ODM command examples:

To list all records within the object class CuDv:
# odmget CuDv

To find an object within CuAt with the conditions name=sys0 and attribute=maxuproc:
# odmget -q "name=sys0 and attribute=maxuproc" CuAt
CuAt:
        name = "sys0"
        attribute = "maxuproc"
        value = "2000"
        type = "R"
        generic = "DU"
        rep = "nr"
        nls_index = 20
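Since odmget emits plain-text stanzas like the one above, a particular descriptor can be pulled out with awk. A minimal sketch using a hard-coded sample stanza (on a live system you would pipe the odmget output in instead):

```shell
# Extract the "value" descriptor from odmget-style stanza output.
# The text below stands in for real `odmget -q ... CuAt` output.
stanza='CuAt:
        name = "sys0"
        attribute = "maxuproc"
        value = "2000"
        type = "R"'

# The field separator is the double quote, so $2 is the quoted value.
maxuproc=$(printf '%s\n' "$stanza" | awk -F'"' '/value =/ {print $2}')
echo "maxuproc is set to $maxuproc"
```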
To delete the above object:
# odmget -q "name=sys0 and attribute=maxuproc" CuAt > file.1
# odmdelete -q "name=sys0 and attribute=maxuproc" -o CuAt

To add the deleted object back to the object class:
# odmadd file.1

Fixing ODM-related errors for devices (the CuDv object class):
$ cfgmgr
cfgmgr: 0514-604 Cannot access the CuDv object class in the
        device configuration database.
The fix:
1. cd /etc/objrepos
2. cp Config_Rules Config_Rules.backup
3. odmget -q rule="/etc/methods/darcfgrule" Config_Rules
4. odmdelete -q rule="/etc/methods/darcfgrule" -o Config_Rules
5. savebase -v
6. cfgmgr

IBM AIX - HACMP Made Easy - A Complete Overview!!
Raaj Tilak S, 5:08 AM, AIX, IBM AIX - HACMP
HACMP: High Availability Cluster Multi-Processing
High availability: elimination of both planned and unplanned system and application downtime. This is achieved through the elimination of hardware and software single points of failure.
Cluster topology: the nodes, networks, storage, clients, and persistent node IP labels/devices.
Cluster resources: components that HACMP can move from one node to another, e.g. service IP labels, file systems, and applications.
RSCT version: 2.4.2
SDD version: 1.3.1.3
HA configuration:
- Define the cluster and nodes
- Define the networks and disks
- Define the topology
- Verify and synchronize
- Define the resources and resource groups
- Verify and synchronize

Files changed after installation: /etc/inittab, /etc/rc.net, /etc/services, /etc/snmpd.conf, /etc/snmpd.peers, /etc/syslog.conf, /etc/trcfmt, /var/spool/cron/crontabs/root, /etc/hosts
Software components (layers):
- Application server
- HACMP layer
- RSCT layer
- AIX layer
- LVM layer
- TCP/IP layer

HACMP services:
- Cluster communication daemon (clcomdES)
- Cluster manager (clstrmgrES)
- Cluster information daemon (clinfoES)
- Cluster lock manager (cllockd)
- Cluster SMUX peer daemon (clsmuxpd)

HACMP daemons: clstrmgr, clinfo, clsmuxpd, cllockd.
HA supports up to 32 nodes, up to 48 networks, up to 64 resource groups per cluster, and up to 128 cluster resources.
IP label: the label associated with a particular IP address, as defined by DNS (/etc/hosts).
Base IP label: the default IP address, set on the interface by AIX at startup.
Service IP label: represents a service provided, and may be bound on a single node or multiple nodes. These are the addresses that HACMP keeps highly available.
IP alias: an IP address that is added to an interface, rather than replacing its base IP address.
RSCT monitors the state of the network interfaces and devices.
IPAT via replacement: the service IP label replaces the boot IP address on the interface.
IPAT via aliasing: the service IP label is added as an alias on the interface.
Persistent IP address: an address that can be assigned to a network for a particular node.
In HACMP the NFS exports file is /usr/es/sbin/cluster/etc/exports.
Shared LVM:
- A shared volume group is a volume group that resides entirely on the external disks shared by the cluster nodes.
- Shared LVM can be made available in non-concurrent access mode, concurrent access mode, or enhanced concurrent access mode.

Non-concurrent access mode: this environment typically uses journaled file systems to manage data.
Create a non-concurrent shared volume group: smitty mkvg -> give the VG name, No for activating automatically after system restart, Yes for activating the VG after it is created, and give the VG major number.
Create a non-concurrent shared file system: smitty crjfs -> rename the FS names, No to mounting automatically at system restart, then test the newly created FS by mounting and unmounting it.
Importing a volume group to a fallover node:
- Vary off the volume group
- Run the discovery process
- Import the volume group
Concurrent access mode: not supported for file systems; you must use raw logical volumes and physical disks instead.
Creating a concurrent access volume group:
- Verify the disk status using lsdev -Cc disk
- smitty cl_convg -> Create a Concurrent Volume Group -> Enter
- Import the volume group using importvg -C -y vg_name physical_volume_name
- varyonvg vgname
- Create LVs on the concurrent VG: smitty cl_conlv


Enhanced concurrent mode VGs: can be used for both concurrent and non-concurrent access. The VG is varied on on all nodes in the cluster, but access for modifying the data is granted only to the node that has the resource group active.
Active or passive mode:
- Active varyon: all high-level operations are permitted.
- Passive varyon: read-only access to the VG.
Create an enhanced concurrent mode VG: mkvg -n -s 32 -C -y myvg hdisk11 hdisk12

Resource group behaviour:
- Cascading: fallover using dynamic node priority; online on first available node.
- Rotating: fallover to the next priority node in the list; never falls back; online using distribution policy.
- Concurrent: online on all available nodes; never falls back.
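The "fallover to the next priority node" rule above amounts to walking the participating-node list and picking the first node that is still available. A toy sketch of that selection (the node names and availability set are invented for the example, not taken from any HACMP interface):

```shell
# Pick a fallover target: the first node in the priority list,
# other than the failed node, that is currently available.
# AVAILABLE and the node names are hypothetical.
pick_fallover_node() {
    failed=$1; shift
    for node in "$@"; do
        [ "$node" = "$failed" ] && continue
        case " $AVAILABLE " in
            *" $node "*) echo "$node"; return 0 ;;
        esac
    done
    return 1
}

AVAILABLE="nodeB nodeC"
target=$(pick_fallover_node nodeA nodeA nodeB nodeC)
echo "resource group moves to: $target"
```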
RG dependencies: clrgdependency -t
/etc/hosts: used for name resolution; all cluster node IP interfaces must be added to this file.
/etc/inittab: hacmp:2:once:/usr/es/sbin/cluster/etc/rc.init >/dev/console 2>&1 will start clcomdES and clstrmgrES.
/etc/rc.net is called by cfgmgr to configure and start TCP/IP during the boot process.
C-SPOC uses clcomdES to execute commands on remote nodes.
C-SPOC commands are located in /usr/es/sbin/cluster/cspoc.
You should not stop a node with the forced option on more than one node at a time; the same applies to RGs in concurrent mode.
Cluster commands are in /usr/es/sbin/cluster.
User administration: cl_usergroup
Create a concurrent VG: smitty cl_convg
To find resource group information: clRGinfo -p
HACMP planning:
- The maximum number of nodes in a cluster is 32.
- In an HACMP cluster, heartbeat messages are exchanged via IP networks and point-to-point networks.
- An IP label represents the name associated with a specific IP address.
- Service IP label/address: an IP address used for client access. There are 2 types of service IP addresses:
  - Shared service IP address: can be active on only one node at a time.
  - Node-bound service IP address: an IP address that can be configured on only one node.
- Methods of providing highly available service IP addresses: IP address takeover (IPAT) via IP aliases, and IPAT via IP replacement.
- An IP alias is an IP address configured on a communication interface in addition to the base IP address. IP aliasing is an AIX function that is supported by HACMP. AIX supports multiple IP aliases on each communication interface, and each IP alias can be on a different subnet.
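Whether two addresses are on the same subnet is just a bitwise AND of each address against the netmask; a minimal sketch (all addresses and the mask are invented for illustration):

```shell
# Compute the IPv4 network address under a netmask, to check
# whether a base address and an alias share a subnet.
split_ip() { echo "$1" | tr '.' ' '; }

network_of() {
    # positional parameters: 4 address octets then 4 mask octets
    set -- $(split_ip "$1") $(split_ip "$2")
    echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}

base_net=$(network_of 192.168.10.5 255.255.255.0)
alias_net=$(network_of 192.168.20.5 255.255.255.0)
[ "$base_net" = "$alias_net" ] && echo "same subnet" || echo "different subnets"
```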
Network interfaces:
- Service interface: used to provide access to the application running on that node. The service IP address is monitored by HACMP via RSCT heartbeat.
- Boot interface: a communication interface. With IPAT via aliasing, during fallover the service IP label is aliased onto the boot interface.
- Persistent node IP label: useful for administrative purposes.

When an application is started or moved to another node together with its associated resource group, the service IP address can be configured in two ways:
- Replacing the base IP address of a communication interface. The service IP label and boot IP label must be on the same subnet.
- Configuring one communication interface with an additional IP address on top of the existing one. This method is IP aliasing; all IP addresses/labels must be on different subnets. IP aliasing is the default method.

HACMP security: implemented directly by clcomdES; uses HACMP ODM classes and the /usr/es/sbin/cluster/rhosts file to determine partners.
Resource group takeover relationships:
Resource group: a logical entity containing the resources to be made highly available by HACMP.
Resources: file systems, NFS, raw logical volumes, raw physical disks, service IP addresses/labels, application servers, and start/stop scripts.
To be made highly available by HACMP, each resource should be included in a resource group.
The takeover relationships are:
1. Cascading
2. Rotating
3. Concurrent
4. Custom
Cascading:
- A cascading resource group is activated on its home node by default.
- The resource group can be activated on a lower-priority node if the highest-priority node is not available at cluster startup.
- On node failure, the resource group falls over to the available node with the next priority.
- Upon node reintegration into the cluster, a cascading resource group falls back to its home node by default.

Attributes:
1. Inactive takeover (IT): initial acquisition of a resource group in case the home node is not available.
2. Fallover priority: can be configured in the default node priority list.
3. Cascading without fallback (CWOF): an attribute that modifies the fallback behavior. If the CWOF flag is set to true, the resource group will not fall back to any joining node; when the flag is false, the resource group falls back to the higher-priority node.
Rotating:
- At cluster startup, the first available node in the node priority list activates the resource group.
- If the resource group is on a takeover node, it never falls back to a higher-priority node when one becomes available.
- Rotating resource groups require the use of IP address takeover. The nodes in the resource chain must all share the same network connection to the resource group.
Concurrent:
- A concurrent RG can be active on multiple nodes at the same time.

Custom:
- Users have to explicitly specify the desired startup, fallover, and fallback procedures.
- Custom RGs support only IPAT via aliasing service IP addresses.
Startup options:
- Online on home node only
- Online on first available node
- Online on all available nodes
- Online using distribution policy: the resource group will only be brought online if the node has no other resource group online. You can check this with lssrc -ls clstrmgrES.

Fallover options:
- Fallover to next priority node in the list
- Fallover using dynamic node priority: the fallover node can be selected on the basis of its available CPU, its available memory, or the lowest disk usage. HACMP uses RSCT to gather this information, and the resource group falls over to the node that best meets the criterion.
- Bring offline: the resource group is brought offline if an error occurs. This option is designed for resource groups that are online on all available nodes.

Fallback options:
- Fallback to higher priority node in the list
- Never fallback

Basic steps to implement an HACMP cluster:
- Planning
- Install and connect the hardware
- Configure shared storage
- Install and configure the application software
- Install the HACMP software and reboot each node
- Define the cluster topology
- Synchronize the cluster topology
- Configure cluster resources
- Configure cluster resource groups and shared storage
- Synchronize the cluster
- Test the cluster

HACMP installation and configuration:
- HACMP release notes: /usr/es/lpp/cluster/doc
- smitty install_all is the fast path for installation.
- The cluster.es and cluster.cspoc images must be installed on all servers.
- Start the cluster communication daemon: startsrc -s clcomdES
- Options for upgrading the cluster: node-by-node migration and snapshot conversion.

Steps for migration:
- Stop cluster services on all nodes
- Upgrade the HACMP software on each node
- Start cluster services on one node at a time

Converting from a supported version of HAS to HACMP:
- The current software should be committed
- Save a snapshot
- Remove the old version
- Install HA 5.1 and verify

Check the previous version of the cluster software: lslpp -h "cluster*"
To save your HACMP configuration, create a snapshot in HACMP.
Remove the old version of HACMP: smitty install_remove (select software name cluster*)
lppchk -v and lppchk -c "cluster*" both run clean if the installation is OK.
After you have installed HA on the cluster nodes, you need to convert and apply the snapshot; converting the snapshot must be performed before rebooting the cluster nodes.
clconvert_snapshot -C -v <version> -s <snapshot> converts an old-version HA snapshot to the new version.
Rebooting after installation is required to activate the new cluster manager.
Verification and synchronization: smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization -> verify changes only
Perform node-by-node migration:
- Save the current configuration in a snapshot.
- Stop cluster services on one node using graceful with takeover.
- Verify the cluster services.
- Install the latest HACMP version.
- Check the installed software using lppchk.
- Reboot the node.
- Restart the HACMP software (smitty hacmp -> System Management -> Manage Cluster Services -> Start Cluster Services).
- Repeat the above steps on all nodes.

Logs documenting the migration: /tmp/hacmp.out, /tmp/cm.log, /tmp/clstrmgr.debug
The config_too_long message appears when the cluster manager detects that an event has been processing for longer than the specified time. To change the time interval: smitty hacmp -> Extended Configuration -> Extended Event Configuration -> Change/Show Time Until Warning.
Cluster snapshots are saved in /usr/es/sbin/cluster/snapshots.
The synchronization process will fail while migration is incomplete. To back out of the change you must restore the active ODM: smitty hacmp -> Problem Determination Tools -> Restore HACMP Configuration Database from Active Configuration.
Upgrading to a new HACMP version involves converting the ODM from the previous release to the current release. That is done by /usr/es/sbin/cluster/conversion/cl_convert -F -v 5.1; the log file for the conversion is /tmp/clconvert.log.
Clean-up after an interrupted installation: smitty install -> Software Maintenance -> Clean Up After an Interrupted Installation.
Network configuration:
Physical networks: TCP/IP-based, such as Ethernet and token ring; device-based, such as RS232 and target-mode SSA (tmssa).

Configuring the cluster topology:
- There are standard and extended configuration paths: smitty hacmp -> Initialization and Standard Configuration.
- IP aliasing is used as the default mechanism for service IP label/address assignment to a network interface.
- Configure nodes: smitty hacmp -> Initialization and Standard Configuration -> Configure Nodes to an HACMP Cluster (give the cluster name and node names).
- Configure resources: use "Configure Resources to Make Highly Available" (configure the IP address/label, application server, volume groups, logical volumes, and file systems).
- Configure resource groups: use "Configure HACMP Resource Groups"; you can choose cascading, rotating, custom, or concurrent.
- Assign resources to each resource group: Configure HACMP Resource Groups -> Change/Show Resources for a Resource Group.
- Verify and synchronize the cluster configuration.
- Display the cluster configuration.
Steps for cluster configuration using the extended path:
- Run discovery: running discovery retrieves the current AIX configuration information from all cluster nodes.
- Configuring an HA cluster: smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP Cluster -> Add/Change/Show an HA Cluster
- Defining a node: smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Nodes -> Add a Node to the HACMP Cluster
- Defining sites: this is optional.
- Defining networks: run discovery before network configuration.
  1. IP-based networks: smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Networks -> Add a Network to the HACMP Cluster -> select the type of network (enter the network name, type, netmask, enable IP takeover via IP aliases (the default is true), and the IP address offset for heartbeating over IP aliases).
- Defining communication interfaces: smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> HACMP Communication Interfaces/Devices -> Select Communication Interfaces -> add the node name, network name, network interface, IP label/address, and network type.
- Defining communication devices: smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Communication Interfaces/Devices -> Select Communication Devices.
- To see the boot IP labels on a node, use netstat -in.
- Defining persistent IP labels: a persistent IP label always stays on the same node, does not require installing an additional physical interface, and is not part of any resource group. smitty hacmp -> Extended Topology Configuration -> Configure Persistent Node IP Labels/Addresses -> Add a Persistent Node IP Label (enter the node name, network name, and node IP label/address).

Resource group configuration:
- smitty hacmp -> Initialization and Standard Configuration -> Configure HACMP Resource Groups -> Add a Standard Resource Group -> select Cascading/Rotating/Concurrent/Custom (enter the resource group name and the participating node names).
- Assign resources to the RG: smitty hacmp -> Initialization and Standard Configuration -> Configure HACMP Resource Groups -> Change/Show Resources for a Standard Resource Group (add the service IP label/address, VG, FS, and application servers).

Resource group and application management:
- Bring a resource group offline: smitty cl_admin -> HACMP Resource Group and Application Management -> Bring a Resource Group Offline.
- Bring a resource group online: smitty hacmp -> HACMP Resource Group and Application Management -> Bring a Resource Group Online.
- Move a resource group: smitty hacmp -> HACMP Resource Group and Application Management -> Move a Resource Group to Another Node.
C-SPOC (under smitty cl_admin):
- Manage HACMP services
- HACMP communication interface management
- HACMP resource group and application manipulation
- HACMP log viewing and management
- HACMP file collection management
- HACMP security and users management
- HACMP LVM
- HACMP concurrent LVM
- HACMP physical volume management
Post-implementation and administration:
- C-SPOC commands are located in the /usr/es/sbin/cluster/cspoc directory.
- HACMP for AIX ODM object classes are stored in /etc/es/objrepos.
- User and group administration in HACMP: smitty cl_usergroup

Problem determination:
- To verify the cluster configuration, use smitty clverify.dialog
- Log file for the output: /var/hacmp/clverify/clverify.log

HACMP log files:
- /usr/es/adm/cluster.log: generated by HACMP scripts and daemons.
- /tmp/hacmp.out: contains a line-by-line record of every command executed by the scripts.
- /usr/es/sbin/cluster/history/cluster.mmddyyyy: the system creates a cluster history file every day.
- /tmp/clstrmgr.debug: messages generated by clstrmgrES activity.
- /tmp/cspoc.log: generated by HACMP C-SPOC commands.
- /tmp/dms_loads.out: stores log messages every time HACMP triggers the deadman switch.
- /var/hacmp/clverify/clverify.log: cluster verification log.
- /var/ha/log/grpsvcs, /var/ha/log/topsvcs, /var/ha/log/grpglsm: daemon logs.

Snapshots: the primary information saved in a cluster snapshot is the data stored in the HACMP ODM classes (HACMPcluster, HACMPnode, HACMPnetwork, HACMPdaemons). The cluster snapshot utility stores the data it saves in two separate files: the ODM data file (.odm) and the cluster state information file (.info).
To create a cluster snapshot: smitty hacmp -> HACMP Extended Configuration -> HACMP Snapshot Configuration -> Add a Cluster Snapshot
Cluster verification and testing:
- The high and low water mark values are 33 and 24.
- The default value for syncd is 60.
- Before starting the cluster, the clcomd daemon is added to /etc/inittab and started by init.
- Verify the status of the cluster services: lssrc -g cluster (the cluster manager daemon (clstrmgrES), cluster SMUX peer daemon (clsmuxpd), and cluster topology services daemon (topsvcs) should be running).
- Status of the other cluster subsystems: lssrc -g topsvcs and lssrc -g emsvcs.
- In the /tmp/hacmp.out file, look for the node_up and node_up_complete events.
- To check the HACMP cluster status: /usr/sbin/cluster/clstat. To use this command the clinfo daemon must be running.
- To change the SNMP version: /usr/sbin/snmpv3_ssw -1.
- Stop the cluster services using smitty clstop: graceful, takeover, or forced. In the log file /tmp/hacmp.out, search for node_down and node_down_complete.
  - Graceful: the node's resources will be released, but will not be acquired by other nodes.
  - Graceful with takeover: the node's resources will be released and acquired by other nodes.
  - Forced: cluster services will be stopped, but the resource groups will not be released.
- Resource group states: online, offline, acquiring, releasing, error, temporary error, or unknown.
- Find the resource group status: /usr/es/sbin/cluster/utilities/clfindres or clRGinfo. Options: -t displays the settling time; -p displays priority override locations.
- To review the cluster topology: /usr/es/sbin/cluster/utilities/cltopinfo.
- NFS mount types: hard and soft; hard mount is the default choice.
- NFS exports file: /usr/es/sbin/cluster/etc/exports.
- If an adapter is configured with a service IP address: verify in /tmp/hacmp.out that the swap_adapter event has occurred, then confirm the service IP address has been moved using netstat -in.
- You can implement an RS232 heartbeat network between any 2 nodes.
- To test a serial connection: lsdev -Cc tty; the baud rate is set to 38400, parity to none, and bits per character to 8.
- To see whether RSCT is functioning: lssrc -ls topsvcs. To check RSCT group services: lssrc -ls grpsvcs.
- Monitor the heartbeat over all the defined networks: cllsif.log from /var/ha/run/topsvcs.clustername.
Prerequisites:
- PowerHA version 5.5 requires AIX 5300-09 and RSCT level 2.4.10.
- BOS components: bos.rte.*, bos.adt.*, bos.net.tcp.*
- bos.clvm.enh (when using enhanced concurrent resource manager access)
- The cluster.es.nfs fileset that comes with the PowerHA installation medium installs NFSv4; from the AIX BOS, bos.net.nfs.server 5.3.7.0 and bos.net.nfs.client 5.3.7.0 are required.
- All nodes must have the same version of RSCT; check using lslpp -l rsct*
Installing PowerHA: the release notes are in /usr/es/sbin/cluster/release_notes. Enter smitty install_all -> select the input device -> press F4 for a software listing -> Enter.
Steps to increase the size of a shared LUN:
- Stop the cluster on all nodes
- Run cfgmgr
- varyonvg vgname
- lsattr -El hdisk#
- chvg -g vgname
- lsvg vgname
- varyoffvg vgname
- On the subsequent cluster nodes that share the VG: run cfgmgr, lsattr -El hdisk#, and importvg -L vgname hdisk#
- Synchronize

PowerHA creates a backup copy of the modified files during synchronization on all nodes. These backups are stored in the /var/hacmp/filebackup directory.
The file collection logs are stored in the /var/hacmp/log/clutils.log file.

User and group administration:
- Adding a user: smitty cl_usergroup -> Users in an HACMP Cluster -> Add a User to the Cluster (the same menu also lists users, changes/shows the characteristics of a user, and removes a user from the cluster).
- Adding a group: smitty cl_usergroup -> Groups in an HACMP Cluster -> Add a Group to the Cluster (the same menu also lists groups, changes/shows the characteristics of a group, and removes a group from the cluster).
- The command used to change a password on all cluster nodes: /usr/es/sbin/cluster/utilities/clpasswd

smitty cl_usergroup -> Users in an HACMP Cluster:
- Add a user to the cluster
- List users in the cluster
- Change/show characteristics of a user in the cluster
- Remove a user from the cluster

smitty cl_usergroup -> Groups in an HACMP Cluster:
- Add a group to the cluster
- List groups in the cluster
- Change a group in the cluster
- Remove a group

smitty cl_usergroup -> Passwords in an HACMP Cluster

Importing VGs automatically: smitty hacmp -> Extended Configuration -> HACMP Extended Resource Configuration -> Change/Show Resources and Attributes for a Resource Group -> set "Automatically import volume groups" to true.

C-SPOC LVM: smitty cl_admin -> HACMP Logical Volume Management
- Shared volume groups
- Shared logical volumes
- Shared file systems
- Synchronize shared LVM mirrors (synchronize by VG / synchronize by LV)
- Synchronize a shared VG definition

C-SPOC concurrent LVM: smitty cl_admin -> HACMP Concurrent LVM
- Concurrent volume groups
- Concurrent logical volumes
- Synchronize concurrent LVM mirrors

C-SPOC physical volume management: smitty cl_admin -> HACMP Physical Volume Management
- Add a disk to the cluster
- Remove a disk from the cluster
- Cluster disk replacement
- Cluster datapath device management

Cluster verification: smitty hacmp -> Extended Verification -> Extended Verification and Synchronization. Verification log files are stored in /var/hacmp/clverify:
- /var/hacmp/clverify/clverify.log - verification log
- /var/hacmp/clverify/pass/nodename - if verification succeeds
- /var/hacmp/clverify/fail/nodename - if verification fails

Automatic cluster verification runs each time you start cluster services and every 24 hours. To configure it: smitty hacmp -> Problem Determination Tools -> HACMP Verification -> Automatic Cluster Configuration Monitoring.
Cluster status monitoring: /usr/es/sbin/cluster/clstat (with the -a and -o options).
/usr/es/sbin/cluster/utilities/cldump provides a snapshot of the key cluster status components.
clshowsrv displays the status of the HACMP subsystems.
Disk heartbeat:
- It is a non-IP heartbeat.
- It uses a dedicated disk/LUN.
- It is a point-to-point network.
- If more than 2 nodes exist in your cluster, you will need a minimum of n non-IP heartbeat networks.
- Disk heartbeating typically requires 4 seeks/second: each of the two nodes writes to the disk and reads from the disk once per second. The filemon tool monitors the seeks.

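The 4 seeks/second figure above is simply each of the two nodes doing one write and one read per second against the shared disk; as arithmetic:

```shell
# Expected seek load for disk heartbeating between two nodes:
# each node writes once and reads once per second.
nodes=2
ops_per_node_per_sec=2   # 1 write + 1 read
seeks_per_sec=$((nodes * ops_per_node_per_sec))
echo "expected disk heartbeat load: $seeks_per_sec seeks/second"
```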
Configuring disk heartbeat:
- Vpaths are configured as member disks of an enhanced concurrent volume group: smitty lvm -> Volume Groups -> Add a Volume Group -> give the VG name, PV names, and VG major number, and set "Create VG concurrent capable" to enhanced concurrent.
- Import the new VG on all nodes using smitty importvg, or importvg -V 53 -y c23vg vpath5
- Create the diskhb network: smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Networks -> Add a Network to the HACMP Cluster -> choose diskhb
- Add 2 communication devices: smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Communication Interfaces/Devices -> Add Communication Interfaces/Devices -> Add Pre-defined Communication Interfaces and Devices -> Communication Devices -> choose diskhb
- Create one communication device for the other node as well.

Testing disk heartbeat connectivity: /usr/sbin/rsct/dhb_read is used to test the validity of a diskhb connection:
- dhb_read -p vpath0 -r receives data over the diskhb network
- dhb_read -p vpath3 -t transmits data over the diskhb network

Monitoring disk heartbeat: monitor the activity of the disk heartbeats via lssrc -ls topsvcs; watch the "Missed HBs" field.
Configure HACMP application monitoring: smitty cm_cfg_appmon -> Add a Process Application Monitor -> give the process names and the application startup/stop scripts.
Application availability analysis tool: smitty hacmp -> System Management -> Resource Group and Application Management -> Application Availability Analysis

Commands:
- List the cluster topology: /usr/es/sbin/cluster/utilities/cllsif, /usr/es/sbin/cluster/clstat
- Start the cluster: smitty clstart; monitor /tmp/hacmp.out and check for node_up_complete.
- Stop the cluster: smitty clstop; monitor /tmp/hacmp.out and check for node_down_complete.
- Determine the state of the cluster: /usr/es/sbin/cluster/utilities/clcheck_server
- Display the status of the HACMP subsystems: clshowsrv -v or -a
- Display the topology information: cltopinfo -c, -n, -w, or -i
- Monitor the heartbeat activity: lssrc -ls topsvcs (check for dropped packets and errors)
- Display resource group attributes: clRGinfo -v, -p, -t, -c, or -a, or clfindres
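Several of the checks above reduce to scanning lssrc-style columnar output for a subsystem's state; a minimal sketch against an invented sample (real output would come from lssrc -g cluster):

```shell
# Report a subsystem's status from lssrc-style columnar output.
# The sample text is invented for illustration.
sample='Subsystem         Group            PID          Status
 clstrmgrES       cluster          12345        active
 clinfoES         cluster                       inoperative'

status=$(printf '%s\n' "$sample" | awk '$1 == "clstrmgrES" {print $NF}')
echo "clstrmgrES is $status"
```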


IBM AIX - VIO Made Easy - A Complete Overview!!
Raaj Tilak S, 5:06 AM, AIX, IBM AIX - VIO
PowerVM: allows you to increase the utilization of servers. PowerVM includes logical partitioning, micro-partitioning, system virtualization, VIO, the hypervisor, and so on.
Simultaneous Multi Threading : SMT is an IBM microprocessor technology
that allows 2 separate H/W instruction streams to run concurrently on the
same physical processor.
Virtual Ethernet: virtual LANs allow secure connections between logical partitions without the need for a physical IO adapter or cabling. The ability to securely share Ethernet bandwidth across multiple partitions increases H/W utilization.
Virtual SCSI: VSCSI provides secure communication between the partitions and the VIO server. The combination of VSCSI and VIO capabilities allows you to share storage adapter bandwidth and to subdivide single large disks into smaller segments. The adapters and disks can be shared across multiple partitions, increasing utilization.
VIO server: allows a group of partitions to share physical resources. The VIO server can use both virtualized storage and network adapters, making use of VSCSI and virtual Ethernet.
Redundant VIO server: AIX or Linux partitions can be a client of one or more VIO servers at the same time. A good strategy to improve availability for sets of client partitions is to connect them to 2 VIO servers. Another reason for redundancy is the ability to upgrade to the latest technologies without affecting production workloads.
Micro-Partitioning: sharing the processing capacity of one or more processors among logical partitions. The benefit of Micro-Partitioning is that it allows significantly increased overall utilization of processor resources. A micro-partition must have at least 0.1 processing units. The maximum number of partitions on any System p server is 254.
Uncapped Mode : The processing capacity can exceed the entitled capacity
when resources are available in the shared processor pool and the micro
partition is eligible to run.

Capped Mode : The processing capacity can never exceed the entitled
capacity.
Virtual Processors :A virtual processor is a representation of a physical
processor that is presented to the operating system running in a micro
partition.
If a micro-partition has 1.60 processing units and 2 virtual processors, each virtual processor will have 0.80 processing units.
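The division above can be checked with a one-line calculation; the entitlement and virtual-processor count are just the example values from the text:

```shell
# Example from the text: 1.60 processing units spread across 2 virtual processors
ENT=1.60
VPS=2
# Each virtual processor is backed by ENT / VPS processing units
awk -v e="$ENT" -v v="$VPS" 'BEGIN { printf "%.2f\n", e / v }'   # prints 0.80
```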
Dedicated processors : Dedicated processors are whole processors that
are assigned to dedicated LPARs . The minimum processor allocation for an
LPAR is one.
IVM(Integrated virtualization manager): IVM is a h/w management
solution that performs a subset of the HMC features for a single server,
avoiding the need of a dedicated HMC server.
Live Partition Mobility: allows you to move running AIX or Linux partitions from one physical Power6 server to another without disruption.
VIO
The VIO version is 1.5.
The VIO command line interface is IOSCLI.
oem_setup_env switches from the restricted VIO shell to the root environment.
The command for configuration through SMIT is cfgassist.
The initial login to the VIO server is padmin.
Help for VIO commands, e.g.: help errlog
Hardware requirements for creating VIO:
1. Power5 or Power6
2. HMC
3. At least one storage adapter
4. If you want to share a physical disk, then one big physical disk
5. Ethernet adapter
6. At least 512 MB memory
Latest version for vio is 2.1 fixpack 23
Copying the Virtual IO Server DVD media to a NIM server:
mount /cdrom
cd /cdrom
cp /cdrom/bosinst.data /nim/resources
Execute the smitty installios command.
Using smitty installios you can install the VIO software.
The topas -cecdisp flag shows detailed disk statistics.
The viostat -extdisk flag shows detailed disk statistics.
wklmgr and wkldagent are for handling the workload manager. They can be used to record performance data, which can be viewed with wkldout.
chtcpip: command for changing TCP/IP parameters
viosecure: command for handling the security settings
mksp: create a storage pool
chsp: add or remove physical volumes from a storage pool
lssp: list information about storage pools
mkbdsp: attach storage from a storage pool to a virtual SCSI adapter
rmbdsp: remove storage from a virtual SCSI adapter and return it to the storage pool
The default storage pool is rootvg.
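A minimal sketch of how these storage pool commands fit together; the pool, disk, device, and adapter names (datapool, hdisk3, vdisk1, vhost0) are hypothetical, so verify the exact flags against your VIOS level:

```shell
# Create a storage pool named 'datapool' backed by hdisk3
mksp -f datapool hdisk3

# Carve 10 GB out of the pool and map it to virtual SCSI adapter vhost0
mkbdsp -sp datapool 10G -bd vdisk1 -vadapter vhost0

# Confirm the pool and its contents
lssp
```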
Creation of VIO server using HMC version 7 :
Select the managed system -> Configuration -> Create Logical Partition ->
VIO server
Enter the partition name and ID.
Check the mover service box if the VIO server partition to be created will be
supporting partition mobility.
Give a partition profile name ex:default
Processors : You can assign entire processors to your partition for dedicated
use, or you can assign partial processors units from the shared processor
pool. Select shared.
Specify the minimum, desired and maximum processing units.
Specify minimum, desired and maximum virtual processors, and set the uncapped weight (for example, 191).
The system will try to allocate the desired values.
The partition will not start if the managed system cannot provide the minimum amount of processing units.
You cannot dynamically increase the amount of processing units to more than the maximum.
Assign the memory also as min, desired and max.
The ratio between the minimum and maximum amount of memory cannot be more than 1/64.
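A quick way to sanity-check a profile against the 1/64 rule (max may not exceed 64 times min); the MB values are hypothetical examples:

```shell
# Hypothetical profile values in MB; the profile is invalid if max > 64 * min
MIN_MB=512
MAX_MB=16384
awk -v min="$MIN_MB" -v max="$MAX_MB" \
  'BEGIN { if (max <= 64 * min) print "ratio OK"; else print "max memory too large for min" }'
```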
I/O: select the physical IO adapters for the partition. Required means the partition will not be able to start unless these are available to this partition. Desired means that the partition can start also without these adapters. A required adapter cannot be moved in a dynamic LPAR operation.
VIO server partition requires a fiber channel adapter to attach SAN disks for
the client partitions. It also requires an Ethernet adapter for shared Ethernet
adapter bridging to external networks.
VIO requires a minimum of 30 GB of disk space.
Create virtual Ethernet and SCSI adapters: increase the maximum number of virtual adapters to 100.
The maximum number of adapters must not be set to more than 1024.
In actions -> select create -> Ethernet adapter give Adapter ID and VLAN id.
Select Access External Network Check Box to use this adapter as a gateway
between internal and external network.
And also create SCSI adapter also.
VIO server S/W installation:
1. Place the CD/DVD in the P5 box.
2. Activate the VIO server by clicking Activate. Select the default partition profile.
3. Then check the 'Open terminal window or console session' option, click Advanced, and OK.
4. Under the boot mode drop-down list, select SMS.
After installation is complete, log in as padmin and press 'a' (for the s/w maintenance agreement terms).
license -accept for accepting the license.
Creating a shared Ethernet adapter:
1. lsdev -virtual (check the virtual Ethernet adapter)
2. lsdev -type adapter (check the physical Ethernet adapter)
3. Use the lsmap -all -net command to check the slot numbers of the virtual Ethernet adapter.
4. mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
5. lsmap -all -net
6. Use cfgassist or the mktcpip command to configure TCP/IP:
7. mktcpip -hostname vio_server1 -inetaddr 9.3.5.196 -interface ent3 -netmask 255.255.244.0 -gateway 9.3.4.1
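The shared Ethernet adapter steps above can be sketched as one padmin session; the adapter names, IP address, netmask, and gateway are the example values from the text:

```shell
# Identify the virtual and physical Ethernet adapters and their slots
lsdev -virtual
lsdev -type adapter
lsmap -all -net

# Bridge physical ent0 with virtual ent2 (default PVID 1); creates the SEA
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

# Configure TCP/IP on the resulting interface
mktcpip -hostname vio_server1 -inetaddr 9.3.5.196 -interface ent3 \
        -netmask 255.255.244.0 -gateway 9.3.4.1
```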
Defining virtual disks
Virtual disks can be whole physical disks, logical volumes or files. The physical disks can be local or SAN disks.
Create the virtual disks:
1. Log in as padmin and run the cfgdev command to rebuild the list of visible devices.
2. lsdev -virtual (make sure virtual SCSI server adapters are available, ex: vhost0)
3. lsmap -all --> to check the slot numbers and vhost adapter numbers
4. mkvg -f -vg rootvg_clients hdisk2 --> creating the rootvg_clients VG
5. mklv -lv dbsrv_rvg rootvg_clients 10G
Creating virtual device mappings:
1. lsdev -vpd | grep vhost
2. mkvdev -vdev dbsrv_rvg -vadapter vhost2 -dev dbsrv_rvg
3. lsdev -virtual
4. lsmap -all
The fget_config -Av command is provided on the IBM DS4000 series for a listing of LUN names.
Virtual SCSI optical devices:
A DVD or CD device can be virtualized and assigned to client partitions. Only one VIO client can access the device at a time.
Steps:
1. Let the DVD drive be assigned to the VIO server.
2. Create a server SCSI adapter using the HMC.
3. Run the cfgdev command to get the new vhost adapter. Check using lsdev -virtual.
4. Create the virtual device for the DVD drive (mkvdev -vdev cd0 -vadapter vhost3 -dev vcd).
5. Create a client SCSI adapter in each LPAR using the HMC.
6. Run cfgmgr.
Moving the drive:
1. Through the dsh command, find which LPAR is currently holding the drive.
2. Find the vscsi adapter using lscfg | grep Cn (n is the slot number).
3. rmdev -Rl vscsin
4. Run cfgmgr in the target LPAR.

Unconfiguring the DVD drive:
1. rmdev -dev vcd -ucfg
2. lsdev -slots
3. rmdev -dev pci5 -recursive -ucfg
4. cfgdev
5. lsdev -virtual
Mirroring the VIO rootvg:
1. chvg -factor 6 rootvg (rootvg can include up to 5 PVs with 6096 PPs each)
2. extendvg -f rootvg hdisk2
3. lspv
4. mirrorios -f hdisk2
5. lsvg -lv rootvg
6. bootlist -mode normal -ls
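The rootvg mirroring steps above can be sketched as one annotated session; hdisk2 is the example mirror disk from the text:

```shell
# Allow rootvg to address more physical partitions per disk
chvg -factor 6 rootvg

# Add the mirror disk and mirror the VIOS operating system onto it
extendvg -f rootvg hdisk2
mirrorios -f hdisk2     # note: mirrorios typically ends by rebooting the VIOS

# After the reboot, verify the copies and the boot list
lsvg -lv rootvg
bootlist -mode normal -ls
```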
Creating partitions:
1. Create a new partition using the HMC with AIX/Linux.
2. Give partition ID and partition name.
3. Give proper memory settings (min/desired/max).
4. Skip the physical IO.
5. Give proper processing units (min/desired/max).
6. Create a virtual Ethernet adapter (give adapter ID and VLAN ID).
7. Create a virtual SCSI adapter.
8. In optional settings:
o Enable connection monitoring
o Automatically start with managed system
o Enable redundant error path reporting
9. Boot modes: select normal.
Advanced Virtualization:
Providing continuous availability of VIO servers : use multiple VIO servers for
providing highly available virtual scsi and shared Ethernet services.
IVM supports a single VIO server.
Virtual scsi redundancy can be achieved by using MPIO and LVM mirroring at
client partition and VIO server level.
Continuous availability for VIO:
o Shared Ethernet adapter failover
o Network interface backup in the client
o MPIO in the client with SAN
o LVM mirroring
Virtual SCSI redundancy:
Virtual SCSI redundancy can be achieved using MPIO and LVM mirroring. The client uses MPIO to access a SAN disk, and LVM mirroring to access 2 SCSI disks.
MPIO: MPIO provides a highly available virtual SCSI configuration. The disks on the storage subsystem are assigned to both virtual IO servers. MPIO for virtual SCSI devices only supports failover mode.
Configuring MPIO:
o Create 2 virtual IO server partitions
o Install both VIO servers
o Change fc_err_recov to fast_fail and dyntrk (AIX tolerates cabling changes) to yes: chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
o Reboot the VIO servers
o Create the client partitions. Add virtual Ethernet adapters
o Use the fget_config command (fget_config -Av) to get the LUN-to-hdisk mappings
o Use the lsdev -dev hdisk -vpd command to retrieve the information
o The reserve_policy for each disk must be set to no_reserve: chdev -dev hdisk2 -attr reserve_policy=no_reserve
o Map the hdisks to vhost adapters: mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server
o Install the client partitions
o Configure the client partitions
o Test MPIO
Configure the client partitions:
o Check the MPIO configuration (lspv, lsdev -Cc disk)
o Run lspath
o Enable the health check mode: chdev -l hdisk0 -a hcheck_interval=50 -P
o Enable the vscsi client adapter path timeout: chdev -l vscsi0 -a vscsi_path_to=30 -P
o Change the priority of a path: chpath -l hdisk0 -p vscsi0 -a priority=2
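The client-side MPIO tuning above amounts to a short sequence; hdisk0 and vscsi0 are the example device names, and -P defers each change until the next boot:

```shell
# Verify the disk is seen through two paths
lspath

# Health-check the paths every 50 seconds
chdev -l hdisk0 -a hcheck_interval=50 -P

# Fail a hung vscsi path after 30 seconds
chdev -l vscsi0 -a vscsi_path_to=30 -P

# Demote the path through vscsi0 (priority 1 is the highest)
chpath -l hdisk0 -p vscsi0 -a priority=2
```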
Testing MPIO:
o lspath
o Shut down VIO2
o lspath
o Start VIO2
o lspath
LVM Mirroring: This is for setting up highly available virtual scsi
configuration. The client partitions are configured with 2 virtual scsi
adapters. Each of these virtual scsi adapters is connected to a different VIO
server and provides one disk to the client partition.
Configuring LVM mirroring:
o Create 2 virtual IO partitions; select one Ethernet adapter and one storage adapter
o Install both VIO servers
o Configure the virtual SCSI adapters on both servers
o Create client partitions. Each client partition needs to be configured with 2 virtual SCSI adapters
o Add one or two virtual Ethernet adapters
o Create the volume group and logical volumes on VIO1 and VIO2
o A logical volume from the rootvg_clients VG should be mapped to each of the 4 vhost devices: mkvdev -vdev nimsrv_rvg -vadapter vhost0 -dev vnimsrv_rvg
o lsmap -all
o When you bring up the client partitions you should have hdisk0 and hdisk1. Mirror the rootvg:
o lspv
o lsdev -Cc disk
o extendvg rootvg hdisk1
o mirrorvg -m rootvg hdisk1
o Test LVM mirroring
Testing LVM mirroring:
o lsvg -l rootvg
o Shut down VIO2
o lspv hdisk1 (check the PV state, stale partitions)
o Reactivate VIO2 and varyonvg rootvg
o lspv hdisk1
o lsvg -l rootvg
Shared Ethernet adapter: it can be used to connect a physical network to a virtual Ethernet network, allowing several client partitions to share one physical adapter.
Shared Ethernet redundancy: this protects against temporary failure of communication with external networks. Approaches to achieve continuous availability:
o Shared Ethernet adapter failover
o Network interface backup
Shared Ethernet adapter failover: It offers Ethernet redundancy. In a SEA
failover configuration 2 VIO servers have the bridging functionality of the
SEA. They use a control channel to determine which of them is supplying the
Ethernet service to the client. The client partition gets one virtual Ethernet
adapter bridged by 2 VIO servers.
Requirements for configuring SEA failover:
o One SEA on one VIOS acts as the primary adapter and the second SEA on the second VIOS acts as a backup adapter.
o Each SEA must have at least one virtual Ethernet adapter with the 'access external network' flag (trunk flag) checked. This enables the SEA to provide bridging functionality between the 2 VIO servers.
o This adapter on both the SEAs has the same PVID.
o The priority value defines which of the 2 SEAs will be the primary and which the secondary. An adapter with priority 1 will have the highest priority.
Procedure for configuring SEA failover:
o Configure a virtual Ethernet adapter via DLPAR (ent2):
o Select the VIO --> click the task button --> choose DLPAR --> Virtual Adapters
o Click Actions --> Create --> Ethernet adapter
o Enter the slot number for the virtual Ethernet adapter into adapter ID
o Enter the port virtual LAN ID (PVID). The PVID allows the virtual Ethernet adapter to communicate with other virtual Ethernet adapters that have the same PVID
o Select IEEE 802.1q
o Check the 'access external network' box
o Give the virtual adapter a low trunk priority
o Click OK
o Create another virtual adapter to be used as a control channel on VIOS1 (give another VLAN ID, do not check the 'access external network' box) (ent3)
o Create the SEA on VIO1 with the failover attribute: mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3 (ex: ent4)
o Create a VLAN Ethernet adapter on the SEA to communicate with the external VLAN-tagged network: mkvdev -vlan ent4 -tagid 222 (ex: ent5)
o Assign an IP address to the SEA VLAN adapter on VIOS1 using mktcpip
o Same steps for VIO2 also (give the higher trunk priority: 2)
Client LPAR procedure:
o Create the client LPAR same as above.
Network interface backup: NIB can be used to provide redundant access to external networks when 2 VIO servers are used.
Configuring NIB:
o Create 2 VIO server partitions
o Install both VIO servers
o Configure each VIO server with one virtual Ethernet adapter. Each VIO server needs to be on a different VLAN
o Define the SEA with the correct VLAN ID
o Add virtual SCSI adapters
o Create client partitions
o Define the EtherChannel using smitty etherchannel
Configuring multiple shared processor pools:
Configuration --> Shared processor pool management --> select the pool name
VIOS security:
Enable basic firewall settings: viosecure -firewall on
View all open ports in the firewall configuration: viosecure -firewall view
View current security settings: viosecure -view -nonint
Change system security settings to default: viosecure -level default
List all failed logins: lsfailedlogin
Dump the global command log: lsgcl
Backup:
Create a mksysb file of the system on an NFS mount: backupios -file /mnt/vios.mksysb -mksysb
Create a backup of all structures of VGs and/or storage pools: savevgstruct vdiskvg (data will be stored to /home/ios/vgbackups)
List all backups made with savevgstruct: restorevgstruct -ls
Back up the system to an NFS-mounted file system: backupios -file /mnt
Performance monitoring:
Retrieve statistics for ent0: entstat -all ent0
Reset the statistics for ent0: entstat -reset ent0
View disk statistics: viostat 2
Show a summary for the system: viostat -sys 2
Show disk stats by adapter: viostat -adapter 2
Turn on disk performance counters: chdev -dev sys0 -attr iostat=true
topas -cecdisp
Link aggregation on the VIO server:
Link aggregation means you can give one IP address to two network cards and connect them to two different switches for redundancy. Only one network card will be active at a time.
Devices --> Communication --> EtherChannel / IEEE 802.3ad Link Aggregation --> Add an EtherChannel / Link Aggregation
Select ent0 and mode 8023ad.
Select a backup adapter for redundancy, ex: ent1.
A virtual adapter will be created automatically, named ent2.
Then put the IP address: smitty tcpip --> Minimum Configuration and Startup --> select ent2 --> put the IP address.

IBM AIX - Few Important Definitions and Acronyms
Raaj Tilak S 5:05 PM AIX, IBM AIX - General
CoD - Capacity on Demand. The ability to add compute capacity in the form of CPU or memory to
a running system by simply activating it. The resources must be pre-staged in the system prior to
use and are (typically) turned on with an activation key. There are several different pricing models
for CoD.
DLPAR - Dynamic Logical Partition. This was used originally as a further clarification on the concept of an LPAR as one that can have resources dynamically added or removed. The most popular usage is as a verb, i.e., to DLPAR (add) resources to a partition.
HEA - Host Ethernet Adapter. The physical port of the IVE interface on some of the Power 6
systems. A HEA port can be added to a port group and shared amongst LPARs or placed in
promiscuous mode and used by a single LPAR. (See IVE)
HMC - Hardware Management Console. An "appliance" server that is used to manage Power 4, 5,
and 6 hardware. The primary purpose is to enable / control the virtualization technologies as well
as provide call-home functionality, remote console access, and gather operational data.
IVE - Integrated Virtual Ethernet. The capability to provide virtualized Ethernet services to LPARs
without the need of VIOS. This functionality was introduced on several Power 6 systems.
IVM - Integrated Virtualization Manager. This is a management interface that installs on top of
the VIOS software that provides much of the HMC functionality. It can be used instead of a HMC for
some systems. It is the only option for virtualization management on the blades as they cannot
have HMC connectivity.
LHEA - Logical Host Ethernet Adapter. The virtual interface of an IVE in a client LPAR. These communicate via a HEA to the outside / physical world. (See IVE)
LPAR - Logical Partition. This is a collection of system resources (CPU, Memory, I/O adapters)
that can host an operating system. To the operating system this collection of resources appears to
be a complete physical system. Some or all of the resources on a LPAR may be shared with other
LPARs in the physical system.
LV - Logical Volume. A collection of one or more LPs (Logical Partitions) in a VG (Volume Group)
that provide storage for filesystems, journal logs, paging space, etc... See the LVM section for
additional information.
LVCB - Logical Volume Control Block. A LVM structure, traditionally within the LV, that contains
metadata for the LV. See the LVM section for additional information.

MES - Miscellaneous Equipment Specification. This is a change order to a system, typically in the
form of an upgrade. A RPO MES is for Record Purposes Only. Both specify to IBM changes that are
made to a system.
MSPP - Multiple Shared Processor Pools. This is a capability introduced in Power 6
systems that allows for more than one SPP.
NIM - Network Installation Management / Network Install Manager (IBM documentation
refers to both expansions of the acronym.) NIM is a means to perform remote initial BOS
installs, and manage software on groups of AIX systems.
ODM - Object Data Manager. A database and supporting methods used for storing
system configuration data in AIX. See the ODM section for additional information.
PP - Physical Partition. An LVM concept where a disk is divided into evenly sized
sections. These PP sections are the backing of LPs (Logical Partitions) that are used to
build volumes in a volume group. See the LVM section for additional information.
PV - Physical Volume. A PV is an LVM term for an entire disk. One or more PVs are used
to construct a VG (Volume Group). See the LVM section for additional information.
PVID - Physical Volume IDentifier. A unique ID that is used to track disk devices on a
system. This ID is used in conjunction with the ODM database to define /dev directory
entries. See the LVM section for additional information.
SMIT - System Management Interface Tool. An extensible X Window / curses interface to
administrative commands. See the SMIT section for additional information.
SPOT - Shared Product Object Tree. This is an installed copy of the /usr file system. It is
used in a NIM environment as a NFS mounted resource to enable remote booting and
installation.
SPP - Shared Processor Pool. This is an organizational grouping of CPU resources that
allows caps and guaranteed allocations to be set for an entire group of LPARs. Power 5
systems have a single SPP, Power 6 systems can have multiple.
VG - Volume Group. A collection of one or more PVs (Physical Volumes) that have been
divided into PPs (Physical Partitions) that are used to construct LVs (Logical Volumes).
See the LVM section for additional information.
VGDA - Volume Group Descriptor Area. This is a region of each PV (Physical Volume) in
a VG (Volume Group) that is reserved for metadata that is used to describe and manage
all resources in the VG. See the LVM section for additional information.

IBM AIX Installation with CD
Raaj Tilak S 2:21 AM IBM AIX - Installation / Migration
Before you perform this step, make sure you have reliable backups of your
data and any customized applications or volume groups. The instructions on
how to create a system backup are described later in this article.
Using this scenario, you can install the AIX operating system for the first time
or overwrite an existing version of the operating system. This scenario
involves the following steps:

Step 1. Prepare your system

o There must be adequate disk space and memory available. AIX 5L Version 5.2 and AIX 5L Version 5.3 require 128MB of memory and 2.2GB of physical disk space.
o Make sure your hardware installation is complete, including all external devices.
o If your system needs to communicate with other systems and access their resources, make sure you have the information in the following worksheet before proceeding with the installation:

Network Attribute    Value
Network interface    For example: en0, et0
Host name
IP address
Network mask
Nameserver
Domain name
Gateway

Step 2. Boot from the AIX product CD

1. Insert the AIX Volume 1 CD into the CD-ROM device.
2. Make sure all external devices attached to the system, such as CD-ROM drives, tape drives, DVD drives, and terminals, are turned on. Only the CD-ROM drive from which you will install AIX should contain the installation media.
3. Power on the system.
4. When the system beeps twice, press F5 on the keyboard or 5 on an ASCII terminal. If you have a graphics display, you will see the keyboard icon on the screen when the beeps occur. If you have an ASCII terminal, you will see the word "keyboard" when the beeps occur.
5. Select the system console by pressing F1 or 1 on an ASCII terminal and press Enter.
6. Select the English language for the BOS installation menus by typing a 1 in the Choice field. Press Enter to open the Welcome to Base Operating System Installation and Maintenance screen.
7. Type 2 to select 2 Change/Show Installation Settings and Install in the Choice field and press Enter.

Welcome to Base Operating System
Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

1 Start Install Now with Default Settings
2 Change/Show Installation Settings and Install
3 Start Maintenance Mode for System Recovery

88 Help ?
99 Previous Menu

>>> Choice [1]: 2

Step 3. Set and verify BOS installation settings

1. In the Installation and Settings screen, verify that the installation settings are correct by checking the method of installation (new and complete overwrite), the disk or disks you want to install, the primary language environment settings, and the advanced options.
If the default choices are correct, type 0 and press Enter to begin the BOS installation. The system automatically reboots after installation is complete. Go to Step 4, Configure the system after installation. Otherwise, go to sub-step 2.
2. To change the System Settings, which includes the method of installation and disk where you want to install, type 1 in the Choice field and press Enter.

Installation and Settings

Either type 0 and press Enter to install with current settings,
or type the number of the setting you want to change
and press Enter.

1 System Settings:
Method of Installation..................New and Complete Overwrite
Disk Where You Want to Install..........hdisk0

>>> Choice [0]: 1


4. Type 1 for New and Complete Overwrite in the Choice field and press Enter. The Change Disk(s) Where You Want to Install screen now displays.

Change Disk(s) Where You Want to Install

Type one or more numbers for the disk(s) to be used for
installation and press Enter. To cancel a choice, type the
corresponding number and press Enter. At least one
bootable disk must be selected. The current choice is
indicated by >>>.

Name      Location Code  Size(MB)  VG Status  Bootable
1 hdisk0  04-B0-00-2,0   4296      None       Yes
2 hdisk1  04-B0-00-5,0   4296      None       Yes
3 hdisk2  04-B0-00-6,0   12288     None       Yes

>>> 0  Continue with choices indicated above
66  Disks not known to Base Operating System Installation
77  Display More Disk Information
88  Help ?
99  Previous Menu

>>> Choice [0]:


7. In the Change Disk(s) Where You Want to Install screen:
1. Select hdisk0 by typing a 1 in the Choice field and press Enter. The disk will now be selected as indicated by >>>. To unselect the destination disk, type the number again and press Enter.
2. To finish selecting disks, type a 0 in the Choice field and press Enter. The Installation and Settings screen now displays with the selected disks listed under System Settings.
8. Change the Primary Language Environment Settings to English (United States). Use the following steps to change the Cultural Convention, Language, and Keyboard to English:
1. Type 2 in the Choice field on the Installation and Settings screen to select the Primary Language Environment Settings option.
2. Type the number corresponding to English (United States) as the Cultural Convention in the Choice field and press Enter.
3. Select the appropriate keyboard and language options.
You do not need to select the More Options selection, because you are using the default options in this scenario.
9. Verify that the selections are correct in the Overwrite Installation Summary screen, as follows:

Overwrite Installation Summary

Disks: hdisk0
Cultural Convention: en_US
Language: en_US
Keyboard: en_US
64 Bit Kernel Enabled: No
JFS2 File Systems Created: No
Desktop: CDE
Enable System Backups to install any system: Yes
Optional Software being installed:

>>> 1 Continue with Install
88 Help ?
99 Previous Menu
>>> Choice [1]:

10. Press Enter to begin the BOS installation. The system automatically reboots after installation is complete.

Step 4. Configure the system after installation

1. After a new and complete overwrite installation, the Configuration Assistant opens on systems with a graphics display. On systems with an ASCII display, the Installation Assistant opens.
2. Select the Accept Licenses option to accept the electronic licenses for the operating system.
3. Set the date and time, set the password for the administrator (root user), and configure the network communications (TCP/IP).
Use any other options at this time. You can return to the Configuration Assistant or the Installation Assistant by typing configassist or smitty assist at the command line.
4. Select Exit the Configuration Assistant and select Next. Or, press F10 or ESC+0 to exit the Installation Assistant.
5. If you are in the Configuration Assistant, select Finish now. Do not start the Configuration Assistant when restarting AIX, and select Finish.
At this point, the BOS installation is complete, and the initial configuration of the system is complete.

IBM AIX - Migration with CD
Raaj Tilak S 2:31 AM AIX, IBM AIX - Installation / Migration
If you are overwriting an existing system, gather the TCP/IP information before you begin this
scenario. Also, before you perform a migration installation, make sure you have reliable backups
of your data and any customized applications or volume groups. The instructions on how to create
a system backup are described later in this article.
Using this scenario, you can migrate a system from AIX 4.3.3 (or later) to AIX 5.3.

Step 1. Prepare for the migration

Before starting the migration, complete the following prerequisites:
1. Ensure that the root user has a primary authentication method of SYSTEM. You can check this condition by typing the following command:
# lsuser -a auth1 root
If needed, change the value by typing the following command:
# chuser auth1=SYSTEM root
2. Before you begin the installation, other users who have access to your system must be logged off.
3. Verify that your applications will run on AIX 5L Version 5.3. Also, check if your applications are binary compatible with AIX 5L Version 5.3. For details on binary compatibility, check out the AIX 5L Version 5 binary compatibility Web site. If your system is an application server, verify that there are no licensing issues. Refer to your application documentation or provider to verify on which levels of AIX your applications are supported and licensed.
4. Check that your hardware microcode is up to date.
5. All requisite hardware, including any external devices, such as tape drives or CD/DVD-ROM drives, must be physically connected and powered on.
6. Use the errpt command to generate an error report from entries in the system error log. To display a complete detailed report, type the following:
# errpt -a
7. There must be adequate disk space and memory available. AIX 5L Version 5.3 requires 128MB of memory and 2.2GB of physical disk space.
8. Run the pre-migration script located in the mount_point/usr/lpp/bos directory on your CD. To mount the CD, run the following command:
# mount -v cdrfs -o ro /dev/cdN /mnt
where "N" is your CD drive number.
9. Make a backup copy of your system software and data. The instructions on how to create a system backup are described elsewhere in this article.
10. Always refer to the release notes for the latest migration information.
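The CD mount and pre-migration check above can be sketched as follows; cd0 is a typical device name and pre_migration is the usual name of the script shipped under usr/lpp/bos on the media, but verify both on your system:

```shell
# Mount the product CD read-only (cd0 is an example device name)
mount -v cdrfs -o ro /dev/cd0 /mnt

# Run the pre-migration checks shipped on the CD and review its report
/mnt/usr/lpp/bos/pre_migration
```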

Step 2. Boot from the AIX product CD

1. If they are not already on, turn on your attached devices.
2. Insert the AIX Volume 1 CD into the CD-ROM device.
3. Reboot the system by typing the following command:
# shutdown -r
4. When the system beeps twice, press F5 on the keyboard or 5 on an ASCII terminal. If you have a graphics display, you will see the keyboard icon on the screen when the beeps occur. If you have an ASCII terminal, you will see the word "keyboard" when the beeps occur.
5. Select the system console by pressing F1 or 1 on an ASCII terminal and press Enter.
6. Select the English language for the BOS installation menus by typing a 1 in the Choice field and press Enter. The Welcome to Base Operating System Installation and Maintenance menu opens.
7. Type 2 to select Change/Show Installation Settings and Install in the Choice field and press Enter.

Welcome to Base Operating System
Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

1 Start Install Now with Default Settings
2 Change/Show Installation Settings and Install
3 Start Maintenance Mode for System Recovery

88 Help ?
99 Previous Menu

>>> Choice [1]: 2

Step 3. Verify migration installation settings


and begin installation
1.

Verify that migration is the method of installation. If migration is not the method
of installation, select it now. Select the disk or disks you want to install.
1 System Settings:
Method of Installation....................Migration
Disk Where You Want to Install............hdisk0

2.

Select Primary Language Environment Settings after install.

3.

Type 3 and press Enter to select More Options. To use the Help menu to learn
more about the options available during a migration installation, type 88 and
press Enter in the Installation Options menu.

4.

Verify the selections in the Migration Installation Summary screen and


press Enter.

5.

When the Migration Confirmation menu displays, follow the menu instructions to
list system information or continue with the migration by typing 0 and
pressing Enter.
Migration Confirmation

Either type 0 and press Enter to continue the installation,


or type the number of your choice and press Enter.

1. List the saved Base System configuration files which


will not be merged into the system. These files are
saved in /tmp/bos.

2. List the filesets which will be removed and not replaced.


3. List directories which will have all current contents
removed.
4. Reboot without migrating.

Acceptance of license agreements is required before using
the system. You will be prompted to accept after the system
reboots.

>>> 0 Continue with the migration.


88 Help ?

------------------------------------------------------------

WARNING: Selected files, directories, and filesets


(installable options) from the Base System will be removed.
Choose 2 or 3 for more information.

>>> Choice[0]:

Step 4. Verify system configuration after


installation
After the migration is complete, the system will reboot. Verify the system configuration, as
follows:
1.

After a migration installation, the Configuration Assistant opens on systems with
a graphics display; the Installation Assistant opens on systems with an ASCII
display.

2.

Select the Accept Licenses option to accept the electronic licenses for the
operating system.

3.

Verify the administrator (root user) password and network communications


(TCP/IP) information.
You can set any other options at this time, if needed. You can return to the Configuration
Assistant or the Installation Assistant later by typing configassist or smitty assist at the
command line.

4.

Select Exit the Configuration Assistant and select Next. Or,


press F10 or ESC+0 to exit the Installation Assistant.

5.

If you are in the Configuration Assistant, select Finish now. Do not start
the Configuration Assistant when restarting AIX, and select Finish.

6.

When the login prompt displays, log in as the root user to perform system
administration tasks.

7.

Run the /usr/lpp/bos/post_migration script.
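After the post_migration script runs, it is worth confirming the new level and checking fileset consistency; a short hedged example using standard AIX commands:

```shell
# Run the saved post-migration checks:
/usr/lpp/bos/post_migration

# Confirm the new OS level:
oslevel -r

# Verify that installed filesets are complete and consistent:
lppchk -v
```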

IBM AIX - OS Upgrade nimadm 12 phases


Raaj Tilak S 12:13 AM AIX, IBM AIX - Installation / Migration, IBM AIX - NIM

IBM AIX - OS Upgrade nimadm 12 phases!!

The nimadm utility offers several advantages over a conventional migration:
Reduced downtime for the client: The migration executes while the system is up and
running as normal. There is no disruption to any of the applications or services running on
the client, so the upgrade can be done at any time. Once the upgrade is complete, you
schedule a short outage to reboot the client so that it restarts at the later level of AIX.
Flexibility: The nimadm process is very flexible and can be customized using some of the
optional NIM customization resources, such as image_data, bosinst_data, pre/post_migration
scripts, exclude_files, and so on.
Quick recovery from migration failures: All changes are performed on the copied rootvg
(altinst_rootvg). If there are any problems with the migration, the original rootvg is still
available and the system has not been impacted. If a migration fails or terminates at any
stage, nimadm is able to quickly recover from the event and clean up afterwards. There is
little for the administrator to do except determine why the migration failed, rectify the
situation, and attempt the nimadm process again. If the migration completed but issues are
discovered after the reboot, then the administrator can back out easily by booting from the
original rootvg disk.
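Backing out after a completed-but-problematic migration amounts to pointing the bootlist back at the original rootvg disk and rebooting; a sketch, assuming hdisk1 still holds the old rootvg:

```shell
# Point the boot order back at the original rootvg disk
# (hdisk1 is an assumption -- check lspv on your system first):
bootlist -m normal hdisk1

# Verify the change, then reboot onto the old level:
bootlist -m normal -o
shutdown -Fr
```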
The nimadm command performs a migration in 12 phases. All migration activity is logged on
the NIM master in the /var/adm/ras/alt_mig directory. It is useful to understand each
phase before performing a migration. After starting the alt_disk process from the NIM master,
you will see output like the following; these are the pre-alt_disk steps:
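The migration shown below is started from the NIM master with a single nimadm command. The client name webmanual01 and disk hdisk0 come from this example's output; the lpp_source, SPOT, and cache volume group names are assumptions used to illustrate the usual flags:

```shell
# -c target NIM client, -l lpp_source and -s SPOT for the new level,
# -d client disk to clone onto, -j volume group on the master for the
# cache file systems, -Y accepts the software license agreements:
nimadm -c webmanual01 -l lpp_source_61 -s spot_61 -d hdisk0 -j nimvg -Y
```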

0513-029 The biod Subsystem is already active.


Multiple instances are not supported.
0513-059 The nfsd Subsystem has been started. Subsystem PID is 3780796.
0513-059 The rpc.mountd Subsystem has been started. Subsystem PID is 1237104.
0513-059 The nfsrgyd Subsystem has been started. Subsystem PID is 3477732.
0513-059 The gssd Subsystem has been started. Subsystem PID is 3743752.
0513-029 The rpc.lockd Subsystem is already active.
Multiple instances are not supported.
0513-029 The rpc.statd Subsystem is already active.

Multiple instances are not supported.


starting upgrade now
Initializing the NIM master.
Initializing NIM client webmanual01.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/webmanual01_alt_mig.log
Starting Alternate Disk Migration.
Explanation of Phase 1: After starting the nfsd, rpc.mountd, gssd, nfsrgyd, rpc.lockd, and rpc.statd
processes in the pre-alt_disk steps, the master issues the alt_disk_install command to the client, which
makes a copy of the client's rootvg to the target disks. In this phase, the alternate root volume group
(altinst_rootvg) is created.

+----------------------------------------------------------------------------+
Executing nimadm phase 1.
+----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -M 6.1 -P1 -d "hdisk0"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
Creating logical volume alt_hd6
Creating logical volume alt_hd8

Creating logical volume alt_hd4


Creating logical volume alt_hd2
Creating logical volume alt_hd9var
Creating logical volume alt_hd3
Creating logical volume alt_hd1
Creating logical volume alt_hd10opt
Creating logical volume alt_lg_dumplv
Creating logical volume alt_lv_admin
Creating logical volume alt_lv_sw
Creating logical volume alt_lg_crmhome
Creating logical volume alt_lv_crmhome
Creating logical volume alt_paging00
Creating logical volume alt_hd11admin
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/adminOLD file system.
Creating /alt_inst/crmhome file system.
Creating /alt_inst/home file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/software file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/var file system.

Generating a list of files


for backup and restore into the alternate file system...
Phase 1 complete.
Explanation of Phase 2: The NIM master creates the cache file systems in its designated volume
group (nimvg in this output). Some initial checks for the required migration disk space are performed.

+----------------------------------------------------------------------------+
Executing nimadm phase 2.
+----------------------------------------------------------------------------+
Creating nimadm cache file systems on volume group nimvg.
Checking for initial required migration space.
Creating cache file system /webmanual01_alt/alt_inst
Creating cache file system /webmanual01_alt/alt_inst/admin
Creating cache file system /webmanual01_alt/alt_inst/adminOLD
Creating cache file system /webmanual01_alt/alt_inst/crmhome
Creating cache file system /webmanual01_alt/alt_inst/home
Creating cache file system /webmanual01_alt/alt_inst/opt
Creating cache file system /webmanual01_alt/alt_inst/sw
Creating cache file system /webmanual01_alt/alt_inst/tmp
Creating cache file system /webmanual01_alt/alt_inst/usr
Creating cache file system /webmanual01_alt/alt_inst/var

Explanation of Phase 3: The NIM master copies the NIM client's data to the cache file systems in
nimvg. This data copy is done over either rsh or nimsh.

+----------------------------------------------------------------------------+
Executing nimadm phase 3.
+----------------------------------------------------------------------------+
Syncing client data to cache ...
cannot access ./tmp/alt_lock: A file or directory in the path name does not
exist.
Explanation of Phase 4: If a pre-migration script resource has been specified, it is executed at this time.

+----------------------------------------------------------------------------+
Executing nimadm phase 4.
+----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.
Explanation of Phase 5: System configuration files are saved. Initial migration space is calculated and
appropriate file system expansions are made. The bos image is restored and the device database is
merged. All of the migration merge methods are executed, and some miscellaneous processing takes
place.

+----------------------------------------------------------------------------+
Executing nimadm phase 5.

+----------------------------------------------------------------------------+
Saving system configuration files.
Checking for initial required migration space.
Setting up for base operating system restore.
/webmanual01_alt/alt_inst
Restoring base operating system.
Merging system configuration files.
Running migration merge method: ODM_merge Config_Rules.
Running migration merge method: ODM_merge SRCextmeth.
Running migration merge method: ODM_merge SRCsubsys.
Running migration merge method: ODM_merge SWservAt.
Running migration merge method: ODM_merge pse.conf.
Running migration merge method: ODM_merge vfs.
Running migration merge method: ODM_merge xtiso.conf.
Running migration merge method: ODM_merge PdAtXtd.
Running migration merge method: ODM_merge PdDv.
Running migration merge method: convert_errnotify.
Running migration merge method: passwd_mig.
Running migration merge method: login_mig.
Running migration merge method: user_mrg.
Running migration merge method: secur_mig.
Running migration merge method: RoleMerge.

Running migration merge method: methods_mig.


Running migration merge method: mkusr_mig.
Running migration merge method: group_mig.
Running migration merge method: ldapcfg_mig.
Running migration merge method: ldapmap_mig.
Running migration merge method: convert_errlog.
Running migration merge method: ODM_merge GAI.
Running migration merge method: ODM_merge PdAt.
Running migration merge method: merge_smit_db.
Running migration merge method: ODM_merge fix.
Running migration merge method: merge_swvpds.
Running migration merge method: SysckMerge.
Explanation of Phase 6: All system filesets are migrated using installp. Any required RPM images are
also installed during this phase.

+----------------------------------------------------------------------------+
Executing nimadm phase 6.
+----------------------------------------------------------------------------+
Installing and migrating software.
Updating install utilities.
+----------------------------------------------------------------------------+
Pre-installation Verification...

+----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
Filesets listed in this section passed pre-installation verification
and will be installed.

Mandatory Fileset Updates
-------------------------
(being installed automatically due to their importance)
bos.rte.install 6.1.6.15 # LPP Install Commands

<< End of Success Section >>

+----------------------------------------------------------------------------+
BUILDDATE Verification ...
+----------------------------------------------------------------------------+
Verifying build dates...done

FILESET STATISTICS
------------------
1 Selected to be installed, of which:
1 Passed pre-installation verification
----
1 Total to be installed

+----------------------------------------------------------------------------+
Installing Software...
+----------------------------------------------------------------------------+

installp: APPLYING software for:


bos.rte.install 6.1.6.15

. . . . . << Copyright notice for bos >> . . . . . . .


Licensed Materials - Property of IBM

[LOTS OF OUTPUT]

Installation Summary
--------------------
Name                         Level      Part   Event   Result
-------------------------------------------------------------------
lwi.runtime                  6.1.6.15   USR    APPLY   SUCCESS
lwi.runtime                  6.1.6.15   ROOT   APPLY   SUCCESS
X11.compat.lib.X11R6_motif   6.1.6.15   USR    APPLY   SUCCESS
Java5.sdk                    5.0.0.395  USR    APPLY   SUCCESS
Java5.sdk                    5.0.0.395  ROOT   APPLY   SUCCESS
Java5.sdk                    5.0.0.395  USR    COMMIT  SUCCESS
Java5.sdk                    5.0.0.395  ROOT   COMMIT  SUCCESS
lwi.runtime                  6.1.6.15   USR    COMMIT  SUCCESS
lwi.runtime                  6.1.6.15   ROOT   COMMIT  SUCCESS
X11.compat.lib.X11R6_motif   6.1.6.15   USR    COMMIT  SUCCESS

install_all_updates: Generating list of updatable rpm packages.


install_all_updates: No updatable rpm packages found.

install_all_updates: Checking for recommended maintenance level 6100-06.


install_all_updates: Executing /usr/bin/oslevel -rf, Result = 6100-06
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Known Recommended Maintenance Levels
------------------------------------

Restoring device ODM database.


Explanation of Phase 7: If a post-migration script resource has been specified, it is executed at this
time.

+----------------------------------------------------------------------------+
Executing nimadm phase 7.
+----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.
Explanation of Phase 8: The bosboot command is run to create a client boot image, which is written to
the client's alternate boot logical volume (alt_hd5).

+----------------------------------------------------------------------------+
Executing nimadm phase 8.
+----------------------------------------------------------------------------+
Creating client boot image.
bosboot: Boot image is 47136 512 byte blocks.
Writing boot image to client's alternate boot disk hdisk0.
Explanation of Phase 9: All the migrated data is now copied from the NIM master's local cache file
system and synced to the client's alternate rootvg.

+----------------------------------------------------------------------------+

Executing nimadm phase 9.


+----------------------------------------------------------------------------+
Adjusting client file system sizes ...
Adjusting size for /
Adjusting size for /admin
Adjusting size for /adminOLD
Adjusting size for /crmhome
Adjusting size for /home
Adjusting size for /opt
Adjusting size for /sw
Adjusting size for /tmp
Adjusting size for /usr
Expanding /alt_inst/usr client filesystem.
Filesystem size changed to 12058624
Adjusting size for /var
Syncing cache data to client ...
Explanation of Phase 10: The NIM master cleans up and removes the local cache file systems.

+----------------------------------------------------------------------------+
Executing nimadm phase 10.
+----------------------------------------------------------------------------+

Unmounting client mounts on the NIM master.


forced unmount of /webmanual01_alt/alt_inst/var
forced unmount of /webmanual01_alt/alt_inst/usr
forced unmount of /webmanual01_alt/alt_inst/tmp
forced unmount of /webmanual01_alt/alt_inst/sw
forced unmount of /webmanual01_alt/alt_inst/opt
forced unmount of /webmanual01_alt/alt_inst/home
forced unmount of /webmanual01_alt/alt_inst/crmhome
forced unmount of /webmanual01_alt/alt_inst/adminOLD
forced unmount of /webmanual01_alt/alt_inst/admin
forced unmount of /webmanual01_alt/alt_inst
Removing nimadm cache file systems.
Removing cache file system /webmanual01_alt/alt_inst
Removing cache file system /webmanual01_alt/alt_inst/admin
Removing cache file system /webmanual01_alt/alt_inst/admin
Removing cache file system /webmanual01_alt/alt_inst/crmhome
Removing cache file system /webmanual01_alt/alt_inst/home
Removing cache file system /webmanual01_alt/alt_inst/opt
Removing cache file system /webmanual01_alt/alt_inst/sw
Removing cache file system /webmanual01_alt/alt_inst/tmp
Removing cache file system /webmanual01_alt/alt_inst/usr
Removing cache file system /webmanual01_alt/alt_inst/var

Explanation of Phase 11: The alt_disk_install command is called again to make the final adjustments
and put altinst_rootvg to sleep. The bootlist is set to the target disk.
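If you later need to inspect or discard the sleeping altinst_rootvg, the alt_rootvg_op command covers both cases; a hedged sketch, with hdisk0 taken from this example's output:

```shell
# Wake the alternate rootvg to examine it (mounts under /alt_inst):
alt_rootvg_op -W -d hdisk0

# Put it back to sleep when finished:
alt_rootvg_op -S

# Or remove the alternate rootvg definition entirely, freeing the disk:
alt_rootvg_op -X altinst_rootvg
```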

+----------------------------------------------------------------------------+
Executing nimadm phase 11.
+----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -M 6.1 -P3 -d "hdisk0"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/sw
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/crmhome
forced unmount of /alt_inst/admin
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.

Fixing LV control blocks...


Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk0 blv=hd5
Explanation of Phase 12: Cleanup is executed to end the migration.

+----------------------------------------------------------------------------+
Executing nimadm phase 12.
+----------------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client webmanual01.
Please review log to verify success
Initializing the NIM master.
Initializing NIM client webmanual01.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/webmanual01_alt_mig.log
Starting Alternate Disk Migration.
After the migration is complete, log in to the client and confirm that the bootlist is set to the
altinst_rootvg disk.
# lspv | grep rootvg
hdisk1 0000273ac30fdcfc rootvg active
hdisk0 000273ac30fdd6e altinst_rootvg active

# bootlist -m normal -o
hdisk0 blv=hd5
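To finish, reboot the client onto the migrated rootvg at the agreed time and confirm the new level afterwards; a short hedged example:

```shell
# Reboot onto the migrated disk at the scheduled downtime:
shutdown -Fr

# After the reboot, confirm the new level and the active rootvg disk:
oslevel -s
lspv | grep rootvg
```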

BM AIX - OS Upgrade nimadm 12 phases


Raaj Tilak S 12:13 AM AIX, IBM AIX - Installation / Migration, IBM AIX - NIM No
Comments

IBM AIX - OS Upgrade nimadm 12 phases!!


The nimadm utility offers several advantages over a conventional migration. Following are
the advantages of nimadm over other migration methods:
Reduced downtime for the client: The migration can execute while the system is up and
running as normal. There is no disruption to any of the applications or services running on
the client. Therefore, the upgrade can be done at a anytime time. Once upgrade complete
we need take a downtime from the client and scheduled a reboot in order to restart the
system at the later level of AIX.
Flexibility: The nimadm process is very flexible and it can be customized using some of the
optional NIM customization resources, such as image_data, bosinst_data, pre/post_migration
scripts, exclude_files, and so on.
Quick recovery from migration failures: All changes are performed on the copied rootvg
(altinst_rootvg). If there are any problems with the migration, the original rootvg is still
available and the system has not been impacted. If a migration fails or terminates at any
stage, nimadm is able to quickly recover from the event and clean up afterwards. There is
little for the administrator to do except determine why the migration failed, rectify the
situation, and attempt the nimadm process again. If the migration completed but issues are
discovered after the reboot, then the administrator can back out easily by booting from the
original rootvg disk.
The nimadm command performs a migration in 12 phases. All migration activity is logged on
the NIM master in the /var/adm/ras/alt_mig directory. It is useful to have knowledge of each
phase before performing a migration. After starting the alt_disk process from NIM master we
output as below, these are pre ALT_DISK steps

0513-029 The biod Subsystem is already active.


Multiple instances are not supported.
0513-059 The nfsd Subsystem has been started. Subsystem PID is 3780796.
0513-059 The rpc.mountd Subsystem has been started. Subsystem PID is 1237104.
0513-059 The nfsrgyd Subsystem has been started. Subsystem PID is 3477732.
0513-059 The gssd Subsystem has been started. Subsystem PID is 3743752.
0513-029 The rpc.lockd Subsystem is already active.
Multiple instances are not supported.
0513-029 The rpc.statd Subsystem is already active.
Multiple instances are not supported.
starting upgrade now
Initializing the NIM master.
Initializing NIM client webmanual01.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/webmanual01_alt_mig.log
Starting Alternate Disk Migration.
Explanation of Phase 1 : After starting nfsd , rpc.mountd , gssd , nfsrgyd,rpc.lockd, rpc.statd process in
the pre ALT_DISK , the master issues the alt_disk_install command to the client, which makes a copy of
the clients rootvg to the target disks. In this phase, the alternate root volume group (altinst_rootvg) is
created.

+----------------------------------------------------------------------------+

Executing nimadm phase 1.


+----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -M 6.1 -P1 -d "hdisk0"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
Creating logical volume alt_hd6
Creating logical volume alt_hd8
Creating logical volume alt_hd4
Creating logical volume alt_hd2
Creating logical volume alt_hd9var
Creating logical volume alt_hd3
Creating logical volume alt_hd1
Creating logical volume alt_hd10opt
Creating logical volume alt_lg_dumplv
Creating logical volume alt_lv_admin
Creating logical volume alt_lv_sw
Creating logical volume alt_lg_crmhome
Creating logical volume alt_lv_crmhome
Creating logical volume alt_paging00

Creating logical volume alt_hd11admin


Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/adminOLD file system.
Creating /alt_inst/crmhome file system.
Creating /alt_inst/home file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/software file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/var file system.
Generating a list of files
for backup and restore into the alternate file system...
Phase 1 complete.
Explanation of Phase 2 : The NIM master creates the cache file systems in the nimadmvg volume
group. Some initial checks for the required migration disk space are performed.

+----------------------------------------------------------------------------+
Executing nimadm phase 2.
+----------------------------------------------------------------------------+
Creating nimadm cache file systems on volume group nimvg.
Checking for initial required migration space.

Creating cache file system /webmanual01_alt/alt_inst


Creating cache file system /webmanual01_alt/alt_inst/admin
Creating cache file system /webmanual01_alt/alt_inst/adminOLD
Creating cache file system /webmanual01_alt/alt_inst/crmhome
Creating cache file system /webmanual01_alt/alt_inst/home
Creating cache file system /webmanual01_alt/alt_inst/opt
Creating cache file system /webmanual01_alt/alt_inst/sw
Creating cache file system /webmanual01_alt/alt_inst/tmp
Creating cache file system /webmanual01_alt/alt_inst/usr
Creating cache file system /webmanual01_alt/alt_inst/var
Explanation of Phase 3 : The NIM master copies the NIM clients data to the cache file systems in
nimvg. This data copy is done by either rsh or nimsh.

+----------------------------------------------------------------------------+
Executing nimadm phase 3.
+----------------------------------------------------------------------------+
Syncing client data to cache ...
cannot access ./tmp/alt_lock: A file or directory in the path name does not
exist.
Explanation of Phase 4 : If a pre-migration script resource has been specified, it is executed at this time.

+----------------------------------------------------------------------------+

Executing nimadm phase 4.


+----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.
Explanation of Phase 5 : System configuration files are saved. Initial migration space is calculated and
appropriate file system expansions are made. The bos image is restored and the device database is
merged. All of the migration merge methods are executed, and some miscellaneous processing takes
place.

+----------------------------------------------------------------------------+
Executing nimadm phase 5.
+----------------------------------------------------------------------------+
Saving system configuration files.
Checking for initial required migration space.
Setting up for base operating system restore.
/webmanual01_alt/alt_inst
Restoring base operating system.
Merging system configuration files.
Running migration merge method: ODM_merge Config_Rules.
Running migration merge method: ODM_merge SRCextmeth.
Running migration merge method: ODM_merge SRCsubsys.
Running migration merge method: ODM_merge SWservAt.
Running migration merge method: ODM_merge pse.conf.

Running migration merge method: ODM_merge vfs.


Running migration merge method: ODM_merge xtiso.conf.
Running migration merge method: ODM_merge PdAtXtd.
Running migration merge method: ODM_merge PdDv.
Running migration merge method: convert_errnotify.
Running migration merge method: passwd_mig.
Running migration merge method: login_mig.
Running migration merge method: user_mrg.
Running migration merge method: secur_mig.
Running migration merge method: RoleMerge.
Running migration merge method: methods_mig.
Running migration merge method: mkusr_mig.
Running migration merge method: group_mig.
Running migration merge method: ldapcfg_mig.
Running migration merge method: ldapmap_mig.
Running migration merge method: convert_errlog.
Running migration merge method: ODM_merge GAI.
Running migration merge method: ODM_merge PdAt.
Running migration merge method: merge_smit_db.
Running migration merge method: ODM_merge fix.
Running migration merge method: merge_swvpds.
Running migration merge method: SysckMerge.

Explanation of Phase 6: All system filesets are migrated using installp. Any required RPM images are
also installed during this phase.

+----------------------------------------------------------------------------+
Executing nimadm phase 6.
+----------------------------------------------------------------------------+
Installing and migrating software.
Updating install utilities.
+----------------------------------------------------------------------------+
Pre-installation Verification...
+----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
--------Filesets listed in this section passed pre-installation verification
and will be installed.

Mandatory Fileset Updates

------------------------(being installed automatically due to their importance)


bos.rte.install 6.1.6.15 # LPP Install Commands

<< End of Success Section >>

+----------------------------------------------------------------------------+
BUILDDATE Verification ...
+----------------------------------------------------------------------------+
Verifying build dates...done
FILESET STATISTICS
-----------------1 Selected to be installed, of which:
1 Passed pre-installation verification
---1 Total to be installed

+----------------------------------------------------------------------------+
Installing Software...
+----------------------------------------------------------------------------+

installp: APPLYING software for:


bos.rte.install 6.1.6.15

. . . . . << Copyright notice for bos >> . . . . . . .


Licensed Materials - Property of IBM

[LOTS OF OUTPUT]

Installation Summary
-------------------Name Level Part Event Result
-----------------------------------------------------------------------------lwi.runtime 6.1.6.15 USR APPLY SUCCESS
lwi.runtime 6.1.6.15 ROOT APPLY SUCCESS
X11.compat.lib.X11R6_motif 6.1.6.15 USR APPLY SUCCESS
Java5.sdk 5.0.0.395 USR APPLY SUCCESS
Java5.sdk 5.0.0.395 ROOT APPLY SUCCESS
Java5.sdk 5.0.0.395 USR COMMIT SUCCESS
Java5.sdk 5.0.0.395 ROOT COMMIT SUCCESS
lwi.runtime 6.1.6.15 USR COMMIT SUCCESS
lwi.runtime 6.1.6.15 ROOT COMMIT SUCCESS
X11.compat.lib.X11R6_motif 6.1.6.15 USR COMMIT SUCCESS

install_all_updates: Generating list of updatable rpm packages.


install_all_updates: No updatable rpm packages found.

install_all_updates: Checking for recommended maintenance level 6100-06.


install_all_updates: Executing /usr/bin/oslevel -rf, Result = 6100-06
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Known Recommended Maintenance Levels
-----------------------------------Restoring device ODM database.
Explanation of Phase 7 : If a post-migration script resource has been specified, it is executed at this
time.

+----------------------------------------------------------------------------+
Executing nimadm phase 7.
+----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.
Explanation of Phase 8 : The bosboot command is run to create a client boot image, which is written to
the clients alternate boot logical volume (alt_hd5)

+----------------------------------------------------------------------------+
Executing nimadm phase 8.
+----------------------------------------------------------------------------+
Creating client boot image.
bosboot: Boot image is 47136 512 byte blocks.
Writing boot image to client's alternate boot disk hdisk0.
Explanation of Phase 9 : All the migrated data is now copied from the NIM masters local cache file and
synced to the clients alternate rootvg.

+----------------------------------------------------------------------------+
Executing nimadm phase 9.
+----------------------------------------------------------------------------+
Adjusting client file system sizes ...
Adjusting size for /
Adjusting size for /admin
Adjusting size for /adminOLD
Adjusting size for /crmhome
Adjusting size for /home
Adjusting size for /opt
Adjusting size for /sw
Adjusting size for /tmp

Adjusting size for /usr


Expanding /alt_inst/usr client filesystem.
Filesystem size changed to 12058624
Adjusting size for /var
Syncing cache data to client ...
Explanation of Phase 10: The NIM master cleans up and removes the local cache file systems.

+----------------------------------------------------------------------------+
Executing nimadm phase 10.
+----------------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /webmanual01_alt/alt_inst/var
forced unmount of /webmanual01_alt/alt_inst/usr
forced unmount of /webmanual01_alt/alt_inst/tmp
forced unmount of /webmanual01_alt/alt_inst/sw
forced unmount of /webmanual01_alt/alt_inst/opt
forced unmount of /webmanual01_alt/alt_inst/home
forced unmount of /webmanual01_alt/alt_inst/crmhome
forced unmount of /webmanual01_alt/alt_inst/adminOLD
forced unmount of /webmanual01_alt/alt_inst/admin
forced unmount of /webmanual01_alt/alt_inst
Removing nimadm cache file systems.

Removing cache file system /webmanual01_alt/alt_inst


Removing cache file system /webmanual01_alt/alt_inst/admin
Removing cache file system /webmanual01_alt/alt_inst/adminOLD
Removing cache file system /webmanual01_alt/alt_inst/crmhome
Removing cache file system /webmanual01_alt/alt_inst/home
Removing cache file system /webmanual01_alt/alt_inst/opt
Removing cache file system /webmanual01_alt/alt_inst/sw
Removing cache file system /webmanual01_alt/alt_inst/tmp
Removing cache file system /webmanual01_alt/alt_inst/usr
Removing cache file system /webmanual01_alt/alt_inst/var

Explanation of Phase 11: The alt_disk_install command is called again to make the final adjustments
and put altinst_rootvg to sleep. The bootlist is set to the target disk.

+----------------------------------------------------------------------------+
Executing nimadm phase 11.
+----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -M 6.1 -P3 -d "hdisk0"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var

forced unmount of /alt_inst/usr


forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/sw
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/crmhome
forced unmount of /alt_inst/adminOLD
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk0 blv=hd5
Explanation of Phase 12: Cleanup is executed to end the migration.

+----------------------------------------------------------------------------+
Executing nimadm phase 12.
+----------------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client webmanual01.
Please review log to verify success
Initializing the NIM master.

Initializing NIM client webmanual01.


Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/webmanual01_alt_mig.log
Starting Alternate Disk Migration.
After the migration is complete, log in to the client and confirm that the bootlist is set to the altinst_rootvg disk.
# lspv | grep rootvg
hdisk1 0000273ac30fdcfc rootvg active
hdisk0 000273ac30fdd6e altinst_rootvg active

# bootlist -m normal -o
hdisk0 blv=hd5
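As a final sanity check, the bootlist output can be verified programmatically. This is a minimal sketch assuming the `bootlist -m normal -o` output format shown above; boot_disk is a hypothetical helper, not an AIX command.

```shell
# Hypothetical helper: pull the disk name out of `bootlist -m normal -o`
# output such as "hdisk0 blv=hd5".
boot_disk() {
    awk 'NR == 1 { print $1 }'
}

# Sample check using the output captured above; on a live client you would
# pipe the real command instead: bootlist -m normal -o | boot_disk
echo "hdisk0 blv=hd5" | boot_disk    # prints: hdisk0
```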

THE UNIX

IBM AIX - Commands (Contd) - LVM - Disks & Filesystems

Raaj Tilak S, 9:30 PM

IBM AIX Operating System - Some useful commands gathered from IBM and other websites!!!

LVM - Disks & Filesystems

List all PVs in a system along with VG membership

lspv

List all LVs on PV hdisk6

lspv -l hdisk6

List all imported VGs

lsvg

List all VGs that are imported and on-line

lsvg -o

The difference between lsvg and lsvg -o is the set of imported VGs that are currently offline.
List all LVs on VG vg01

lsvg -l vg01

List all PVs in VG vg02

lsvg -p vg02

List filesystems in a fstab-like format

lsfs

Get extended info about the /home filesystem

lsfs -q /home

Create the datavg VG on hdisk1 with 64 MB PPs

mkvg -y datavg -s 64 hdisk1

Create a 1 Gig LV on (previous) datavg

mklv -t jfs2 -y datalv datavg 16
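The trailing 16 is a PP count, not a size: with the 64 MB PP size chosen by the mkvg example above, 16 PPs make the 1 GB LV. A quick sanity check of the arithmetic:

```shell
pp_size_mb=64      # PP size from `mkvg -y datavg -s 64 hdisk1` above
lv_size_mb=1024    # target LV size: 1 GB
pp_count=$(( lv_size_mb / pp_size_mb ))
echo "$pp_count"   # prints: 16 -- the last argument to mklv
```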

Create a log device on datavg VG using 1 PP

mklv -t jfs2log -y datalog1 datavg 1

Format the log device created in previous example

logform /dev/datalog1

Place a filesystem on the previously created datalv

crfs -v jfs2 -d datalv -m /data01 -A y

A jfs2 log must exist in this VG and be formatted with logform. (This was done in the
previous steps.) -m specifies the mount point for the file system, and -A y is an option to
mount it automatically (with mount -a).
Create a scalable VG called vg01 with two disks

mkvg -S -y vg01 hdisk1 hdisk2


Create a FS using the VG as a parameter

crfs -v jfs2 -g simplevg -m /data04 -A y -a size=100M

The VG name here is "simplevg". A default LV naming convention of fslvXX will
be used. The LV, and in this case the log LV, will be created automatically.
Take the datavg VG offline

varyoffvg datavg
Vary-on the datavg VG

varyonvg datavg

By default the import operation will vary-on the VG. An explicit vary-on will be
required for concurrent volume groups that can be imported onto two (or more)
systems at once, but only varied-on on one system at a time.
Remove the datavg VG from the system

exportvg datavg
Import the VG on hdisk5 as datavg

importvg -y datavg hdisk5

The VG in this example spans multiple disks, but it is only necessary to specify
a single member disk to the command. The LVM system will locate the other
member disks from the metadata provided on the single disk provided.
Import a VG on a disk by PVID as datavg

importvg -y datavg 00cc34b205d347fc


Grow the /var filesystem by 1 Gig

chfs -a size=+1G /var

In each of the chfs grow filesystem examples, AIX will automatically grow the
underlying LV to the appropriate size.
Grow the /var filesystem to 1 Gig

chfs -a size=1G /var

List the maximum LPs for LV fslv00

lslv fslv00 | grep MAX


Increase the maximum LPs for fslv00 LV

chlv -x 2048 fslv00

Create a mirrored copy of fslv08

mklvcopy -k -s y fslv08 2

syncvg -l fslv08 must be run if the -k (sync now) switch is not used
for mklvcopy.
Add hdisk3 and hdisk4 to the vg01 VG

extendvg vg01 hdisk3 hdisk4

Mirror rootvg (on hdisk0) to hdisk1

extendvg rootvg hdisk1

mirrorvg -S rootvg hdisk1

bosboot -ad hdisk0


bosboot -ad hdisk1
bootlist -m normal hdisk0 hdisk1

The -S option to mirrorvg mirrors the VG in the background.


Running bosboot on hdisk0 is not required - just thorough.

Find the file usage on the /var filesystem

du -smx /var

List users & PIDs with open files in /data04 mount

fuser -xuc /data04

List all mounted filesystems in a factor of Gigabytes


df -g    (-m and -k are also available)

Find what PV the LV called datalv01 is on

lslv -l datalv01

The "COPIES" column relates the mirror distribution of the PPs for each LP. (PPs
should only be listed in the first part of the COPIES section. See the next example.)
The "IN BAND" column tells how much of the used PPs in this PV are used for this LV.
The "DISTRIBUTION" column reports the number of PPs in each region of the PV.
(The distribution is largely irrelevant for most modern SAN applications.)
Create a LV with 3 copies in a VG with a single PV

mklv -c 3 -s n -t jfs2 -y badlv badvg 4

Note: This is an anti-example to demonstrate how the COPIES column works.
This LV violates strictness rules. The COPIES column from lslv -l badlv looks
like: 004:004:004
Move a LV from hdisk4 to hdisk5

migratepv -l datalv01 hdisk4 hdisk5


Move all LVs on hdisk1 to hdisk2

migratepv hdisk1 hdisk2

The migratepv command is synchronous: it does not return until the move is
complete. Mirroring and then breaking LVs is an alternative to explicitly migrating them. See
additional migratepv, mirrorvg, and mklvcopy examples in this section.
Put a PVID on hdisk1

chdev -l hdisk1 -a pv=yes

PVIDs are automatically placed on a disk when it is added to a VG.
Remove a PVID from a disk

chdev -l hdisk1 -a pv=clear

This will remove the PVID but not residual VGDA and other data on the
disk. dd can be used to scrub remaining data from the disk. The AIX install CD/DVD
also provides a "scrub" feature to (repeatedly) write patterns over data on disks.
Move (migrate) VG vg02 from hdisk1 to hdisk2

extendvg vg02 hdisk2


migratepv hdisk1 hdisk2
reducevg vg02 hdisk1

Mirroring and then unmirroring is another method to achieve this. See the
next example.
Move (mirror) VG vg02 from hdisk1 to hdisk2

extendvg vg02 hdisk2


mirrorvg -c 2 vg02
unmirrorvg vg02 hdisk1
reducevg vg02 hdisk1

In this example it is necessary to wait for the mirrors to synchronize before
breaking the mirror. The mirrorvg command in this example will not complete until
the mirror is established. The alternative is to mirror in the background, but then it
is up to the administrator to ensure that the mirror process is complete.
Create a striped jfs2 partition on vg01

mklv -C 2 -S 16K -t jfs2 -y vg01_lv01 vg01 400 hdisk1 hdisk2

This creates a stripe width of 2 with a (total) stripe size of 32K. This command
will result in an upper bound of 2 (the same as the stripe width) for the LV. If this LV is to
be extended to another two disks later, then the upper bound must be changed to 4
or specified during creation. The VG in this example was a scalable VG.
Determine VG type of VG myvg

lsvg myvg | grep "MAX PVs"

MAX PVs is 32 for normal, 128 for big, and 1024 for scalable VGs.
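The mapping from MAX PVs to VG type can be wrapped in a small helper. This is an illustrative sketch, not an AIX tool; on a live system the value would come from parsing lsvg output, whose exact column layout should be checked first.

```shell
# Classify a VG by its MAX PVs value (32 / 128 / 1024).
vg_type() {
    case "$1" in
        32)   echo "normal" ;;
        128)  echo "big" ;;
        1024) echo "scalable" ;;
        *)    echo "unknown" ;;
    esac
}

vg_type 32      # prints: normal
vg_type 1024    # prints: scalable
```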
Set the system to boot to the CDROM on next boot

bootlist -m normal cd0 hdisk0 hdisk1

The system will boot to one of the mirror pairs (hdisk0 or hdisk1) if the boot
from the CD ROM does not work. This can be returned to normal by repeating the
command without cd0.
List the boot device for the next boot

bootlist -m normal -o

IBM AIX - Commands

Raaj Tilak S, 12:17 PM

IBM AIX Operating System - Some useful commands gathered from IBM and other websites!!!

Kernel
How would I know if I am running a 32-bit kernel or 64-bit kernel?
To display if the kernel is 32-bit enabled or 64-bit enabled, type:

bootinfo -K
How do I know if I am running a uniprocessor kernel or a multiprocessor kernel?
/unix is a symbolic link to the booted kernel. To find out which kernel mode is running, enter ls
-l /unix and see what file /unix links to. The following are the three possible outputs of the ls
-l /unix command and their corresponding kernels:

/unix -> /usr/lib/boot/unix_up    # 32-bit uniprocessor kernel
/unix -> /usr/lib/boot/unix_mp    # 32-bit multiprocessor kernel
/unix -> /usr/lib/boot/unix_64    # 64-bit multiprocessor kernel
Note:
AIX 5L Version 5.3 does not support a uniprocessor kernel.
How can I change from one kernel mode to another?

During the installation process, one of the kernels, appropriate for the AIX version and the
hardware in operation, is enabled by default. Use the method from the previous question and
assume that the 32-bit kernel is enabled. Also assume that you want to boot it up in the 64-bit
kernel mode. This can be done by executing the following commands in sequence:

ln -sf /usr/lib/boot/unix_64 /unix
ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
bosboot -ad /dev/hdiskxx
shutdown -r

The /dev/hdiskxx device is where the boot logical volume /dev/hd5 is located. To find out what
xx is in hdiskxx, run the following command:

lslv -m hd5
Note:
In AIX V5.2, the 32-bit kernel is installed by default. In AIX V5.3, the 64-bit kernel is installed on
64-bit hardware and the 32-bit kernel is installed on 32-bit hardware by default.

Hardware
How do I know if my machine is capable of running AIX 5L Version 5.3?
AIX 5L Version 5.3 runs on all currently supported CHRP (Common Hardware Reference Platform)-based POWER hardware.
How do I know if my machine is CHRP-based?
Run the prtconf command. If it's a CHRP machine, the string chrp appears on the Model
Architecture line.
How do I know if my System p machine (hardware) is 32-bit or 64-bit?
To display if the hardware is 32-bit or 64-bit, type:

bootinfo -y
How much real memory does my machine have?
To display real memory in kilobytes (KB), type one of the following:

bootinfo -r
lsattr -El sys0 -a realmem
Can my machine run the 64-bit kernel?

64-bit hardware is required to run the 64-bit kernel.


What are the values of attributes for devices in my system?
To list the current values of the attributes for the tape device, rmt0, type:

lsattr -l rmt0 -E
To list the default values of the attributes for the tape device, rmt0, type:

lsattr -l rmt0 -D
To list the possible values of the login attribute for the TTY device, tty0, type:

lsattr -l tty0 -a login -R


To display system level attributes, type:

lsattr -E -l sys0
How many processors does my system have?
To display the number of processors on your system, type:

lscfg | grep proc


How many hard disks does my system have and which ones are in use?
To display the number of hard disks on your system, type:

lspv
How do I list information about a specific physical volume?
To find details about hdisk1, for example, run the following command:

lspv hdisk1
How do I get a detailed configuration of my system?
Type the following:

lscfg
The following options provide specific information:

-p    Displays platform-specific device information. The flag is applicable to AIX V4.2.1 or later.

-v    Displays the VPD (Vital Product Data) found in the customized VPD object class.

For example, to display details about the tape drive, rmt0, type:

lscfg -vl rmt0

You can obtain similar information by running the prtconf command.


How do I find out the chip type, system name, node name, model number, and so forth?
The uname command provides details about your system.

uname -p    Displays the chip type of the system. For example, PowerPC.
uname -r    Displays the release number of the operating system.
uname -s    Displays the system name. For example, AIX.
uname -n    Displays the name of the node.
uname -a    Displays the system name, node name, version, and machine ID.
uname -M    Displays the system model name. For example, IBM, 9114-275.
uname -v    Displays the operating system version.
uname -m    Displays the machine ID number of the hardware running the system.
uname -u    Displays the system ID number.
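The individual flags can be combined into a one-line inventory report. A small sketch; the flags behave the same on other Unix-like systems, though the values differ (uname -s prints AIX only on AIX):

```shell
# Gather a few uname fields into a single report line.
sys=$(uname -s)     # system name, e.g. AIX
rel=$(uname -r)     # release number
node=$(uname -n)    # node name
echo "system=$sys release=$rel node=$node"
```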

AIX
What version, release, and maintenance level of AIX is running on my system?
Type one of the following:

oslevel -r
lslpp -h bos.rte
How can I determine which fileset updates are missing from a particular AIX level?
To determine which fileset updates are missing from 5300-04, for example, run the following
command:

oslevel -rl 5300-04


What SP (Service Pack) is installed on my system?
To see which SP is currently installed on the system, run the oslevel -s command. Sample output
for an AIX 5L Version 5.3 system with TL4 and SP2 installed would be:

oslevel -s
5300-04-02
Is a CSP (Concluding Service Pack) installed on my system?
To see if a CSP is currently installed on the system, run the oslevel -s command. Sample output for
an AIX 5L Version 5.3 system with TL3 and CSP installed would be:

oslevel -s
5300-03-CSP
How do I create a file system?
The following command will create, within volume group testvg, a jfs file system of 10MB with
mounting point /fs1:

crfs -v jfs -g testvg -a size=10M -m /fs1


The following command will create, within volume group testvg, a jfs2 file system of 10MB with
mounting point /fs2 and having read-only permissions:

crfs -v jfs2 -g testvg -a size=10M -p ro -m /fs2


How do I change the size of a file system?
To increase the /usr file system size by 1000000 512-byte blocks, type:

chfs -a size=+1000000 /usr


Note:

In AIX V5.3, the size of a JFS2 file system can be shrunk, as well.
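When chfs is given a bare number (no M or G suffix), the unit is 512-byte blocks, so the 1000000 in the example above is roughly 488 MB:

```shell
blocks=1000000                        # bare chfs size argument
mb=$(( blocks * 512 / 1024 / 1024 ))  # convert 512-byte blocks to MB
echo "$mb MB"                         # prints: 488 MB
```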
How do I mount a CD?
Type the following:

mount -V cdrfs -o ro /dev/cd0 /cdrom


How do I mount a file system?
The following command will mount file system /dev/fslv02 on the /test directory:

mount /dev/fslv02 /test


How do I mount all default file systems (all standard file systems in the /etc/filesystems file
marked by the mount=true attribute)?
The following command will mount all such file systems:

mount {-a|all}
How do I unmount a file system?
Type the following command to unmount /test file system:

umount /test
How do I display mounted file systems?
Type the following command to display information about all currently mounted file systems:

mount
How do I remove a file system?
Type the following command to remove the /test file system:

rmfs /test
How can I defragment a file system?
The defragfs command can be used to improve or report the status of contiguous space within a
file system. For example, to defragment the file system /home, use the following command:

defragfs /home
Which fileset contains a particular binary?
To show that bos.acct contains /usr/bin/vmstat, type:

lslpp -w /usr/bin/vmstat
Or to show that bos.perf.tools contains /usr/bin/svmon, type:

which_fileset svmon
How do I display information about installed filesets on my system?
Type the following:

lslpp -l
How do I determine if all filesets of maintenance levels are installed on my system?
Type the following:

instfix -i | grep ML
How do I determine if a fix is installed on my system?
To determine if IY24043 is installed, type:

instfix -ik IY24043


How do I install an individual fix by APAR?
To install APAR IY73748 from /dev/cd0, for example, enter the command:

instfix -k IY73748 -d /dev/cd0


How do I verify if filesets have required prerequisites and are completely installed?
To show which filesets need to be installed or corrected, type:

lppchk -v
How do I get a dump of the header of the loader section and the symbol entries in symbolic
representation?
Type the following:

dump -Htv
How do I determine the amount of paging space allocated and in use?
Type the following:

lsps -a
How do I increase a paging space?
You can use the chps -s command to dynamically increase the size of a paging space. For example,
to increase the size of hd6 by 3 logical partitions, issue the following command:

chps -s 3 hd6
How do I reduce a paging space?
You can use the chps -d command to dynamically reduce the size of a paging space. For example,
to decrease the size of hd6 by four logical partitions, issue the following command:

chps -d 4 hd6
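chps counts in logical partitions, so the space added or removed depends on the PP size of the paging space's volume group. Assuming a 64 MB PP size (an illustrative value; verify with lsvg rootvg), chps -s 3 hd6 adds:

```shell
pp_mb=64    # assumed rootvg PP size; verify with: lsvg rootvg
lps=3       # -s 3 from the chps example above
added_mb=$(( lps * pp_mb ))
echo "$added_mb MB"   # prints: 192 MB
```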
How would I know if my system is capable of using Simultaneous Multi-threading (SMT)?
Your system is capable of SMT if it's a POWER5-based system running AIX 5L Version 5.3.
How would I know if SMT is enabled for my system?
If you run the smtctl command without any options, it tells you if it's enabled or not.
Is SMT supported for the 32-bit kernel?
Yes, SMT is supported for both the 32-bit and 64-bit kernels.
How do I enable or disable SMT?
You can enable or disable SMT by running the smtctl command. The following is the syntax:

smtctl [ -m off | on [ -w boot | now]]


The following options are available:

-m off     Sets SMT mode to disabled.
-m on      Sets SMT mode to enabled.
-w boot    Makes the SMT mode change effective on the next and subsequent reboots if you run the bosboot command before the next system reboot.
-w now     Makes the SMT mode change immediately, but it will not persist across reboots.

If neither the -w boot nor the -w now option is specified, then the mode change is made
immediately. It persists across subsequent reboots if you run the bosboot command before the
next system reboot.
How do I get partition-specific information and statistics?
The lparstat command provides a report of partition information and utilization statistics. This
command also provides a display of Hypervisor information.

Volume groups and logical


volumes
How do I know if my volume group is normal, big, or scalable?
Run the lsvg command on the volume group and look at the value for MAX PVs. The value is 32 for
normal, 128 for big, and 1024 for scalable volume group.
How can I create a volume group?
Use the following command, where -s partition_size sets the number of megabytes (MB) in each
physical partition, and partition_size is expressed in units of MB from 1 through 1024. (It's 1
through 131072 for AIX V5.3.) The partition_size variable must be equal to a power of 2 (for
example: 1, 2, 4, 8). The default value for standard and big volume groups is the lowest value that
remains within the limitation of 1016 physical partitions per physical volume. The default value for
scalable volume groups is the lowest value that accommodates 2040 physical partitions per physical
volume.

mkvg -y name_of_volume_group -s partition_size list_of_hard_disks
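The "lowest value within the 1016-PP limitation" can be computed by doubling a power-of-two PP size until the disk fits in 1016 PPs. A sketch (min_pp_size is an illustrative helper, not an AIX command):

```shell
# Smallest power-of-two PP size (in MB) that keeps a disk of disk_mb MB
# within the 1016-PPs-per-PV limit of standard and big VGs.
min_pp_size() {
    disk_mb=$1
    s=1
    while [ $(( (disk_mb + s - 1) / s )) -gt 1016 ]; do
        s=$(( s * 2 ))
    done
    echo "$s"
}

min_pp_size 146800    # ~143 GB disk: prints 256
min_pp_size 1000      # small disk: prints 1
```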


How can I change the characteristics of a volume group?
You use the following command to change the characteristics of a volume group:

chvg
How do I create a logical volume?
Type the following:

mklv
-y name_of_logical_volume name_of_volume_group number_of_partition
How do I increase the size of a logical volume?
To increase the size of the logical volume represented by the lv05 directory by three logical
partitions, for example, type:

extendlv lv05 3
How do I display all logical volumes that are part of a volume group (for example, rootvg)?
You can display all logical volumes that are part of rootvg by typing the following command:

lsvg -l rootvg
How do I list information about logical volumes?
Run the following command to display information about the logical volume lv1:

lslv lv1
How do I remove a logical volume?
You can remove the logical volume lv7 by running the following command:

rmlv lv7
The rmlv command removes only the logical volume, but does not remove other entities, such as
file systems or paging spaces that were using the logical volume.
How do I mirror a logical volume?
1. mklvcopy LogicalVolumeName NumberOfCopies

2. syncvg -v VolumeGroupName

How do I remove a copy of a logical volume?


You can use the rmlvcopy command to remove copies of logical partitions of a logical volume. To
reduce the number of copies of each logical partition belonging to logical volume testlv, enter:

rmlvcopy testlv 2
Each logical partition in the logical volume now has at most two physical partitions.
Queries about volume groups
To show volume groups in the system, type:

lsvg
To show all the characteristics of rootvg, type:

lsvg rootvg
To show disks used by rootvg, type:

lsvg -p rootvg
How to add a disk to a volume group?
Type the following:

extendvg VolumeGroupName hdisk0 hdisk1 ... hdiskn


How do I find out the maximum supported logical track group (LTG) size of my hard disk?
You can use the lquerypv command with the -M flag. The output gives the LTG size in KB. For
instance, the LTG size for hdisk0 in the following example is 256KB.

/usr/sbin/lquerypv -M hdisk0
256


You can also run the lspv command on the hard disk and look at the value for MAX REQUEST.
What does the syncvg command do?
The syncvg command is used to synchronize stale physical partitions. It accepts names of logical
volumes, physical volumes, or volume groups as parameters.
For example, to synchronize the physical partitions located on physical volumes hdisk4 and hdisk5,
use:

syncvg -p hdisk4 hdisk5


To synchronize all physical partitions from volume group testvg, use:

syncvg -v testvg
How do I replace a disk?
1. extendvg VolumeGroupName hdisk_new

2. migratepv hdisk_bad hdisk_new

3. reducevg -d VolumeGroupName hdisk_bad

How can I clone (make a copy of) the rootvg?


You can run the alt_disk_copy command to copy the current rootvg to an alternate disk. The
following example shows how to clone the rootvg to hdisk1.

alt_disk_copy -d hdisk1

Network
How can I display or set values for network parameters?

The no command sets or displays current or next boot values for network tuning parameters.
How do I get the IP address of my machine?
Type one of the following:

ifconfig -a
host Fully_Qualified_Host_Name

For example, type host cyclop.austin.ibm.com.
How do I identify the network interfaces on my server?
Either of the following two commands will display the network interfaces:

lsdev -Cc if
ifconfig -a
To get information about one specific network interface, for example, tr0, run the command:

ifconfig tr0
How do I activate a network interface?
To activate the network interface tr0, run the command:

ifconfig tr0 up
How do I deactivate a network interface?
For example, to deactivate the network interface tr0, run the command:

ifconfig tr0 down


How do I display routing table, interface, and protocol information?
To display routing table information for an Internet interface, type:

netstat -r -f inet
To display interface information for an Internet interface, type:

netstat -i -f inet
To display statistics for each protocol, type:

netstat -s -f inet

How do I record packets received or transmitted?


To record packets coming in and going out to any host on every interface, enter:

iptrace /tmp/nettrace
The trace information is placed into the /tmp/nettrace file.
To record packets received on an interface en0 from a remote host airmail over the telnet port,
enter:

iptrace -i en0 -p telnet -s airmail /tmp/telnet.trace


The trace information is placed into the /tmp/telnet.trace file.

Workload partitions
How do I create a workload partition?
To create a workload partition named temp with the IP Address xxx.yyy.zzz.nnn, type:

mkwpar -n temp -N address=xxx.yyy.zzz.nnn


To create a workload partition with the specification file wpar1.spec, type:

mkwpar -f /tmp/wpar1.spec
How do I create a new specification file for an existing workload partition wpar1?
To create a specification file wpar2.spec for an existing workload partition wpar1, type:

mkwpar -e wpar1 -o /tmp/wpar2.spec -w


How do I start a workload partition?
To start the workload partition called temp, type:

startwpar temp
How do I stop a workload partition?
To stop the workload partition called temp, type:

stopwpar temp

How do I view the characteristics of workload partitions?


To view the characteristics of all workload partitions, type:

lswpar
Name     State  Type  Hostname            Directory
-----------------------------------------------------
bar      A      S     bar.austin.ibm.com  /wpars/bar
foo      D      S     foo.austin.ibm.com  /wpars/foo
trigger  A      A     trigger             /
How do I log in to a workload partition?
To log in to the workload partition named wpar1 as user foo, type:

clogin wpar1 -l foo


How do I run a command in a workload partition?
To run the /usr/bin/ps command as user root in a workload partition named howdy, type:

clogin howdy -l root /usr/bin/ps


How do I remove a workload partition?
To remove the workload partition called temp, type:

rmwpar temp
To stop and remove the workload partition called temp preserving data on its file system, type:

rmwpar -p -s temp
Note: Workload Partitions (WPARs), a set of completely new software-based system
virtualization features, were introduced in IBM AIX Version 6.1.

Performance monitoring tools


How do I display virtual memory statistics?
To display a summary of the virtual memory statistics since boot, type:

vmstat
To display five summaries at 2-second intervals, type:

vmstat 2 5

To display a summary of the statistics for all of the workload partitions after boot, type:

vmstat -@ ALL
To display all of the virtual memory statistics available for all of the workload partitions, type:

vmstat -vs -@ ALL


How do I display statistics for all TTY, CPU, and Disks?
To display a single set of statistics for all TTY, CPU, and Disks since boot, type:

iostat
To display a continuous disk report at 2-second intervals for the disk with the logical name disk1,
type:

iostat -d disk1 2
To display 6 reports at 2-second intervals for the disk with the logical name disk1, type:

iostat disk1 2 6
To display 6 reports at 2-second intervals for all disks, type:

iostat -d 2 6
To display only file system statistics for all workload partitions, type:

iostat -F -@ ALL
To display system throughput of all workload partitions along with the system, type:

iostat -s -@ ALL
How do I display detailed local and remote system statistics?
Type the following command:

topas
To go directly to the process display, enter:

topas -P
To go directly to the logical partition display, enter:

topas -L
To go directly to the disk metric display, enter:

topas -D
To go directly to the file system display, enter:

topas -F
How do I report system unit activity?
Type the following command:

sar
To report processor activity for the first two processors, enter:

sar -u -P 0,1
This produces output similar to the following:
cpu  %usr  %sys  %wio  %idle
0    45    45    5     5
1    27    65    3     5