Front cover
Power Systems For AIX - Virtualization I: Implementing Virtualization
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International
Business Machines Corp., registered in many jurisdictions worldwide.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide:
Active Memory™ AIX 6™ AIX®
BladeCenter® DS4000® DS6000™
DS8000® Electronic Service Agent™ EnergyScale™
Enterprise Storage Server® Express® Focal Point™
HACMP™ IBM Systems Director Active Energy Manager™ Initiate®
i5/OS™ Notes® Passport Advantage®
POWER Hypervisor™ Power Systems™ Power Systems Software™
Power® PowerHA® PowerPC®
PowerVM® POWER6® POWER7+™
POWER7® pSeries® Redbooks®
SystemMirror® Tivoli®
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Windows and Windows NT are trademarks of Microsoft Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.
Other product and service names might be trademarks of IBM or other companies.
Contents
Trademarks
Purpose
Requirements
  Hardware requirements
  Software requirements
  Skills required to set up the lab
Setup instructions
  Configuration information
  Hardware setup instructions
    POWER7 processor-based servers
    Check HMC configuration
  Software setup instructions
    HMC
    External Storage
    Load AIX on the partitions
    NIM environment requirements
Verification procedures
Reference
Purpose
This Lab Setup Guide provides directions for installing, preparing, and verifying the lab
hardware and software in preparation for conducting a class of course AN30.
The Requirements sections of this document can also be used to determine the specific
hardware and software needed to conduct a class.
Requirements
The following tables list the hardware, software, and other materials needed to set up a lab
to conduct a class of course AN30.
Hardware requirements
Table 1 lists the hardware needed to prepare one student lab set. When preparing for a
class, multiply the items below by the number of lab sets needed for the class.
The machines listed below represent the minimum needed to perform the exercises in the
AN30 course.
The students need to access an HMC connected to a POWER7 processor-based server to
perform most of the exercises. Only Exercise 1 does not require HMC or IBM Power
System access. The student access can be accomplished with remote connections from
workstations over a network.
The instructor only needs a way to project the overheads. This could be a PC with a
projector.
The host server is an IBM Power 750 (called the managed server). In addition, an AIX 7
NIM server must be accessible over the network to install the operating systems of the
virtual I/O servers and the AIX client partitions.
Software requirements
Table 2 lists the software needed to prepare the student and/or instructor lab sets. When
preparing for a class, be sure you have the correct number of licensed copies of any
non-IBM software.
Table 2: Software for one student lab set

  Platform use          Operating system and version      Applications
  Student workstation   Windows NT, Windows 2000, or      Web browser and ssh tool
                        Windows XP. If Linux or AIX is    (such as putty)
                        used, SPT cannot be used.
  NIM Server            AIX 7.1, latest release and TL
                        for serving the AIX mksysb
                        image
Note
The initial install of a VIOS partition by students must use version 2.1.3.10-FP23. First,
this is the minimum version required for POWER7 hardware. Second, VIOS 2.1.x is more
forgiving of errors in the SEA failover configuration, since it forwards the BPDU packets
that physical switches use, with the BPDU guard feature, to detect an ARP storm situation.
While this is a useful function, the VIOS developers considered it a defect, so VIOS 2.2.0.1
and above no longer forward BPDU packets. It is acceptable for students to update their
VIOS to 2.2.x later in the labs, since by that stage they will already have completed the
SEA failover configuration correctly.
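A quick way to confirm the level described above is "ioslevel", run as padmin on the VIOS, which prints the version string. The sketch below compares it against the 2.1.3.10 minimum; the fallback value is only so the sketch runs on a non-VIOS system.

```shell
min_level="2.1.3.10"
current=$(ioslevel 2>/dev/null) || current="2.1.3.10"   # fallback off VIOS

# True when $1 >= $2, comparing dotted version fields numerically.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -n1)" = "$2" ]
}

if version_ge "$current" "$min_level"; then
    echo "VIOS level $current meets the $min_level minimum"
else
    echo "VIOS level $current is below $min_level; do not use for initial install"
fi
```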
The exercises for this class were tested and verified as working correctly using the
following configuration:
• IBM Power 750
• AIX client partitions running AIX V7.1, Technology Level 1, Service Pack 3. While this
TL was used for testing, no part of the student exercises mentions the particular TL or
SP to expect on the lab systems, so any AIX 7.1 TL is sufficient.
• Virtual I/O Server partition running V2.1.3.10-FP23 of the Virtual I/O Server code for
the initial install.
• An available external NIM master with mksysb resources for installing the virtual I/O
servers and the AIX client LPARs.
• No course-specific customization is required for the VIO mksysb image. The only
customization required on the AIX image is that the OpenSSH software must be
installed; students obtain any other required files from the NFS server. You can
therefore either use the mksysb images provided by the course developer or create
your own mksysb images at the appropriate code levels. If you create your own AIX
mksysb, a minimal install plus openssh.base.client is sufficient for this course, so you
can disable the default options that normally install graphics software and system
management client software. That software is not used during the course and only
makes the mksysb image larger than necessary.
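A minimal sketch of checking that the one required fileset is installed on the AIX system you are about to capture with mksysb. lslpp is AIX-only, so the sketch falls back to printing the command elsewhere.

```shell
required="openssh.base.client"

if command -v lslpp >/dev/null 2>&1; then
    # -L lists the fileset's install state; non-zero status means not installed
    lslpp -L "$required" || echo "missing fileset: $required"
else
    echo "not on AIX; on the client run: lslpp -L $required"
fi
```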
Configuration information
The following describes the configurations of the student and/or lab set systems.
__ b. Delete all HMC users other than the default users (hscroot ... hscpe). Some
HMC users may remain from a previous class session.
__ c. Allow remote connection and command execution.
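Steps b and c can also be done from the HMC command line. This is a sketch only: the example user list is illustrative, and on a real HMC you would fetch it with `ssh hscroot@<hmc> 'lshmcusr -F name'` before removing anything with rmhmcusr.

```shell
keep="hscroot hscpe"                       # default users from step b

is_default_user() {
    case " $keep " in
        *" $1 "*) return 0 ;;
    esac
    return 1
}

users="hscroot hscpe student1 vioadmin"    # example lshmcusr output
for u in $users; do
    if is_default_user "$u"; then
        echo "keeping $u"
    else
        echo "would remove $u"             # on the HMC: rmhmcusr -u "$u"
    fi
done
```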
HMC
__ 1. Log in to the HMC as hscroot. The password is probably either abc123 or abc1234.
__ 2. Click the Server Management application. If the right panel on the Server
Management application is blank, perform the following steps.
__ a. Choose Add Managed System from the Server Management menu.
__ b. If you know the IP address of the service processor, choose Add a Managed
System, and enter the IP address and password.
__ c. If you do not know the IP address of the service processor, choose Find
Managed Systems, and enter 192.168.255.0 as the beginning IP address and
192.168.255.255 as the ending IP address. This find operation takes a few
minutes and should find all the managed systems connected to your network,
assuming that no one has changed the IP addresses on the service
processors.
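On the HMC command line, "lssysconn -r all" shows every service processor connection, which is a quick way to verify the result of the discovery above. This sketch filters sample output (the serial numbers are illustrative) for entries that are not in the Connected state.

```shell
# On a real HMC: lssysconn -r all -F type_model_serial_num,ipaddr,state
sample='8233-E8B*SAMPLE01,192.168.255.253,Connected
8233-E8B*SAMPLE02,192.168.255.252,No Connection'

# Print only the connections that need attention.
echo "$sample" | grep -v ',Connected$'
```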
__ 3. Configure six logical partitions on each POWER7 managed system.
• Each LPAR is a dedicated type LPAR. Each LPAR has one dedicated Fibre
Channel disk attachment and a physical Ethernet adapter as network interface
card.
__ 4. To create the partitions, go to the Server Management application on the HMC,
select the managed system name, and choose Create -> Logical Partition from the
menu. Use the partition names, partition IDs, and default profile names listed in the
following table:
  Student number   LPAR name     LPAR ID   Profile name

  First managed system
  student 1        partition1    1         Normal
  student 2        partition2    2         Normal
  student 3        partition3    3         Normal
  student 4        partition4    4         Normal
  student 5        partition5    5         Normal
  student 6        partition6    6         Normal

  Second managed system
  student 7        partition7    7         Normal
  student 8        partition8    8         Normal
  student 9        partition9    9         Normal
  student 10       partition10   10        Normal
  student 11       partition11   11        Normal
  student 12       partition12   12        Normal
The LPAR names can be prefixed with the managed system name, such as sys4_partition1
instead of partition1 (sys4 being the managed system name). This is necessary when
multiple classes are supported at the same time, to avoid confusion.
__ 5. For each partition profile, use the following Processor and Memory settings. Logical
partitions use Dedicated processors.
  Desired   Minimum   Maximum   Desired   Minimum   Maximum
  Memory    Memory    Memory    CPU       CPU       CPU
  2 GB      1 MB      4 GB      1         1         3
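The profile settings above can also be expressed as an HMC mksyscfg command, sketched here for the first partition. "sys4" is a placeholder managed-system name. Memory values are in MB; the table's 1 MB minimum is below what the HMC accepts, so a 1 GB minimum is assumed here.

```shell
system="sys4"; name="partition1"; id=1

# Build the command string; on the HMC you would run it once per partition.
cmd="mksyscfg -r lpar -m $system -i \
name=$name,lpar_id=$id,profile_name=Normal,lpar_env=aixlinux,\
proc_mode=ded,min_procs=1,desired_procs=1,max_procs=3,\
min_mem=1024,desired_mem=2048,max_mem=4096"
echo "$cmd"
```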
__ a. Add one storage Fibre Channel adapter and one physical Ethernet adapter to
the partition profile. The following picture shows an example of the storage
controllers and physical Ethernet adapters that can be used on an IBM Power
750.
External Storage
The DS6000 Storage System has to be configured to provide LUNs for the managed
systems. Each student must have access to two non-shared LUNs and one shared LUN.
The shared LUN must be accessible to both virtual I/O servers in the student's team for
setting up MPIO on the client partition. Additionally, for the NPIV exercise, one 8 GB LUN
per student (six per POWER7 managed system) has to be mapped to the virtual WWPNs
generated by the modify.fc.wwpns.sh script.
Note
Configure the SAN storage for shared and non-shared disks so that the 30 GB non-shared
LUN appears first in the LPAR configuration (as hdisk0). The order of configuration of the
remaining LUNs is less important.
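A sketch of confirming from a booted AIX LPAR (or a VIOS oem_setup_env shell) that hdisk0 really is the 30 GB LUN. "bootinfo -s" reports a disk's size in MB on AIX; the fallback value is only so the sketch runs elsewhere.

```shell
if command -v bootinfo >/dev/null 2>&1; then
    size_mb=$(bootinfo -s hdisk0)    # size of hdisk0 in MB (AIX only)
else
    size_mb=30720                    # illustrative: 30 GB in MB
fi

if [ "$size_mb" -ge 29000 ]; then
    echo "hdisk0 size ${size_mb} MB: consistent with the 30 GB LUN"
else
    echo "hdisk0 size ${size_mb} MB: check the SAN LUN ordering"
fi
```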
Note
The differences in LUN sizes are important: students must use a different LUN size
depending on the exercise. For example, a non-shared LUN of 30 GB is needed for
installing the VIO Server code, a shared LUN of 10 GB will be used as the virtual SCSI disk
for installing the client logical partition, and a non-shared LUN of 8 GB will be used for
creating the virtual media repository. Giving each purpose a different LUN size is the
simplest way for students to select the correct LUN during the exercises, especially when
performing an AIX or VIOS code installation.
Note
When configuring a system for a class of up to 6 students, the 10 GB shared LUNs will be
zoned to be visible on all six of the FC adapters on a single managed system.
When configuring two managed systems for a class of up to 12 students, each set of six
shared 10 GB LUNs should only be zoned for access by the six FC adapters on a single
managed system. Do not zone the twelve 10 GB shared LUNs to be visible across all of the
FC adapters on both managed systems.
Here is an overall view of the SAN LUNs setup for this course.
__ 7. Non-shared LUN creation: For each managed system, create six non-shared
LUNs of 30 GB and six non-shared LUNs of 8 GB. These LUNs must not be shared
between LPARs. Assign one 30 GB LUN and one 8 GB LUN per Fibre Channel
adapter WWPN. When you create an LPAR, you will assign one Fibre Channel
adapter to its profile and will see those two LUNs (not shared with the other
LPARs). The students will use the 10 GB LUN for installing the AIX LPARs' rootvg.
LSG
Table 4: Non-shared LUNs

  LPARs         System #   Number of non-shared LUNs   LUN sizes
  partition1    1          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition2    1          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition3    1          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition4    1          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition5    1          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition6    1          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition7    2          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition8    2          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition9    2          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition10   2          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition11   2          2 LUNs                      1 x 30 GB, 1 x 8 GB
  partition12   2          2 LUNs                      1 x 30 GB, 1 x 8 GB
__ 8. Shared LUNs creation: Create and zone six shared LUNs of 10 GB size per
managed system. These LUNs must be Shared LUNs, and all the Fibre Channel
adapters (all the LPARs) in a single managed system must be able to access all six
of these LUNs. For a class of 12 students, each managed system will have a
separate set of six shared LUNs.
__ 9. Virtual WWPN (non-shared) LUN creation: Create six LUNs of 8 GB each per
managed system (for a 12-student class, 12 LUNs in total). Then use the following
worldwide port name naming convention to perform the LUN mapping. These
WWPNs will be used by the students when configuring virtual Fibre Channel
adapters. The predefined WWPNs have 16 digits and must contain the managed
system number:
Table 5: WWPN pair

  Student number   WWPN pair
  student 1        c<system number>000000000001, c<system number>000000000002
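A sketch of generating each student's WWPN pair from the convention above. The suffix rule (student n gets suffixes 2n-1 and 2n, zero-padded to two digits) is inferred from the student 1 row of Table 5, and "123" stands in for the 3-digit managed system number embedded in each 16-digit WWPN.

```shell
sysnum=123    # placeholder 3-digit managed system number

wwpn_pair() {    # $1 = student number
    first=$(printf '%02d' $((2 * $1 - 1)))
    second=$(printf '%02d' $((2 * $1)))
    echo "c${sysnum}0000000000${first},c${sysnum}0000000000${second}"
}

wwpn_pair 1    # prints c123000000000001,c123000000000002
```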
NIM environment requirements
Use the normal software installation procedures to set up any additional NIM mksysb
resources.
__ 1. mksysb resources must be available for installing the virtual I/O servers version
2.1.3.10-FP23, and the client AIX LPARs with AIX 7.1 TL1 SP3 or newer.
__ 2. In Exercise 5, the students must select an hdisk (with a specific size) when
performing the virtual I/O server installation, so do not allocate any bosinst.data file
to the group of resources.
__ 3. In Exercise 5, the lab instructions expect students to interact with the AIX installation
menus when performing the AIX client partition installation, so do not allocate any
bosinst.data file to the group of resources.
__ 4. In Exercise 7, the lab instructions expect hdisk0 inside the AIX client partition to
have the hcheck_interval attribute at its default value of 0. Do not change this value
with a NIM customization script.
__ 5. The NIM environment needs NIM client objects that follow the naming convention
used in this course. The following logical partition naming convention is used in the
exercises; if different names are used, inform the instructors. Be sure these NIM
objects are defined on the NIM master.
• Virtual I/O servers names: vios1, vios2, vios3, vios4, vios5, vios6, vios7, vios8,
vios9, vios10, vios11, vios12
• Client LPARs names: lpar1, lpar2, lpar3, lpar4, lpar5, lpar6, lpar7, lpar8, lpar9,
lpar10, lpar11, lpar12
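A sketch (for the NIM master) of checking that every client object in the naming convention above is defined. lsnim is AIX-only, so off AIX the loop only builds the expected-name list.

```shell
expected=""
for n in 1 2 3 4 5 6 7 8 9 10 11 12; do
    expected="$expected vios$n lpar$n"
done

missing=""
for obj in $expected; do
    if command -v lsnim >/dev/null 2>&1; then
        # lsnim -l fails with non-zero status when the object is undefined
        lsnim -l "$obj" >/dev/null 2>&1 || missing="$missing $obj"
    fi
done

if [ -z "$missing" ]; then
    echo "all 24 NIM client objects accounted for"
else
    echo "missing NIM objects:$missing"
fi
```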
Important
Enable the bos_inst operation for all of the clients (VIO servers and client LPARs) so that
the students do not have to access the NIM master directly. The students are not directed
to enable bos_inst for the installation of this image; it is imperative that IT Support does
this and provides the instructor with the IP addresses to be used to install these clients.
Do not use any bosinst_data resource for the VIO or AIX client LPARs: the exercise
instructions expect students to interact with the operating system installation menus.
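A sketch of what enabling bos_inst for one client might look like on the NIM master. The resource names (mksysb_vios, spot_vios) are placeholders; note that no bosinst_data attribute is passed, which is what keeps the install menus interactive as required above.

```shell
client="vios1"

# Build the command string; on the NIM master you would run it for each
# VIOS and client LPAR, with the matching mksysb and SPOT resources.
cmd="nim -o bos_inst -a source=mksysb -a mksysb=mksysb_vios \
-a spot=spot_vios -a accept_licenses=yes -a boot_client=no $client"
echo "$cmd"
```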
Verification procedures
Use the following information to verify the LPAR installation and configuration.
__ 1. Verify that the NFS server has the following files:
• modify.fc.wwpns.sh script to update virtual WWPNs under the
/export/labfiles/an30/ex8.
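A sketch of that check run on the NFS server. The path is the one listed above; showmount (commented out) would confirm the export itself.

```shell
f=/export/labfiles/an30/ex8/modify.fc.wwpns.sh

if [ -f "$f" ]; then
    echo "found $f"
else
    echo "missing $f; restore it before class"
fi
# showmount -e localhost   # verify that /export is actually exported
```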
Reference
The contents of the modify.fc.wwpns.sh script are shown below for reference.
# script for assigning WWPN pairs to virtual Fibre Channel adapters
# Course AN31
# 7/08/09
#
(($# != 4)) && { print usage $0 "<hmc name>" "<managed system name>" "<lpar name>" "<student number>" ; exit 29 ; }
#### variables
hmc=$1
system=$2
lpar=$3
student=$4
sys=$(echo $system | cut -c 4,5,6)
EOF
chmod 600 $HOME/.ssh/id_rsa
chmod 600 $HOME/.ssh/id_rsa.pub
### Get the authorized keys file from the remote lpar we want to ssh to without a password prompt
### for lpars, remote $HOME is / ; if hmc, remote $HOME is /home/hscroot
### if remote is an hmc then the remote file is /home/hscroot/.ssh/authorized_keys2
USER=hscroot
scp ${USER}@${hmc}:/home/${USER}/.ssh/authorized_keys2 $HOME/.ssh/authorized_keys2_$hmc
(($? != 0)) && ssh ${USER}@${LPAR} mkdir /.ssh
#### append this host public key to the remote lpar or hmc authorized file
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys2_$hmc
### Resend the modified authorized keys file to the remote lpar or hmc we want to ssh to
scp $HOME/.ssh/authorized_keys2_$hmc ${USER}@${hmc}:/home/hscroot/.ssh/authorized_keys2
wwpns="c${sys}0000000000${studfirst},c${sys}0000000000${studsecond}"