

Front cover

Power Systems for AIX - Virtualization I: Implementing Virtualization

(Course code AN30)

Lab Setup Guide


ERC 4.0

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International
Business Machines Corp., registered in many jurisdictions worldwide.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide:
Active Memory™ AIX 6™ AIX®
BladeCenter® DS4000® DS6000™
DS8000® Electronic Service Agent™ EnergyScale™
Enterprise Storage Server® Express® Focal Point™
HACMP™ IBM Systems Director Active Energy Manager™ Initiate®
i5/OS™ Notes® Passport Advantage®
POWER Hypervisor™ Power Systems™ Power Systems Software™
Power® PowerHA® PowerPC®
PowerVM® POWER6® POWER7+™
POWER7® pSeries® Redbooks®
SystemMirror® Tivoli®
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Windows and Windows NT are trademarks of Microsoft Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.
Other product and service names might be trademarks of IBM or other companies.

March 2013 edition


The information contained in this document has not been submitted to any formal IBM test and is distributed on an “as is” basis without
any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer
responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While
each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will
result elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

© Copyright International Business Machines Corporation 2009, 2013.


This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions
set forth in GSA ADP Schedule Contract with IBM Corp.

Contents

Trademarks

Purpose

Requirements
    Hardware requirements
    Software requirements
    Skills required to set up the lab

Setup instructions
    Configuration information
    Hardware setup instructions
        POWER7 processor-based servers
        Check HMC configuration
    Software setup instructions
        HMC
        External Storage
        Load AIX on the partitions
        NIM environment requirements
    Verification procedures
    Reference


Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International
Business Machines Corp., registered in many jurisdictions worldwide.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide:
Active Memory™ AIX 6™ AIX®
BladeCenter® DS4000® DS6000™
DS8000® Electronic Service Agent™ EnergyScale™
Enterprise Storage Server® Express® Focal Point™
HACMP™ IBM Systems Director Active Energy Manager™
i5/OS™ Passport Advantage®
POWER Hypervisor™ Power Systems™ Power Systems Software™
Power® PowerHA® PowerPC®
PowerVM® POWER6® POWER7+™
POWER7® pSeries® Redbooks®
SystemMirror® Tivoli®
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Windows and Windows NT are trademarks of Microsoft Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.
Other product and service names might be trademarks of IBM or other companies.


Purpose
This Lab Setup Guide provides directions for installing, preparing, and verifying the lab
hardware and software in preparation for conducting a class of course AN30.
The Requirements sections of this document can also be used to determine the specific
hardware and software needed to conduct a class.


Requirements
The following tables list the hardware, software, and other materials needed to set up a lab
to conduct a class of course AN30.

Hardware requirements
Table 1 lists the hardware needed to prepare one student lab set. When preparing for a
class, multiply the items below by the number of lab sets needed for the class.
The machines listed below represent the minimum needed to perform the exercises in the
AN30 course.
The students need to access an HMC connected to a POWER7 processor-based server to
perform most of the exercises. Only Exercise 1 does not require HMC or IBM Power
System access. The student access can be accomplished with remote connections from
workstations over a network.
The instructor only needs a way to project the overheads. This could be a PC with a
projector.
The host server is an IBM Power 750 (called the managed system). In addition, an AIX 7
NIM server must be accessible over the network to install the Virtual I/O Server operating
system and the AIX operating system on the client partitions.

Table 1: Hardware for one student lab set

• Student workstation — Machine type: PC. Minimum memory: 1 GB. Features: Web browser,
  ssh connection tool such as putty, Internet access.
• Instructor workstation — Machine type: PC. Features: Same as the student workstation,
  plus a connection to a projector.
• HMC — Machine type: Any hardware supporting HMC V7.
• NIM server — Machine type: AIX 7 LPAR. Must be separate from the managed systems used
  by the students.
• Managed system — Machine type: IBM Power 750 (POWER7). Minimum memory: 2 GB for the
  AIX client LPAR plus 2 GB for the virtual I/O server. Network adapter: physical
  Ethernet adapter. Minimum free DASD: each student has 1 Fibre Channel adapter with
  1 shared LUN and 2 non-shared LUNs; additionally, there should be one 8 GB LUN masked
  to the virtual WWPNs that will be used for the NPIV exercise. Shared LUN: 10 GB.
  Non-shared LUNs: 30 GB and 8 GB. Features: each student has 1 PowerVM Virtualization
  Feature assigned.

Software requirements
Table 2 lists the software needed to prepare the student and/or instructor lab sets. When
preparing for a class, be sure you have the correct number of licensed copies of any
non-IBM software.
Table 2: Software for one student lab set

• Student workstation — Operating system: Windows NT, Windows 2000, or Windows XP
  (if Linux or AIX is used, SPT cannot be used). Applications: Web browser and ssh tool
  (such as putty).
• NIM server — Operating system: AIX 7.1, at the latest release and TL, for serving the
  AIX mksysb image.
• HMC — HMC code: Version 7 Release 7.4 with all latest fixes applied, at or above the
  standard level compatible with POWER7.
Table 3 lists the software needed to prepare the host (server) machine.
Table 3: Software for the host machine

• AIX — Version: 7.1 TL1 SP3 or later mksysb available on the NIM master for installing
  the AIX client LPARs. Notes: openssh.base.client must be installed in the AIX mksysb
  image.
• VIOS — Version: 2.1.3.10-FP23 mksysb image available on the NIM master for VIO LPAR
  installation. Notes: 2.1.3.10-FP23 is the minimum level required for POWER7 hardware.
• VIOS fixpack update available on an NFS server (in CLP this will normally be
  10.6.252.1:/export/labfiles) — Version: 2.2.2.1-FP26 or later, available on the NFS
  server for updating the VIO LPARs. Notes: The update image should be a fixpack, not a
  service pack, since service packs require the base fixpack to be installed first.

Note

The initial install of a VIOS partition by students must use version 2.1.3.10-FP23. First,
this is the minimum version required for POWER7 hardware. Second, VIOS 2.1.x is more
forgiving of errors in SEA failover configuration, because it forwards BPDU packets, which
physical switches use to detect a bridging loop (broadcast storm) situation through the
BPDU guard feature. While this is a useful function, the VIOS developers considered it a
defect, so VIOS 2.2.0.1 and above no longer forward BPDU packets. It is acceptable for
students to update their VIOS to 2.2.x in the labs, because by that stage they will already
have correctly completed the SEA failover configuration.

The exercises for this class were tested and verified as working correctly using the
following configuration:
• IBM Power 750


• HMC running V7 R7.4.0

• AIX client partitions running AIX V7.1 Technology level 1 Service Pack 3. While this TL
was used for testing purposes, no part of the student exercises mentions the particular
TL or SP to expect on the lab systems, so any AIX 7.1 TL would be sufficient.
• Virtual I/O Server partition running V2.1.3.10-FP23 of the Virtual I/O Server code for
the initial install.
• An available external NIM master with mksysb resources for installing the virtual I/O
servers and the AIX client LPARs.
• There is no course-specific customization required for the VIO mksysb image. The only
customization required on the AIX image is that the OpenSSH software must be
installed. Students will obtain any other required files from the NFS server. This means
you can either use the mksysb images provided by the course developer, or create your
own mksysb images at the appropriate code levels. If creating your own AIX mksysb, a
minimal install plus openssh.base.client is sufficient for this course. This means you can
disable the default options which normally will install graphics software and system
management client software. This software does not get used during the course, and
only serves to make the mksysb image larger than necessary.
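If you create your own AIX mksysb, it must be defined to NIM together with a matching SPOT. The following is a minimal sketch of those definitions on the NIM master; the resource names and image location are assumptions, not names mandated by the course.

# Assumed resource names and image path; adjust to your NIM master's conventions
nim -o define -t mksysb -a server=master \
    -a location=/export/nim/mksysb/aix71tl1sp3.mksysb aix71_mksysb
nim -o define -t spot -a server=master -a source=aix71_mksysb \
    -a location=/export/nim/spot spot_aix71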

Skills required to set up the lab


The following specialized skills are required to set up the lab: a good knowledge of
POWER7 systems and HMC V7, the ability to create LPARs and install AIX, and a working
knowledge of basic NIM tasks.


Setup instructions

Configuration information
The following describes the configurations of the student and/or lab set systems.

Hardware setup instructions


Use the following information in addition to the normal hardware installation procedures to
set up the lab hardware.

POWER7 processor-based servers


Each student is assigned a dedicated processor type LPAR.
__ 1. The AIX LPAR assigned to the student and created during the class set up consists
of:
• One dedicated processor as desired value, 1 as minimum, and 3 as maximum
• 2 GB of desired memory, 1 GB as minimum, and 4 GB as maximum
• One dedicated Fibre Channel adapter for external disk attachment
• One dedicated physical Ethernet adapter as network interface card
__ 2. Basically, each POWER7-based system used for this course delivery is able to
support 6 students.
__ 3. Each student should have access to 1 LUN of 30 GB, 1 LUN of 8 GB and 1 LUN of
10 GB. Additionally, 1 LUN of 8 GB is masked to the virtual WWPNs generated by
the modify.fc.wwpns.sh script for the NPIV exercise (Exercise 8). So, the total SAN
disk capacity required for this configuration is 56 GB per student and 336 GB per
managed system. If you host 12 students in the class, two managed systems are
required, for a total of about 672 GB of disk space.
__ 4. The POWER7 processor-based systems must have one Host Ethernet Adapter
(HEA). The HEA Multi-Core Scaling (MCS) value must be set so that at least six
logical ports can be created (one logical HEA port per logical partition). An MCS
value of 1 or 2 is appropriate.
If an adapter is a dual port adapter, and only one port is cabled to the external
physical switch, be sure that it is the first port and not the second. If that is not
possible, please inform the instructor of the situation.
Identify the configuration of the Ethernet switch ports to which these adapter ports
are cabled. For example, they might be configured for auto-negotiation or for a
specific speed and duplex mode. Inform the instructor of the speed and duplex
mode if the configuration is anything other than auto-negotiation. (We recommend
using the common default of auto-negotiation, which matches the AIX default.)
__ 5. The systems and the HMCs do not need to be physically accessible by the students,
but they must be accessible over the network.
__ 6. The labs have been tested on IBM Power 750, but the lab systems can be other
POWER7 systems configured with the physical resources corresponding to the
class size.

Check HMC configuration


HMC setup must be done according to the following:
__ 7. Log in to the HMC as hscroot. The password is probably either abc123 or abc1234.
If the password has been changed during a previous class, you might be required to
reload the HMC code.
__ a. Check the HMC code version; the minimum is V7R7.4.0. You can check the version
on the HMC Welcome page.

__ b. Delete all HMC users other than the default users (hscroot, hscpe, and so on).
HMC users might remain from a previous class session.
__ c. Allow remote connection and command execution.
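These checks can also be performed from the HMC command line (restricted shell). The following is a minimal sketch using standard HMC commands; the user name being removed is only an example.

lshmc -V                    # display the HMC code version
lshmcusr                    # list the HMC user accounts
rmhmcusr -u olduser1        # remove a leftover, non-default user (example name)
chhmc -c ssh -s enable      # allow remote command execution over ssh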

Software setup instructions


Use the following information in addition to the normal software installation procedures to
set up the lab software.


HMC
__ 1. Log in to the HMC as hscroot. The password is probably either abc123 or abc1234.
__ 2. Click the Server Management application. If the right panel on the Server
Management application is blank, perform the following steps.
__ a. Choose Add Managed System from the Server Management.
__ b. If you know the IP address of the service processor, choose Add a Managed
System, and enter the IP address and password.
__ c. If you do not know the IP address of the service processor, choose Find
Managed Systems, and enter 192.168.255.0 for the beginning IP address and
192.168.255.255 for the ending IP address. This find operation takes a few
minutes and should find all the managed systems connected to your network,
assuming that no one has changed the IP addresses on the service processors.
__ 3. Configure six logical partitions on each POWER7 managed system.
• Each LPAR is a dedicated-processor LPAR with one dedicated Fibre Channel
adapter for disk attachment and one physical Ethernet adapter as the network
interface card.
__ 4. To create the partitions, go to the Server Management application on the HMC, select
the managed system name, and choose Create -> Logical Partition from the menu.
Use the partition name, partition ID and default profile name listed in the following
table:
Student number    LPAR name      LPAR ID    Profile name
First managed system
student 1         partition1     1          Normal
student 2         partition2     2          Normal
student 3         partition3     3          Normal
student 4         partition4     4          Normal
student 5         partition5     5          Normal
student 6         partition6     6          Normal
Second managed system
student 7         partition7     7          Normal
student 8         partition8     8          Normal
student 9         partition9     9          Normal
student 10        partition10    10         Normal
student 11        partition11    11         Normal
student 12        partition12    12         Normal


The LPAR names can be prefixed with the system name, such as sys4_partition1
instead of partition1 (sys4 being the managed system name). This is necessary to
avoid confusion if multiple classes are supported at the same time.
__ 5. For each partition profile, use the following processor and memory settings. Logical
partitions use dedicated processors.

Desired memory: 2 GB    Minimum memory: 1 GB    Maximum memory: 4 GB
Desired CPU: 1          Minimum CPU: 1          Maximum CPU: 3
__ a. Add one storage Fibre Channel adapter and one physical Ethernet adapter to
the partition profile. The following picture shows an example of the storage
controllers and physical Ethernet adapters that can be used on an IBM Power
750.

__ b. No virtual adapter is required for the setup of the class.
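The partitions and profiles can also be created from the HMC command line rather than the GUI. The following is a minimal sketch for the first student partition using the processor and memory values from step 5; the managed system name sys1 is an assumption, and the physical I/O slot assignments (Fibre Channel and Ethernet adapters) are omitted because they depend on the specific Power 750 configuration.

# Memory values are in MB; add the physical I/O slots to the profile afterwards
mksyscfg -r lpar -m sys1 \
    -i "name=partition1,profile_name=Normal,lpar_env=aixlinux,lpar_id=1,proc_mode=ded,min_procs=1,desired_procs=1,max_procs=3,min_mem=1024,desired_mem=2048,max_mem=4096"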


__ 6. Define an IP range and IP addresses for the partitions. They should be in the same
subnet as the HMC. It is strongly suggested that the partition name be the same as
the partition hostname. Whatever design is chosen, notify the instructor of the
naming scheme.

External Storage
The DS6000 Storage System has to be configured to provide LUNs for the managed
systems. Each student must have access to two non-shared LUNs and one shared LUN.
The shared LUN must be accessible to both virtual I/O servers in the student team for
setting up MPIO on the client partition. Additionally, for the NPIV exercise, one 8 GB LUN
per student (six per POWER7 managed system) has to be mapped to the virtual WWPNs
generated by the modify.fc.wwpns.sh script.

Note

Configure the SAN storage for the shared and non-shared disks such that the 30 GB
non-shared LUN appears first in the LPAR configuration (as hdisk0). The order of
configuration of the remaining LUNs is less important.

Note

The differences in LUN sizes are important. Students will have to use different LUNs
depending on the exercise. For example, the non-shared 30 GB LUN is needed for
installing the Virtual I/O Server code, the shared 10 GB LUN will be used as the virtual
SCSI disk for installing the client logical partition, and the non-shared 8 GB LUN will be
used for creating the virtual media repository. The simplest way for the students to select
the correct LUN during the exercises, especially when performing an AIX or VIOS code
installation, is to have a different LUN size for each purpose.

Note

When configuring a system for a class of up to 6 students, the 10 GB shared LUNs will be
zoned to be visible on all six of the FC adapters on a single managed system.
When configuring two managed systems for a class of up to 12 students, each set of six
shared 10 GB LUNs should only be zoned for access by the six FC adapters on a single
managed system. Do not zone the twelve 10 GB shared LUNs to be visible across all of the
FC adapters on both managed systems.


The following steps and tables give an overall view of the SAN LUN setup for this course.

__ 7. Non-shared LUN creation: For each managed system, create six non-shared
LUNs of 30 GB and six non-shared LUNs of 8 GB. These LUNs must not
be shared between LPARs. Assign one 30 GB LUN and one 8 GB LUN per Fibre
Channel adapter WWPN. When you create the LPAR, you will assign one Fibre
Channel adapter to its profile, and you will see those two LUNs (not shared with
the other LPARs). The students will use the 10 GB shared LUN for installing the
AIX client LPAR's rootvg.

Table 4: Non-shared LUNs

LPAR           Number of non-shared LUNs    LUN sizes              System #
partition1     2 LUNs                       1 x 30 GB, 1 x 8 GB    1
partition2     2 LUNs                       1 x 30 GB, 1 x 8 GB    1
partition3     2 LUNs                       1 x 30 GB, 1 x 8 GB    1
partition4     2 LUNs                       1 x 30 GB, 1 x 8 GB    1
partition5     2 LUNs                       1 x 30 GB, 1 x 8 GB    1
partition6     2 LUNs                       1 x 30 GB, 1 x 8 GB    1
partition7     2 LUNs                       1 x 30 GB, 1 x 8 GB    2
partition8     2 LUNs                       1 x 30 GB, 1 x 8 GB    2
partition9     2 LUNs                       1 x 30 GB, 1 x 8 GB    2
partition10    2 LUNs                       1 x 30 GB, 1 x 8 GB    2
partition11    2 LUNs                       1 x 30 GB, 1 x 8 GB    2
partition12    2 LUNs                       1 x 30 GB, 1 x 8 GB    2
__ 8. Shared LUNs creation: Create and zone six shared LUNs of 10 GB size per
managed system. These LUNs must be Shared LUNs, and all the Fibre Channel
adapters (all the LPARs) in a single managed system must be able to access all six
of these LUNs. For a class of 12 students, each managed system will have a
separate set of six shared LUNs.
__ 9. Virtual WWPN (non-shared) LUN creation: Create six LUNs of 8 GB each
per managed system. For a 12-student class, you need to create 12 such LUNs. Then
use the following worldwide port name (WWPN) naming convention to perform the LUN
mapping. These WWPNs will be used by the students when configuring virtual Fibre
Channel adapters. The predefined WWPNs have 16 digits and must contain the
managed system number (a small sketch of how the pairs are derived follows Table 5):
Table 5: WWPN pairs

Student number    WWPN pair
student 1         c<system number>000000000001,c<system number>000000000002
student 2         c<system number>000000000003,c<system number>000000000004
student 3         c<system number>000000000005,c<system number>000000000006
student 4         c<system number>000000000007,c<system number>000000000008
student 5         c<system number>000000000009,c<system number>000000000010
student 6         c<system number>000000000011,c<system number>000000000012
student 7         c<system number>000000000013,c<system number>000000000014
student 8         c<system number>000000000015,c<system number>000000000016
student 9         c<system number>000000000017,c<system number>000000000018
student 10        c<system number>000000000019,c<system number>000000000020
student 11        c<system number>000000000021,c<system number>000000000022
student 12        c<system number>000000000023,c<system number>000000000024
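The pairs in Table 5 follow a simple rule: student N receives the WWPNs whose last two digits are 2N-1 and 2N (zero-padded), prefixed by c, the three-digit managed system number, and ten zeros. A minimal ksh sketch of the derivation, mirroring the logic of the modify.fc.wwpns.sh script listed in the Reference section, is shown below; the system number 751 is only an example value.

student=5                                    # student number (1 to 12)
sys=751                                      # example managed system number
typeset -RZ2 first=$(( student * 2 - 1 ))    # 09 for student 5
typeset -RZ2 second=$(( student * 2 ))       # 10 for student 5
print "c${sys}0000000000${first},c${sys}0000000000${second}"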

Load AIX on the partitions


The following steps assume that a NIM server is used for loading AIX on the logical
partitions.
__ 10. Define a NIM machine object for each partition to be installed during this setup. We
recommend using the hostname that resolves to the partition's IP address as the
object name. Whatever naming convention is used, inform the instructor.
__ 11. On the NIM master, there must be a mksysb and a matching SPOT. For the AIX
partitions, these must be at AIX 7.1 TL1 SP3 or later (at the time of writing, the
exercises were tested using AIX 7.1 TL1 SP3). Then allocate the resources for the
LPARs' boot and AIX installation (see the sketch after this list). Provide the instructor
with the name of this AIX SPOT object.
__ 12. Select the logical partition in the HMC server management and Activate it.
__ 13. In the Activate Logical Partition window, put a check in the Open a terminal
window box, click the Advanced button and on the Boot Mode drop-down menu,
select SMS and click OK in both windows.
__ 14. Complete the installation. The system will automatically reboot when done.
__ 15. Log in as root on the logical partitions. Check the installation logs, the available
disks, the file systems, the installed filesets, the time and date, the hostname, and
so on.
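A minimal command-line sketch of steps 10 and 11 on the NIM master follows. The machine object name lpar1 and the resource names aix71_mksysb and spot_aix71 are assumptions; substitute whatever naming convention you report to the instructor.

# Define a standalone machine object for the partition with hostname lpar1
nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
    -a if1="find_net lpar1 0" lpar1
# Allocate the mksysb and SPOT and enable the installation (no automatic reboot)
nim -o bos_inst -a source=mksysb -a mksysb=aix71_mksysb -a spot=spot_aix71 \
    -a accept_licenses=yes -a no_client_boot=yes lpar1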

NIM environment requirements


During the exercises, the students will install a virtual I/O server and an AIX client
LPAR. A pre-existing NIM master is required to provide mksysb images for installing the
virtual I/O servers and the client LPARs. Use the following information, in addition to
the normal software installation procedures, to set up the additional NIM mksysb
resources.
__ 1. mksysb resources must be available for installing the virtual I/O servers version
2.1.3.10-FP23, and the client AIX LPARs with AIX 7.1 TL1 SP3 or newer.
__ 2. In Exercise 5, the students must select an hdisk (with a specific size) when
performing the virtual I/O server installation, so do not allocate any bosinst.data file
to the group of resources.
__ 3. In Exercise 5, the lab instructions expect students to interact with the AIX installation
menus when performing the AIX client partition installation, so do not allocate any
bosinst.data file to the group of resources.
__ 4. In Exercise 7, the lab instructions expect hdisk0 inside the AIX client partition to
have the hcheck_interval attribute at its default value of 0. Do not change
this value by using a NIM customization script.
__ 5. The NIM environment needs NIM client objects that use the naming convention
used in this course. The following logical partition names are used in the
exercises; if different names are used, inform the instructor. Be sure these
NIM objects are defined on the NIM master.
• Virtual I/O servers names: vios1, vios2, vios3, vios4, vios5, vios6, vios7, vios8,
vios9, vios10, vios11, vios12
• Client LPARs names: lpar1, lpar2, lpar3, lpar4, lpar5, lpar6, lpar7, lpar8, lpar9,
lpar10, lpar11, lpar12

Important

Enable the bos_inst operation for all of the clients (VIO servers and client LPARs). This
way, the students will not have to access the NIM master directly. The students are not
directed to enable bos_inst for the installation of this image; it is imperative that IT
support does this and provides the instructor with the IP addresses that should be
referenced to install these clients. Do not use any bosinst_data resource for the VIO or
AIX client LPARs; the exercise instructions expect the students to interact with the
operating system installation menus. (A command sketch follows this note.)
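A minimal sketch of enabling bos_inst for one VIO server client follows; the resource names are assumptions. No bosinst_data resource is allocated, so the students will see the installation menus, and no_client_boot=yes leaves the client boot to be performed from the HMC.

nim -o bos_inst -a source=mksysb -a mksysb=vios_2131_mksysb -a spot=vios_2131_spot \
    -a accept_licenses=yes -a no_client_boot=yes vios1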

Verification procedures
Use the following information to verify the LPAR installation and configuration.
__ 1. Verify that the NFS server has the following files:
• The modify.fc.wwpns.sh script to update the virtual WWPNs, under the
/export/labfiles/an30/ex8 directory.


• The fixes for VIOS 2.2.2.1-FP26, located in the /export/labfiles/an30/ex10
directory. The bff files should be located directly in this directory; do not place
them in a subdirectory, since this will cause problems with the exercise instructions.
• On the NFS server, change directory to /export/labfiles, and run the following
command to make sure the files that are available to students have the correct
permissions:
chmod -R 755 an30
__ 2. Telnet or ssh to each partition and successfully ping the HMC with both the IP
address and the hostname. Use the HMC Management > HMC Configuration >
Test Network Connectivity application on the HMC to ping each of the partitions
from the HMC.
__ 3. Test that the dynamic LPAR function works. Dynamically add 64 MB of memory (the
smallest block of memory, that is, the LMB size) to each partition, then remove it to
return the LPAR to its original memory capacity. (A command sketch follows the lspv
and bootinfo output below.)
Check the available hdisk devices on each LPAR. You should see 8 hdisk devices
(external LUNs). The LPAR's rootvg should be installed on hdisk0, and it should be
30 GB in size.
# lspv
hdisk0          00f784aee0fd700a                    rootvg          active
hdisk1 none None
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
# bootinfo -s hdisk0
30720
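The dynamic LPAR memory test can also be driven from the HMC command line; a minimal sketch follows (the managed system name sys1 is an example).

chhwres -r mem -m sys1 -o a -p partition1 -q 64    # dynamically add 64 MB
chhwres -r mem -m sys1 -o r -p partition1 -q 64    # remove it again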
__ 4. Provide the instructor the following information:
__ a. IP addresses (HMC, AIX LPARs, and so on) and login information (user name
and password).
__ b. NIM server access (IP address) and login information (user name and
password).
__ c. Class-related NIM object names for AIX installation support (lpp_source, spot,
mksysb, and machine objects). Providing a scheme or naming convention might
be acceptable, in lieu of listing each one.
__ d. For the Ethernet switch ports, if they are not configured for auto-negotiation,
provide the speed and duplex mode.


Reference
The contents of the modify.fc.wwpns.sh script are shown below for reference.
# script for assigning WWPNs pair to Virtual Fiber Channel adapters
# Course AN31
# 7/08/09
#

(($# !=4)) && { print usage $0 "<hmc name>" "<managed system name>" "<lpar name>" "<student number>" ; exit 29 ; }

#### variables
hmc=$1
system=$2
lpar=$3
student=$4
sys=$(echo $system | cut -c 4,5,6)

############ generating ssh keys for accessing the HMC
#
ssh-keygen -t rsa -N "" <<EOF

EOF
chmod 600 $HOME/.ssh/id_rsa
chmod 600 $HOME/.ssh/id_rsa.pub

### Get the authorized keys file from the remote lpar or hmc we want to ssh to without a password prompt
### for lpars, remote $HOME is /; if hmc, remote $HOME is /home/hscroot
### if remote is an hmc then remote file is /home/hscroot/.ssh/authorized_keys2
USER=hscroot
scp ${USER}@${hmc}:/home/${USER}/.ssh/authorized_keys2 $HOME/.ssh/authorized_keys2_$hmc
(($? !=0)) && ssh ${USER}@${LPAR} mkdir /.ssh

#### append this host public key in the remote lpar or hmc authorized file
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys2_$hmc

### Resend modified authorized keys file to the remote lpar or remote hmc we want to ssh to
scp $HOME/.ssh/authorized_keys2_$hmc ${USER}@${hmc}:/home/hscroot/.ssh/authorized_keys2

##### remove temp file
rm $HOME/.ssh/authorized_keys2_$hmc

typeset -RZ2 student=$student
print $student
studtemp=$(( student*2 ))
studfirst=$(( studtemp-1 ))
typeset -RZ2 studfirst=$studfirst
studsecond=$(( student*2 ))
typeset -RZ2 studsecond=$studsecond

##### WWPNs modification
# WWPNs are defined based on managed system name, and student number

wwpns="c${sys}0000000000${studfirst},c${sys}0000000000${studsecond}"

# Retrieve each variable of the virtual fiber channel adapter.
#
slot_num=$(ssh hscroot@${hmc} lssyscfg -r prof -m ${system} --filter profile_names=Normal-NPIV,lpar_names=${lpar} -F virtual_fc_adapters | awk 'BEGIN{FS = "/"}{print $1}' | cut -c 4- )

adapter_type=$(ssh hscroot@${hmc} lssyscfg -r prof -m ${system} --filter profile_names=Normal-NPIV,lpar_names=${lpar} -F virtual_fc_adapters | awk 'BEGIN{FS = "/"}{print $2}')

remote_lpar_id=$(ssh hscroot@${hmc} lssyscfg -r prof -m ${system} --filter profile_names=Normal-NPIV,lpar_names=${lpar} -F virtual_fc_adapters | awk 'BEGIN{FS = "/"}{print $3}')

remote_lpar_name=$(ssh hscroot@${hmc} lssyscfg -r prof -m ${system} --filter profile_names=Normal-NPIV,lpar_names=${lpar} -F virtual_fc_adapters | awk 'BEGIN{FS = "/"}{print $4}')

remote_slot_num=$(ssh hscroot@${hmc} lssyscfg -r prof -m ${system} --filter profile_names=Normal-NPIV,lpar_names=${lpar} -F virtual_fc_adapters | awk 'BEGIN{FS = "/"}{print $5}')

#wwpns=$(ssh hscroot@${hmc} lssyscfg -r prof -m ${system} --filter profile_names=Normal-NPIV,lpar_names=${lpar} -F virtual_fc_adapters | awk 'BEGIN{FS = "/"}{print $6}')

required=$(ssh hscroot@${hmc} lssyscfg -r prof -m ${system} --filter profile_names=Normal-NPIV,lpar_names=${lpar} -F virtual_fc_adapters | awk 'BEGIN{FS = "/"}{print $7}' | cut -c 1 )

# let's perform the modification

ssh hscroot@${hmc} chsyscfg -r prof -m $system -i name=Normal-NPIV,lpar_name=${lpar},\\\"virtual_fc_adapters=\\\"\\\"${slot_num}/${adapter_type}/${remote_lpar_id}/${remote_lpar_name}/${remote_slot_num}/${wwpns}/${required}\\\"\\\"\\\"
