Integrated Virtualization Manager on IBM System p5
No dedicated Hardware Management Console required
Powerful integration for entry-level servers
Key administration tasks explained
International Technical Support Organization

Integrated Virtualization Manager on IBM System p5

December 2006
Note: Before using this information and the product it supports, read the information in "Notices" on page v.

Second Edition (December 2006)

This edition applies to IBM Virtual I/O Server Version 1.3, which is part of the Advanced POWER Virtualization hardware feature on IBM System p5 and eServer p5 platforms.

© Copyright International Business Machines Corporation 2005, 2006. All rights reserved.
Note to U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices
Trademarks

Preface
The team that wrote this Redpaper
Become a published author
Comments welcome

Chapter 1. Overview
1.1 Hardware management
1.1.1 Integrated Virtualization Manager
1.1.2 Hardware Management Console
1.1.3 Advanced System Management Interface
1.2 IVM design
1.2.1 Architecture
1.2.2 LPAR configuration
1.3 Considerations for partition setup

Chapter 2. Installation
2.1 Virtualization setup
2.1.1 Reset to Manufacturing Default Configuration
2.2 Microcode update
2.3 ASMI IP address setup
2.4 Virtualization feature activation
2.5 VIOS image installation
2.5.1 Virtual I/O Server image installation from DVD
2.5.2 Virtual I/O Server image installation from a NIM server
2.6 Initial configuration
2.6.1 Connect to the IVM
2.6.2 Set the date and time
2.6.3 Initial network setup
2.6.4 Changing the TCP/IP settings on the Virtual I/O Server
2.7 VIOS partition configuration
2.8 Network management
2.9 Virtual Storage management
2.10 Installing and managing the Virtual I/O Server on a JS21
2.10.1 Address setting using the ASMI
2.10.2 Address setting using serial ports

Chapter 3. Logical partition creation
3.1 Configure and manage partitions
3.2 IVM graphical user interface
3.2.1 Storage pool disk management
3.2.2 Create logical partitions
3.2.3 Create an LPAR based on an existing partition
3.2.4 Shutting down logical partitions
3.2.5 Monitoring tasks
3.2.6 Hyperlinks for object properties
3.3 IVM command line interface
3.3.1 Update the logical partition's profile
3.3.2 Power on a logical partition
3.3.3 Install an operating system on a logical partition

Chapter 4. Advanced configuration
4.1 Network management
4.1.1 Ethernet bridging
4.1.2 Ethernet link aggregation
4.2 Storage management
4.2.1 Virtual storage assignment to a partition
4.2.2 Virtual disk extension
4.2.3 IVM system disk mirroring
4.2.4 Optical device sharing
4.2.5 SCSI RAID adapter use
4.3 Securing the Virtual I/O Server
4.4 Connecting to the Virtual I/O Server using OpenSSH
4.5 Command logs
4.6 Integration with IBM Director

Chapter 5. Maintenance
5.1 IVM maintenance
5.1.1 Backup and restore of the logical partition definitions
5.1.2 Backup and restore of the IVM operating system
5.1.3 IVM updates
5.2 The migration between HMC and IVM
5.2.1 Recovery after an improper HMC connection
5.2.2 Migration considerations
5.2.3 Migration from an IVM environment to HMC
5.2.4 Migration from HMC to an IVM environment
5.3 System maintenance
5.3.1 Microcode update
5.3.2 Capacity on Demand operations
5.4 Logical partition maintenance
5.4.1 Backup of the operating system
5.4.2 Restore of the operating system
5.4.3 AIX 5L mirroring on the managed system LPARs
5.5 LPAR configuration changes
5.5.1 Dynamic LPAR operations on an IVM partition
5.5.2 LPAR resources management
5.5.3 Adding a client LPAR to the partition workload group

Appendix A. IVM and HMC feature summary

Appendix B. System requirements

Related publications
IBM Redbooks
Other publications
Online resources
How to get IBM Redbooks
Help from IBM
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Other company. and the Windows logo are trademarks of Microsoft Corporation in the United States. or both. product. other countries. Windows. or both: AIX® AIX 5L™ BladeCenter® eServer™ HACMP™ i5/OS® IBM® Micro-Partitioning™ OpenPower™ POWER™ POWER Hypervisor™ POWER5™ POWER5+™ pSeries® Redbooks™ Redbooks (logo) ™ System p™ System p5™ Virtualization Engine™ The following terms are trademarks of other companies: Internet Explorer.Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States. Microsoft. Linux is a trademark of Linus Torvalds in the United States. other countries. or service names may be trademarks or service marks of others. or both. vi Integrated Virtualization Manager on IBM System p5 . other countries.
Preface

The Virtual I/O Server (VIOS) is part of the Advanced POWER™ Virtualization hardware feature on IBM® System p5™ and IBM eServer™ p5 platforms, and part of the POWER Hypervisor™ and VIOS feature on IBM eServer OpenPower™ systems. It is a single-function appliance that resides in an IBM POWER5™ and POWER5+™ processor-based system's logical partition (LPAR) and facilitates the sharing of physical I/O resources between client partitions (IBM AIX® 5L™ or Linux®) within the server. The VIOS provides virtual SCSI target and Shared Ethernet Adapter (SEA) virtual I/O function to client LPARs.

Starting with Version 1.2.0.0, the VIOS provides a hardware management function named the Integrated Virtualization Manager (IVM). With its intuitive, browser-based interface, the IVM is easy to use and significantly reduces the time and effort required to manage virtual devices and partitions. Using IVM, companies can more cost-effectively consolidate multiple partitions onto a single server.

The latest version of VIOS, Version 1.3, adds a number of new functions, such as support for dynamic logical partitioning for memory and processors in managed systems (dynamic reconfiguration of memory is not supported on the JS21), a task manager monitor for long-running tasks, security additions such as viosecure and firewall, and other improvements. It is also supported on the IBM BladeCenter® JS21.

IVM is available on these IBM systems:
- IBM System p5 505, 51A, 52A, 55A, and 561
- IBM eServer p5 510, 520, and 550
- IBM eServer OpenPower 710 and 720
- IBM BladeCenter JS21

This IBM Redpaper provides an introduction to IVM by describing its architecture and showing how to install and configure a partitioned server using its capabilities. A complete understanding of partitioning is required prior to reading this document.

The team that wrote this Redpaper

This Redpaper was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.

Guido Somers is a Senior Accredited IT Specialist working for IBM Belgium. He has 11 years of experience in the Information Technology field, eight years within IBM. He currently works as an IT Architect for Infrastructure and ISV Solutions in the e-Business Solutions Technical Support (eTS) organization. His areas of expertise include AIX 5L, system performance and tuning, logical partitioning, virtualization, HACMP™, SAN, IBM eServer pSeries® and System p5, as well as other IBM hardware offerings.

The authors of the First Edition were:
Nicolas Guerin
Federico Vagnini

The project that produced this paper was managed by:
Scott Vetter, IBM Austin

Thanks to the following people for their contributions to this project:
Amartey S. Pearson, Vani D. Ramagiri, Bob G. Kovacs, Jim Parumi, Jim Partridge, IBM Austin
Dennis Jurgensen, IBM Raleigh
Jaya Srikrishnan, IBM Poughkeepsie
Craig Wilcox, IBM Rochester
Peter Wuestefeld, Volker Haug, IBM Germany
Morten Vagmo, IBM Norway
Dai Williams, Nigel Griffiths, IBM U.K.

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will team with IBM technical professionals, Business Partners, or clients. Your efforts will help increase product acceptance and client satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this Redpaper or other IBM Redbooks™ in one of the following ways:
- Use the online Contact us review redbook form found at: ibm.com/redbooks
- Send your comments in an e-mail to: redbook@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Chapter 1. Overview

This chapter describes several available methods for hardware management and virtualization setup on IBM System p5 and eServer p5, OpenPower solutions, and BladeCenter JS21, and introduces the Integrated Virtualization Manager (IVM).

The Integrated Virtualization Manager is a component that has been included since the Virtual I/O Server Version 1.2, which is part of the Advanced POWER Virtualization hardware feature. With its intuitive, browser-based interface, the IVM is easy to use and it significantly reduces the time and effort required to manage virtual devices and partitions. It enables companies to consolidate multiple partitions onto a single server in a cost-effective way.
1.1 Hardware management

With the exploitation of virtualization techniques, hardware management has become more of an independent task. Operating systems have a less direct visibility and control over physical server hardware, and system administrators must now focus on the management of resources that have been assigned to them.

In order to be independent from operating system issues, hardware management requires a separate computing environment capable of accessing, monitoring, controlling, configuring, and maintaining the server hardware and firmware. This environment requires advanced platform management applications capable of:
- Server configuration prior to operating system deployment
- Service when operating systems are unavailable
- Coordination of platform-related operations across multiple operating system images, within an independent security model
- Presentation of virtual operating system consoles

IBM has developed several solutions for hardware management that target different environments depending on the complexity of hardware setup.

1.1.1 Integrated Virtualization Manager

The HMC has been designed to be the comprehensive solution for hardware management that can be used either for a small configuration or for a multiserver environment. Although complexity has been kept low by design and many recent software revisions support this, the HMC solution might not fit in small and simple environments where only a few servers are deployed or not all HMC functions are required.

There are many environments where there is the need for small partitioned systems, either for test reasons or for specific requirements, for which the HMC solution is not ideal. A sample situation is where there are small partitioned systems that cannot share a common HMC because they are in multiple locations.

IVM is a simplified hardware management solution that inherits most of the HMC features. It manages a single server, avoiding the need of an independent personal computer. It is designed to provide a solution that enables the administrator to reduce system setup time and to make hardware management easier, at a lower cost.

IVM provides a management model for a single system. Although it does not offer all of the HMC capabilities, it enables the exploitation of IBM Virtualization Engine™ technology. IVM targets the small and medium systems that are best suited for this product. Table 1-1 lists the systems that were supported at the time this paper was written.

Table 1-1 Supported server models for IVM
- IBM System p5: Model 505, Model 51A, Model 52A, Model 55A, Model 561
- IBM eServer p5: Model 510, Model 520, Model 550
- IBM eServer OpenPower: Model 710, Model 720
- IBM BladeCenter: JS21
IVM is an enhancement of the Virtual I/O Server (VIOS), the product that enables I/O virtualization in POWER5 and POWER5+ processor-based systems. It enables management of VIOS functions and uses a Web-based graphical interface that enables the administrator to remotely manage the server with a browser. The HTTPS protocol and server login with password authentication provide the security required by many enterprises.

Because one of the goals of IVM is simplification of management, some implicit rules apply to configuration and setup:
- When a system is designated to be managed by IVM, it must not be partitioned.
- The first operating system to be installed must be the VIOS.

The VIOS is automatically configured to own all of the I/O resources, and it can be configured to provide service to other LPARs through its virtualization capabilities. Therefore, when the VIOS is installed as the first partition, all other logical partitions (LPARs) do not own any physical adapters, and they must access disk, network, and optical devices only through the VIOS as virtual devices. Otherwise, the LPARs operate as they have previously with respect to processor and memory resources.

Figure 1-1 shows a sample configuration using IVM. The VIOS owns all of the physical adapters, and the other two partitions are configured to use only virtual devices. The administrator can use a browser to connect to IVM to set up the system configuration.

Figure 1-1 Integrated Virtualization Manager configuration

The system Hypervisor has been modified to enable the VIOS to manage the partitioned system without an HMC. The software that is normally running on the HMC has been rewritten to fit inside the VIOS and to provide a simpler user interface. Because the IVM is running using system resources, the design has been developed to have a minimal impact on disk, memory, and processor resources.

A specific device named the Virtual Management Channel (VMC) has been developed on the VIOS to enable a direct Hypervisor configuration without requiring additional network connections. This device is activated by default when the VIOS is installed as the first partition. The IVM does not interact with the system's service processor.
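Besides the browser interface, the VIOS command line can also be reached over the network. The following is a rough sketch of what that looks like from an administrator's workstation: the padmin account and the lssyscfg command are real parts of the VIOS/IVM environment, while the host name and the helper function (which only composes the command strings, so the syntax can be shown without an IVM system at hand) are illustrative assumptions.

```shell
# Compose the ssh invocations an administrator might use against an IVM
# system. Nothing here contacts a server; the function only builds the
# command string so the remote syntax can be shown and checked.
ivm_cmd() {
  # $1 = IVM host name (placeholder), $2 = remote IVM command to run
  printf 'ssh padmin@%s "%s"\n' "$1" "$2"
}

ivm_cmd ivm.example.com 'lssyscfg -r sys -F name,type_model'   # managed system
ivm_cmd ivm.example.com 'lssyscfg -r lpar -F name,state'       # partition list
```

Running either printed command on a real IVM system would return one comma-separated record per object, which makes the output easy to post-process in scripts.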
The VMC enables IVM to provide basic logical partitioning functions:
- Logical partitioning configuration
- Boot, start, and stop actions for individual partitions
- Display of partition status
- Management of virtual Ethernet
- Management of virtual storage
- Basic system management

Because IVM executes on an LPAR, it has limited service-based functions, and ASMI must be used. For example, because IVM does not execute while the server power is off, a server power-on must be performed by physically pushing the server power-on button or remotely accessing ASMI. ASMI and IVM together provide a simple but effective solution for a single partitioned server.

LPAR management using IVM is through a common Web interface developed for basic administration tasks. Being integrated within the VIOS code, IVM also handles all virtualization tasks that normally require VIOS commands to be run. IVM has support for dynamic LPAR, starting with Version 1.3.0.0.

Important: The internal design of IVM requires that no HMC should be connected to a working IVM system. This includes systems that had previous software levels of VIOS running on them, because they would also have been managed by an HMC. If a client wants to migrate an environment from IVM to HMC, the configuration setup has to be rebuilt manually.

IVM and HMC are two unique management systems: The IVM is designed as an integrated solution to lower your cost of ownership, and the HMC is designed for flexibility and a comprehensive set of functions. This provides you the freedom to select the ideal solution for your production workload requirements.

Important: The IVM provides a unique setup and interface with respect to the HMC for managing resources and partition configuration. An HMC expert should study the differences before using the IVM.

1.1.2 Hardware Management Console

The primary hardware management solution developed by IBM relies on an appliance server named the Hardware Management Console (HMC), packaged as an external tower or rack-mounted personal computer. It has been deployed on all IBM POWER5 processor-based systems, and great effort is made to improve its functions and ease of use. Figure 1-2 on page 5 depicts some possible configurations showing select systems that are managed by their own console, each with its own specific set of management tools.
Figure 1-2 Hardware Management Console configurations

The HMC is a centralized point of hardware control. In a System p5 environment, a single HMC can manage multiple POWER5 processor-based systems, and two HMCs can manage the same set of servers in a dual-active configuration designed for high availability.

Hardware management is performed by an HMC using a standard Ethernet connection to the service processor of each system. Interacting with the service processor, the HMC is capable of modifying the hardware configuration of the managed system, querying for changes, and managing service calls.

A hardware administrator can either log in to the physical HMC and use the native GUI, or download a client application from the HMC. This application can be used to remotely manage the HMC from a remote desktop with the same look and feel of the native GUI.

Because it is a stand-alone personal computer, the HMC does not use any managed system resources and can be maintained without affecting system activity. Reboots and software maintenance on the HMC do not have any impact on the managed systems. In the unlikely case that the HMC requires manual intervention, the systems continue to be operational, and a new HMC can be plugged into the network and configured to download the current configuration from the managed systems, thus becoming operationally identical to the replaced HMC.

The major HMC functions include:
- Monitoring of system status
- Management of IBM Capacity on Demand
- Creation of logical partitioning with dedicated processors
- Management of LPARs, including power on, power off, and console
- Dynamic reconfiguration of partitions
- Management of virtual Ethernet among partitions
- Clustering
- Concurrent firmware updates
- Hot add/remove of I/O drawers

POWER5 and POWER5+ processor-based systems are capable of Micro-Partitioning™, sharing the processors in the system and enabling I/O sharing, and the Hypervisor can support multiple LPARs. System p™ servers require an Advanced POWER Virtualization feature, and OpenPower systems require a POWER Hypervisor and Virtual I/O Server feature.
and optical device access can be shared. Partition configuration can be changed dynamically by issuing commands on the HMC or using the HMC GUI.On systems with Micro-Partitioning enabled. such as CPU. as shown in Figure 1-3 on page 7. In order to enable dynamic reconfiguration. The allocation of resources. Network. Using a Remote Monitoring and Control (RMC) protocol. and I/O. 1.3 Advanced System Management Interface Major hardware management activity is done by interacting with the service processor that is installed on all POWER5 and POWER5+ processor-based systems. such as the Service Focal Point feature. an HMC requires an Ethernet connection with every involved LPAR besides the basic connection with the service processor. The service processor can be locally accessed through a serial connection using system ports when the system is powered down and remotely accessed in either power standby or powered-on modes using an HTTPS session with a Web browser pointing to the IP address assigned to the service processor’s Ethernet ports. call-home. The Web GUI is called the Advanced System Management Interface (ASMI). and the VIOS partitions manage physical device sharing. disk. can be modified without making applications aware of the change. the HMC provides additional functions: Creation of shared processor partitions Creation of the Virtual I/O Server (VIOS) partition for physical I/O virtualization Creation of virtual devices for VIOS and client partitions The HMC interacts with the Hypervisor to create virtual devices among partitions. The HMC also provides tools to ease problem determination and service support. and error log notification through a modem or the Internet. 6 Integrated Virtualization Manager on IBM System p5 . The HMC has access to the service processor through Ethernet and uses it to configure the system Hypervisor.1. memory. 
the HMC is capable of securely interacting with the operating system to free and acquire resources and to coordinate these actions with hardware configuration changes.
each in turn. such as accessing the service processor’s error log The scope of every action is restricted to the same server. and troubleshooting. In order to deploy LPARs. In the case of multiple systems. This can be done either with an HMC or using the Integrated Virtualization Manager (IVM). the administrator can run the following basic operations: Viewing system information Controlling system power Changing the system configuration Setting performance options Configuring the service processor’s network services Using on demand utilities Using concurrent maintenance utilities Executing system service aids. Using ASMI. Overview 7 . The ASMI does not allow LPARs to be managed. The other functions are related to system configuration changes. such as access to service processor’s logs. going beyond basic hardware configuration setup. such as virtualization feature activation.Figure 1-3 Advanced System Management Interface ASMI is the major configuration tool for systems that are not managed by an HMC and it provides basic hardware setup features. the administrator must contact each of them independently. typical ASMI usage is remote system power on and power off. Chapter 1. After the initial setup. but some of its features are disabled. It is extremely useful when the system is a stand-alone system. a higher level of management is required. ASMI can be accessed and used when the HMC is connected to the system.
Advanced POWER Virtualization (APV) is a priced option; on high-end systems, such as the p5-590 and p5-595, it is a standard element. There are three components in APV:
- Micro-Partitioning
- Partition Load Manager
- The Virtual I/O Server (which includes the Integrated Virtualization Manager)

1.2 IVM design

All System p servers and the IBM BladeCenter JS21 have the capability of being partitioned because they are all preloaded with all of the necessary firmware support for a partitioned environment. Because the partitioning schema is designed by the client, every system is set up by manufacturing in the same Manufacturing Default Configuration that can be changed or reset to when required.

While configured using the Manufacturing Default Configuration, the system has the following setup from a partitioning point of view:
- There is a single predefined partition.
- All hardware resources are assigned to the single partition.
- The partition is auto-started at power-on.
- The partition has system service authority, so it can update the firmware.
- The system's physical control panel is mapped to the partition, displaying its operating system messages and error codes.
- Base platform management functions, such as power control, are provided through integrated system control functions (for example, service processor and control panel).
- Standard operating system installation methods apply for the partition (network or media-based).

The Manufacturing Default Configuration enables the system to be used immediately as a stand-alone server with all resources allocated to a single LPAR. If an HMC is attached to a POWER5 processor-based system's service processor, the system configuration can be changed to make the Hypervisor manage multiple LPARs.

When an HMC is not available and the administrator wants to exploit virtualization features, the IVM can be used. In order to set up LPARs, the IVM requires management access to the Hypervisor. The VIOS has most of the required features because it can provide virtual SCSI and virtual networking capability; starting with Version 1.2, the VIOS has been enhanced to provide management features using the IVM. The current version of the Virtual I/O Server, VIOS 1.3.0.0, comes with several IVM improvements, such as dynamic LPAR-capability of the client LPARs, security improvements (firewall, viosecure), and usability additions (TCP/IP GUI configuration, hyperlinks, simple LPAR creation, task monitor, and so on).

1.2.1 Architecture

The IVM has been developed to provide a simple environment where a single control program has the ownership of the physical hardware and other LPARs use it to access resources. The IVM has no service processor connection used by the HMC; it relies on a new virtual I/O device type called Virtual Management Channel (VMC). This device is activated only when the VIOS installation detects that the environment has to be managed by the IVM. VMC is present on the VIOS only when the following conditions are true:
- The virtualization feature has been enabled.
- The system has never been managed by an HMC.
- The system is in Manufacturing Default Configuration.

In order to fulfill these requirements, an administrator has to use the ASMI. By using the ASMI, the administrator can enter the virtualization activation code, reset the system to the Manufacturing Default Configuration, and so on. A system reset removes any previous LPAR configuration and any existing HMC connection configuration.

On a VIOS partition with IVM activated, a new ibmvmc0 virtual device is present, and a management Web server is started, listening on HTTP port 80 and on HTTPS port 443. VIOS 1.3.0.0 also enables secure (encrypted) shell access (SSH). The presence of the virtual device can be detected using the lsdev -virtual command, as shown in Example 1-1.

Example 1-1 Virtual Management Channel device
$ lsdev -virtual | grep ibmvmc0
ibmvmc0 Available Virtual Management Channel

Because the IVM relies on VMC to set up logical partitioning, it can manage only the system on which it is installed. In the case of multiple IVM managed systems, the administrator must open an independent Web browser session to each one.

The primary user interface is a Web browser that connects to port 80 of the VIOS. The Web server provides a simple GUI and runs commands using the same command line interface that can be used for logging in to the VIOS. One set of commands provides LPAR management through the VMC, and a second set controls VIOS virtualization capabilities. Figure 1-4 on page 10 provides the schema of the IVM architecture; it also shows the integration with IBM Director (Pegasus CIM server).
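The check in Example 1-1 can be wrapped in a small script. In this offline sketch, the sample lsdev output is hard-coded so the test logic can be shown on its own; on a real VIOS you would pipe `lsdev -virtual` into the same grep instead.

```shell
#!/bin/sh
# Sketch: test whether the Virtual Management Channel device is active.
# The sample output below is embedded for illustration only.
lsdev_output="ibmvmc0 Available Virtual Management Channel"

if echo "$lsdev_output" | grep -q '^ibmvmc0  *Available'; then
    echo "VMC present: this VIOS can be managed through the IVM"
else
    echo "VMC absent: check the IVM activation conditions"
fi
```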
Figure 1-4 IVM high-level design

LPARs in an IVM managed system are isolated exactly as before and cannot interact except using the virtual devices. Only the IVM has been enabled to perform limited actions on the other LPARs, such as:
- Activate and deactivate
- Send a power off (EPOW) signal to the operating system
- Create and delete
- View and change configuration

1.2.2 LPAR configuration

The simplification of the user interface of a single partitioned system is one of the primary goals of the IVM. Compared to HMC managed systems, configuration flexibility has been reduced to provide a basic usage model, so a new user with no HMC skills will easily manage the system in an effective way.

LPAR management has been designed to enable a quick deployment of partitions. LPAR configuration is made by assigning CPU, memory, and virtual I/O using a Web GUI wizard. At each step of the process, the administrator is asked simple questions, which provide the range of possible answers. Most of the parameters that are related to LPAR setup are hidden during creation time to ease the setup and can be finely tuned later, changing partition properties if needed after initial setup.

Resources that are assigned to an LPAR are immediately allocated and are no longer available to other partitions, regardless of whether the LPAR is activated or powered down. This behavior makes management more direct, and it is a change compared to HMC managed systems, where resource overcommitment is allowed. It is important to understand that any unused processor resources do become available to other partitions through the shared pool when an LPAR is not using all of its processor entitlement.

System configuration is described in the GUI, as shown in Figure 1-5. In this example, an unbalanced system has been manually prepared as a specific scenario. The system has 4 GB of global memory, 2 processing units, and four LPARs defined. The processing units for the LPAR named LPAR1 (ID 2) have been changed from the default 0.2 created by the wizard to 0.1. LPAR1 can use up to one processor, because it has one virtual processor, and it has been guaranteed the use of up to 0.1 processing units. Even though the LPAR2 and LPAR3 partitions have not been activated, their resources have been allocated, and the available system memory and processing units have been updated accordingly. In the Partition Details panel, the allocated resources are shown in terms of memory and processing units. If a new LPAR is created, it cannot use the resources belonging to a powered-off partition, but it can be defined using the available free resources shown in the System Overview panel.

Figure 1-5 System configuration status
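As a hypothetical worked example of this accounting (the entitlement values below are illustrative, not taken from Figure 1-5), the free capacity of the shared pool is simply the total processing units minus the sum of all entitlements, whether or not the owning LPARs are activated:

```shell
#!/bin/sh
# Illustrative accounting of shared-pool processing units on an IVM system.
# Entitlements are in hundredths of a processing unit to avoid floating point.
total=200                  # 2.00 processing units in the system
entitlements="30 10 40 50" # VIOS and three client LPARs (hypothetical values)

used=0
for e in $entitlements; do
    used=$((used + e))
done
free=$((total - used))

# Allocated capacity is gone from the pool even for powered-off LPARs.
printf "Allocated: %d.%02d units\n" $((used / 100)) $((used % 100))
printf "Available: %d.%02d units\n" $((free / 100)) $((free % 100))
```

With these numbers the script reports 1.30 units allocated and 0.70 units available, mirroring the kind of summary the System Overview panel presents.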
Memory

Memory is assigned to an LPAR using available memory on the system, with an allocation unit size that can vary from system to system depending on its memory configuration. The wizard provides this information, as shown in Figure 1-6.

Figure 1-6 Memory allocation to LPAR

The minimum allocation size of memory is related to the system's logical memory block (LMB) size. It is defined automatically at the boot of the system, depending on the size of physical memory, but it can be changed using the ASMI on the Performance Setup menu, as shown in Figure 1-7. The default automatic setting can be changed to the following values: 16 MB, 32 MB, 64 MB, 128 MB, or 256 MB.

Figure 1-7 Logical Memory Block Size setup

In order to change the LMB setting, the entire system has to be shut down. If an existing partition has a memory size that does not fit in the new LMB size, the memory size is changed to the nearest value that can be allowed by the new LMB size, without exceeding the original memory size.

A small LMB size provides better granularity in memory assignment to partitions, but requires higher memory allocation and deallocation times because more operations are required for the same amount of memory. Larger LMB sizes can slightly increase the firmware reserved memory size. It is suggested to keep the default automatic setting.

Processors

An LPAR can be defined either with dedicated or with shared processors. The wizard provides available resources in both cases and asks which processor resource type to use. When shared processors are selected for a partition, the wizard only asks the administrator to choose the number of virtual processors to be activated, with a maximum value equal to the number of system processors. For each virtual processor, 0.1 processing units are implicitly assigned, and the LPAR is created in uncapped mode with a weight of 128.

Figure 1-8 shows the wizard panel related to the system configuration described in Figure 1-5 on page 11. Because only 0.7 processing units are available, no dedicated processors can be selected and a maximum of two virtual processors is allowed. Selecting one virtual processor will allocate 0.1 processing units.

Figure 1-8 Processor allocation to LPAR

The LPAR configuration can be changed after the wizard has finished creating the partition. Available parameters are:
- Processing unit value
- Virtual processor number
- Capped or uncapped property
- Uncapped weight

The default LPAR configuration provided by the partition creation wizard is designed to keep the system balanced. The configuration described in Figure 1-5 on page 11 shows manually changed processing units, and it is quite unbalanced. Manual changes to the partition configuration should be made after careful planning of the resource distribution.
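The wizard's implicit defaults for a shared-processor partition can be sketched as a small calculation (the virtual processor count below is an example value, not a recommendation):

```shell
#!/bin/sh
# Sketch: the wizard's implicit processor defaults for a shared LPAR.
# 0.1 processing units per virtual processor, uncapped, weight 128.
virtual_procs=2
entitled_hundredths=$((virtual_procs * 10))   # 0.1 PU per virtual processor

printf "Virtual processors: %d\n" "$virtual_procs"
printf "Default entitlement: 0.%02d processing units\n" "$entitled_hundredths"
printf "Mode: uncapped, weight 128\n"
```

For two virtual processors, the wizard therefore guarantees 0.20 processing units, which is exactly the default that was lowered to 0.1 for LPAR1 in the example above.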
As a general suggestion for the LPAR configuration:
- Select appropriate virtual processors and keep the default processing units when possible.
- Do not underestimate the processing units assigned to the VIOS; on system peak utilization periods, they can be important for the VIOS to provide service to highly active partitions.
- Leave some system processing units unallocated. If not needed, they remain available in the shared pool to all LPARs that require them.

Virtual Ethernet

Every IVM managed system is configured with four predefined virtual Ethernet devices, each with a virtual Ethernet ID ranging from 1 to 4. Every LPAR can have up to two virtual Ethernet adapters that can be connected to any of the four virtual networks in the system. An administrator can decide how to configure the two available virtual adapters. By default, adapter 1 is assigned to virtual Ethernet 1 and the second virtual Ethernet is unassigned. Figure 1-9 shows a Virtual Ethernet wizard panel; all four virtual networks are described with the corresponding bridging physical adapter, if configured.

Figure 1-9 Virtual Ethernet allocation to LPAR

Each virtual Ethernet can be bridged by the VIOS to a physical network using only one physical adapter. The same physical adapter or physical adapter aggregation cannot bridge more than one virtual Ethernet. If higher performance or redundancy is required, a physical adapter aggregation can be made on one of these bridges instead. See 4.1, “Network management” on page 72 for more details. The virtual Ethernet is a bootable device and can be used to install the LPAR's operating system.

Virtual storage

Every LPAR can be equipped with one or more virtual devices using a single virtual SCSI adapter. A virtual disk device has the following characteristics:
- The size is defined by the administrator.
- It is treated by the operating system as a normal SCSI disk.
- It is bootable.
- It is created using the physical storage owned by the VIOS partition, either internal or external to the physical system (for example, on the storage area network). It can be defined either using an entire physical volume (a SCSI disk or a logical unit number of an external storage server) or a portion of a physical volume.
- Virtual disk device content is preserved if moved from one LPAR to another or increased in size.

A virtual disk device that does not require an entire physical volume can be defined using disk space from a storage pool created on the VIOS. A storage pool is a set of physical volumes. The IVM can manage multiple storage pools and change their configurations by adding or removing physical disks to them. In order to simplify management, one pool is defined to be the default storage pool, and most virtual storage actions implicitly refer to it. At the time of writing, we recommend keeping each storage pool on a single physical SCSI adapter.

Virtual disk devices can be created spanning multiple disks in a storage pool, and they can be extended if needed. Before making changes in the virtual disk device allocation or size, the owning partition should deconfigure the device to prevent data loss.

Virtual optical devices

Any optical device that is assigned to the VIOS partition (either CD-ROM, DVD-ROM, or DVD-RAM) can be virtualized and assigned to any LPAR, one at a time, using the same virtual SCSI adapter provided to virtual disks. Virtual optical devices can be used to install the operating system and, when a DVD-RAM is available, to make backups.

Virtual TTY

In order to allow LPAR installation and management, the IVM provides a virtual terminal environment for LPAR console handling. When a new LPAR is defined, two matching virtual serial adapters are created for console access, one on the LPAR and one on the IVM. This provides a connection from the IVM to the LPAR through the Hypervisor.

The virtual terminal is provided for initial installation and setup of the operating system and for maintenance reasons. Normal access to the partition is made through the network using services such as telnet and ssh. The IVM does not provide a Web-based terminal session to partitions. In order to connect to an LPAR's console, the administrator has to log in to the VIOS and use the command line interface. The following commands are provided:

mkvt    Connect to a console.
rmvt    Remove an existing console connection.

Only one session for each partition is allowed, because there is only one virtual serial connection.

1.2.3 Considerations for partition setup

When using the IVM, it is easy to create and manage a partitioned system, because most of the complexity of the LPAR setup is hidden. A new user can quickly learn an effective methodology to manage the system. However, it is important to understand how configurations are applied and can be changed.
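The mkvt and rmvt commands introduced in the Virtual TTY discussion can be used as in the sketch below, which only builds and prints the command lines; the -id flag is an assumption based on common IVM usage, so verify it against the command reference for your VIOS level before relying on it.

```shell
#!/bin/sh
# Sketch: open and force-close an LPAR console from the VIOS command line.
# The -id flag is assumed; the partition ID is an example value.
lpar_id=2
open_cmd="mkvt -id $lpar_id"    # attach to the partition's console
close_cmd="rmvt -id $lpar_id"   # break a session left open elsewhere

echo "$open_cmd"
echo "$close_cmd"
```

Because only one console session per partition exists, rmvt is the way to recover a console that was left attached in another login session.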
The VIOS is the only LPAR that is capable of management interaction with the Hypervisor and is able to react to hardware configuration changes. Its configuration can be changed dynamically while it is running. The other LPARs do not have access to the Hypervisor and have no interaction with the IVM to be aware of possible system changes.

Starting with IVM 1.3.0.0, it is possible to change any resource allocation for the client LPARs through the IVM Web interface. This enables the user to change the processing unit configuration, memory allocation, and virtual adapter setup while the LPAR is activated. This is possible with the introduction of a new concept called DLPAR Manager (with an RMC daemon).

The IVM command line interface enables an experienced administrator to make modifications to a partition configuration. Changes made using the command line are shown in the Web GUI, and a warning message is displayed to highlight the fact that the resources of an affected LPAR are not yet synchronized. In order to detect the actual values, the administrator must select the partition on the GUI and click the Properties link, or just click the hyperlink for more details about the synchronization of the current and pending values. Figure 1-10 shows a case where the memory has been changed manually on the command line.

Figure 1-10 Manual LPAR configuration

Figure 1-11 shows a generic LPAR schema from an I/O point of view. Every LPAR is created with one virtual serial and one virtual SCSI connection. Because there is only one virtual SCSI adapter for each LPAR, the Web GUI hides its presence and shows virtual disks and optical devices as assigned directly to the partition. When the command line interface is used, the virtual SCSI adapter must be taken into account.

Figure 1-11 General I/O schema on an IVM managed system

For virtual I/O adapter configuration, the administrator only has to define whether to create one or two virtual Ethernet adapters on each LPAR and the virtual network to which each has to be connected. There are four predefined virtual networks, and the VIOS already is equipped with one virtual adapter connected to each of them. Only virtual adapter addition and removal and virtual network assignment require the partition to be shut down.

All remaining I/O configurations are done dynamically:
- An optical device can be assigned to any virtual SCSI channel.
- A virtual disk device can be created, deleted, or assigned to any virtual SCSI channel.
- Ethernet bridging between a virtual network and a physical adapter can be created, deleted, or changed at any time.
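On the VIOS command line, such a bridge takes the form of a Shared Ethernet Adapter created with mkvdev. The sketch below only assembles the command line: the device names (ent0 for the physical adapter, ent2 for the virtual adapter on network 1) are assumptions for illustration, and the flags should be checked against the VIOS command reference for your level.

```shell
#!/bin/sh
# Sketch: bridge virtual network 1 to a physical adapter with a
# Shared Ethernet Adapter. Device names are hypothetical.
phys=ent0      # physical Ethernet adapter owned by the VIOS
virt=ent2      # virtual Ethernet adapter on virtual network 1
vlan_id=1      # virtual Ethernet ID to use as the default

sea_cmd="mkvdev -sea $phys -vadapter $virt -default $virt -defaultid $vlan_id"
echo "$sea_cmd"
```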
Chapter 2. Installation

Starting with Version 1.2, the IVM is shipped with the VIOS media. It is activated during the VIOS installation only if all of the following conditions are true:
- The system is in the Manufacturing Default Configuration.
- The system has never been managed by an HMC.
- The virtualization feature has been enabled.

A new system from manufacturing that has been ordered with the virtualization feature will be ready for the IVM. If a system supports the IVM, it can also be ordered with the IVM preinstalled. If the system has ever been managed by an HMC, the administrator is required to reset it to the Manufacturing Default Configuration. If virtualization has not been activated, the system cannot manage micropartitions, and an IBM sales representative should be contacted to order the activation code.

The IVM installation requires the following items:
- A serial ASCII console and cross-over cable (a physical ASCII terminal or a suitable terminal emulator) connected to one of the two system ports for initial setup
- An IP address for the IVM
- An optional, but recommended, IP address for the Advanced System Management Interface (ASMI)

This chapter describes how to install the IVM on a supported system. The procedure is valid for any system as long as the IVM requirements are satisfied. In this case, we start with a complete reset of the server. If the system is in Manufacturing Default Configuration and the Advanced POWER Virtualization feature is enabled, skip the first steps and start with the IVM media installation in 2.5, “VIOS image installation” on page 28.
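The serial ASCII console requirement can be met with a terminal emulator on a workstation. The sketch below builds a cu invocation matching the service processor port settings used later in this chapter (19200 bps, 8 data bits, no parity, 1 stop bit); the device name /dev/ttyS0 and the availability of cu are assumptions about the workstation.

```shell
#!/bin/sh
# Sketch: connect to a system port with the cu terminal emulator.
# /dev/ttyS0 is a hypothetical device name; adjust for your workstation.
line=/dev/ttyS0
speed=19200        # matches the service processor port settings
cu_cmd="cu -l $line -s $speed"

echo "$cu_cmd"     # run this interactively, then press a key for the prompt
```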
2.1 Reset to Manufacturing Default Configuration

This operation is needed only if the system has been previously managed by the HMC. It resets the system, removing all partition configuration and any personalization that has been made to the service processor.

Note: After a factory configuration reset, the system activates the microcode version present in the permanent firmware image. Check the firmware levels in the permanent and temporary images before resetting the system.

Note: More information about migration between the HMC and IVM can be found in 5.1, “The migration between HMC and IVM” on page 98.

The following steps describe how to reset the system:
1. Power off the system.
2. Connect a serial ASCII console to a system port using a null-modem (cross-over) cable. The port settings are:
   – 19200 bits per second
   – 8 data bits
   – No parity
   – 1 stop bit
   – Xon/Xoff flow control
3. Press any key on the TTY's serial connection to receive the service processor prompt.
4. Log in as the user admin (the default password is admin) and answer the questions about the number of lines and columns for the output.
5. Enter the System Service Aids menu and select the Factory Configuration option. A warning message similar to what is shown in Example 2-1 describes the effect of the reset and asks for confirmation. Enter 1 to confirm.

Make sure that the interface HMC1 or HMC2 not being used by the ASMI or an HMC is disconnected from the network, and follow the instructions in the system service publications to configure the network interfaces after the reset.

Example 2-1 Factory configuration reset
Continuing will result in the loss of all configured system settings (such as the HMC access and ASMI passwords, time of day, network configuration, hardware deconfiguration policies, etc.) that you may have set via user interfaces. Also, you will lose the platform error logs and partition-related information. Additionally, the service processor will be reset. Before continuing with this operation make sure you have manually recorded all settings that need to be preserved.

Enter 1 to confirm or 2 to cancel: 1

The service processor will reboot in a few seconds.

2.2 Microcode update

In order to install the IVM, a microcode level SF235 or later is required. If the update is not needed, skip this section.

The active microcode level is provided by the service processor. If the system is powered off, connect to the system ports as described in 2.1, “Reset to Manufacturing Default Configuration” on page 20, and log in as the admin user. The first menu shows the system's microcode level in the Version line (Example 2-2).

Example 2-2 Current microcode level display using system port
System name: Server-9111-520-SN10DDEDC
Version: SF235_160
User: admin
Copyright (c) 2002-2005 IBM Corporation. All rights reserved.
S1>

If the service processor's IP address is known, the same information is provided using the ASMI in the upper panel of the Web interface, as shown in Figure 2-1. For a description of the default IP configuration, see 2.3, “ASMI IP address setup” on page 23.
Figure 2-1 Current microcode level display using the ASMI
If the system microcode must be updated, the code and installation instructions are available from the following Web site:

http://www14.software.ibm.com/webapp/set2/firmware

Microcode can be installed through one of the following methods:
- HMC
- Running operating system
- Running IVM
- Diagnostic CD

The HMC and running operating system methods require the system to be reset to the Manufacturing Default Configuration before installing the IVM. If the system is already running the IVM, refer to 5.3, “Microcode update” on page 110 for instructions.

In order to use a diagnostic CD, a serial connection to the system port is required, with the setup described in 2.1, “Reset to Manufacturing Default Configuration” on page 20. The following steps describe how to update the microcode using a diagnostic CD:

1. Download the microcode as an ISO image and burn it onto a CD-ROM. The latest image is available at:
   http://techsupport.services.ibm.com/server/mdownload/p5andi5.iso
2. Insert the diagnostic CD in the system drive and boot the system from it.
3. Follow the instructions on the screen until the main menu screen (Example 2-3) opens.

Example 2-3 Main diagnostic CD menu
FUNCTION SELECTION
1 Diagnostic Routines
  This selection will test the machine hardware. Wrap plugs and other advanced functions will not be used.
2 Advanced Diagnostics Routines
  This selection will test the machine hardware. Wrap plugs and other advanced functions will be used.
3 Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
  This selection will list the tasks supported by these procedures. Once a task is selected, a resource menu may be presented showing all resources supported by the task.
4 Resource Selection
  This selection will list the resources in the system that are supported by these procedures. Once a resource is selected, a task menu will be presented showing all tasks that can be run on the resource(s).
99 Exit Diagnostics

NOTE: The terminal is not properly initialized. You will be prompted to initialize the terminal after selecting one of the above options.

To make a selection, type the number and press Enter.

4. Select Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.) → Update and Manage System Flash → Validate and Update System Firmware.
5. Remove the diagnostic CD from the drive and insert the microcode CD.
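The download-and-burn step can be performed from a workstation shell. The sketch below only assembles the command lines; wget and cdrecord are assumptions about the available tooling, and the burner device selection is left to the local cdrecord defaults.

```shell
#!/bin/sh
# Sketch: fetch the microcode ISO and burn it to CD from a workstation.
# Tool names (wget, cdrecord) are assumptions; adjust for your system.
iso_url="http://techsupport.services.ibm.com/server/mdownload/p5andi5.iso"
fetch_cmd="wget $iso_url"
burn_cmd="cdrecord -v p5andi5.iso"

echo "$fetch_cmd"
echo "$burn_cmd"
```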
6. When prompted for the flash update image file, select the CD drive from the menu.
7. On the confirmation screen, shown in Example 2-4, press the F7 key to commit. If the console does not support it, use the Esc-7 sequence.
8. On the final screen, select YES and wait for the firmware update to be completed and for the subsequent system reboot to be executed.

Example 2-4 Confirmation screen for microcode update
UPDATE AND MANAGE FLASH
The image is valid and would update the temporary image to SF235_137.
The new firmware level for the permanent image would be SF220_051.
The current permanent system firmware image is SF220_051.
The current temporary system firmware image is SF220_051.
***** WARNING: Continuing will reboot the system! *****
Do you wish to continue?
Make selection, use 'Enter' to continue.
NO
YES

2.3 ASMI IP address setup

The service processor is equipped with two standard Ethernet ports, labeled HMC1 and HMC2, for network access. In an IVM environment, they are used to access the ASMI menus using a Web browser. ASMI enables remote hardware administration and service agent setup and relies on the HTTPS protocol. Serial ports are available for service processor access only if the system is powered off.

By default, when the system is connected to a power source and the service processor boots, a Dynamic Host Configuration Protocol (DHCP) request is sent in the network through both HMC ports. If a DHCP server is available, it provides an IP address to the port; otherwise, the following default values are used:
- Port HMC1: 192.168.2.147, netmask 255.255.255.0
- Port HMC2: 192.168.3.147, netmask 255.255.255.0

The IP configuration of the ports can be changed using the ASMI menu or by connecting to the system serial ports. Both Ethernet ports can be used if a valid network address is given.

The DHCP-managed addresses are mainly intended to be used in an HMC environment. In an IVM environment, it might become difficult to contact the ASMI, because the addresses might change when the service processor reboots. The IVM is capable of showing the IP addresses of both HMC ports, but when the system is powered off and the IVM is not running, the ASMI can be reached only if the current IP configuration is known or if the default addresses are in use.
168. Opera 7. 24 Integrated Virtualization Manager on IBM System p5 . You need a system equipped with a Web browser (Netscape 7. Review your configuration and click Save settings to apply the change. Log in as the user admin with the password admin. or later versions) and configured with the following network configuration: IP 192. The Network interface eth0 corresponds to port HMC1. reconnect the power. Microsoft® Internet Explorer® 6.148 Netmask 255. 4.3. Connect the Web browser using the following URL: https://184.108.40.206 Use the following steps to set up the addressing: 1. remove all power to the system. Figure 2-2 shows the corresponding menu.2. eth1 corresponds to HMC2.2. Expand the Network Services menu and click Network Configuration.255.0. If you are not sure about DHCP. 2. and then wait for service processor to boot.1 Address setting using the ASMI The following procedure uses the default address assigned to port HMC1. you can disconnect the Ethernet cable from HMC1 port.255. That address is in use if no other address has been manually configured and if no DHCP server gave an IP address to the port when the system was connected to a power source. 5. Complete the fields with the desired network settings and click Continue.147 3.168. Use an Ethernet cable to connect the HMC1 port with the Ethernet port of your system.2. select Figure 2-2 HMC1 port setup using the ASMI 6.
To define a fixed IP address, provide the IP address, the netmask, and, possibly, the default gateway.

2.3.2 Address setting using serial ports

When the HMC ports' IP addresses are not known and ASMI cannot be used, it is possible to access the service processor by attaching an ASCII console to one of the system serial ports.

The following steps describe how to assign a fixed IP address to an HMC port:
1. Power off the system.
2. Connect to the system port as described in 2.1, “Reset to Manufacturing Default Configuration” on page 20.

The menu enables you to configure the interfaces Eth0 and Eth1, which correspond to system ports HMC1 and HMC2. Example 2-5 shows the steps to configure the port HMC1.

Example 2-5 HMC1 port configuration

Network Configuration
1. Configure interface Eth0
2. Configure interface Eth1
98. Return to previous menu
99. Log out
S1> 1
Configure interface Eth0
MAC address: 00:02:55:2F:BD:E0
Type of IP address   Currently: Dynamic
1. Dynamic   Currently: 192.168.2.147
2. Static
98. Return to previous menu
99. Log out
S1> 2
Configure interface Eth0
MAC address: 00:02:55:2F:BD:E0
Type of IP address: Static
1. Host name
2. Domain name
3. IP address (Currently: 192.168.2.147)
4. Subnet mask
5. Default gateway
6. IP address of first DNS server
7. IP address of second DNS server
8. IP address of third DNS server
9. Save settings and reset the service processor
98. Return to previous menu
99. Log out

2.4 Virtualization feature activation

This step is needed only if the system has not yet been enabled with virtualization. Normally, new systems ordered with this feature come from manufacturing with virtualization active. Virtualization is enabled using a specific code that is shipped with the system, or that can be retrieved from the following address after providing the system type and serial number:

http://www-912.ibm.com/pod/pod

The ASMI is used to activate the virtualization feature with the following steps:
1. Connect to the ASMI with a Web browser using the HTTPS protocol to the IP address of one of the HMC ports and log in as the user admin. The default password is admin.
2. Set the system in standby state. Expand the Power/Restart Control menu and click Power On/Off System. In the Boot to system server firmware field, select Standby and click Save settings and power on. Figure 2-3 shows the corresponding ASMI menu.

Figure 2-3 ASMI menu to boot system in standby mode
3. Enter the activation code as soon as the system has finished booting. Expand the On Demand Utilities menu and click CoD Activation. Figure 2-4 shows the corresponding menu. Enter the code provided to activate the feature in the specific system and click Continue. A confirmation message appears.

Figure 2-4 ASMI virtualization code activation
4. Set the system in running mode and shut it off. Again, select the Power On/Off System menu, select Running for the Boot to system server firmware field, and click Save settings and power off, as shown in Figure 2-5.

Figure 2-5 ASMI menu to bring system in running mode and power off

2.5 VIOS image installation

The Virtual I/O Server is shipped as a single media that contains a bootable image of the software. It contains the IVM component. Installing it requires a serial connection to the system port with the setup described in 2.1, “Reset to Manufacturing Default Configuration” on page 20.

The following steps describe how to install the VIOS:
1. Power on the system, using either the ASMI or pushing the power-on (white) button at the front of the system.
2. When connecting using a TTY to the serial connection, you might be prompted to define it as an active console. If so, press the key that is indicated on the screen.
3. Wait for the System Management Services (SMS) menu shown in Example 2-6 on page 29 and enter 1 after the word keyboard appears on the screen.
Example 2-6 SMS menu selection

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
(the IBM banner line repeats across the screen while the system boots)
1 = SMS Menu                       5 = Default Boot List
8 = Open Firmware Prompt           6 = Stored Boot List
memory    keyboard    network    scsi    speaker

4. Select the console number and press Enter.
5. Select the preferred installation language from the menu.
6. When requested, provide the password for the service processor's admin user. The default password is admin.
7. Insert the VIOS installation media in the drive.
8. Use the SMS menus to select the CD or DVD device to boot. Select Select Boot Options → Select Install/Boot Device → CD/DVD → IDE and choose the right device from a list similar to the one shown in Example 2-7.

Example 2-7 Choose optical device from which to boot

Version: SF240_261
SMS 1.5 (c) Copyright IBM Corp. 2000,2003 All rights reserved.
-------------------------------------------------------------------------------
Select Device
Device  Current  Device
Number  Position Name
1.         1     IDE CD-ROM
                 ( loc=U787B.001.DNW108F-P4-D2 )
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen       X = eXit System Management Services
-------------------------------------------------------------------------------
Type the number of the menu item and press Enter or select Navigation Key:1

9. Select Normal Mode Boot and exit from the SMS menu.
10. Select the installation preferences, as shown in Example 2-8. Choose the default settings.

Example 2-8 VIOS installation setup

Welcome to Base Operating System Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.
>>> 1 Start Install Now with Default Settings
    2 Change/Show Installation Settings and Install
    3 Start Maintenance Mode for System Recovery
   88 Help ?
   99 Previous Menu
>>> Choice : 1

11. Wait for the VIOS restore. A progress status is shown, as in Example 2-9. At the end, VIOS reboots.

Example 2-9 VIOS installation progress status

Installing Base Operating System

Please wait...

Approximate % tasks complete        Elapsed time (in minutes)
            28                                  7
29% of mksysb data restored.

12. Log in to the VIOS using the user padmin and the default password padmin. When prompted, change the login password to something secure.
13. Accept the VIOS license by issuing the license -accept command.

2.6 Initial configuration

The new VIOS requires a simple configuration setup using the command line interface. Then, all management is performed using the Web interface.

2.6.1 Virtualization setup

The four virtual Ethernet interfaces that the IVM manages are not created during the VIOS installation. The administrator must execute the mkgencfg command to create them, with the following syntax:

mkgencfg -o init [-i "configuration data"]

The optional configuration data can be used to define the prefix of the MAC address of the four VIOS virtual Ethernet adapters and to define the maximum number of partitions supported by the IVM after the next restart. Although the maximum number of partitions can be changed later using the IVM Web GUI, the MAC address can no longer be modified. Example 2-10 shows the effect of the command.

Example 2-10 The mkgencfg command

$ lsdev | grep ^ent
ent0  Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1  Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2  Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent3  Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
$ mkgencfg -o init
$ lsdev | grep ^ent
ent0  Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1  Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2  Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent3  Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4  Available  Virtual I/O Ethernet Adapter (l-lan)
ent5  Available  Virtual I/O Ethernet Adapter (l-lan)
ent6  Available  Virtual I/O Ethernet Adapter (l-lan)
ent7  Available  Virtual I/O Ethernet Adapter (l-lan)

2.6.2 Set the date and time

Use the chdate command to set the VIOS date and time, using the following syntax:

chdate [-year YYyy] [-month mm] [-day dd] [-hour HH] [-minute MM] [-timezone TZ]
chdate mmddHHMM[YYyy | yy] [-timezone TZ]

2.6.3 Initial network setup

The IVM Web interface requires a valid network configuration to work. Configure the IP by choosing a physical network adapter and issuing the mktcpip command from the command line, using the following syntax:

mktcpip -hostname HostName -inetaddr Address -interface Interface [-start]
[-netmask SubnetMask] [-cabletype CableType] [-gateway Gateway]
[-nsrvaddr NameServerAddress [-nsrvdomain Domain]]

Example 2-11 shows setting the host name, address, and netmask for the IVM.

Example 2-11 IVM network setup at the command line

$ mktcpip -hostname ivm -inetaddr 9.3.5.123 -interface en0 -start -netmask 255.255.255.0 -gateway 9.3.5.41

Important: The IVM, like a Web server, requires a valid name resolution to work correctly. If DNS is involved, check that both the name and IP address resolution of the IVM host name are correct.

After the IVM Web server has access to the network, it is possible to use the Web GUI with the HTTP or the HTTPS protocol pointing to the IP address of the IVM server application. Authentication requires the use of the padmin user, unless other users have been created.

2.6.4 Changing the TCP/IP settings on the Virtual I/O Server

Virtual I/O Server Version 1.3.0 with the Integrated Virtualization Manager enables you to change the TCP/IP settings on the Virtual I/O Server through the graphical user interface. Use any role other than the View Only role to perform this task; users with the View Only role can view the TCP/IP settings but cannot change them. Before you can view or modify the TCP/IP settings, you must have an active network interface.
Important: Modifying your TCP/IP settings remotely might result in the loss of access to the current session. Ensure that you have physical console access to the Integrated Virtualization Manager partition prior to making changes to the TCP/IP settings.

To view or modify the TCP/IP settings, perform the following steps:
1. From the IVM Management menu, click View/Modify TCP/IP Settings. The View/Modify TCP/IP Settings panel opens (Figure 2-6).

Figure 2-6 View/Modify TCP/IP settings

2. Depending on which setting you want to view or modify, click one of the following tabs:
– General to view or modify the host name and the partition communication IP address
– Network Interfaces to view or modify the network interface properties, such as the IP address, subnet mask, and the state of the network interface
– Name Services to view or modify the domain name, name server search order, and domain server search order
– Routing to view or modify the default gateway
3. Click Apply to activate the new settings.

2.7 VIOS partition configuration

After you complete the network configuration of VIOS, the IVM interface is available and can be accessed using a Web browser. Connect using HTTP or HTTPS to the IP address assigned to VIOS and log in as the user padmin.
The first panel that opens after the login process is the partition configuration, as shown in Figure 2-7. After the initial installation of the IVM, there is only the VIOS partition on the system, with the following characteristics:

- The ID is 1.
- The name is equal to the system's serial number.
- The state is Running.
- The allocated memory is the maximum value between 512 MB and one-eighth of the installed system memory.
- The number of virtual processors is equal to or greater than the number of processing units, and the processing units are equal to at least 0.1 times the total number of virtual processors in the LPAR.

Figure 2-7 Initial partition configuration

The default configuration for the partition has been designed to be appropriate for most IVM installations. If the administrator wants to change the memory or processing unit allocation of the VIOS partition, a dynamic reconfiguration action can be made either using the Web GUI or the command line, as described in 3.5, “LPAR configuration changes” on page 57. With VIOS/IVM 1.3, dynamic reconfiguration of memory and processors (AIX 5L) or processors (Linux) of the client partitions is also supported.

2.8 Network management

When installed, the VIOS configures one network device for each physical Ethernet present on the system and creates four virtual Ethernet adapters, each belonging to a separate virtual network. Any partition can be created with its own virtual adapters connected to any of the four available virtual networks. The IVM enables any virtual network to be bridged using any physical adapter, provided that the same physical adapter is not used to bridge more than one virtual network. No bridging is provided with physical adapters at installation time.
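The current bridging state can also be inspected from the VIOS command line. As a hedged sketch (the lsmap command exists on the VIOS; the exact output depends on your configuration, and right after installation no bridge mappings are listed):

```
# List the virtual-to-physical Ethernet (Shared Ethernet Adapter) mappings
$ lsmap -all -net
```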
2.9 Virtual Storage management

The IVM uses the following virtual storage management concepts:

Physical volume: A physical disk or a logical unit number (LUN) on a storage area network (SAN). Physical volumes are all owned by the IVM. A physical volume not belonging to any storage pool can be assigned whole to a single partition as a virtual device.

Storage pool: A set of physical volumes treated as a single entity. There can be multiple pools, and they cannot share physical disks. One pool is defined as the default storage pool.

Virtual disk: A logical volume that the IVM assigns to a single partition as a virtual device. From any storage pool, virtual disks can be defined and configured.

Both physical volumes and virtual disks can be assigned to an LPAR to provide disk space. Each of them is represented by the LPAR operating system as a single disk. For example, assigning a 73.4 GB physical disk and a 3 GB virtual disk to an LPAR running AIX 5L makes the operating system create two hdisk devices.

Virtual disks can be created in several ways, depending on the IVM menu that is used:
- During LPAR creation: A virtual disk is created in the default storage pool and assigned to the partition.
- Using the Create Virtual Storage link: A virtual disk is not assigned to any partition and is created in the default storage pool. The storage pool can then be selected by using the Storage Pool tab on the View/Modify Virtual Storage view.

At installation time, there is only one storage pool, named rootvg, normally containing only one physical volume. All remaining physical volumes are available but not assigned to any pool. Because it is the only pool available at installation time, rootvg is also defined as the default pool. The rootvg pool is used for IVM management, and we do not recommend using it to provide disk space to LPARs.

Important: Create at least one additional storage pool so that the rootvg pool is not the default storage pool. Create another pool and set it as the default before creating other partitions.

You can use rootvg as a storage pool on a system equipped with a SCSI RAID adapter when all of the physical disks are configured as a single RAID array. In this case, the administrator must first boot the server using the Standalone Diagnostics CD-ROM provided with the system and create the array; only one disk will be available, representing the array itself.

We discuss basic storage management in 3.2.2, “Storage pool disk management” on page 39 and in 4.2, “Storage management” on page 76. In 4.1, “Network management” on page 72, we describe the network bridging setup.
2.10 Installing and managing the Virtual I/O Server on a JS21

This section discusses the Virtual I/O Server with respect to the JS21 platform. IVM functions in the same way as on a System p5 server, and LPARs are managed identically. IBM Director is the management choice for a BladeCenter JS21.

2.10.1 Virtual I/O Server image installation from DVD

Virtual I/O Server 1.3.0 is shipped as a single DVD media that contains a bootable image of the software. When installing VIOS/IVM from DVD, you must assign the media tray to the desired blade and then mount the VIOS installation media. The remaining steps are similar to a normal AIX 5L operating system installation.

Note: When using the JS21 in a BladeCenter chassis that does not have a DVD drive in the media tray, the VIOS can be installed through the network from a NIM server or a Linux server.

2.10.2 Virtual I/O Server image installation from a NIM server

It is also possible to install the Virtual I/O Server from a NIM server. To perform the VIOS installation via NIM, follow these steps:
1. Install or define an existing server running AIX 5L that can be configured as a NIM server.
2. Configure the NIM server.
3. If your NIM server does not have a DVD drive, get access to a computer with a DVD drive and a network connection. This computer may run an operating system other than the AIX 5L operating system, for example Linux or Windows®.
4. Mount the VIOS installation DVD in the computer and transfer the mksysb and bosinst.data files from the /nimol/ioserver_res directory on the DVD to the NIM server (Example 2-12).

Example 2-12 NIM installation

# mount -o ro -v cdrfs /dev/cd0 /mnt
# cp /mnt/nimol/ioserver_res/mksysb /export/vios
# cp /mnt/nimol/ioserver_res/bosinst.data /export/vios

Note: You can also use the installios command or NIM to install the IVM without the HMC. The command will set up the resources and services for the installation. All that is needed is to point the installing machine (in this case, the IVM) from the SMS network boot menu to the server that ran the installios or nim command. The network installation will then proceed as usual.

For more information, see Chapter 7 of IBM BladeCenter JS21: The POWER of Blade Innovation, SG24-7273.
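If the resources are defined manually rather than through installios, the copied files can be registered with NIM in the usual way. This is a hedged sketch using standard AIX NIM syntax; the resource names (vios_mksysb, vios_bosinst) and the /export/vios paths are illustrative, not from the Redpaper:

```
# Hypothetical NIM resource definitions on the NIM master
# (resource names and locations are examples)
# nim -o define -t mksysb -a server=master \
#     -a location=/export/vios/mksysb vios_mksysb
# nim -o define -t bosinst_data -a server=master \
#     -a location=/export/vios/bosinst.data vios_bosinst
```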
Chapter 3. Logical partition creation

The Integrated Virtualization Manager (IVM) provides a unique environment to administer LPAR-capable servers. This chapter discusses the following topics related to LPAR management using the IVM:
- LPAR creation, deletion, and update
- Graphical and command line interfaces
- Dynamic operations on LPARs
3.1 Configure and manage partitions

The IVM provides several ways to configure and manage LPARs:
- A graphical user interface, designed to be as simple and intuitive as possible, incorporating partition management, storage management, serviceability, and monitoring capabilities. See 3.2, “IVM graphical user interface” on page 38.
- A command line interface, to enable scripting capabilities. See 3.3, “IVM command line interface” on page 54.
- Starting with IVM Version 1.3, there is also a so-called simple partition creation by using the option “Create Based On” in the task area. See 3.2.4, “Create an LPAR based on an existing partition” on page 49.

The following sections explain these methods.

3.2 IVM graphical user interface

The new graphical user interface (GUI) is an HTML-based interface. It enables you to create LPARs on a single managed system, manage the virtual storage and virtual Ethernet on the managed system, and view service information related to the managed system.

3.2.1 Connect to the IVM

Open a Web browser window and connect using the HTTP or HTTPS protocol to the IP address that has been assigned to the IVM during the installation process, as described in 2.6.3, “Initial network setup” on page 31. A Welcome window that contains the login and the password prompts opens, as shown in Figure 3-1. The default user ID is padmin, and the password is the one that you defined at IVM installation time.

Figure 3-1 IVM login page

Log in; after the authentication process, the default IVM console window opens, as shown in Figure 3-2 on page 39. The IVM graphical user interface is composed of several elements.
The following elements are the most important:

Navigation area: The navigation area displays the tasks that you can access in the work area.

Work area: The work area contains information related to the management tasks that you perform using the IVM and to the objects on which you can perform management tasks.

Task area: The task area lists the tasks that you can perform for items displayed in the work area. The tasks listed in the task area can change depending on the page that is displayed in the work area, or even depending on the tab that is selected in the work area.

Figure 3-2 IVM console: View/Modify Partitions

3.2.2 Storage pool disk management

During the installation of the VIOS, a default storage pool is created and named rootvg. During the process of creating the LPAR, the IVM automatically creates virtual disks in the default storage pool. We recommend that you create another storage pool and add virtual disks to it for the LPARs. For advanced configuration of the storage pool, refer to 4.2, “Storage management” on page 76.

Storage pool creation

A storage pool consists of a set of physical disks that can be of different types and sizes. You can create multiple storage pools; however, a disk can only be a member of a single storage pool.

Important: All data of a physical volume is erased when you add this volume to a storage pool.

The following steps describe how to create a storage pool:
1. Under the Virtual Storage Management menu in the navigation area, click the Create Virtual Storage link.
2. Click Create Storage Pool in the work area, as shown in Figure 3-3.

Figure 3-3 Create Virtual Storage

3. Type a name in the Storage pool name field and select the needed disks, as shown in Figure 3-4.

Figure 3-4 Create Virtual Storage: Storage pool name

4. Click OK to create the storage pool. A new storage pool called datapoolvg2 with hdisk2 and hdisk3 has been created.

Default storage pool

The default storage pool created during the IVM installation is rootvg, because rootvg is the only volume group created at that time. Because the IVM is installed in rootvg, the rootvg storage pool is overwritten when IVM is reinstalled, which would result in the loss of any user data placed there. The rootvg storage pool should not be the default storage pool, and the default storage pool should be changed to another one to avoid creating virtual disks within rootvg by default. This prevents IVM and user data from being merged on the same storage devices.

Important: Create at least one additional storage pool, thus preventing the loss of user data during an IVM update.

The following steps describe how to change the default storage pool:
1. Under the Virtual Storage Management menu in the navigation area, click View/Modify Virtual Storage.
2. Select the storage pool that you want as the default from the Storage Pools list, as shown in Figure 3-5. In this example, datapoolvg2 will be the new default storage pool.

Figure 3-5 View/Modify Virtual Storage - Storage Pools list

3. Click Assign as default storage pool in the task area.
4. A summary with the current and the next default storage pool opens, as shown in Figure 3-6.

Figure 3-6 Assign as Default Storage Pool

5. Click OK to validate the change.
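The same storage pool setup can also be sketched from the VIOS command line. This is a hedged example: the pool name datapoolvg2 and the disks hdisk2 and hdisk3 come from the example above, but the exact command flags should be verified against the command reference for your VIOS release:

```
# Create a storage pool from two unassigned physical volumes
# (-f forces the operation; all data on the disks is erased)
$ mksp -f datapoolvg2 hdisk2 hdisk3

# Make the new pool the default storage pool
$ chsp -default datapoolvg2

# Verify the list of storage pools
$ lssp
```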
Virtual disk/logical volume creation

Logical volumes belong to a storage pool and are also known as virtual disks. Logical volumes are used to provide disk space to LPARs, but they are not assigned to LPARs when you create them. They can be created in several ways, depending on the menu that is in use:
- During LPAR creation: A logical volume is created in the default storage pool and assigned to the partition.
- After or before LPAR creation: A virtual disk is not assigned to any partition and is created in the default storage pool.

The following steps describe how to create a new logical volume:
1. Under the Virtual Storage Management menu in the navigation area, click Create Virtual Storage.
2. Click Create Virtual Disk in the work area.

Figure 3-7 Create Virtual Storage
3. Enter a name for the virtual disk, select a storage pool name from the drop-down list, and add a size for the virtual disk, as shown in Figure 3-8.

Figure 3-8 Create Virtual Disk: name and size

4. Click OK to create the virtual disk.

In order to view your new virtual disk/logical volume and use it, select the View/Modify Virtual Storage link under the Virtual Storage Management menu in the navigation area. The list of available virtual disks is displayed in the work area.

3.2.3 Create logical partitions

A logical partition is a set of resources: processors, memory, and I/O devices. Each resource assigned to an LPAR is allocated regardless of whether the LPAR is running or not. The IVM does not allow the overcommitment of resources.

The following steps describe how to create an LPAR:
1. Under the Partition Management menu in the navigation area, click Create Partitions, and then click Start Wizard in the work area.
2. Type a name for the new partition, as shown in Figure 3-9. Click Next.

Figure 3-9 Create Partition: Name

3. Enter the amount of memory needed, as shown in Figure 3-10. Click Next.

Figure 3-10 Create Partition: (assigned) Memory

4. Select the number of processors needed and choose a processing mode, as shown in Figure 3-11. In shared mode, each virtual processor uses 0.1 processing units. Click Next.

Figure 3-11 Create Partition: Processors (and Processing Mode)

5. Each partition has two virtual Ethernet adapters that can be configured to one of the four available virtual Ethernets. In Figure 3-12, adapter 1 uses virtual Ethernet ID 1, and a virtual Ethernet bridge has been created. The Virtual Ethernet Bridge Overview section of the panel shows on which physical network interface every virtual network is bridged. The bridge enables the partition to connect to the physical network. This procedure is described in 4.1.1, “Ethernet bridging” on page 72. Click Next.

Figure 3-12 Create Partition: Virtual Ethernet

6. Select Assign existing virtual disks and physical volumes, as shown in Figure 3-13. You can also let the IVM create a virtual disk for you by selecting Create virtual disk, but be aware that the virtual disk will be created in the default storage pool. To create storage pools and virtual disks, or to change the default storage pool, refer to 3.2.2, “Storage pool disk management” on page 39. Click Next.

Figure 3-13 Create Partition: Storage Type

7. Select the needed virtual disks from the list, as shown in Figure 3-14. Click Next.

Figure 3-14 Create Partition: Storage

8. Select the needed optical devices, as shown in Figure 3-15. Click Next.

Figure 3-15 Create Partition: Optical (Devices)

9. A summary of the partition to be created appears, as shown in Figure 3-16. Click Finish to create the LPAR.

Figure 3-16 Create Partition: Summary

To view the new LPAR and use it, click the View/Modify Partitions link under the Partition Management menu in the navigation area. A list opens in the work area.
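An LPAR can also be created from the command line with the mksyscfg command. The following is only a sketch: the partition name and resource values are illustrative, and the attribute names follow the HMC-style profile syntax, so verify the exact attribute list for your release (for example, by examining lssyscfg output) before relying on it:

```
# Hypothetical shared-processor LPAR definition (values are examples)
$ mksyscfg -r lpar -i "name=LPAR2,lpar_env=aixlinux,min_mem=256,desired_mem=512,max_mem=1024,proc_mode=shared,desired_procs=2,desired_proc_units=0.2"
```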
3.2.4 Create an LPAR based on an existing partition

The Integrated Virtualization Manager can be used to create a new LPAR that is based on an existing partition on your managed system. This task enables you to create a new LPAR with the same properties as the selected existing partition, with the exception of the ID, name, physical volumes, and optical devices. The virtual disks that are created have the same size and are in the same storage pool as the selected partition; however, the data in these disks is not cloned. Any role other than View Only can be used to perform this task.

To create an LPAR based on an existing partition, perform the following steps:
1. Under Partition Management, click View/Modify Partitions. The View/Modify Partitions panel opens.
2. Select the LPAR that you want to use as a basis for the new partition.
3. In the Tasks section, click Create based on, as shown in Figure 3-17.

Figure 3-17 Create based on selection from the Tasks menu

4. The Create Based On panel opens (Figure 3-18). Enter the name of the new partition, and click OK.

Figure 3-18 Create based on - name of the new LPAR

5. The View/Modify Partitions panel opens, showing the new partition (Figure 3-19).

Figure 3-19 Create based on - New logical partition has been created
3.2.5 Shutting down logical partitions
The Integrated Virtualization Manager provides the following types of shutdown options for LPARs: Operating System (recommended) Delayed Immediate The recommended shutdown method is to use the client operating system shutdown command. The delayed shutdown is handled by AIX 5L gracefully (however, for a Linux LPAR you have to load a special RPM: the Linux on POWER Service and Productivity toolkit). The immediate shutdown method should be used as a last resort because this causes an abnormal shutdown that might result in data loss. (It is equivalent to pulling the power cord.) If you choose not to use the operating system shutdown method, be aware of these considerations: Shutting down the LPARs is equivalent to pressing and holding the white control-panel power button on a server that is not partitioned. Use this procedure only if you cannot successfully shut down the LPARs through operating system commands. When you use this procedure to shut down the selected LPARs, the LPARs wait a predetermined amount of time to shut down. This gives the LPARs time to end jobs and write data to disks. If the LPAR is unable to shut down within the predetermined amount of time, it ends abnormally, and the next restart might take a long time. To shut down an LPAR: 1. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions panel opens. 2. Select the LPAR that you want to shut down. 3. From the Tasks menu, click Shutdown. The Shutdown Partitions panel opens (Figure 3-20).
Figure 3-20 Shutdown Partitions: new options
4. Select the shutdown type.
5. Optionally, select Restart after the shutdown completes if you want the LPAR to start immediately after it shuts down.
6. Click OK to shut down the partition. The View/Modify Partitions panel is displayed, and the partition is shut down.

Note: If the LPAR does not have an RMC connection, the Operating System shutdown type is disabled, and the Delayed type is the default selection. When the IVM/VIOS logical partition is selected, the only available option is OS shutdown; in addition, a warning is displayed at the top of the panel indicating that shutting down the IVM/VIOS LPAR will affect other running LPARs.
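The same shutdown types can also be driven from the command line with the chsysstate command. A hedged sketch, with an illustrative partition name; verify the option names for your release:

```
# Operating system shutdown (recommended, requires an RMC connection)
$ chsysstate -r lpar -o osshutdown -n LPAR2

# Delayed shutdown (equivalent to holding the power button)
$ chsysstate -r lpar -o shutdown -n LPAR2

# Immediate shutdown with restart (last resort, risks data loss)
$ chsysstate -r lpar -o shutdown -n LPAR2 --immed --restart
```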
3.2.6 Monitoring tasks
In Virtual I/O Server Version 1.2, the GUI had no support for long-running tasks: if the user navigated away from a page after starting a task, no status would be received on the completion of the task. As of Virtual I/O Server Version 1.3, you can view and monitor the most recent 40 tasks that are running on the Integrated Virtualization Manager. All actions that a user can perform in the GUI become tasks, and all tasks are audited at the task level. Each task can have subtasks, and the status of each subtask is managed. When performing a task, the user gets a busy dialog indicating that the task is currently running; you can navigate away from the page and perform other tasks.

To view the properties of the tasks, do the following:
1. In the Service Management menu, click Monitor Tasks. The Monitor Tasks panel opens.
2. Select the task for which you want to view the properties (1 in Figure 3-21 on page 53).
3. In the Tasks menu, click Properties (2 in Figure 3-21 on page 53). The Task Properties window opens.
Integrated Virtualization Manager on IBM System p5
Figure 3-21 Monitor Tasks: the last 40 tasks
4. Click Cancel to close the Task Properties window. The Monitor Tasks panel appears.

You can also simply click the hyperlink of the task whose properties you want to view (the arrow without a number in Figure 3-21). This eliminates steps 2 and 3. See more about hyperlinks in the following section.
3.2.7 Hyperlinks for object properties
Starting with Virtual I/O Server Version 1.3, there are two ways to access the properties of an object. Previously, you had to select the object and then select the Properties task. Because this is a frequent operation, a new method to quickly access the properties of an object with one click has been introduced: hyperlinks. In a list view, the object in question has a hyperlink (typically on the Name) that, when selected, displays the properties sheet for the object. It behaves exactly like the Select → Properties method, but it requires only one click. Even if another object is selected, clicking the hyperlink of an object will always bring up that object's properties, as shown in Figure 3-22 on page 54.
Figure 3-22 Hyperlinks

3.3 IVM command line interface

The text-based console with the command line interface (CLI) is accessible through an ASCII terminal attached to a system port or through network connectivity using the telnet command. The IP address is the same as the one used to connect to the GUI, and it has been defined during the installation process. The CLI requires more experience to master than the GUI, but it offers more possibilities to tune the partition's definitions and can be automated using scripts.

3.3.1 Update the logical partition's profile

Example 3-1 shows how to change the name of an LPAR with the chsyscfg command.

Example 3-1 Profile update
$ lssyscfg -r prof --filter "lpar_names=LPAR2" -F lpar_name
LPAR2
$ chsyscfg -r prof -i "lpar_name=LPAR2,new_name=LPAR2_new_name"
$ lssyscfg -r prof --filter "lpar_names=LPAR2_new_name" -F lpar_name
LPAR2_new_name
3.3.2 Power on a logical partition

Example 3-2 shows how to start an LPAR using the chsysstate command. To follow the boot process, use the lsrefcode command. You can change the boot mode in the properties of the partition's profile before starting it.

Example 3-2 Power on a partition
$ chsysstate -o on -r lpar -n LPAR2
$ lsrefcode -r lpar --filter "lpar_names=LPAR2" -F refcode
CA00E1F1
$ lsrefcode -r lpar --filter "lpar_names=LPAR2" -F refcode
CA00E14D

3.3.3 Install an operating system on a logical partition

The operating system installation process is similar to the process for stand-alone systems. The main steps are:
1. Log in to the IVM partition.
2. Open a virtual terminal for the LPAR to be installed with the mkvt command. You have to specify the ID of the LPAR, as shown in Example 3-3.

Example 3-3 Open a virtual terminal
$ mkvt -id 3
AIX Version 5
(C) Copyrights by IBM and by others 1982, 2005.
Console login:

3. Start the LPAR in SMS mode, either by changing the boot mode in the properties of the partition's profile before starting it, or by entering 1 on the virtual terminal at the very beginning of the boot process, as shown in Example 3-4.

Example 3-4 Boot display
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
...
1 = SMS Menu                5 = Default Boot List
8 = Open Firmware Prompt    6 = Stored Boot List

Memory   Keyboard   Network   SCSI   Speaker

4. Select the system console and the language.
5. Select a boot device, such as a virtual optical device, or a network for a Network Installation Management (NIM) installation.
6. Select the disks to be installed.
7. Proceed as directed by your operating system installation instructions. The installation of the operating system starts.

3.4 Optical device sharing

You can dynamically add, move, or remove optical devices from or to any LPAR, regardless of whether the LPAR is running. With VIOS 1.3, the Storage Management navigation area has changed and is now called Virtual Storage Management. The following steps describe how to change the assignment of an optical device:
1. Under the Virtual Storage Management menu in the navigation area, click View/Modify Virtual Storage, and select the Optical Devices tab in the work area.
2. Select the optical device that you want to modify, as shown in Figure 3-23.
3. Click Modify partition assignment in the tasks area.

Figure 3-23 Optical Devices selection
4. Select the name of the LPAR to which you want to assign the optical device, as shown in Figure 3-24. You can also remove the optical device from the current LPAR by selecting None.

Figure 3-24 Optical Device Partition Assignment

5. Click OK. If you move or remove an optical device from a running LPAR, you are prompted to confirm the forced removal before the optical device is removed. Because the optical device will become unavailable, log in to the LPAR and remove the optical device before going further. On AIX 5L, use the rmdev command. Press the Eject button; if the drawer opens, this is an indication that the device is not mounted.
6. The new list of optical devices is displayed with the changes you made.
7. Log in to the related LPAR and use the appropriate command to discover the new optical device. On AIX 5L, use the cfgmgr command.

3.5 LPAR configuration changes

As needed, you might want to modify the properties of the IVM or the LPARs. You can run those operations either on the GUI or the CLI.

3.5.1 Dynamic LPAR operations on an IVM partition

Resources such as processors and memory can be dynamically allocated or released on the IVM partition. Prior to Virtual I/O Server Version 1.3, some updates could be done dynamically (in the case of the VIOS LPAR) or statically (in the case of the client LPARs). As of Version 1.3, all LPARs support dynamic reconfiguration.
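The AIX 5L side of an optical device move can be sketched with standard commands. The device name cd0 is an example; use the name reported on your LPAR:

```
# On the LPAR that currently owns the optical device:
# release it so that the forced removal is safe
rmdev -dl cd0

# On the LPAR that receives the optical device:
# rescan devices and confirm the new drive
cfgmgr
lsdev -Cc cdrom
```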
Dynamic LPAR operation on memory using the GUI
The following steps describe how to increase the memory size dynamically for the IVM partition:
1. Under Partition Management in the navigation area, click View/Modify Partitions.
2. Select the IVM partition.
3. Click Properties in the task area (or use the one-click hyperlink method explained in 3.2.7, “Hyperlinks for object properties” on page 53), as shown in Figure 3-25.

Figure 3-25 View/Modify Partitions: Dynamic LPAR memory operation
4. Modify the pending values as needed. In Figure 3-26, the assigned memory is increased by 512 MB.
5. Click OK. Memory is not added or removed in a single operation, but in 16 MB blocks. You can monitor the status by looking at the partition properties.

Figure 3-26 Partition Properties: VIOS - Memory - Increase memory size

Dynamic LPAR operation on virtual processors using the CLI
Log in to the IVM using the CLI and run your dynamic LPAR operation. Example 3-5 shows how to add 0.1 processing units dynamically to the IVM using the chsyscfg command.

Example 3-5 Dynamic LPAR virtual processor operation
$ lshwres -r proc --level lpar --filter "lpar_names=VIOS" -F curr_proc_units
0.20
$ chsyscfg -r prof -i "lpar_name=VIOS,desired_proc_units=0.3"
$ lshwres -r proc --level lpar --filter "lpar_names=VIOS" -F curr_proc_units
0.30

3.5.2 LPAR resources management

Virtual I/O Server Version 1.3 also allows dynamic operations on resources such as the processor and memory on a client partition. Dynamic operations on the disks, optical devices, partition name, and boot mode were already allowed in the previous version.

To accomplish this goal, the concept of a dynamic LPAR Manager is introduced. This is a daemon task that runs in the Virtual I/O Server LPAR and monitors the pending and runtime processing and memory values, driving the runtime and pending values into sync. The DLPAR Manager (dlparmgr) manages the Virtual I/O Server LPAR directly, and the client LPARs are managed through the RMC daemon. To perform a dynamic LPAR operation, the user simply has to modify the pending value, and the DLPAR Manager will do the appropriate operations on the LPAR to complete the runtime change. If the runtime and pending values are in sync, the DLPAR Manager blocks until another configuration change is made. It does not poll when in this state.
Note: If a change is made to a pending value of an LPAR in a workload management group with another LPAR, the workload management software must be aware of this change and dynamically adapt to it. Otherwise, its indication will be that the pending values are out of sync. (This only applies to processors and memory.)

When the dlparmgr encounters an error, it is written to the dlparmgr status log, which can be read with the lssvcevents -t dlpar command. This log contains the last drmgr command run for each object type, for each LPAR, and includes any responses from the drmgr command. The user is not notified directly of these errors; manual intervention is required. The GUI enables you to see the state and gives you more information about the result of the operation.

The dynamic LPAR capabilities for each logical partition are returned as an attribute on the lssyscfg -r lpar command. This allows the GUI to selectively enable or disable dynamic LPAR function based on the current capabilities of the logical partition. All chsyscfg command functions will continue to work, even if the partition does not have dynamic LPAR support, as they do today.

Setup of dynamic LPAR
To enable dynamic LPAR operations, you have to enable RMC communication between the IVM LPAR and the client LPARs; otherwise, you cannot use the IVM browser interface for these operations:

1. TCP/IP must have been configured in the IVM LPAR.

   Note: In Version 1.3, the IVM interface now provides a menu for configuring additional TCP/IP interfaces in the IVM LPAR.

2. For the IVM LPAR, select which physical interface will be used to provide the bridge between the internal VLAN and the real external network, in this case VLAN-1 and en0. This address will be used by the client LPARs to communicate with the IVM/VIOS partition. In the example, only one Ethernet interface is configured, so it is automatically the default. If multiple interfaces are configured in the Virtual I/O Server (and thus multiple TCP/IP addresses), the one shown here as the default is the one listed first when running the lstcpip -interfaces command; use this command to see an overview of all available interfaces. If needed, uncheck Default and enter the IP address that should be used. Otherwise, the IP address is grayed out because there is no need to make any change.

3. The client LPAR must have TCP/IP configured, on the same subnet that we selected for IVM-to-client communication or on another one, depending on the external network or switch. The LPARs must be on the same network, so they have to be able to ping each other.

4. Wait 2 or 3 minutes while the RMC subsystem completes the handshake between the client LPAR and the IVM LPAR. Viewing the client LPAR properties should then show the IP address and a communication status equal to Active.
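Once the handshake completes, the RMC connection can be checked from both sides. This is a sketch: lsrsrc is part of RSCT on the AIX client, lssyscfg runs in the IVM restricted shell, and the rmc_state and rmc_ipaddr attribute names follow the HMC/IVM conventions, so verify them on your level:

```
# On the AIX client LPAR: list the management server known to RMC
lsrsrc "IBM.ManagementServer"

# On the IVM/VIOS partition: show the RMC state of each client LPAR
lssyscfg -r lpar -F name,state,rmc_state,rmc_ipaddr
```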
The GUI also displays a details link next to the warning exclamation point when resources are out of sync. This link yields a pop-up window with the last run status of the dlparmgr for the specified LPAR and resource. (See Figure 3-31 on page 64.)

Note: The following site contains the RMC and RSCT requirements for dynamic LPAR, including the additional filesets that have to be installed on Linux clients:
https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html

GUI changes for dynamic LPAR
The graphical user interface for performing dynamic LPAR operations is the same as the interface for performing static operations. To perform a dynamic LPAR operation, the user simply changes the pending value. Because the chsyscfg command can be used for dynamic LPAR, the GUI can use the same commands for both static and dynamic reconfiguration. The GUI disables changes for LPARs that do not support dynamic LPAR, and an inline message is displayed indicating why the value may not be changed. For LPARs that support certain dynamic LPAR operations but not others (for example, processing, but not memory), the operation that is not supported is grayed out appropriately.

Partition Properties changes for dynamic LPAR
A new section was added to the General panel of the properties sheet that indicates the RMC connection state and the dynamic LPAR capabilities of the logical (client) partition. (See Figure 3-27 on page 62.) Because retrieving these dynamic LPAR capabilities can be a time-consuming operation (generally less than one second, but up to 90 seconds with a failed network connection), the initial capabilities show up as Unknown if the partition communication state is Active. Clicking Retrieve Capabilities retrieves them, updates the fields, and verifies that dynamic LPAR operations are possible. Table 3-1 gives you more information about the different fields of Figure 3-27 on page 62.

Table 3-1 Field name values

Field name: Partition host name or IP address
Values: The IP address (or DNS host name) of the LPAR. This may be blank if RMC is not configured.
Note: The name of this field matches the “Partition communication” field in the TCP/IP settings.

Field name: Partition communication state
Values: Active, Inactive, or Not Configured.

Field name: Memory dynamic LPAR capable
Values: Yes, No, or Unknown. Unknown is the default state if communication is Active but the user has not selected Retrieve Capabilities. This always defaults to No if we have not successfully been able to query the partition (RMC state is not Active). No Linux LPARs are currently memory dynamic LPAR capable.

Field name: Processing dynamic LPAR capable
Values: Yes, No, or Unknown. Unknown is the default state if communication is Active but the user has not selected Retrieve Capabilities.
Note: If RMC is active, this will nearly always be Yes, but there is no guarantee.

Field name: Retrieve Capabilities
Values: Button that is visible if the partition communication state is Active and the user has not previously selected the button on this properties sheet. Clicking Retrieve Capabilities retrieves the capabilities and updates the fields.
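The same capability and connection information can be read from the CLI. This is a sketch: the dlpar_mem_capable and dlpar_proc_capable attribute names follow the HMC/IVM lssyscfg conventions and should be checked against your VIOS level:

```
# List each LPAR with its RMC state and dynamic LPAR capabilities
$ lssyscfg -r lpar -F name,rmc_state,dlpar_mem_capable,dlpar_proc_capable
```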
Figure 3-27 Dynamic LPAR properties

Memory tab
If the LPAR is powered on and memory dynamic LPAR capable (see the capabilities on the General tab), then the Pending assigned value is enabled (minimum and maximum are still disabled). The user may change this value and select OK. The change takes effect immediately for the pending value. The dlparmgr daemon then works to bring the pending and current (runtime) values into sync. If these values are not in sync, the user sees the Warning icon, as in Figure 3-30 on page 64. Figure 3-28 on page 63 and Figure 3-29 on page 63 show a change.
Figure 3-28 Partition Properties: Memory tab

Figure 3-29 Dynamic LPAR of memory: removal of 256 MB of memory
Figure 3-30 Warning in work area because pending and current values are not in sync

Click the details hyperlink for more information about the resource synchronization, shown in Figure 3-31.

Figure 3-31 Resource synchronization details
Note: The minimum and maximum memory values are enabled for the VIOS/IVM LPAR at all times. If the user modifies the minimum or maximum values without changing the assigned value, the dlparmgr considers the assigned values in sync.

Table 3-2 provides possible memory field values.

Table 3-2 Possible field modifications: memory

Capability setting: Yes
Enabled fields: Assigned Memory
Introduction text: Modify the settings by changing the pending values. Changes will be applied immediately, but synchronizing the current and pending values might take some time.

Capability setting: No
Enabled fields: None
Introduction text: This LPAR does not currently support modifying these values while running, so pending values can be edited only when the LPAR is powered off.

Capability setting: Unknown
Enabled fields: None
Introduction text: This LPAR does not currently support modifying these values while running, so pending values can be edited only when the LPAR is powered off.

Processing tab
If the LPAR is powered on and processor dynamic LPAR capable (see the capabilities on the General tab), then the Pending assigned values are enabled. The minimum and maximum values (processing units as well as virtual processors) are still disabled. As with the Memory panel, the same rules apply with respect to the enabled fields and introductory text for the various capability options; if the LPAR does not support modifying these values while running, the reason is provided. The user may change the enabled values and select OK. The change takes effect immediately for the pending value, and the dlparmgr daemon then works to bring the pending and current (runtime) values into sync. If these values are not in sync, the user sees the Warning icon, as in the memory panel.

Dynamic LPAR status
When a resource is not in sync, a warning icon appears in the main partition list view next to the Memory or Processors resource. This icon also appears with text in the partition properties sheet. In the first IVM release, when a dynamic resource was not in sync, the warning icon appeared on the View/Partition modify page; a details link is now added next to the icon. Selecting the details link yields the synchronization details pop-up window (see Figure 3-31 on page 64), which shows the current dynamic LPAR status of the logical partition. All resources are shown in this window. In addition, details about the previous two drmgr commands that were run against the LPAR in an attempt to synchronize the pending and current (runtime) values are shown. The Reason field generally matches the latest command run reason. The warning icon will still be present until the values are back in sync.

Dynamic LPAR operation on virtual disks using the GUI
The following steps describe how to assign virtual disks to a partition using the GUI:
1. Under the Virtual Storage Management menu in the navigation area, click View/Modify Virtual Storage, then click the Virtual Disks tab in the work area, as shown in Figure 3-32 on page 66.
2. Select the needed virtual disks.
Figure 3-32 View/Modify Virtual Storage: Virtual disks selection

3. Click Modify partition assignment in the task area.
4. Select the partition name that you want to assign the virtual disks to, and click OK to validate the virtual disk partition assignment, as shown in Figure 3-33.

Figure 3-33 Modify Virtual Disk Partition Assignment
Log in to the related LPAR and discover the new disks. On AIX 5L, use the cfgmgr command. Example 3-6 shows how the partition discovers two new virtual disks on AIX 5L.

Example 3-6 Virtual disk discovery
# lsdev -Ccdisk
hdisk0 Available  Virtual SCSI Disk Drive
# cfgmgr
# lsdev -Ccdisk
hdisk0 Available  Virtual SCSI Disk Drive
hdisk1 Available  Virtual SCSI Disk Drive

You can also assign virtual disks by editing the properties of the LPAR.

Operation on partition definitions using the CLI
The command line interface for performing dynamic LPAR operations is the same as on the HMC, and you can perform the same operations with the CLI as with the GUI. The chsyscfg command is used for dynamic configuration changes because it updates the pending values; the DLPAR Manager keys off of the differences between the runtime and pending values. Example 3-7 shows how to decrease processing units for an LPAR using the chsyscfg command.

Example 3-7 Decrease processing units of LPAR1
$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_proc_units
0.40
$ chsyscfg -r prof -i "lpar_name=LPAR1,desired_proc_units=0.3"
$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_proc_units
0.30

A warning icon with an exclamation point inside it is displayed in the View/Modify Partitions screen if the current and pending values are not synchronized. Example 3-8 shows an increase of memory operation.

Example 3-8 Increase memory of LPAR1 with 256 MB
$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_mem
512
$ chsyscfg -r prof -i "lpar_name=LPAR1,desired_mem+=256"
$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_mem
768

3.5.3 Adding a client LPAR to the partition workload group

If you want to manage logical partition resources using a workload management tool, you must add the client LPAR to the partition workload group. A partition workload group identifies a set of LPARs that reside on the same physical system.
Workload management tools use partition workload groups to identify which LPARs they can manage, and they often require that you install some type of management or agent software on the LPARs to monitor workload, manage resources, or both. For example, Enterprise Workload Manager (EWLM) can dynamically and automatically redistribute processing capacity within a partition workload group to satisfy workload performance goals. EWLM adjusts processing capacity based on calculations that compare the actual performance of work processed by the partition workload group to the business goals defined for the work.

It is not required that all LPARs on a system participate in a partition workload group. Workload management tools manage the resources of only those LPARs that are assigned to a partition workload group. Workload management tools can monitor the work of an LPAR that is not assigned to a partition workload group, but they cannot manage the LPAR's resources.

Note: Systems managed by the Integrated Virtualization Manager can have only one partition workload group per physical server.

The following recommendations are for workload management:

- Do not add the management partition to the partition workload group. To avoid creating an unsupported environment, do not install additional software on the management partition.

- Verify that the LPAR that you want to add to the partition workload group supports dynamic LPAR for the resource type that your workload management tool adjusts, as shown in Table 3-3. Workload management tools use dynamic LPAR to make resource adjustments based on performance goals; to manage LPAR resources, each LPAR in the partition workload group must support dynamic LPAR.

Table 3-3 Dynamic LPAR support

Logical partition type | Supports processor dynamic LPAR | Supports memory dynamic LPAR
AIX                    | Yes                             | Yes
Linux                  | Yes                             | Yes/No (SLES 10 and RHEL 5 support memory add, but not memory removal at this moment)

The dynamic LPAR support listed in the previous table reflects what each operating system supports in regard to dynamic LPAR functions. It is not the same as the dynamic LPAR capabilities that are shown in the partition properties for an LPAR, which reflect a combination of:
- A Resource Monitoring and Control (RMC) connection between the management partition and the client LPAR
- The operating system's support of dynamic LPAR (see Table 3-3)

For example, an AIX client LPAR might not have an RMC connection to the management partition. In this situation, the dynamic LPAR capabilities shown in the partition properties for the AIX LPAR indicate that the AIX LPAR is not capable of processor or memory dynamic LPAR, even though AIX supports both. When the RMC connection is active, because AIX supports processor and memory dynamic LPAR, a workload management tool can dynamically manage the LPAR's processor and memory resources. Workload management tools are not dependent on RMC connections to dynamically manage LPAR resources.

If an LPAR is part of the partition workload group, you cannot dynamically manage its resources from the Integrated Virtualization Manager, because the workload management tool is in control of dynamic resource management. Not all workload management tools dynamically manage both processor and memory resources; for example, the partition management function of EWLM adjusts processor resources for AIX and Linux LPARs based on workload performance goals, but EWLM does not dynamically manage memory. When you implement a workload management tool that manages only one resource type, you limit your ability to dynamically manage the other resource type. For example, AIX supports both processor and memory dynamic LPAR; if EWLM controls dynamic resource management for an AIX LPAR but manages only processor resources, you cannot dynamically manage memory for that AIX LPAR from the Integrated Virtualization Manager either.

To add an LPAR to the partition workload group, complete the following steps:
1. Select the logical partition that you want to include in the partition workload group, and click Properties. The Partition Properties window opens (Figure 3-34).
2. In the Settings section, select Partition workload group participant.
3. Click OK.

Figure 3-34 Partition Properties: Selecting Partition Workload Group
Chapter 4. Advanced configuration

Logical partitions require an available connection to the network and storage. The Integrated Virtualization Manager (IVM) provides several solutions using either the Web graphical interface or the command line interface. This chapter describes the following advanced configurations for networking, storage management, and security:
- Virtual Ethernet bridging
- Ethernet link aggregation
- Disk space management
- Disk data protection
- Virtual I/O Server firewall
- SSH support
4.1 Network management

All physical Ethernet adapters installed in the system are managed by the IVM. Logical partitions can have at most two virtual Ethernet adapters, each connected to one of the four virtual networks that are present in the system. In order to allow partitions to access any external corporate network, every virtual network can be bridged to a physical adapter; for each network, a separate adapter is required. IVM provides a Web interface to configure bridging. When higher throughput and better link availability are required, Ethernet link aggregation is also available using the VIOS capabilities.

4.1.1 Ethernet bridging

Under Virtual Ethernet Management in the navigation area, click View/Modify Virtual Ethernet. In the work area, the virtual Ethernet panel shows which partitions are connected to the four available networks, as shown in Figure 4-1. Go to the Virtual Ethernet Bridge tab to configure bridging. For each virtual Ethernet, you can select one physical device. Use the drop-down menu to select the physical Ethernet and click Apply to create the bridging device.

Figure 4-1 View/Modify Virtual Ethernet: Virtual Ethernet Bridge creation

The Web GUI hides the details of the network configuration. In the example, the IVM is connected to a physical network, and four virtual network adapters are available. For each physical and virtual network adapter, an Ethernet device is configured. Example 4-1 on page 73 describes the VIOS configuration before the creation of the bridge.

Note: If the physical Ethernet that is selected for bridging is already configured with an IP address using the command line interface, all connections to that address will be reset.
Example 4-1 VIOS Ethernet adapters with no bridging
$ lsdev | grep ^en
en0   Available   Standard Ethernet Network Interface
en1   Defined     Standard Ethernet Network Interface
en2   Defined     Standard Ethernet Network Interface
en3   Defined     Standard Ethernet Network Interface
en4   Defined     Standard Ethernet Network Interface
en5   Defined     Standard Ethernet Network Interface
en6   Defined     Standard Ethernet Network Interface
en7   Defined     Standard Ethernet Network Interface
ent0  Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1  Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2  Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3  Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4  Available   Virtual I/O Ethernet Adapter (l-lan)
ent5  Available   Virtual I/O Ethernet Adapter (l-lan)
ent6  Available   Virtual I/O Ethernet Adapter (l-lan)
ent7  Available   Virtual I/O Ethernet Adapter (l-lan)
$ lstcpip
Name  Mtu    Network   Address          Ipkts  Ierrs  Opkts  Oerrs  Coll
en0   1500   link#2    0.2.55.2f.eb.36  269    0      136    4      0
en0   1500   9.3.5     ivmopenp         269    0      136    4      0
lo0   16896  link#1                     50     0      72     0      0
lo0   16896  127       loopback         50     0      72     0      0
lo0   16896  ::1                        50     0      72     0      0

When a virtual Ethernet bridge is created, a new shared Ethernet adapter (SEA) is defined, binding the physical device with the virtual device. If a network interface was configured on the physical adapter, the IP address is migrated to the new SEA. Example 4-2 shows the result of bridging virtual network 1 with the physical adapter ent0 when the IVM is using the network interface en0. A new ent8 SEA device is created, and the IP address of the IVM is migrated on the en8 interface. Due to the migration, all active network connections on en0 are reset.
Example 4-2 Shared Ethernet adapter configuration
$ lsdev | grep ^en
en0   Available   Standard Ethernet Network Interface
en1   Defined     Standard Ethernet Network Interface
en2   Defined     Standard Ethernet Network Interface
en3   Defined     Standard Ethernet Network Interface
en4   Defined     Standard Ethernet Network Interface
en5   Defined     Standard Ethernet Network Interface
en6   Defined     Standard Ethernet Network Interface
en7   Defined     Standard Ethernet Network Interface
en8   Available   Standard Ethernet Network Interface
ent0  Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1  Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2  Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3  Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4  Available   Virtual I/O Ethernet Adapter (l-lan)
ent5  Available   Virtual I/O Ethernet Adapter (l-lan)
ent6  Available   Virtual I/O Ethernet Adapter (l-lan)
ent7  Available   Virtual I/O Ethernet Adapter (l-lan)
ent8  Available   Shared Ethernet Adapter
$ lstcpip
Name  Mtu    Network   Address          Ipkts  Ierrs  Opkts  Oerrs  Coll
en8   1500   link#3    0.2.55.2f.eb.36  336    0      212    0      0
en8   1500   9.3.5     ivmopenp         336    0      212    0      0
et8*  1492   link#4    0.2.55.2f.eb.36  0      0      0      0      0
et8*  1492   0         0.0.0.0          0      0      0      0      0
lo0   16896  link#1                     50     0      75     0      0
lo0   16896  127       loopback         50     0      75     0      0
lo0   16896  ::1                        50     0      75     0      0
4.1.2 Ethernet link aggregation
Link aggregation is a network technology that enables several Ethernet adapters to be joined together to form a single virtual Ethernet device. This solution can be used to overcome the bandwidth limitation of a single network adapter and to avoid bottlenecks when sharing one network adapter among many client partitions. The aggregated device also provides high-availability capabilities. If a physical adapter fails, the packets are automatically sent on the other available adapters without disruption to existing user connections. The adapter is automatically returned to service on the link aggregation when it recovers. Link aggregation is an expert-level configuration and it is not managed by the IVM GUI. It is defined using the VIOS functions with the command line, but the IVM is capable of using the link aggregation for network configuration after it is defined. To create the link aggregation, use the mkvdev command with the following syntax: mkvdev -lnagg TargetAdapter ... [-attr Attribute=Value ...]
In the environment described here, it is possible to aggregate the two physical Ethernet adapters ent2 and ent3. Example 4-3 shows the creation of the new virtual adapter, ent9.
Example 4-3 Ethernet aggregation creation
$ mkvdev -lnagg ent2 ent3
ent9 Available
en9
et9
$ lsdev -dev ent9
name     status      description
ent9     Available   EtherChannel / IEEE 802.3ad Link Aggregation
$ lsdev -dev en9
name     status      description
en9      Defined     Standard Ethernet Network Interface
Integrated Virtualization Manager on IBM System p5
Aggregated devices can be used to define an SEA. The SEA must be created using the mkvdev command with the following syntax:

mkvdev -sea TargetDevice -vadapter VirtualEthernetAdapter ...
       -default DefaultVirtualEthernetAdapter
       -defaultid SEADefaultPVID [-attr Attributes=Value ...] [-migrate]
Figure 4-2 shows the bridging of virtual network 4 with SEA ent9. The mkvdev command requires the identification of the virtual Ethernet adapter that is connected to virtual network 4. The lssyscfg command with the parameter lpar_names set to the VIOS partition’s name provides the list of virtual adapters defined for the VIOS. The adapters are separated by commas, and their parameters are separated by slashes. The third parameter is the network number (4 in the example) and the first is the slot identifier (6 in the example). The lsdev command with the -vpd flag provides the physical location of virtual Ethernet adapters that contains the letter C followed by its slot number. In the example, ent7 is the virtual Ethernet adapter connected to network 4. The created ent10 adapter is the new SEA.
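Putting the lookup steps above together, a possible command sequence is sketched below, under the assumptions of the example (a VIOS partition named VIOS, virtual Ethernet adapter ent7 on network 4, and link aggregation ent9); the exact lssyscfg attribute name and the adapter names should be verified on your own system.

```
$ lssyscfg -r prof --filter lpar_names=VIOS -F virtual_eth_adapters
$ lsdev -dev ent7 -vpd | grep Location
$ mkvdev -sea ent9 -vadapter ent7 -default ent7 -defaultid 4
ent10 Available
```

The ent10 device reported by mkvdev is the new SEA that the IVM then displays in its panels.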
Figure 4-2 Manual creation of SEA using an Ethernet link aggregation
After the SEA is created using the command line, it is available from the IVM panels. It is displayed as a device with no location codes inside the parentheses because it uses a virtual device.
Chapter 4. Advanced configuration
Figure 4-3 shows how IVM represents an SEA created using an Ethernet link aggregation.
Physical adapter with location code Link aggregation with no location codes
Figure 4-3 Virtual Ethernet bridge with link aggregation device
The SEA can be removed using the IVM by selecting None as the physical adapter for the virtual network. When you click Apply, the IVM removes all devices that are related to the SEA, but the link aggregation remains active.
4.2 Storage management
Virtual disks and physical volumes can be assigned to any LPAR, one at a time. Storage allocation can be changed over time, and the content of the virtual storage is kept. When a virtual disk is created using a logical volume, its size can also be increased.

Data protection against single disk failure is available using software mirroring:
- On the IVM, to protect its data, but not the managed system's data
- Using two virtual disks for each of the managed system's LPARs, to protect its data
4.2.1 Virtual storage assignment to a partition
Unassigned virtual disks and physical volumes can be associated to a running partition. After the operation completes, the LPAR’s operating system must issue its device discovery procedure to detect the newly added disk. In an AIX 5L environment, do this by issuing the cfgmgr command. Before removing a physical disk or a virtual disk from a running partition, the operating system should remove the corresponding disk device because it will become unavailable. In an AIX 5L environment, this is done using the rmdev command. On the Web GUI, it is possible to remove a virtual disk or a physical volume from a running LPAR, but a warning sign always appears requiring an additional confirmation. Figure 4-4 on page 77 shows an example of this message.
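On the AIX 5L side, the discovery and removal commands mentioned above can be sketched as follows; hdisk1 is a hypothetical device name used only for illustration.

```
# After assigning new virtual storage in the IVM, discover it on the client:
cfgmgr
lspv          # the new virtual SCSI disk appears, for example, as hdisk1

# Before removing the storage from the running LPAR in the IVM,
# remove the corresponding device definition:
rmdev -dl hdisk1
```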
Figure 4-4 Forced removal of a physical volume

4.2.2 Virtual disk extension
Several options are available to provide additional disk space to an LPAR. The primary solution is to create a new virtual disk or select an entire physical disk and dynamically assign it to a partition. Because this operation can be done when the partition is running, it is preferred. After the partition's operating system issues its own device reconfiguration process, a new virtual SCSI disk is available for use. This disk can be used to extend existing data structures when using Linux with a logical volume manager or AIX 5L.

When disk space is provided to a partition using a virtual disk, a secondary solution is to extend it using the IVM. Consider using this solution when an existing operating system's volume has to be increased in size and a new virtual SCSI disk cannot be added for this purpose, that is, when using Linux without a logical volume manager. If the virtual disk is used by a rootvg volume group, it cannot be extended and a new virtual disk must be created.

Important: We do not recommend virtual disk extension when using AIX 5L, because the same result is achieved by adding a new virtual disk.

This operation can be executed when the partition is running, but the virtual disk must be taken offline to activate the change. Disk outages should be scheduled carefully so that they do not affect overall application availability.

The following steps describe how to extend a virtual disk:
1. On the operating system, halt any activity on the disk to be extended. On AIX 5L, issue the varyoff command on the volume group to which the disk belongs. If this is not possible, shut down the partition.
2. From the Virtual Storage Management menu in the IVM navigation area, click View/Modify Virtual Storage. From the work area, select the virtual disk and click Extend.
3. Enter the disk space to be added and click OK. If the virtual disk is owned by a running partition, a warning message opens, and you must select a check box to force the expansion, as shown in Figure 4-5. The additional disk space is allocated to the virtual disk, but it is not yet available to the operating system.

Figure 4-5 Forced expansion of a virtual disk

4. Under Virtual Storage Management in the IVM navigation area, click View/Modify Virtual Storage. From the work area, select the virtual disk and click Modify partition assignment. Unassign the virtual disk by selecting None in the New partition field. If the disk is owned by a running partition, a warning message opens, and you must select a check box to force the action, as shown in Figure 4-6.

Figure 4-6 Forced unassignment of a virtual disk

5. Execute the same action as in step 4, but assign the virtual disk back to the partition.
6. On the operating system, issue the appropriate procedure to recognize the new disk size. On AIX 5L, issue the varyonvg command on the volume group to which the disk belongs and, as suggested by a warning message, issue the chvg -g command on the volume group to recompute the volume group size.

4.2.3 IVM system disk mirroring
In order to prevent an IVM outage due to system disk failure, make the rootvg storage pool of the VIOS redundant. The default installation of IVM uses only one physical disk. Disk mirroring on the IVM is an advanced feature that, at the time of writing, is not available on the Web GUI. It can be configured using VIOS capabilities on the command line interface.

Important: Mirrored logical volumes are not supported as virtual disks; only system logical volumes can be mirrored. This procedure mirrors all logical volumes defined in rootvg and must not be run if rootvg contains virtual disks.

The following steps describe how to provide a mirrored configuration for the rootvg storage pool:
1. Use the IVM to add a second disk of a similar size to rootvg. Under Virtual Storage Management in the navigation area, click View/Modify Virtual Storage, then go to the Physical Volumes tab. Select a disk of a similar size that is not assigned to any storage pool and click Add to storage pool, as shown in Figure 4-7.

Figure 4-7 Add second disk to rootvg

2. In the Storage Pool field, select rootvg and click OK, as shown in Figure 4-8.

Figure 4-8 Specify addition to storage pool

3. The actual mirroring is done using the VIOS command line. Log in as the padmin user ID and issue the mirrorios command, as shown in Example 4-4.
4. The command asks for confirmation and causes a VIOS reboot to activate the configuration after performing data mirroring.

Example 4-4 rootvg mirroring at command line
$ mirrorios
This command causes a reboot. Continue [y|n]? y

SHUTDOWN PROGRAM
Fri Oct 06 10:20:20 CDT 2006

Wait for 'Rebooting...' before stopping.

4.2.4 AIX 5L mirroring on the managed system LPARs
The AIX 5L logical volume manager is capable of data mirroring, and this feature can also be used when the partition is provided twice the number of virtual disks. An IVM administrator should create virtual storage that will be used by AIX 5L for mirroring purposes with careful respect to data placement. The virtual storage should not have any physical disks in common, to avoid a disk failure that affects both mirror copies.
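On the client partition itself, the mirroring described above uses standard AIX 5L LVM commands. The following is a sketch that assumes the LPAR already has rootvg on hdisk0 and has been given a second, unused virtual disk seen as hdisk1; the device names are assumptions.

```
# Add the second virtual disk to rootvg and create the mirror copies:
extendvg rootvg hdisk1
mirrorvg rootvg hdisk1

# For rootvg, rebuild the boot image on the new copy and update the boot list:
bosboot -ad /dev/hdisk1
bootlist -m normal hdisk0 hdisk1
```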
On the IVM, virtual disks are created out of storage pools. They are created using the minimum number of physical disks in the pool. If there is not enough space on a single disk, they can span multiple disks. If the virtual disks are expanded, the same allocation algorithm is applied. In order to guarantee mirror copy separation, we recommend that you create two storage pools and create one virtual disk from each of them.

After virtual storage is created and made available as an hdisk to AIX 5L, it becomes important to correctly map it. On the IVM, the command line interface is required. The lsmap command provides all the mapping between each physical and virtual device, as shown in Example 4-5. For each partition, there is a separate stanza. Each logical or physical volume displayed in the IVM GUI is defined as a backing device, and the command provides the virtual storage's assigned logical unit number (LUN) value.

Example 4-5 IVM command line mapping of virtual storage
$ lsmap -all
...
SVSA            Physloc                                       Client Partition ID
--------------- --------------------------------------------- -------------------
vhost1          U9111.520.10DDEDC-V1-C13                      0x00000003

VTD             vtscsi1
LUN             0x8100000000000000
Backing device  aixboot1
Physloc

VTD             vtscsi2
LUN             0x8200000000000000
Backing device  extlv
Physloc

VTD             vtscsi3
LUN             0x8300000000000000
Backing device  hdisk6
Physloc         U787B.001.DNW108F-P1-T14-L5-L0

VTD             vtscsi4
LUN             0x8400000000000000
Backing device  hdisk7
Physloc         U787B.001.DNW108F-P1-T14-L8-L0
...

On AIX 5L, the lscfg command can be used to identify the hdisk that uses the same LUN shown by the IVM. Example 4-6 shows the command output, with the 12-digit hexadecimal number representing the virtual disk's LUN.

Example 4-6 Identification of AIX 5L virtual SCSI disk's logical unit number
# lscfg -vpl hdisk0
  hdisk0          U9111.520.10DDEDC-V3-C2-T1-L810000000000  Virtual SCSI Disk Drive

        PLATFORM SPECIFIC

Name:  disk
Node:  disk
Device Type:  block

4.2.5 SCSI RAID adapter use
On a system equipped with a SCSI RAID adapter, you can protect data using the adapter's capabilities, avoiding any software mirroring. All physical disks managed by each adapter's SCSI chain can be used to create a single RAID5 array.

The adapter must be configured to create the array before installing the IVM. To do this operation, boot the system with the stand-alone diagnostic CD and enter the adapter's setup menu. It enables you to modify the array configuration and to handle events such as the replacement of a failing physical disk. After the array is created and has finished formatting, the IVM can be installed. During the installation, the IVM partition's rootvg is created on the array. Disk space for LPARs can be provided using logical volumes created on the rootvg storage pool.

Perform adapter maintenance using the IVM command line with the diagmenu command to access diagnostic routines. Example 4-7 shows the menu related to the SCSI RAID adapter.

Example 4-7 The diagmenu menu for SCSI RAID adapter
PCI-X SCSI Disk Array Manager

Move cursor to desired item and press Enter.

  List PCI-X SCSI Disk Array Configuration
  Create an Array Candidate pdisk and Format to 522 Byte Sectors
  Create a PCI-X SCSI Disk Array
  Delete a PCI-X SCSI Disk Array
  Add Disks to an Existing PCI-X SCSI Disk Array
  Configure a Defined PCI-X SCSI Disk Array
  Change/Show Characteristics of a PCI-X SCSI Disk Array
  Reconstruct a PCI-X SCSI Disk Array
  Change/Show PCI-X SCSI pdisk Status
  Diagnostics and Recovery Options

F1=Help    F2=Refresh   F3=Cancel   F8=Image
F9=Shell   F10=Exit     Enter=Do

4.3 Securing the Virtual I/O Server
The Virtual I/O Server provides extra security features that enable you to control access to the virtual environment and ensure the security of your system. These features are available with Virtual I/O Server Version 1.3.0.0 or later. The following topics discuss the available security features and provide tips for ensuring a secure environment for your Virtual I/O Server setup.
Introduction to Virtual I/O Server security
Beginning with Version 1.3.0.0 of the Virtual I/O Server, you can set security options that provide tighter security controls over your Virtual I/O Server environment. These options enable you to select a level of system security hardening and specify settings that are allowable within that level. The Virtual I/O Server security feature also enables you to control network traffic by enabling the Virtual I/O Server firewall. You can configure these options using the viosecure command. The following sections provide an overview of these features.

The viosecure command activates, deactivates, and displays security hardening rules. By default, none of the security hardening features is activated after installation. Upon running the viosecure command, the command guides the user through the proper security settings, which range from High to Medium to Low. After this initial selection, a menu is displayed itemizing the security configuration options associated with the selected security level in sets of 10. These options can be accepted in whole, individually toggled off or on, or ignored. After any changes, viosecure continues to apply the security settings to the computer system. The viosecure command also configures, unconfigures, and displays network firewall settings. For more information about this command, see the viosecure command in the Virtual I/O Server Commands Reference.

System security hardening
The system security hardening feature protects all elements of a system by tightening security or implementing a higher level of security. Although hundreds of security configurations are possible with the VIOS security settings, you can easily implement security controls by specifying a high, medium, or low security level. The system security hardening features provided by Virtual I/O Server enable you to specify values such as:
- Password policy settings
- The usrck, pwdck, grpck, and sysck actions
- Default file creation settings
- System crontab settings

Configuring a system at too high a security level might deny services that are needed. For example, the telnet and rlogin commands are disabled for high-level security because the login password is sent over the network unencrypted. If a system is configured at too low a security level, the system might be vulnerable to security threats. Because each enterprise has its own unique set of security requirements, the predefined High, Medium, and Low security configuration settings are best suited as a starting point for security configuration rather than an exact match for security requirements. As you become more familiar with the security settings, you can make adjustments by choosing the hardening rules you want to apply. You can get information about the hardening rules by running the man command. Configuration of the Virtual I/O Server system security hardening is discussed in one of the next sections.

Virtual I/O Server firewall
The Virtual I/O Server firewall enables you to enforce limitations on IP activity in your virtual environment. With this feature, you can specify which ports and network services are allowed access to the Virtual I/O Server system. For example, if you need to restrict login activity from an unauthorized port, you can specify the port name or number and specify deny to remove it from the Allow list. You can also restrict a specific IP address. Using the Virtual I/O Server firewall, you can activate and deactivate specific ports and specify the interface and IP address from which connections will be allowed.
Configuring firewall settings
Enable the Virtual I/O Server (VIOS) firewall to control IP activity. Before configuring firewall settings, you must first enable the Virtual I/O Server firewall; it is not enabled by default. To enable the VIOS firewall, you must turn it on by using the viosecure command with the -firewall option. When you enable it, the default setting is activated, which allows access for the following IP services: ftp, ftp-data, ssh, web, https, rmc, and cimon.

Note: The telnet command is disabled when the firewall is turned on. So if you are using Telnet to set the security settings, you will lose your connection or session.

You can use the default setting or configure the firewall settings to meet the needs of your environment by specifying which ports or port services to allow. You can also turn off the firewall to deactivate the settings. Use the following tasks at the VIOS command line to configure the VIOS firewall settings:
1. Enable the VIOS firewall by issuing the following command:
   viosecure -firewall on
2. Specify the ports to allow or deny, by using the following command:
   viosecure -firewall allow | deny -port number
3. View the current firewall settings by issuing the following command:
   viosecure -firewall view
4. If you want to disable the firewall configuration and deactivate the settings, issue the following command:
   viosecure -firewall off

You can use the -force option to enable the standard firewall default ports. For more about the force option, and about any viosecure command option, see the viosecure command description.

Note: The firewall settings are in the viosecure.ctl file in the /home/ios/security directory.

Configuring Virtual I/O Server system security hardening
Set the security level to specify security hardening rules for your Virtual I/O Server (VIOS) system. To implement system security hardening rules, you can use the viosecure command to specify a security level of High, Medium, or Low. A default set of rules is defined for each level. You can also set a level of default, which returns the system to the system standard settings and removes any level settings that have been applied.
The low-level security settings are a subset of the medium-level security settings, which are a subset of the high-level security settings. Therefore, the High level is the most restrictive and provides the greatest level of control. You can apply all of the rules for a specified level or select which rules to activate for your environment. By default, no VIOS security levels are set; you must run the viosecure command to enable the settings.

Use the following tasks to configure the system security settings:

Setting a security level
To set a VIOS security level of High, Medium, or Low, use the viosecure -level command, as in the following example: viosecure -level low -apply

Changing the settings in a security level
To set a VIOS security level in which you specify which hardening rules to apply for the setting, run the viosecure command interactively, as in the following example:
1. At the VIOS command line, type viosecure -level high. All security level options (hardening rules) at that level are displayed, 10 at a time. (Pressing Enter displays the next set in the sequence.)
2. Review the displayed options and make your selection by entering the numbers that you want to apply, separated by a comma. Type ALL to apply all of the options, or type NONE to apply none of the options.
3. Press Enter to display the next set of options, and continue entering your selections.
4. To exit the command without making any changes, enter q.

Viewing the current security setting
To display the current VIOS security level setting, use the viosecure command with the -view flag, as in the following example: viosecure -view

Removing security level settings
To unset any previously set system security levels and return the system to the standard system settings, issue the following command: viosecure -level default

For more information about using the viosecure command, see the viosecure command description.

4.4 Connecting to the Virtual I/O Server using OpenSSH
This topic describes how to set up remote connections to the Virtual I/O Server using secure connections. Starting with IVM/VIOS 1.3.0.0, OpenSSH and OpenSSL are already installed by default.

Setting up SSH authorization for non-prompted connection
1. If the id_dsa files do not exist on your workstation, create them using the ssh-keygen command (press Enter for passphrases) as shown in Example 4-8 on page 87.
Example 4-8 Create the id_dsa files on your workstation
nim-ROOT/root/.ssh ># ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
d2:30:06:6b:68:e2:e7:fd:3c:77:b7:f6:14:b1:ce:35 root@nim

2. Verify that the keys are generated on your workstation (Example 4-9).

Example 4-9 Verify successful creation of id_dsa files
nim-ROOT/root/.ssh ># ls -l
total 16
-rw-------   1 root  system  668 Oct 13 15:31 id_dsa
-rw-r--r--   1 root  system  598 Oct 13 15:31 id_dsa.pub

3. Now log in to the IVM through SSH. There is not yet a known_hosts file created, which will be done during the first SSH login (Example 4-10).

Example 4-10 First SSH login toward IVM - known_hosts file creation
nim-ROOT/root/.ssh ># ssh padmin@9.3.5.123
The authenticity of host '9.3.5.123 (9.3.5.123)' can't be established.
RSA key fingerprint is 1b:36:9b:93:87:c2:3e:97:48:eb:09:80:e3:b6:ee:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '9.3.5.123' (RSA) to the list of known hosts.
padmin@9.3.5.123's password:
Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111
Last login: Fri Oct 13 15:25:21 CDT 2006 on /dev/pts/1 from 9.3.5.111
$
Connection to 9.3.5.123 closed.
nim-ROOT/root/.ssh ># ls -l
total 24
-rw-------   1 root  system  668 Oct 13 15:31 id_dsa
-rw-r--r--   1 root  system  598 Oct 13 15:31 id_dsa.pub
-rw-r--r--   1 root  system  391 Oct 13 15:33 known_hosts

The known_hosts file has been created.

4. Next step is to retrieve the authorized_keys2 file with FTP (get) from the IVM (Example 4-11).

Example 4-11 Transfer of authorized_keys2 file
nim-ROOT/root/.ssh ># ftp 9.3.5.123
Connected to 9.3.5.123.
220 IVM FTP server (Version 4.2 Fri Feb 3 22:13:23 CST 2006) ready.
Name (9.3.5.123:root): padmin
331 Password required for padmin.
Password:
230-Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111
230-Last login: Fri Oct 13 15:32:03 CDT 2006 on /dev/pts/1 from 9.3.5.111
230 User padmin logged in.
ftp> cd .ssh
250 CWD command successful.
ftp> ls
200 PORT command successful.
150 Opening data connection for .
environment
authorized_keys2
226 Transfer complete.
ftp> get authorized_keys2
200 PORT command successful.
150 Opening data connection for authorized_keys2 (598 bytes).
226 Transfer complete.
599 bytes received in 7.5e-05 seconds (7799 Kbytes/s)
local: authorized_keys2 remote: authorized_keys2
ftp> by
221 Goodbye.

5. Add the contents of your local SSH public key (id_dsa.pub) to the authorized_keys2 file (Example 4-12).

Example 4-12 Add contents of local SSH public key to authorized_keys2 file
nim-ROOT/root/.ssh ># cat id_dsa.pub >> auth*

6. Verify the successful addition of the public key by comparing the size of the authorized keys file to the id_dsa.pub file (Example 4-13).

Example 4-13 Compare addition of public key
nim-ROOT/root/.ssh ># ls -l
total 32
-rw-r--r--   1 root  system  598 Oct 13 15:38 authorized_keys2
-rw-------   1 root  system  668 Oct 13 15:31 id_dsa
-rw-r--r--   1 root  system  598 Oct 13 15:31 id_dsa.pub
-rw-r--r--   1 root  system  391 Oct 13 15:33 known_hosts

7. Transfer the authorized key file back to the IVM into the directory /home/padmin/.ssh (Example 4-14).

Example 4-14 FTP of authorized key back to IVM
nim-ROOT/root/.ssh ># ftp 9.3.5.123
Connected to 9.3.5.123.
220 IVM FTP server (Version 4.2 Fri Feb 3 22:13:23 CST 2006) ready.
Name (9.3.5.123:root): padmin
331 Password required for padmin.
Password:
230-Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111
230-Last login: Fri Oct 13 15:35:44 CDT 2006 on /dev/pts/1 from 9.3.5.111
230 User padmin logged in.
ftp> cd .ssh
250 CWD command successful.
ftp> put authorized_keys2
200 PORT command successful.
150 Opening data connection for authorized_keys2.
226 Transfer complete.
599 bytes sent in 0.000624 seconds (937.4 Kbytes/s)
local: authorized_keys2 remote: authorized_keys2
ftp> by
221 Goodbye.

8. Verify that the key can be read by the SSH daemon on the IVM and test the connection by typing the ioslevel command (Example 4-15).

Example 4-15 Test the configuration
nim-ROOT/root/.ssh ># ssh padmin@9.3.5.123
Last unsuccessful login: Fri Oct 13 15:23:50 2006 on ftp from ::ffff:9.3.5.111
Last login: Fri Oct 13 15:37:33 2006 on ftp from ::ffff:9.3.5.111
$ ioslevel
1.3.0.0

After establishing these secure remote connections, we can execute several commands, for example:

ssh padmin@9.3.5.123
This gives us an interactive login (a host name is also possible instead of the IP address).

ssh padmin@9.3.5.123 lssyscfg -r sys
Example 4-16 shows the output of this command.

ssh -t padmin@9.3.5.123 ioscli mkvt -id 2
This enables us to get a console directly to a client LPAR with ID 2.

Example 4-16 Output of the padmin command
nim-ROOT/root/.ssh ># ssh padmin@9.3.5.123 lssyscfg -r sys
name=p520-ITSO,type_model=9111-520,serial_num=10DDEEC,ipaddr=9.3.5.123,state=Operating,
sys_time=10/13/06 17:39:22,power_off_policy=0,cod_mem_capable=0,cod_proc_capable=1,
os400_capable=1,micro_lpar_capable=1,dlpar_mem_capable=1,assign_phys_io_capable=0,
max_lpars=22,max_power_ctrl_lpars=1,service_lpar_id=1,service_lpar_name=VIOS,
mfg_default_config=0,curr_configured_max_lpars=11,pend_configured_max_lpars=11,
config_version=0100010000000000,pend_lpar_config_state=enabled
Chapter 5. Maintenance

This chapter provides information about maintenance operations on the Integrated Virtualization Manager (IVM). This chapter discusses the following topics:
- IVM backup and restore
- Logical partition backup and restore
- IVM upgrade
- Managed system firmware update
- IVM migration
- Command logging
- Integration with IBM Director
5.1 IVM maintenance
You can use the IVM to perform operations such as backup, restore, or upgrade. Some operations are available using the GUI, the CLI, or the ASMI menus.

5.1.1 Backup and restore of the logical partition definitions
LPAR configuration information can be backed up to a file. This file will be used to restore information if required, and it can also be exported to another system. The backup file contains the LPAR's configuration, such as processors, memory, and network. Information about virtual disks is not included in the backup file.

The following steps describe how to back up the LPAR configuration:
1. Under the Service Management menu in the navigation area, click Backup/Restore.
2. Select Generate Backup in the work area, as shown in Figure 5-1.

Figure 5-1 Partition Configuration Backup/Restore

A file named profile.bak is generated and stored under the user's home directory. There is only one unique backup file at a time, and a new backup file replaces an existing one. In the work area, you can select this file name and save it to a disk. Click Restore Partition Configuration to restore the last backed-up file.

If you want to restore a backup file stored on your disk, follow these steps:
1. Click Upload Backup File.
2. Click Browse and select the file. The uploaded file replaces the existing backup file.
3. Click Restore Partition Configuration to restore the uploaded backup file.

In order to perform a restore operation, the system must not have any LPAR configuration defined.
5.1.2 Backup and restore of the IVM operating system
The only way to back up the IVM operating system is with the backupios command; no operating system backup operation is available within the GUI. This command creates a bootable image that includes the IVM partition's rootvg. Depending on the flags used, it can also contain the storage pool structure. The backup can use one of the following media types:
- File
- Tape
- CD-R
- DVD-RAM

To restore the management partition, install the operating system using the bootable media created by the backup process.

Figure 5-2 Backup/Restore of the Management Partition

Important: The backup operation does not save the data contained in virtual disks or physical volumes assigned to the LPARs.

You can also back up and restore LPAR configuration information from the CLI. Use the bkprofdata command to back up the configuration information and the rstprofdata command to restore it. See the VIO Server and PLM command descriptions in the Information Center at the following Web page for more information:
http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=/iphb1/iphb1_vios_commandslist.htm
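The bkprofdata and rstprofdata commands mentioned above can be sketched as follows; the file name is an assumption, and the -l 1 (full restore) flag should be verified against the command reference for your VIOS level.

```
$ bkprofdata -o backup -f /home/padmin/profile.bak
$ rstprofdata -l 1 -f /home/padmin/profile.bak
```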
5.1.3 IVM updates
Regularly, IBM brings out updates (or fix packs) of the Virtual I/O Server. These can be downloaded from:
http://techsupport.services.ibm.com/server/vios/
Updates are necessary whenever new functionalities or fixes are introduced. All VIOS fix packs are cumulative and contain all fixes from previous fix packs.

Note: Applying a fix pack can cause the restart of the IVM. That means that all LPARs must be stopped during this reboot.

Determining the current VIOS level
By executing the ioslevel command from the VIOS command line, the padmin user can determine the actual installed level of the VIOS software (Example 5-1).

Example 5-1 Using the ioslevel command
$ ioslevel
1.2.1.4-FP-7.0
$

In the example, the level of the VIOS software is V1.2.1.4 with Fix Pack 7.0. If we now go back to the mentioned Web site (Figure 5-3), we notice that a newer fix pack is available: FP 8.0.

Figure 5-3 IBM Virtual I/O Server Support page

Fix Pack 8.0 provides a migration path for existing Virtual I/O Server installations. Applying this package will upgrade the VIOS to the latest level.
To take full advantage of all of the available functions in the VIOS, it is necessary to be at a system firmware level of SF235 or later. SF230_120 is the minimum level of SF230 firmware supported by the Virtual I/O Server V1.3. If a system firmware update is necessary, it is recommended that the firmware be updated before upgrading the VIOS to Version 1.3. (See "Microcode update" on page 110.) The VIOS Web site has a direct link to the microcode download site:
http://www14.software.ibm.com/webapp/set2/firmware/gjsn

Important: Be sure to have the right level of firmware before updating the IVM.

All interim fixes applied to the VIOS must be manually removed before applying Fix Pack 8.0. VIOS customers who applied interim fixes to the VIOS should use the following procedure to remove them prior to applying Fix Pack 8.0. Example 5-2 shows how to list fixes.

Example 5-2 Listing fixes
$ oem_setup_env        /* from the VIOS command line
$ emgr -P              /* gives a list of the installed efix's (by label)
$ emgr -r -L <label>   /* for each additional efix listed, run this command to remove it
$ exit

Note: It is recommended that the AIX 5L client partitions using VSCSI devices should upgrade to AIX 5L maintenance level 5300-03 or later.
Downloading the fix packs
Figure 5-4 shows four options for retrieving the latest fix packs.

Figure 5-4 IBM Virtual I/O Server download options

For the first download option, which retrieves the latest fix pack using the Download Director, all filesets are downloaded into a user-specified directory. When the download has completed, the updates can be applied from a directory on your local hard disk:
1. Log in to the Virtual I/O Server as the user padmin.
2. Create a directory on the Virtual I/O Server:
   $ mkdir directory_name
3. Using the ftp command, transfer the update file (or files) to the directory you created.
4. Apply the update by running the updateios command:
   $ updateios -dev directory_name -install -accept
   Accept to continue the installation after the preview update is run.
5. Reboot.
6. Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package:
   $ ioslevel
   1.3.0.0-FP-8.0
Uncompressing and extracting a tar file
If you downloaded the latest fix pack using FTP as a single, compressed tar file, fixpack<nn>.tar.gz (option 2 in Figure 5-4 on page 96), you must uncompress the tar file and extract the contents before you can install the update. Follow these steps:
1. Log in to the Virtual I/O Server as user padmin.
2. Enter the following command to escape to a shell:
   $ oem_setup_env
3. Create a new directory for the files you extract from the tar file:
   $ mkdir <directory>
4. Copy the compressed tar file, fixpack<nn>.tar.gz (in our case, this would be fixpack80.tar.gz), to the new directory, and change into it:
   $ cd <directory>
5. Unzip and extract the tar file contents with the following command:
   $ gzip -d -c ./fixpack<nn>.tar.gz | tar -xvf -
6. Quit from the shell.

The next step is to follow the installation instructions in the next section.

Applying updates from a local hard disk
Follow these steps to apply the updates from a directory on your local hard disk:
1. Log in to the Virtual I/O Server as the user padmin.
2. Create a directory on the Virtual I/O Server:
   $ mkdir directory_name
3. Using the ftp command, transfer the update file (or files) to the directory you created.
4. Apply the update by running the updateios command:
   $ updateios -dev directory_name -install -accept
   Accept to continue the installation after the preview update is run.
5. Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package:
   $ ioslevel
   1.3.0.0-FP-8.0

Applying updates from a remotely mounted file system
If the remote file system is to be mounted read-only, you must first rename the fix pack file tableofcontents.txt to .toc; otherwise, you will be prevented from installing this fix pack. Follow these steps:
1. Log in to the Virtual I/O Server as the user padmin.
2. Mount the remote directory onto the Virtual I/O Server:
   $ mount remote_machine_name:directory /mnt
3. Apply the update by running the updateios command:
   $ updateios -dev /mnt -install -accept
   If prompted to remove the .toc, enter no. Accept to continue the installation after the preview update is run.
4. Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package:
   $ ioslevel
   1.3.0.0-FP-8.0
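The uncompress-and-extract step can be rehearsed outside of a VIOS as well. This sketch fabricates a dummy fixpack80.tar.gz under /tmp (the file name follows the text; the payload file is invented) and then runs the same gzip | tar pipeline:

```shell
# Build a stand-in for the downloaded fix pack (fabricated payload)
rm -rf /tmp/fixpack_demo
mkdir -p /tmp/fixpack_demo && cd /tmp/fixpack_demo
mkdir payload && echo "dummy fileset" > payload/ios.mcp
tar -cf fixpack80.tar payload && gzip fixpack80.tar

# New directory, then unzip and extract, as in the steps above
mkdir extract && cd extract
gzip -d -c ../fixpack80.tar.gz | tar -xvf -
ls payload/ios.mcp    # prints: payload/ios.mcp
```

On the real system, the extracted filesets would then be applied with updateios as described in the previous section.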
Applying updates from the ROM drive
This fix pack can be burned onto a CD by using the ISO image files, which is the third option in Figure 5-4 on page 96 (or the CD can be ordered directly from the IBM Delivery Service Center, option 4). After the CD has been created, perform the following steps to apply the update:
1. Log in to the Virtual I/O Server as user padmin.
2. Place the update CD into the drive.
3. Apply the update by running the updateios command:
   $ updateios -dev /dev/cdX -install -accept
   (where X is a device number between 0 and N)
4. Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package:
   $ ioslevel
   1.3.0.0-FP-8.0

Note: If updating from an ioslevel prior to 1.3.0.0, the updateios command might indicate several failures (such as missing requisites) while installing the fix pack. This is expected. Proceed with the update if you are prompted to Continue with the installation [y/n].
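The /dev/cdX device number varies between machines. The following is a small sketch that probes for the first existing optical device node; the directory scanned here is a fabricated stand-in for /dev, and the cdN naming simply follows the convention used in the step above:

```shell
# Return the first cd device node found under the given directory
find_cd() {
    devdir="$1"
    for n in 0 1 2 3 4; do
        if [ -e "$devdir/cd$n" ]; then
            echo "$devdir/cd$n"
            return 0
        fi
    done
    return 1
}

# Demo against a fake /dev tree (cd1 exists, cd0 does not)
rm -rf /tmp/fakedev && mkdir -p /tmp/fakedev && : > /tmp/fakedev/cd1
find_cd /tmp/fakedev    # prints: /tmp/fakedev/cd1
```

The result could then be passed to updateios -dev on a real system; the function and paths are illustrative only.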
5.2 Migration between HMC and IVM
It is important to note that moving between the HMC and IVM environments requires a certain amount of reconfiguration. Always make a backup of your environment prior to migrating between the IVM and HMC environments: you should have a backup of the Virtual I/O Server, the virtual I/O clients, and the profiles for system recovery before attempting any migration.

Attention: There is no guarantee onto which disk the VIOS will install. If the installation takes place on a disk containing one of your client volume groups, you will lose the data and not be able to import it again. A simple trick is to physically remove any disks that you want to save while doing the installation, and to put them back in after the installation.
5.2.1 Recovery after an improper HMC connection
An HMC must not be connected to a system running the IVM; otherwise, you can no longer perform any operation using the IVM.
If an HMC was connected to a system using the IVM, the following steps explain how to re-enable the IVM capabilities:
1. Power off the system.
2. Remove the system definition from the HMC.
3. Unplug the HMC network cable from the system if directly connected.
4. Connect a TTY console emulator with a serial cross-over cable to one of the system's serial ports.
5. Press any key on the console to open the service processor prompt.
6. Log in as the user admin and answer the questions about the number of lines and columns.
7. Reset the service processor. Type 2 to select 2. System Service Aids, type 10 to select 10. Reset Service Processor, and then type 1 to confirm your selection. Wait for the system to reboot.
8. Reset it to the factory configuration (Manufacturing Default Configuration). Type 2 to select 2. System Service Aids, type 11 to select 11. Factory Configuration, and then type 1 to confirm. Wait for the system to reboot.
9. Configure the ASMI IP addresses if needed. Type 5 to select 5. Network Services, type 1 to select 1. Network Configuration, and then configure each Ethernet adapter. For more information, refer to 2.3, "ASMI IP address setup" on page 23.
10. Start the system. Type 1 to select 1. Power/Restart Control, type 1 to select 1. Power On/Off System, type 8 to select 8. Power on, and press Enter to confirm your selection.
11. Go to the SMS menu.
12. Update the boot list. Type 5 to select 5. Select Boot Options, type 2 to select 2. Configure Boot Device Order, and select the IVM boot disk.
13. Boot the system.
14. Wait for the IVM to start.
15. Connect to the IVM with the GUI.
16. Restore the partition configuration using the last backup file. From the Service Management menu in the navigation area, click Backup/Restore, and then click Restore Partition Configuration in the work area. For more information, refer to 5.1.1, "Backup and restore of the logical partition definitions" on page 92. This operation only updates the IVM partition configuration; it does not restore the LPARs hosted by the IVM.
17. Reboot the IVM. (If the changes do not require a reboot, the recovery of the IVM takes effect immediately.)
18. Restore the partition configuration again using the last backup file. This time, each LPAR definition is restored.
19. Reboot the IVM. This reboot is needed to make each virtual device available to the LPARs. (This is also possible by issuing the cfgdev command.)
20. Restart each LPAR.
5.2.2 Migration considerations
These are the minimum considerations for migrating between an HMC and the IVM; for a production redeployment, the details will depend on the configuration of the system:
- VIOS version
- System firmware level
- VIOS I/O device configuration
- Backup of the VIOS, the virtual I/O clients' profiles, and the virtual I/O devices
- The mapping information between physical and virtual I/O devices
- VIOS and virtual I/O client backups
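The checklist above can be captured into a single file before starting the migration. The command names in this sketch are the VIOS CLI commands discussed in the following subsections, but each one is guarded with command -v so the script also runs on hosts where they do not exist; the output path is an assumption:

```shell
# Gather the migration documentation into one text file for safekeeping
outfile=/tmp/ivm_migration_doc.txt
: > "$outfile"
for cmd in "ioslevel" "lsfware" "lsdev -slots" "lsmap -all"; do
    name=${cmd%% *}                    # first word = the command name
    echo "== $cmd ==" >> "$outfile"
    if command -v "$name" > /dev/null 2>&1; then
        $cmd >> "$outfile" 2>&1        # real output on a VIOS
    else
        echo "(not available on this host)" >> "$outfile"
    fi
done
wc -l "$outfile"
```

Keeping this file together with the VIOS and client backups makes it easier to re-create the device mappings after the migration.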
VIOS version
The ioslevel command displays the VIOS version. You will see output similar to this:
$ ioslevel
1.3.0.0
System firmware level
You can display the system firmware level using the lsfware command. You will see output similar to this:
$ lsfware
system:SF240_219 (t) SF240_219 (p) SF240_219 (t)
VIOS I/O devices configuration
To display I/O devices such as adapters, disks, or slots, use the lsdev command.
Example 5-3 VIOS device information
$ lsdev -type adapter
name       status     description
ent0       Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1       Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent2       Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3       Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent4       Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent5       Available  Virtual I/O Ethernet Adapter (l-lan)
ent6       Available  Shared Ethernet Adapter
ide0       Available  ATA/IDE Controller Device
sisscsia0  Available  PCI-X Dual Channel Ultra320 SCSI Adapter
sisscsia1  Available  PCI-X Dual Channel Ultra320 SCSI Adapter
usbhc0     Available  USB Host Controller (33103500)
usbhc1     Available  USB Host Controller (33103500)
vhost0     Available  Virtual SCSI Server Adapter
vhost1     Available  Virtual SCSI Server Adapter
vsa0       Available  LPAR Virtual Serial Adapter

$ lsvg -lv db_sp
db_sp:
LV NAME    LV STATE     MOUNT POINT
db_lv      open/syncd   N/A
Also, you can use the lsvg -lv volumegroup_name command to discover the system disk configuration and volume group information. If you want to display the attributes of a device, use the lsdev -dev Devicename -attr command. To display its physical location, use the lsdev -dev Devicename -vpd command. You can also use the lsdev -slots command for the slot information and the lsdev -dev Devicename -child command for the child devices associated with a device.

Tip: Note the physical location code of the disk unit that you are using to boot the VIOS.

To migrate from an HMC to an IVM environment, the VIOS must own all of the physical devices. You must check the profile of the VIOS, as shown in Figure 5-5.

Figure 5-5 Virtual I/O Server Physical I/O Devices

Backup VIOS, virtual I/O clients profile, and virtual I/O devices
You should document the information in the virtual I/O clients that have a dependency on the virtual SCSI server and virtual SCSI client adapters, as shown in Figure 5-6 on page 102.

Figure 5-6 Virtual SCSI — Client Adapter properties

The mapping information between physical and virtual I/O devices
In order to display the mapping information between physical I/O devices and virtual I/O devices such as disk, network, and optical media, use the lsmap -vadapter vhost# command, as shown in Example 5-4.

Example 5-4 Mapping information between physical I/O devices and virtual I/O devices
$ lsmap -vadapter vhost0
SVSA            Physloc                                       Client Partition ID
--------------- --------------------------------------------- -------------------
vhost0          U9111.520.10DDEEC-V1-C3                       0x00000002

VTD             vopt0
LUN             0x8300000000000000
Backing device  cd0
Physloc         U787A.001.DNZ00XK-P4-D3

VTD             vscsi0
LUN             0x8100000000000000
Backing device  dbroot_lv
Physloc

VTD             vscsi1
LUN             0x8200000000000000
Backing device  db_lv
Physloc

VIOS and VIO client backups
Before the migration from an HMC to an IVM environment, it is necessary to back up the VIOS and the VIOC. For more information about backup, refer to "Backup and restore of the IVM operating system" on page 93, as well as IBM System p Advanced POWER Virtualization Best Practices, REDP-4194.
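Once the lsmap output has been saved, the VTD-to-backing-device pairs can be pulled out of it so that the mappings can be re-created later. The following is a sketch using awk; the here-document stands in for output captured from lsmap -vadapter vhost0 on a real VIOS:

```shell
# Print one "VTD:backing device" pair per virtual target device
parse_map() {
    awk '/^VTD/ {vtd=$2} /^Backing device/ {print vtd ":" $3}'
}

parse_map <<'EOF'
VTD             vopt0
LUN             0x8300000000000000
Backing device  cd0
VTD             vscsi0
LUN             0x8100000000000000
Backing device  dbroot_lv
EOF
# prints:
#   vopt0:cd0
#   vscsi0:dbroot_lv
```

In practice, you would feed the function with saved lsmap output (for example, parse_map < lsmap_vhost0.txt) and keep the result with the rest of the migration documentation.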
5.2.3 Migration from HMC to an IVM environment

For a redeployment from HMC to IVM, the managed system must be reset to the Manufacturing Default Configuration using the ASMI menu function. When you reset the firmware, it will remove all partition configuration and any personalization that has been made to the service processor, restoring the firmware settings, the network configuration, and the passwords to their factory defaults. A default full system partition will be created to handle all hardware resources.

This migration has the following requirements:
- The VIOS of the HMC-managed environment owns all physical I/O devices
- Backup of the VIOS and the VIOC
- VIOS Version 1.2 or above
- System firmware level SF230_120 or above

Figure 5-7 shows the general migration procedure from HMC to an IVM environment:
1. Reset to Manufacturing Default Configuration
2. Change the serial console connection
3. Connect to the IVM Web interface using the VIOS IP address
4. Re-create virtual devices and Ethernet bridging
5. Re-create virtual I/O clients
6. Boot each virtual I/O client

Figure 5-7 General migration procedure from HMC to an IVM environment

Tip: The recommended method is a complete reinstallation. The only reason to save the Virtual I/O Server installation is if there is client data on the rootvg (not recommended; use different storage pools for client data). If the client data is on other volume groups, you should export the volume groups and remove the disks to make sure they do not get installed over. Otherwise, you will have to reconfigure all of your device mappings and so on; in the end, this is more complex and time-consuming than starting with a "fresh" install.

1. Reset to manufacturing default configuration
If you decide to perform this migration, there is some dependency on the system configuration. Without an HMC, the system console is provided through the internal serial ports, and connections are made using a serial ASCII console and a cross-over cable connected to the serial port. You can reset the service processor or put the server back to the factory configuration through the System Service Aids menu in ASMI.

When a console session is opened to the reset server, at the first menu, select 1. Power/Restart Control → 1. Power On/Off System, as shown in Example 5-5.

Example 5-5 Power On/Off System
Power On/Off System

Current system power state: Off
Current firmware boot side: Temporary
Current system server firmware state: Not running

 1. System boot speed
    Currently: Fast
 2. Firmware boot side for the next boot
    Currently: Temporary
 3. System operating mode
    Currently: Normal
 4. Boot to system server firmware
    Currently: Standby
 5. System power off policy
    Currently: Automatic
 6. Power on
98. Return to previous menu
99. Log out

Example 5-5 shows that the Power on menu is 6. This means that the firmware reset has not been performed and the system is still managed by an HMC. If the firmware reset is performed and the system is no longer managed by an HMC, then the Power on menu is 8. If you perform the firmware reset after detaching the HMC, the HMC will retain information about the server as a managed system; you can remove this using the HMC GUI.

2. Change the serial connection for IVM
When you change the management system from HMC to IVM, you can no longer use the default console connection through vty0. You will change the console connection, as shown in Example 5-6, and you will change the physical serial connection from SPC1 to SPC2 in order to use the vty1 console connection. This is effective after the VIOS reboot.

Example 5-6 Serial connection change for IVM
# lscons
NULL
# lsdev -Cc tty
vty0 Defined   Asynchronous Terminal
vty1 Available Asynchronous Terminal
vty2 Available Asynchronous Terminal
# lsdev -Cl vty0 -F parent
vsa0
# lsdev -Cl vty1 -F parent
vsa1
# lsdev -Cl vsa1
vsa1 Available LPAR Virtual Serial Adapter
# chcons /dev/vty1
chcons: console assigned to: /dev/vty1, effective on next system boot

3. Connect to the IVM Web interface using the VIOS IP address
The first Web interface pane that opens after the login process is View/Modify Partitions, as shown in Figure 5-8. You can only see the VIOS partition; the IVM does not have any information about the other virtual I/O clients, because the service processor was reset to the manufacturing default configuration.

Figure 5-8 View/Modify Partitions

4. Re-create virtual devices and Ethernet bridging
After the change to an IVM environment, the VIOS (now the Management Partition) still has virtual device information left over from the HMC environment: the virtual SCSI, shared Ethernet, virtual Ethernet, and virtual target device information. Because these virtual devices no longer exist, you should remove them before creating the virtual I/O clients in the IVM. (If you define virtual disks for clients from the Management Partition, the virtual SCSI server and client devices are created automatically for you.) You can remove the virtual devices as shown in Example 5-7.

Example 5-7 Remove the virtual device
$ rmdev -dev vhost0 -recursive
vtopt0 deleted
dbrootvg deleted
vtscsi0 deleted
vhost0 deleted
$ rmdev -dev ent4
ent4 deleted
$ rmdev -dev en4
en4 deleted
$ rmdev -dev et4
et4 deleted

After removing the virtual devices, you can re-create virtual devices using the cfgdev command or through the IVM GUI, and the virtual Ethernet bridge for the virtual I/O clients in the View/Modify Virtual Ethernet pane, as shown in Figure 5-9.

Figure 5-9 Virtual Ethernet Bridge

5. Re-create virtual I/O clients
Because the IVM does not have the virtual I/O clients' information, you will have to re-create the virtual I/O clients using the IVM Web interface. For more information about creating LPARs, refer to 3.2, "IVM graphical user interface" on page 38. When you choose the Storage type, select Assign existing virtual disks and physical volumes, as shown in Figure 5-10 on page 107. You can also let the IVM create a virtual disk for you by selecting Create virtual disk when needed.

Tip: You should export any volume group containing client data using the exportvg command. After migrating, import the volume groups using the importvg command. This is a more efficient method to migrate the client data without loss.
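The cleanup shown in Example 5-7 can also be prepared as a reviewable list of commands instead of running rmdev directly. A hedged sketch; the device names below are the ones from the example and are placeholders for whatever lsdev reports on the real system:

```shell
# Emit the rmdev commands for leftover HMC-era virtual devices
gen_cleanup() {
    echo "rmdev -dev vhost0 -recursive"
    for dev in ent4 en4 et4; do
        echo "rmdev -dev $dev"
    done
}

gen_cleanup    # review the output, then run it in the VIOS restricted shell
```

Generating the commands first makes it easy to double-check against the documented device mappings before anything is deleted.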
Figure 5-10 Create LPAR: Storage Type

5.2.4 Migration from an IVM environment to HMC
There is no officially announced procedure to migrate from an IVM environment to the HMC. The IVM environment must own all of the physical devices, and there can be only one Management Partition per server, so there are no restrictions on the server configuration that could affect a possible migration. This direction can be migrated more easily than the reverse.

If an HMC is connected to the system, the IVM interface will be disabled immediately, effectively making it just a Virtual I/O Server partition. The managed system goes into recovery mode. After the recovery completes, the HMC shows all of the LPARs without a profile; you have to create one profile for each LPAR.

Tip: Take care to carefully record all configuration information before performing this migration.

Figure 5-11 on page 108 shows the general migration procedure from an IVM environment to the HMC.
1. Connect System p to HMC
2. Recover HMC
3. Re-create partition profile
4. Re-create Virtual Devices and Ethernet Bridging on VIOS
5. Boot each Virtual I/O Client

Figure 5-11 General migration procedure from an IVM environment to HMC

1. Connect the System p server to an HMC
The server is connected to and recognized by the HMC, and the IVM interface is disabled immediately, effectively making it just a VIOS partition, as shown in Figure 5-12.

Figure 5-12 IVM management after connecting HMC
2. Recover the server configuration data to the HMC
Add the managed system to the HMC; the managed system will then go into recovery mode, as shown in Figure 5-13. Right-click the managed system, and then select Recover Partition Data → Restore profile data from HMC backup data.

Figure 5-13 HMC recovery mode

3. Re-create the partition profile
After the recovery completes, the HMC displays all partitions in the managed system without a profile. The IVM devices will appear in the Defined state. You can create the VIOS profile, including the virtual server SCSI adapter and the virtual Ethernet adapter. Also, you will create the virtual I/O clients' profiles, including the virtual client SCSI adapter and the virtual Ethernet adapters. More information about creating partitions and profiles can be found on the following Web site:

http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/topic/iphbl/iphblcreatelpar.htm

4. Re-create virtual devices and Ethernet bridging
Because everything is identical from the PHYP side, you normally should not re-create virtual devices or bridging: the VIOS should be able to use the virtual Ethernet adapter created in the IVM environment when it is rebooted. Make sure that at least one of the LPARs is up and running; otherwise, the HMC might delete all of the LPARs. However, if this is not the case, re-create the virtual devices to bridge between the VIOS and the virtual I/O clients, after removing the previous virtual devices, as shown in Example 5-8.

Example 5-8 Re-create bridge between VIOS and virtual I/O clients
<< SEA creation >>
$ mkvdev -sea ent0 -vadapter ent5 -default ent5 -defaultid 1
ent6 Available
en6
et6

<< Virtual Disk Mapping >>
$ mkvdev -vdev dbroot_lv -vadapter vhost0 -dev vscsi0
vscsi0 Available
$ mkvdev -vdev cd0 -vadapter vhost0 -dev vtopt0
vtopt0 Available

For more information about the creation of virtual devices on the VIOS, refer to the IBM Redbook Advanced POWER Virtualization on IBM System p5, SG24-7940.

Important: Before a migration from an IVM environment to an HMC, it is necessary to back up the VIOS and the VIOC. For more information about backup, refer to IBM System p Advanced POWER Virtualization Best Practices, REDP-4194.

Note: If you are using an IBM BladeCenter JS21, then you should follow the specific directions for this platform. See IBM BladeCenter JS21: The POWER of Blade Innovation, SG24-7273.

5.3 System maintenance
Operations such as microcode updates and Capacity on Demand are available for the system hosting the IVM.

5.3.1 Microcode update
The IVM provides a convenient interface to generate a microcode survey of the managed system and to download and upgrade the microcode. The following steps describe how to update the device firmware:
1. From the Service Management menu in the navigation area, click Updates, and then click the Microcode Updates tab in the work area.
2. Click Generate New Survey. This generates a list of devices, as shown in Figure 5-14.

Figure 5-14 Microcode Survey Results

3. From the Microcode Survey Results list, select one or more items to upgrade, and click the Download link in the task area.
Information appears about the selected devices, such as the available microcode level and the commands that you need in order to install the microcode update, as shown in Figure 5-15.
4. Select the Accept license check box in the work area, and click OK to download the selected microcode and store it on the disk.

Figure 5-15 Download Microcode Updates

5. Log in to the IVM using a terminal session.
6. Run the install commands provided by the GUI in step 3 on page 111.

If you are not able to connect to the GUI of the IVM and a system firmware update is needed, refer to "Microcode update" on page 21 for the update procedure with a diagnostic CD.
5.3.2 Capacity on Demand operations
Operations for Capacity on Demand (CoD) are available only through the ASMI menu, as shown in Figure 5-16.

Figure 5-16 CoD menu using ASMI

For more information, refer to "Virtualization feature activation" on page 26.

5.4 Logical partition maintenance
Each LPAR hosted by the IVM works like a stand-alone system. For example, depending on which operating system is installed, you can back up and restore, boot in maintenance mode, and perform an operating system update or a migration.

5.4.1 Backup of the operating system
There are many ways to back up the LPARs hosted by the IVM. Because there is no virtual tape device, a tape backup cannot be done locally for the client partitions, but only with a remotely operated tape device. Such a backup could also be done by using additional software such as the Tivoli Storage Manager.

The main possibilities for the AIX operating system are:
- In general, the mksysb command creates a bootable image of the rootvg volume group, either in a file or onto a tape. If your machine does not have sufficient space, you can use NFS to mount some space from another server system in order to create a system backup to file. However, the file systems must be writable.
boulder. DVD-RAM. The main steps are: 1. 2. Open a virtual terminal for the LPAR to be installed with the mkvt command. 4. see: http://publib. Log in to the IVM. 2. 5. 5. Boot the LPAR.5 Command logs All IVM actions are logged into the system.ibm. Note: When creating very large backups (DVD sized backups larger than 2 GB) with the mkcd command.The mkcd command creates a system backup image (mksysb) to CD-Recordable (CD-R) or DVD-Recordable (DVD-RAM) media from the system rootvg or from a previously created mksysb image.2 Restore of the operating system The restoration process is exactly the same as on stand-alone systems. Follow the specific operating system’s restore procedures. 6.install/doc/insgdrf/create_sys_backup. Start the LPAR in SMS mode. For more information. The log contains all the commands that the IVM Web GUI runs and all IVM-specific commands issued by an administrator on the command line.htm 5. providing the ID of the LPAR to be restored. This generates the selected log entries. The log contains the following information for each action: User name Date and time The command including all the parameters The following steps describe how to access the log: 1.4.5 GB for CD or 9 GB for DVDs). You can create a /mkcd file system that is very large (1. Select the boot device that was used for the backup such as CD.com/infocenter/pseries/v5r3/index.jsp?topic=/com. 114 Integrated Virtualization Manager on IBM System p5 . click Application Logs. Multiple volumes are possible for backups over 4 GB. 3.ibm. Network Installation Management (NIM) creates a system backup image from a logical partition rootvg using the network.aix . as shown in Figure 5-17. Under the Service Management menu in the navigation area. In the work area. use the provided filters to restrict the log search and then click Apply. The /mkcd file system can then be mounted onto the clients when they want to create a backup CD or DVD for their systems. 
the file systems must be large file enabled and this requires that the ulimit values are set to unlimited. or network.
Figure 5-17 Application command logs

5.6 Integration with IBM Director
The current version of VIOS/IVM (Version 1.3) includes an agent that allows full integration and management through the IBM Director console. From this centralized console, you can monitor critical resources and events, with automated alerts or responses to predefined conditions. You can also monitor processes, examine the software and hardware inventory, and deploy new applications or updates across the environment. You also have control over the hardware to remotely start, stop, and reset the machines/LPARs. All of this can now be integrated into a heterogeneous environment.

Figure 5-18 shows the Platform Manager and Members view of IBM Director for our IVM server.
Figure 5-18 Platform Manager and Members view

Attention: The figure shows an early code level, current at the time of writing. This is subject to change.

How does it work, and how is it integrated? The support for IVM directly leverages the support for the Hardware Management Console (HMC) that was available in IBM Director 5.10. IVM contains a running CIMOM that has information about the physical system it is managing and about all of the LPARs. Because the IVM provides a Web GUI for creating, deleting, powering on, and powering off LPARs, it also enables the client to manage events that have occurred on the system. The CIMOM also forwards event information to IBM Director (see Figure 1-4 on page 10).

Before IBM Director can manage an IVM system, the system must be added to IBM Director using one of two methods:
- The client can choose to create a new system, in which case the IP address is provided, and IBM Director validates the IP address and, if it is valid, creates a managed object for the IVM. This managed object appears on the IBM Director console with a padlock icon next to it, indicating that the managed object is locked and needs authentication information to unlock it. The user has to Request Access to the managed object, giving it the User ID and Password.
- The other way is to Discover Level 0: Agentless Systems. This causes IBM Director to interrogate the systems that are reachable, based on Director's Discovery Preferences for Level 0. In this case, zero or more managed objects are created and locked, as above. Some may be IVM systems; some might not be. This time, the user will have to Request Access to the managed objects so that IBM Director can determine which ones are IVM managed systems; this is determined after access has been granted.
After a user Requests Access to a Level 0 managed object and access is granted, IBM Director creates a Logical Platform managed object for it and passes to it the authentication details. Director then connects to the CIMOM on the IVM system and begins discovering the resources that are being managed by IVM, such as the physical system and each of its LPARs. Each of these resources will also have a managed object representation on the Director Console.

All discovery of the resources starts from the IBM_HwCtrlPoint CIM object. Basically, we use the IBM_TaggedCollection, which is an association between the Hardware Control Point and the objects that represent the physical system. This will be an instance of the IBMP_CEC_CS class. When we have that object, an attribute is set to identify it as belonging to an IVM system. It also indicates that this managed object is a Platform Manager. Before we discover the LPARs, we must provide the Power Status, which we get from the association IBM_AssociatedPowerManagementService. This gives us an object that contains the PowerState property that we use to set the Power State attribute on the CEC and the subsequent LPARs. We then use the association between the IBMP_CEC_CS object and the IBMP_LPAR_CS objects to get the objects for all LPARs. This gives us the whole topology.

IBM Director also provides a means of doing inventory collection. Basically, we collect physical and virtual information for processors and memory.

After this is done, we subscribe to the CIMOM for event notification. All events that IBM Director receives are recorded in the Director Event Log, and those that require action are acted on. Some events require action from IBM Director, such as power-on or power-off events or the creation or deletion of an LPAR, and some require no action. For example, if an LPAR is deleted, IBM Director deletes the managed object for this LPAR; if an LPAR is powered on, the managed object for the LPAR shows the new power state.

Finally, IBM Director has a presence check facility. It is enabled by default and has a default interval of 15 minutes: every 15 minutes (or whatever interval the user chooses), a presence check is attempted on the managed object for IVM and on all of the managed objects that it is managing. The presence check uses the credentials that the managed object has and works by attempting to connect to the CIMOM on the IVM system.

These presence checks can happen either before or after a request access has completed successfully. If the presence check is done before the request access, the possible outcomes are "fail to connect" and "connect successful." If the connection is successful, the user can request access at that point. If IBM Director gets a fail to connect indication, the user will not be able to request access to the managed object.

After a request access has completed successfully, subsequent presence checks use those validated credentials to connect to the CIMOM. If a check does not succeed, IBM Director gets either a fail to connect or an invalid authentication indication. When IBM Director receives a fail to connect indication because of a networking problem or because the hardware was turned off, the managed object indicates "offline" and remains that way until a presence check gets an invalid authentication indication. While the managed object is in the offline state, presence checks continue to be attempted, and fixing those problems causes the managed object to go back to online and locked.

If the connection is successful, the presence check does a topology scan and verifies that all resources have managed objects and that all managed objects represent existing resources. If that is not the case, managed objects are created or deleted to make the two lists agree. Normally, events are created when an LPAR is deleted, but an LPAR could be deleted while the Director server is down for some reason; this validation done by the presence check keeps things in sync. For example, if an LPAR was deleted, Director's action would be to remove the managed object from the console.

Chapter 5. Maintenance 117
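The reconciliation performed by a successful presence check can be sketched in a few lines. This is an illustrative Python sketch under stated assumptions, not IBM Director or CIMOM code; the function and the LPAR names are hypothetical stand-ins for the managed-object bookkeeping described above.

```python
# Illustrative sketch of the presence-check topology scan: make the list of
# managed objects on the Director Console agree with the list of resources
# reported by the CIMOM. Names are hypothetical; this is not IBM Director code.

def reconcile(cimom_resources, console_objects):
    """Return (to_create, to_delete): managed objects that must be created
    for newly discovered resources, and managed objects that must be removed
    because their resource no longer exists (for example, an LPAR deleted
    while the Director server was down)."""
    resources = set(cimom_resources)
    managed = set(console_objects)
    to_create = sorted(resources - managed)   # resources without managed objects
    to_delete = sorted(managed - resources)   # managed objects with no resource
    return to_create, to_delete

# Example: lpar3 was created and lpar1 deleted while Director was down.
create, delete = reconcile(["lpar2", "lpar3"], ["lpar1", "lpar2"])
# create == ["lpar3"], delete == ["lpar1"]
```

The set difference in each direction mirrors the two checks in the text: every resource must have a managed object, and every managed object must represent an existing resource.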
118 Integrated Virtualization Manager on IBM System p5 .
Appendix A. IVM and HMC feature summary

Table 5-1 provides a comparison between IVM and the HMC.

Table 5-1 IVM and HMC comparison at a glance

Physical footprint
  IVM: Integrated into the server
  HMC: A desktop or rack-mounted appliance

Installation
  IVM: Installed with the VIOS (optical or network). Preinstall option available on some systems.
  HMC: Appliance is preinstalled. Reinstall using optical media or network is supported.

Managed operating systems supported
  IVM: AIX 5L and Linux
  HMC: AIX 5L, Linux, and i5/OS

Virtual console support
  IVM: AIX 5L and Linux virtual console support
  HMC: AIX 5L, Linux, and i5/OS virtual console support

User security
  IVM: Password authentication with support for either full or read-only authorities
  HMC: Password authentication with granular control of task-based authorities and object-based authorities

Network security
  IVM: Firewall support via command line; Web server SSL support
  HMC: Integrated firewall; SSL support for clients and for communications with managed systems

Servers supported
  IVM: System p5 505 and 505Q Express; System p5 510 and 510Q Express; System p5 520 and 520Q Express; System p5 550 and 550Q Express; System p5 560Q Express; eServer p5 510 and 510 Express; eServer p5 520 and 520 Express; eServer p5 550 and 550 Express; OpenPower 710 and 720; BladeCenter JS21
  HMC: All POWER5 and POWER5+ processor-based servers: System p5 and System p5 Express, eServer p5 and eServer p5 Express, OpenPower, and eServer i5

Multiple system support
  IVM: One IVM per server
  HMC: One HMC can manage multiple servers

Redundancy
  IVM: One IVM per server
  HMC: Multiple HMCs can manage the same system for HMC redundancy

Maximum number of partitions supported
  IVM: Firmware maximum
  HMC: Firmware maximum

Uncapped partition support
  IVM: Yes
  HMC: Yes

Dynamic Resource Movement (dynamic LPAR)
  IVM: Yes - System p5 support for processing and memory; BladeCenter JS21 support for processing only
  HMC: Yes - full support

I/O support for AIX 5L and Linux
  IVM: Virtual optical, disk, Ethernet, and console
  HMC: Virtual and Direct

I/O support for i5/OS
  IVM: None
  HMC: Virtual and Direct

Maximum number of virtual LANs
  IVM: Four
  HMC: 4096

Fix/update process for Manager
  IVM: VIOS fixes and updates
  HMC: HMC e-fixes and release updates

Adapter microcode updates
  IVM: Inventory scout
  HMC: Inventory scout

Firmware updates
  IVM: VIOS firmware update tools (not concurrent)
  HMC: Service Focal Point with concurrent firmware updates

I/O concurrent maintenance
  IVM: VIOS support for slot and device level concurrent maintenance via the diag hot plug support
  HMC: Guided support in the Repair and Verify function on the HMC

Scripting and automation
  IVM: VIOS command line interface (CLI) and HMC-compatible CLI
  HMC: HMC command line interface

Capacity on Demand
  IVM: No support
  HMC: Full support

User interface
  IVM: Web browser (no local graphical display)
  HMC: WebSM (local or remote)

Workload Management (WLM) groups supported
  IVM: One
  HMC: 254

LPAR configuration data backup and restore
  IVM: Yes
  HMC: Yes

Support for multiple profiles per partition
  IVM: No
  HMC: Yes

Serviceable event management
  IVM: Service Focal Point Light: consolidated management of firmware and management of partition detected errors
  HMC: Service Focal Point support for consolidated management of operating system and firmware detected errors

Hypervisor and service processor dump support
  IVM: Dump collection with support to do manual dump downloads
  HMC: Dump collection and call home support

Remote support
  IVM: No remote support connectivity
  HMC: Full remote support for the HMC and connectivity for firmware remote support
Appendix B. System requirements

The following are the currently supported systems:

System p5 505 and 505Q Express
System p5 510 and 510Q Express
System p5 520 and 520Q Express
System p5 550 and 550Q Express
System p5 560Q Express
eServer p5 510 and 510 Express
eServer p5 520 and 520 Express
eServer p5 550 and 550 Express
OpenPower 710 and 720
BladeCenter JS21

The required firmware level is SF235 or later (not applicable to BladeCenter JS21).

The software minimum supported levels are:

AIX 5L V5.3 or later
SUSE Linux Enterprise Server 9 for POWER (SLES 9) or later
Red Hat Enterprise Linux AS 3 for POWER, Update 2 (RHEL AS 3) or later
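The minimum software levels above can be encoded in a small checking helper. This is a hypothetical sketch, not an IBM-provided tool; the product keys and the dotted-version parsing are illustrative assumptions.

```python
# Hypothetical helper encoding the minimum supported software levels listed
# above; not an IBM-provided tool. Versions are compared as numeric tuples.

MINIMUM_LEVELS = {
    "AIX 5L": (5, 3),    # AIX 5L V5.3 or later
    "SLES": (9,),        # SUSE Linux Enterprise Server 9 for POWER or later
    "RHEL AS": (3, 2),   # Red Hat Enterprise Linux AS 3, Update 2 or later
}

def meets_minimum(product, version):
    """True if a dotted version string (for example '5.3') meets the minimum."""
    actual = tuple(int(part) for part in version.split("."))
    return actual >= MINIMUM_LEVELS[product]

# Examples:
meets_minimum("AIX 5L", "5.3")   # True
meets_minimum("AIX 5L", "5.2")   # False
```

Tuple comparison handles multi-part versions naturally, so "6.1" also passes the AIX 5L check while "5.2" does not.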
Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this Redpaper. Note that some of the documents referenced here may be available in softcopy only.

IBM Redbooks

For information about ordering these publications, see "How to get IBM Redbooks" on page 127.

Advanced POWER Virtualization on IBM System p5, SG24-7940 (draft available, expected publication date December 2005)
IBM System p5 505 and 505Q Technical Overview and Introduction, REDP-4079
IBM eServer p5 510 Technical Overview and Introduction, REDP-4001
IBM eServer p5 520 Technical Overview and Introduction, REDP-9111
IBM eServer p5 550 Technical Overview and Introduction, REDP-9113
Managing AIX Server Farms, SG24-6606
Partitioning Implementations for IBM eServer p5 Servers, SG24-7039
Practical Guide for SAN with pSeries, SG24-6050
Problem Solving and Troubleshooting in AIX 5L, SG24-5496
Understanding IBM eServer pSeries Performance and Sizing, SG24-4810

Other publications

These publications are also relevant as further information sources:

RS/6000 and eServer pSeries Adapters, Devices, and Cable Information for Multiple Bus Systems, SA38-0516, contains information about adapters, devices, and cables for your system.
RS/6000 and eServer pSeries PCI Adapter Placement Reference for AIX, SA38-0538, contains information regarding slot restrictions for adapters that can be used in this system.
System Unit Safety Information, SA23-2652, contains translations of safety information used throughout the system documentation.
IBM eServer Planning, SA38-0508, contains site and planning information, including power and environment specifications.

Online resources

These Web sites and URLs are also relevant as further information sources:

AIX 5L operating system maintenance packages downloads
http://www.ibm.com/servers/eserver/support/pseries/aixfixes.html

IBM eServer p5, pSeries, OpenPower and IBM RS/6000 Performance Report
http://www.ibm.com/servers/eserver/pseries/hardware/system_perf.html

IBM eServer p5 AIX 5L Support for Micro-Partitioning and Simultaneous multithreading whitepaper
http://www.ibm.com/servers/aix/whitepapers/aix_support.html

Advanced POWER Virtualization on IBM eServer p5
http://www.ibm.com/servers/eserver/pseries/ondemand/ve/resources.html

IBM Virtualization Engine
http://www.ibm.com/servers/eserver/about/virtualization/

Autonomic computing on IBM eServer pSeries servers
http://www.ibm.com/autonomic/index.shtml

Linux on IBM eServer p5 and pSeries
http://www.ibm.com/servers/eserver/pseries/linux/

IBM eServer Linux on POWER Overview
http://www.ibm.com/servers/eserver/linux/power/whitepapers/linux_overview.pdf

SUSE Linux Enterprise Server 9
http://www.novell.com/products/linuxenterpriseserver/

Red Hat Enterprise Linux details
http://www.redhat.com/software/rhel/details/

SUMA on AIX 5L
http://techsupport.services.ibm.com/server/suma/home.html

Customer Specified Placement and LPAR Delivery
http://www.ibm.com/servers/eserver/power/csp/index.html

IBM LPAR Validation Tool (LVT), a PC-based tool intended to assist you in logical partitioning
http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.html

Hardware documentation
http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/

IBM eServer Information Center
http://publib.boulder.ibm.com/eserver/

Virtual I/O Server supported environments
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html

Hardware Management Console support information
http://techsupport.services.ibm.com/server/hmc

IBM TotalStorage Expandable Storage Plus
http://www.ibm.com/servers/storage/disk/expplus/index.html

IBM TotalStorage Mid-range Disk Systems
http://www.ibm.com/servers/storage/disk/ds4000/index.html

IBM TotalStorage Enterprise disk storage
http://www.ibm.com/servers/storage/disk/enterprise/ds_family.html

VIO Server and PLM command descriptions
http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=/iphb1/iphb1_vios_commandslist.htm

Microcode Discovery Service
http://techsupport.services.ibm.com/server/aix.invscoutMDS

POWER4 system microarchitecture, comprehensively described in the IBM Journal of Research and Development, Vol 46, No. 1, January 2002
http://www.research.ibm.com/journal/rd46-1.html

IBM eServer pSeries support
http://www.ibm.com/servers/eserver/support/pseries/index.html

IBM eServer support: Tips for AIX 5L administrators
http://techsupport.services.ibm.com/server/aix.srchBroker

Linux for IBM eServer pSeries
http://www.ibm.com/servers/eserver/pseries/linux/

SCSI T10 Technical Committee
http://www.t10.org

Microcode Downloads for IBM eServer i5, p5, pSeries, OpenPower, and RS/6000 systems
http://techsupport.services.ibm.com/server/mdownload

How to get IBM Redbooks

You can search for, view, or download Redbooks, Redpapers, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks

Help from IBM

IBM Support and downloads
ibm.com/support

IBM Global Services
ibm.com/services
Back cover

Integrated Virtualization Manager on IBM System p5

Redpaper

No dedicated Hardware Management Console required
Powerful integration for entry-level servers
Key administration tasks explained

The IBM Virtual I/O Server Version 1.2 provided a hardware management function called the Integrated Virtualization Manager (IVM). It handled the partition configuration on selected IBM System p5, IBM eServer p5, and IBM OpenPower systems without the need for dedicated hardware, such as a Hardware Management Console. The latest version of VIOS, Version 1.3, adds a number of new functions, such as support for dynamic logical partitioning for memory and processors in managed systems, security additions such as viosecure and firewall, a task manager monitor for long-running tasks, and other improvements.

The Integrated Virtualization Manager enables a more cost-effective solution for consolidation of multiple partitions onto a single server. With its intuitive, browser-based interface, the Integrated Virtualization Manager is easy to use and significantly reduces the time and effort required to manage virtual devices and partitions.

This IBM Redpaper provides an introduction to the Integrated Virtualization Manager, describing its architecture and showing how to install and configure a partitioned server using its capabilities.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks