
Power Implementation Quality Standard 2.5 for commercial workloads

November 2021
PowerVM
PowerHA

Fredrik Lundholm
Senior Architect
Biography

Fredrik Lundholm
Senior Architect
IBM Middle East
FREDRIKL@AE.IBM.COM
+971-505572431

Please contact me to provide feedback on the content, suggestions or success stories!

Thanks to Annika Blank, Bret Olszewski, Farrukh Naeem, John Banchy
Version 1.2 thanks to Farrukh Mahmood, Björn Roden and Ihab Ahmed
Version 1.3 thanks to Tony Ojeil
Version 1.5 thanks to Azhar Ali
Version 1.6 thanks to Anouar Braham, Hasan Hashmi
Version 1.7 thanks to Björn Rodén and Steven Knudson
Version 1.8/1.9 thanks to Björn Rodén
Version 1.10 thanks to Mohtashim Nomani
Version 1.11 thanks to Łukasz Schodowski
Version 1.13 thanks to Mohtashim Nomani, Nigel Griffiths
Version 1.14 thanks to Azhar Ali
Version 1.15 thanks to Björn Rodén and Rob McNelly
Version 1.17 thanks to Chris Gibson
Version 1.18 thanks to Anouar Braham
Version 1.19 thanks to Alexander Paul
Version 1.20 thanks to Chris Gibson, Arshad Zaidi
Version 2.1 thanks to Ali Attar, Subodh Jaiswar
Version 2.2 thanks to Subodh Jaiswar
Version 2.4 thanks to Zaki Jaaskelainen, Tony Ojeil
Version 2.5 thanks to David Wasserbauer, Nigel Griffiths, Franck Bonfils, Hari G M
Changes

Changes for 2.5: Network adapter update, Power10, cadence
Changes for 2.4: Network adapter update, NPIV performance improvement, cadence
Changes for 2.3: PreTL5, VIOS 3.1.2 update, cadence
Changes for 2.2: 2020 Feb Update, IO ifix, VIOS, Spectrum Scale, AIX currency, PowerHA
Changes for 2.1: New layout, POWER9 Enterprise, VIOS, AIX recommendation, VIOS rules, recommended network and SAN adapters, cleaned out obsolete content
Changes for 1.20: Rootvg failure monitoring in PowerHA 7.2, Default Processor mode, Largesend parameter highlighted further
Changes for 1.19: 2018 Apr Update, POWER9 enablement, Spectrum Scale 4.2 certified with Oracle RAC 12c
Changes for 1.18: 2017 Sep Update, new AIX default multipathing for SVC
Changes for 1.17: 2017 Update, VIOS 2.2.5, poll_uplink clarification (edit)
Changes for 1.15: VIOS update, PowerHA update, AIX update, GPFS update, poll uplink, vNIC, SR_IOV, Linux, IBM i
Changes for 1.14: Clarification aio_maxreqs; VIOS clarification, interim fixes
Changes for 1.13: Correction on attribute for large receive; currency update, POWER8, FAQ update; I/O Enlarged Capacity
Changes for 1.12: PowerHA and PowerHA levels, AIX levels, VIO levels; Virtual Ethernet buffer update
Changes for 1.11: Power Saving animation; network configuration update admin VLAN / simplification; removal of obsolete network design
Changes for 1.10: Favor Performance without Active Energy Manager; AIX/GPFS code level updates; AIX memory pin
Changes for 1.9: Reorg of slides into VIOS and AIX sections; update on AIX and VIOS patch levels; hypervisor receive buffer tuning for large servers (new 1.9)
Changes for 1.8: Update on AIX and VIOS patch levels; new load sharing comment update
Changes for 1.7: Oracle 11.2.0.3 notice; update nmon flags; update on AIX and VIOS patch levels; new mtu_bypass tunable; explanation how to enable load sharing on the SEA where the ODM entry was incorrectly migrated from an earlier VIOS (TL4+ ifix)
Changes for 1.6: New FC timeout_policy; GPFS 3.4 certified with Oracle RAC; N-series default queue depth, V7000 default queue depth; included a slide on nmon data collection and MPIO vs PowerPath, HDLM, SecurePath
Changes for 1.5: Added page_steal_method to the AIX 5.3 recommended tuning options; additional info on queue depth; reorg of network setup slides; new AIX release information
Changes for 1.4: Included a link to the ESS page; Network Load Balancing in VIOS 2.2.1.1; 20111019 update VIOS info, FP25 new home page; PowerHA 6.1 or 7.1?
Changes for 1.3: Fixed a typo in udp_recvspace (p 19); the tcp parameters have been increased compared to the previous document to accommodate good performance when doing network backups and other general tasks; adding pages 21 and 22 to explain how largesend enablement can be automated; added IO Pacing PowerHA recommendation on minpout/maxpout p 26
Changes for 1.2: GPFS 3.3 is now certified with Oracle RAC; clarification on Dual Shared Ethernet setup; one new FC parameter included and two new settings for vSCSI; Maxuproc clarification; FAQ/QA page added (p 27); link to Microcode Discovery Services; other minor updates and clarifications
3
Power Implementation Quality Standard, 2021 / © 2021 IBM Corporation
Power Implementation Quality Standard IBM Power
This presentation describes the expected best practices implementation and
documentation guidelines for Power Systems. These should be considered
mandatory procedures for virtualized Power servers

The overall goal is to combine simplicity with flexibility. This is key to achieve
the best possible total system availability with adequate performance over time

While this presentation lists the expected best practices, all customer engagements
are unique. It is acceptable to adapt and make implementation deviations after
a mandatory review with the responsible architect (not only engaging the
customer) and properly documenting these

4
Contents

General Design Principles for Power System implementations page 6

System and PowerVM Setup recommendations page 18

AIX Setup recommendations page 33

PowerHA page 42

Linux/IBM i page 47

FAQ page 48

Reference Slides: Procedures for older AIX/VIOS releases page 53

5
General principles IBM Power

Contain server sprawl using virtualization to increase utilization without


sacrificing security, performance or scalability

Enable you to migrate easily to a new platform by leveraging a fully


virtualized environment (enabled for Live Partition Mobility with PowerVM
Enterprise Edition). No AIX rootvg on internal disks, only through VIOS

An implementation without proper documentation is unacceptable. Leverage


the supplied template and engage with the customer to keep the
documentation current throughout the lifetime of the solution

6
Guidelines for Virtualized System Design IBM Power

[Diagram: two IBM System p 570 servers, each with two VIO Servers (VIO Server 1 and VIO Server 2). Client LPARs (SAP DB/CI/DI/KM, CRM, Självbetjäning (self-service), Adobe) reach the SAN through VSCSI devices served redundantly by both VIOS over FC adapters, and reach the networks through VETH adapters on the virtual networks Prod, TEST/UTV, HACMP IC and Backup, bridged by the VIOS to Gbit and 10G Ethernet ports.]

Use full virtualization and shared CPU resources everywhere

Always install two or more VIOS redundantly using Shared Ethernet failover and AIX MPIO multipath with the appropriate ODM for the storage (installed in the VIOS)

Download AIX/System Software DVD images from the Entitled Software Support (ESS) page www.ibm.com/eserver/ess and put them in the VIOS as virtual DVD images, or implement NIM (see footnote for access and how-to guide)

7
Guidelines for Capacity planning IBM Power
Add adapters to satisfy IO requirements. Negligible difference in performance between dedicated
or virtualized resources when correctly implemented

Use Virtual CPUs to cater for load spikes combined with entitled capacity close to average
utilization where possible. It should be considered normal to have a sum of virtual CPUs around
2x-5x the number of physical processor cores for production workloads

The LPAR table and CPU settings might be provided by IBM or ISV as an initial design
document or Low Level Design, use the settings below as a guideline where no information is
provided:

Recommended CPU Settings:
Entitlement  Virtual CPU  Mode      Type
0.2          2            Uncapped  Shared-SMT

Recommended Weight Settings:
VIOS  Production DB  Production App/Web  Dev/Test DB  Dev/Test App/Web
255   128            128                 25           5
8
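To sanity-check that an activated LPAR actually received the intended settings, a minimal sketch run inside any AIX partition (the grep pattern is only an illustration; field labels can differ slightly between AIX levels):

# lparstat -i | grep -E "Entitled Capacity|Online Virtual CPUs|^Mode|Variable Capacity Weight"

For a production DB LPAR following the tables above, the output should show an uncapped shared partition with a variable capacity weight of 128.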
POWER9 vNIC Capable Network Hardware Recommendation IBM Power
(Mod 2.5)

For 1/10/25GbE (dual 10/25GbE)
EC2U, fits full height systems, S924, S914, E950
EC2T, fits low profile, S922, E980
For each port add 10Gb SR SFP+ EB46, 25Gb SFP28 EB47 and EB48 for 1GbE connectivity

For 10GbE (quad 10Gb) SR fiber standard with LC connector
EN15, fits full height systems, S924, S914, E950
EN16, fits low profile, S922, E980

For 100GbE (dual 40/100GbE)
EC66, fits full height systems, S924, S914, E950
EC67, fits low profile, S922, E980

Withdrawn but still supported (dual 10GbE, dual 1GbE)
EN0H, fits full height systems, S924, S914, E950
EN0J, fits low profile, S922, E980
EL38, Linux only, L922

Specific PCIe adapters support SR-IOV (Hypervisor RAM usage in MB):
PCIe2 4-port (10GbE+1GbE) SR Optical fiber: #EN0J (low profile, multi-OS), #EN0H (full high, multi-OS), #EL38 (low profile, Linux only), #EL56 (full high, Linux only) - 160 MB
PCIe3 4-port 10GbE SR optical fiber: #EN16 (low profile, multi-OS), #EN15 (full high, multi-OS) - 160 MB
PCIe3 2-Port 25/10Gb NIC&RoCE: #EC2T (low profile, multi-OS), #EC2U (full high, multi-OS) - 2900 MB
PCIe4 2-port 100Gb ROCE EN: #EC67 (low profile, multi-OS), #EC66 (full high, multi-OS) - 3700 MB

What about EN0S, EN0T, other adapters? They are low function adapters without IBM i and vNIC support.
EN0X, EN0W are a special case where RJ45 10GbE is required without vNIC. EC3M and EC3L are alternative PCIe3 100GbE Ethernet adapters that are also certified with POWER8.
9
Power10 vNIC Capable Network Hardware Recommendation IBM Power
(New 2.5)

Power10 network support is simplified compared to earlier systems.
Support for EN0H, EN0J, EN15, EN16 is not carried forward. No dedicated 1GbE adapter is available for order with the E1080; use EC2T+EB48 or EN0X/EN0W.

For 1/10/25GbE (dual 1/10/25GbE)
EC2U, fits IO-drawers and future full-height systems
EC2T, fits E1080 and future low-profile systems
Must add SFP, two per adapter: 10Gb SR SFP+ EB46, 25Gb SFP28 EB47, 1GbE RJ45 EB48

For 100GbE (dual 40/100GbE)
EC66, fits IO-drawers and future full-height systems
EC67, fits E1080 and future low-profile systems
EC77, IBM i only 2-port 100GbE RoCE with Crypto, LP (New)
EC78, IBM i only 2-port 100GbE RoCE with Crypto, FH

The EC77/EC78 adapters currently support RoCE and IP Security (IPSEC) for DB2 Mirror and run in dedicated mode only (no PowerVM virtualization) with IBM i

In the future when additional capabilities are announced this notice will be updated

Tip: You can order more SFPs than EC2U/EC2T adapters; for example, with one adapter you can configure two EB48 1GbE and two EB46 10GbE SFPs so you can upgrade from 1GbE to 10GbE without ordering additional parts
10
POWER9/10 Fibre Channel Hardware Recommendation IBM Power
(Mod 2.5)

For 16Gb (4-port) SR Fiber standard with LC connector
EN1C, for full height systems, S924, S914, E950, IO-drawer
EN1D, for low profile, S922, E980, E1080

For 32Gb (2-port) SR Fiber standard with LC connector
EN1A, for full height systems, S924, S914, E950, IO-drawer
EN1B, for low profile, S922, E980, E1080

All these adapters support the new NPIV Multi Queue Support

EN1A and EN1B also support FC-NVMe with certain storage subsystems and operating systems

11
Firmware Policy (Upd 2.5) IBM Power
Always update to the latest HMC and System firmware version during initial deployment

http://www-01.ibm.com/support/docview.wss?uid=ssm1maps

Document installed versions and cross reference the latest versions, document why the latest
firmware was not applied

Microcode Discovery Service is used to determine if microcode on your installed Power
systems server is at the latest level: https://esupport.ibm.com/customercare/mds

Don’t forget to update Network adapter card firmware, FC adapter card firmware and disk
firmware

FLRT can be useful for those who are planning to upgrade key components
http://www14.software.ibm.com/webapp/set2/flrt/home
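As a quick companion to the Microcode Discovery Service, installed levels can also be listed directly from the AIX or VIOS root shell; a minimal sketch (lsmcode and invscout are normally installed with base AIX, but output format differs by release):

# lsmcode -A        # list system, adapter and device microcode/firmware levels
# invscout          # produce the microcode survey file consumed by the MDS web page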

12
Optimize clock speed POWER7-POWER8 (Upd 1.17) IBM Power

With firmware levels 740, 760 or 770 and above on POWER7 systems and all
POWER8/POWER7+ models, the ASMI interface includes the favor performance setting.

With POWER8 and HMC8 this interface can be directly accessed from the HMC configuration
panels, ASMI is not required. A new option fixed max frequency is also available (1.17)

Engage favor performance as a mandatory modification for most environments in the "Power
Management Mode" menu

Ensures the system runs at the best possible speed

Follow Animation on page 47:

13
Optimize clock speed POWER9-Power10 (Upd 2.5) IBM Power
On POWER9/10 the default setting for optimizing speed for most servers is ”Maximum Performance”
Please note that the meaning of “static” mode has changed from “enabling a static reduction of power
consumption and clock speed” in POWER9 to “disable” power saving in Power10 and run at stock
speed. That corresponds to “Disable all modes” on POWER9
“Enable Maximum Performance Mode” is recommended

(This setting can also be directly accessed from the HMC configuration panels)

P9
P10

14
ASMI on POWER9 default mode: IBM Power
Maximum Performance

Can be changed online for immediate effect
ASMI on Power10 default mode: IBM Power
Maximum Performance

Can be changed online for immediate effect

Idle Power Saver option not available on Power10
HMC method to enable Maximum Performance Mode on IBM Power
POWER8/POWER9/Power10 (Upd 2.5)

P8 P9 P10

Please note that the meaning of “static” mode has changed from “enabling a static reduction of power
consumption and clock speed” in POWER8/9 to “disable” power saving in Power10 and run at stock
speed. That corresponds to “Disable” on POWER8/9
PowerVM

Follow this chapter when you install VIOS…

PowerVM Blog
https://community.ibm.com/community/user/power/blogs/hariganesh-muralidharan1/2020/07/02/list-of-blogs

Creating a virtual computing environment
https://www.ibm.com/docs/en/power10/9080-HEX?topic=e1080-virtual-computing-environment

18
VIOS Policy (Upd 2.5)
PowerVM

VIOS is considered firmware rather than an operating system and, with a few exceptions, the only level with additional hardware and bug fix support is the latest one

Current VIOS 3.1 Update Release is 3.1.3.10 (Sept 2021) and is the POWER10 minimum level
https://www.ibm.com/support/fixcentral/vios/selectFixes?parent=Virtualization%20software&product=ibm/vios/5765G34&release=3.1.3.0&platform=All&function=all

Current VIOS 2.2.6 Mini Pack is 2.2.6.65 (July 2020)
VIOS 2.2.6 contains enhancements such as POWER9 support and network virtualization enhancements and has a very long support roadmap, but it is not recommended for new POWER9 installations
https://www-945.ibm.com/support/fixcentral/vios/selectFixes?parent=Virtualization%20software&product=ibm/vios/5765G34&release=2.2.6.61&platform=All&function=all

If you need to maintain POWER7 (MMB/MMC) systems, VIOS 2.2.6 is the only option and requires a support extension since October 2020

Regularly check VIOS HIPER fixes
https://www14.software.ibm.com/webapp/set2/flrt/doc?page=hiper&os=vios_hiper

Environments that use CAA, including PowerHA and VIOS:
https://www.ibm.com/support/pages/node/6507119

Minimum levels:
POWER7 (MMC/MHC) requires VIOS minimum 2.2.1.1
POWER7+ (MMD/MHD) requires VIOS minimum 2.2.2.1
POWER8 S814/S824/S822 requires VIOS minimum 2.2.3.3
POWER8 Enterprise E870/E880 requires VIOS 2.2.3.4+
POWER8 Enterprise E850 requires VIOS 2.2.3.51
POWER8 Enterprise E850C requires VIOS 2.2.5.10
POWER9 H/S/L-models requires VIOS 2.2.6.21
POWER9 E950 requires VIOS 2.2.6.23
POWER9 E980 requires VIOS 2.2.6.31
POWER9 Scale out refresh (G/S) requires VIOS 2.2.6.65
19
(Upd 2.3)
PowerVM

http://www14.software.ibm.com/webapp/set2/sas/f/genunix3/VIOS_ServiceLife.jpg 20
Network access mechanism (Upd 2.4)
PowerVM
Please configure servers with high function network adapters such as EN15, EN16, EC2T, EC2U,
EC66, EC67 for best flexibility

EN0H, EN0J, EL38 and EL56 (withdrawn but still supported) do not need FCoE switches and will
connect normally to any network switch and offer vNIC and SR_IOV network functionality
(see page 22)

Deploy Shared Ethernet Failover with Load Sharing to greatly reduce LPAR configuration
complexity. Use Link aggregation on 1GbE or 10/25/40/100GbE ports on the VIO servers

Create as many shared ethernet adapters as deemed necessary from a performance/ separation /
security perspective

Connect and configure one virtual network adapter per VLAN to the client partition
 Reference the SEA Load Sharing Documentation:
https://www.ibm.com/support/knowledgecenter/9009-42A/p9hb1/p9hb1_vios_scenario_sea_load_sharing.htm
21
Load Balanced Shared Ethernet Adapter + Link Aggregation
(Upd 2.5)
PowerVM

[Diagram: Each Virtual I/O Server aggregates four physical ports ent0-ent3 (Phy) into ent9 (LA) and bridges it with SEA ent10 over the trunked virtual adapters ent4 (PVID=97, VLAN 1), ent5 (PVID=98, VLAN 2) and ent6 (PVID=96, VLAN 999). ent8 (PVID=999) carries the optional Admin VLAN 999 with the VIOS IP address, and the SEA control channel runs on PVID=4095. AIX lpar 1 and AIX lpar 2 each have ent0 (Vir) on VLAN 1 or VLAN 2 and an optional ent1 (Vir, PVID=999) on the admin VLAN. Each VIOS uplinks to its own Ethernet switch (Ethernet Switch 1 and Ethernet Switch 2), both carrying VLAN trunk 1,2,999.]
22
Steps to create the recommended network architecture
(Upd 2.5)
PowerVM

1. Assign physical network cards to the HMC profile of the VIOS partition

2. Create virtual and trunk adapters from the HMC for the VIOS partition
ent4 = PVID=97, Access External Network, IEEE 802.1q supported, Additional VLAN 1, Priority 1 in VIOS 1 and Priority 2 in VIOS 2
ent5 = PVID=98, Access External Network, IEEE 802.1q supported, Additional VLAN 2, Priority 1 in VIOS 1 and Priority 2 in VIOS 2
ent6 = PVID=96, Access External Network, IEEE 802.1q supported, Additional VLAN 999, Priority 1 in VIOS 1 and Priority 2 in VIOS 2*
ent8 = PVID=999

3. Create link aggregation (preferably with the "src_dst_port" hash algorithm)
activate and log into the VIOS
smitty etherchannel (ent9)
choose the adapters you want as primary adapters: ent0,ent1,ent2,ent3
do not add a backup adapter
set mode according to switch vendor and hash mode to src_dst_port (don't forget to match the switch)

4. Create the SEA with load sharing (repeat for each VIOS, start with the primary), then verify as shown in the sketch below
#mkvdev -sea ent9 -vadapter ent4,ent5,ent6 -default ent4 -defaultid 97 -attr ha_mode=sharing
(The default control channel runs on PVID 4095 unless explicitly changed)

*ent6 is an optional administration VLAN; it is recommended to create a virtual ethernet adapter (ent8) in each VIOS and attach an IP address for VIOS administration on VLAN 999

23
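To confirm that load sharing actually engaged after step 4, a minimal sketch run on each VIOS as padmin (ent10 is the SEA device name used in the diagram on the previous page; the exact entstat field names vary by VIOS level):

$ lsdev -dev ent10 -attr ha_mode
$ entstat -all ent10 | grep -iE "state|priority"

With ha_mode=sharing, one VIOS typically reports PRIMARY_SH and the other BACKUP_SH; anything else suggests the trunk priorities or the control channel are not set up as described in step 2.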
Alternate SR_IOV and vNIC network access mechanism
(Upd 2.5)
PowerVM

Typical use case: dedicated backup network in large consolidation scenarios where offloading the backup traffic from the SEA is desired. Could also be used for specialized networks where low latency is more important than bandwidth (Oracle RAC)

Common prerequisites: high function network adapters EN0H, EN0J, EN11, EN15, EN16, EL38, EL56. Please note that these cards DO NOT need to be connected to FCoE switches; they typically connect to regular Ethernet switches. In addition, POWER9/10 supported options are EC66/EC67 (100/40GbE), EC2T/EC2U (25GbE/10GbE) and EC2R/EC2S (10GbE)

Use these for specialized scenarios where SEA is not practical; please note that SR_IOV has restrictions on Etherchannel. PowerVM 2.2.5 introduced vNIC failover

SR_IOV: share parts of a dedicated network adapter among several partitions. Works with all current IBM AIX, IBM i and Linux distributions. Restrictions: prevents Live Partition Mobility, restrictions on Etherchannel. Max 20 VMs per network port (POWER7+ and POWER8). POWER9/10 support 40 VFs per port for EC2T/EC2U/EC2R/EC2S (25GbE/10GbE) and 60 VFs per port for EC66/EC67 (100/40GbE) adapters

vNIC: SR_IOV enhanced with Live Partition Mobility support. Max 20 VMs per network port (POWER8, AIX 7.1 TL4, AIX 7.2, IBM i 7.1 TR10 or IBM i 7.2 TR3). 40/60 VMs per network port with EC network adapters on POWER9/POWER10. vNIC & vNIC failover support for Linux is new with POWER9

Prerequisites: AIX 7.1 TL4 or AIX 7.2+; IBM i 7.1 TR10 or IBM i 7.2 TR3+; Linux: SuSE 12sp3, 15, RHEL 7.6+

24
Virtual I/O Server (VIOS) rules (Upd 2.2_rev2)
PowerVM

VIOS Rules can be used to automate the VIOS tuning presented in this chapter

Suggest adding rules from best practice that are not in the default file:
$ rules -o add -t adapter/vdevice/IBM,l-lan -a min_buf_medium=1024
$ rules -o add -t adapter/vdevice/IBM,l-lan -a max_buf_medium=1024
$ rules -o add -t adapter/vdevice/IBM,l-lan -a min_buf_large=128
$ rules -o add -t adapter/vdevice/IBM,l-lan -a max_buf_large=128
$ rules -o add -t adapter/vdevice/IBM,l-lan -a min_buf_huge=96
$ rules -o add -t adapter/vdevice/IBM,l-lan -a max_buf_huge=96
To deploy the rules on the VIOS:
$ rules -o deploy
To list the rules:
$ rules -o list

To see the difference between the current rules file and the rules that are currently applied on the system:
$ rules -o diff -s
To see the difference between the current rules file and the default rules file:
$ rules -o diff -d
The rules command should be re-run after a VIOS update, to check for new rules

To distribute the rules file to multiple VIOS partitions:
1. Capture the current rules file from a source VIOS:
$ rules -o capture
2. Copy the captured current rules file (/home/padmin/rules/vios_current_rules.xml) from the source VIOS to the target VIOS
3. Merge the transferred rules file with the current rules file on the target VIOS:
$ rules -o import -f <transferred>
4. Deploy the merged current rules on the target VIOS:
$ rules -o deploy
5. Restart the target VIOS:
$ shutdown -restart

Current rule file from VIOS 3.1.10: rules_31__o_list_d.txt (click on the attachment above to expand the file while not in presentation mode)

Please note that the default VIOS rules specify round robin as the load balancing algorithm and IBM now recommends shortest_queue for the storage systems that support it

https://www.ibm.com/support/knowledgecenter/HW4P4/p8hb1/p8hb1_rules_file_mgmt.htm 25
VIOS Virtual Ethernet Tuning – Large Send
and Large Receive
(Upd 2.3)
PowerVM
Enable largesend for each Virtual Ethernet and each SEA adapter. This leads to reduced CPU
consumption and higher network throughput. This is the default setting for SEA in VIOS since VIOS
2.2.3.3!

There is also segment aggregation “large_receive” parameter introduced for 10Gbit adapters. Enable
large_receive for the SEA when using 10Gbit network adapters this is not the default setting

For all SEA ent(x) devices on all VIO Servers: Use “chdev”

– For all SEA interfaces, chdev -l entX -a large_receive=yes  survives reboot

In case you have upgraded from an older VIOS release, make sure largesend is updated on the VIO
Servers

For all SEA interfaces, chdev -l entX -a largesend=1  survives reboot

Please read
https://www.ibm.com/support/pages/power8-platform-largesend-segmentation-offload-plso-feature-aix-vir
tual-network-environment
for a discussion about the new Platform Large Send feature and how it relates to traditional largesend
described above
Updating and applying the VIOS rules in previous section will not automate this step 26
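A one-shot way to apply and verify both tunables on an upgraded VIOS, as a minimal sketch (run in the root shell via oem_setup_env; ent10 is the SEA name from the SEA diagram earlier in this chapter):

# chdev -l ent10 -a largesend=1 -a large_receive=yes
# lsattr -El ent10 -a largesend -a large_receive    # both should report the new values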
VIOS Virtual Ethernet Tuning – Hypervisor receive buffer
tuning for large servers (Upd 2.5) PowerVM

For typical virtualized setups the default hypervisor Ethernet receive buffers might become congested. The buffers are maintained per interface and the defaults are the same for VIOS and client partition VEN interfaces

Default receive buffers:
Buffer Type   Tiny  Small  Medium  Large  Huge
Min Buffers   512   512    128     24     24
Max Buffers   2048  2048   256     64     64

Change to:
Min Buffers   4096  4096   1024    128    96
Max Buffers   4096  4096   1024    128    96

As a rule of thumb, increase the "Tiny", "Small", "Medium", "Large" and "Huge" Min buffers setting to Max as a starting point on each VIOS VEN interface

For each virtual Ethernet interface in the VIOS and on large partitions execute:
#chdev -l entX -a min_buf_small=4096 -a max_buf_small=4096 -P
#repeat for tiny, medium, large, and huge buffers (spelled out below)
(This corresponds to VIOS "ent4" and "ent5" and virtual machine "ent0" on page 21)

Performance is better when buffers are pre-allocated, rather than allocated dynamically when needed

Updating and applying the VIOS rules in the previous section will automate this on the VIOS. The AIX partition still needs manual tuning 27
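Spelled out for one interface, a minimal sketch of the full set of chdev commands (ent0 is just an example device; the values follow the table above and -P defers the change to the next reboot):

# chdev -l ent0 -a min_buf_tiny=4096   -a max_buf_tiny=4096   -P
# chdev -l ent0 -a min_buf_small=4096  -a max_buf_small=4096  -P
# chdev -l ent0 -a min_buf_medium=1024 -a max_buf_medium=1024 -P
# chdev -l ent0 -a min_buf_large=128   -a max_buf_large=128   -P
# chdev -l ent0 -a min_buf_huge=96     -a max_buf_huge=96     -P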
Storage access mechanism (Upd 2.4)
PowerVM

1. Install the corresponding ODM and PCM or multipath device driver on the VIO server. The default AIX PCM should be used with IBM Storage as SDDPCM is out of support since June 2020

2. Configure the physical access like a normal AIX FC card on the VIO server

3. Connect and configure virtual machines to access the physical devices using AIX PCM and LVM-mirroring in the client partition. NPIV technology together with POWER9/FW940/VIOS 3.1.2+ provides multiple queue support and thus is a viable alternative to vSCSI with AIX 7.2TL5

Bonus: following these guidelines allows for uncomplicated cross-vendor storage upgrades in the solution lifetime without causing complicated data migrations and multipath software issues

Use the "manage_disk_drivers" command to transition from using SDDPCM to using AIX PCM. You can select between different PCMs, or between using a PCM and configuring the disks as non-MPIO disks

The -l option shows a list of all options available:
# manage_disk_drivers -l

By selecting the "AIX_AAPCM" option, the user can instruct AIX to use the AIX default PCM even if SDD PCM is installed:
# manage_disk_drivers -d IBMSVC -o AIX_AAPCM (for IBM SVC family storage devices)

If you need to plan a SDDPCM to AIXPCM migration, follow this procedure:
https://www.ibm.com/support/pages/migrate-aixpcm-using-managediskdrivers-command 28
NPIV Fibre Channel Recommendation (New 2.4)
PowerVM

Increase the number of NPIVs per FC port from 64 to 255 for 32Gb adapters

For 32Gb (2-port) SR Fiber standard with LC connector:
EN1A, for full height systems, S924, S914, E950
EN1B, for low profile, S922, E980

Following the IBM Support guide:
The default max value for max_npivs is still 64, for compliance with older supported HBAs. When you use 16 Gbps or lesser adapters, the maximum limit of 64 still applies

To enable the new maximum value for a 32Gbps adapter, the following action is required:

$ chdev -dev fcs# -perm -attr max_npivs=255

$ shutdown -restart

This change is supported with 32 Gbps FC adapters and available with VIOS levels 2.2.6.x or 3.1.x.
29
Mirroring and MPIO combined, vSCSI
PowerVM

[Diagram: the AIX virtual machine LVM-mirrors rootvg across hdisk0 and hdisk1, each presented as a VTD (vtscsi0/vtscsi1) by Virtual I/O Server 1 and Virtual I/O Server 2 through server SCSI adapters (vhost0) and client SCSI adapters (vscsi0/vscsi1). The client uses the MPIO default PCM in failover-only mode, while each VIOS runs MPIO with the storage PCM in load-balancing mode over its physical adapters to SAN Device 1 and SAN Device 2.]

30
Multipathing on the VIO (vSCSI, MPIO Default AIX PCM)
(Upd 2.4)
PowerVM

Install ODM files to enable MPIO (Default PCM) with the current storage device

For each physical fscsi device:
Change the value of the attribute fc_err_recov to fast_fail
Dynamic Tracking enhances resilience of the solution
# chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail

For each physical hdisk:
To allow the new shortest_queue load balancing over multiple paths, make the following changes, per hdisk:
# chdev -l hdisk0 -a reserve_policy=no_reserve
# chdev -l hdisk0 -a algorithm=shortest_queue

Updating and applying the VIOS rules in the previous section will automate this on the VIOS

[Diagram: Virtual I/O Server 1 maps vtscsi0/vtscsi1 VTDs through vhost0, with MPIO and the storage PCM load balancing over its physical adapters to SAN Device 1 and SAN Device 2]
31
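For a VIOS with many LUNs the two chdev commands can be looped over every disk; a minimal sketch (run as root via oem_setup_env, assuming all hdisks are SAN LUNs handled by the default AIX PCM and that the storage supports shortest_queue; -P defers the change until the next reboot):

for D in $(lsdev -Cc disk -F name); do    # every disk device on this VIOS
    chdev -l "$D" -a reserve_policy=no_reserve -a algorithm=shortest_queue -P
done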
Set FC timeout_policy (vSCSI, MPIO Default AIX PCM)
PowerVM

When running an MPIO device with the round_robin algorithm, disk I/O throughput may be greatly affected by one or more paths failing and then coming back online

AIX implements an hdisk attribute, timeout_policy, for dealing with flaky disk paths. This functionality is integrated in AIX 6.1 TL6 sp5 and 7.1 TL0 sp3 and higher. VIOS 2.2.1.3 (FP25 service pack 1) is based on AIX 6100-07-02 and thus has the updated fileset to support the timeout_policy setting

To allow graceful round robin load balancing over multiple paths, set timeout_policy to fail_path for all physical hdisks in the VIO server:

# chdev -l hdisk0 -a timeout_policy=fail_path

fail_path = the path will be failed on the first occurrence of a command timeout (assuming it is not the last path in the path group). If a path that failed due to transport issues recovers, the path will not be used for read/write I/O until a period of time has expired with no failures on that path

Updating and applying the VIOS rules in the previous section will not automate this on the VIOS; manual tuning is required

Reboot the VIOS!
32
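The same loop idea applies here, since the rules file does not cover timeout_policy; a minimal sketch (root shell on the VIOS, all hdisks assumed to be SAN LUNs, reboot afterwards as noted above):

for D in $(lsdev -Cc disk -F name); do
    chdev -l "$D" -a timeout_policy=fail_path -P
done
lsattr -El hdisk0 -a timeout_policy    # spot-check one disk after the reboot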
AIX

Follow this chapter when you install AIX…

AIX 5.3 Documentation


http://www.ibm.com/support/knowledgecenter/ssw_aix_53/com.ibm.kcwelcome.doc/kc_welcome_53.html?lang=en

AIX 6.1 Documentation


http://www.ibm.com/support/knowledgecenter/ssw_aix_61/com.ibm.aix.base/kc_welcome_61.htm?lang=en

AIX 7.1 Documentation


http://www.ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.base/kc_welcome_71.htm?lang=en

AIX 7.2 Documentation


http://www.ibm.com/support/knowledgecenter/ssw_aix_72/com.ibm.aix.base/welcome_72.htm?lang=en

To optimize workloads on POWER8 and POWER7


Performance Optimization and Tuning Techniques for IBM Power Systems Processors Including IBM POWER8
http://www.redbooks.ibm.com/abstracts/sg248171.html?Open

33
AIX latest support matrix as of NOV 2021
(Upd 2.5)

AIX 5.3 TL12: final service pack is 12; EoSPS 30 April 2012
AIX 6.1 TL9: final service pack is 12; Extended Support; POWER7+ (GA Dec 10), POWER8, POWER9
AIX 7.1 TL5: latest service pack is 09; vNIC, POWER9
AIX 7.2 TL1: final service pack is 06; EoSPS 30 November 2019
AIX 7.2 TL2: final service pack is 06; EoSPS 31 October 2020
AIX 7.2 TL3: final service pack is 07; not recommended, EoSPS 30 September 2021
AIX 7.2 TL4: latest service pack is 04; POWER9 accelerators
AIX 7.2 TL5: latest service pack is 03; Multi Queue Support for NPIV

Introducing AIX 7.3 and software enhancements for the Power10 family:
 128TB file and filesystem capacities for growing data needs
 Integrates use of on-chip NX GZIP with AIX commands and libs
 New IP security protects data in motion (IKEV2, NAT-T)
 On-chip acceleration for logical volume encryption of rootvg
 AI inferencing with python and other opensource packages

Support timeline graph:
http://www14.software.ibm.com/webapp/set2/sas/f/genunix3/AIXcurrent.jpg

Important: pls check
https://www.ibm.com/support/pages/node/6507119 34
AIX Policy
(Upd 2.5)

For new implementations use AIX 7.2 with the latest TL/fixpack

AIX 5.3 can run natively on POWER8 with an extended support contract. It also supports running with PowerHA 6.1. Note that AIX 5.3 on POWER8 runs in SMT2; AIX 7.1 or 7.2 is required for SMT8. Note that AIX 5.3 can run on POWER8 machines in SMT8 mode if installed in a versioned WPAR on AIX 7.2. As of March 2018 versioned WPAR is EOL; order RPQ P91337 if you have a requirement and accept to run it unsupported

AIX 5.3 will not run on POWER9/10
AIX 6.1 will not run on Power10
AIX 6.1 TL9 is certified with POWER8/POWER9, but only supports running in POWER7 (SMT4) mode. AIX 7.1 or 7.2 is required for SMT8. AIX 6.1 is not supported on S922/S924-G models

You can always install the latest SP/TL and retain Oracle support.
AIX 7.2 TL5 sp3 is suited to run all Oracle workloads certified for AIX 7.2
AIX 7.2 TL5 sp3 supports the following Oracle DB: 11.2.0.4, 12.1.0.2, 12.2.0.1, 18.0.0.0, 19.0.0.0

Oracle RAC supports Spectrum Scale 5 (April 2020). Use the latest patch level with each release
Final patch level for Spectrum Scale 4.2.3 is 4.2.3.24 (September 2020)
Latest patch level for Spectrum Scale 5.0 is 5.0.5.10 (Sep 2021)
Latest patch level for Spectrum Scale 5.1 is 5.1.2.0 (Oct 2021)
See the IBM Spectrum Scale Software Version Recommendation Preventive Service Planning

35
Recommended AIX tuning (Upd 1.15)

Verify the following lines in the /etc/security/limits file; -1 represents "unlimited" (one way to apply them is sketched after the list):
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
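One way to apply the stanza without editing the file by hand is chsec; a minimal sketch that targets the default stanza (per-user stanzas, if present, override it):

chsec -f /etc/security/limits -s default -a fsize=-1 -a core=2097151 -a cpu=-1 -a data=-1 -a rss=-1 -a stack=-1 -a nofiles=-1
grep -p default /etc/security/limits    # verify the resulting stanza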

Please allow 16384 processes for each user by setting the "maxuproc" parameter to 16384. Use the command:
/usr/sbin/chdev -l sys0 -a maxuproc=16384

Make sure "aio_maxreqs" is set to 65536 (64K) by issuing "ioo -a | grep aio_maxreqs".
Set it with "# ioo -p -o aio_maxreqs=65536"
(AIX 5.3 only)
36
Mandatory AIX 5.3 memory tuning

Bad memory settings cause problems with AIX 5.3. Mandatory exception approval is required if these settings are not updated for any reason

Note: these are the "official" suggestions by Oracle and SAP as well

For AIX 6.1 and 7.1/7.2 these settings are the default

If these values are not set, use the command # vmo -p -o <parameter>=<new value>

Recommended AIX 5.3 tuning:
minperm% = 3
maxperm% = 90
maxclient% = 90
lru_file_repage = 0 *
strict_maxclient = 1
strict_maxperm = 0
page_steal_method = 1 *

* require reboot to take effect

37
AIX Network tuning

Below are recommended settings with Oracle RAC in AIX. They are suitable for most other workloads unless contradicted by application documentation

The tcp parameters have been increased compared to the previous document version to accommodate good performance when doing network backups and other general tasks

ipqmaxlen = 512
rfc1323 = 1
sb_max = 1310720
tcp_recvspace = 262144
tcp_sendspace = 262144
udp_recvspace = 655360 (10x udp_sendspace)
udp_sendspace = 65536

Find out the current values for the above parameters using the "no -a" command
For setting ipqmaxlen use "no -r -o ipqmaxlen=512"
For setting other parameters use "no -p -o parameter=value"

Application documentation takes precedence over these settings; should a certain application demand a different setting, document and use that instead
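Put together as commands, a minimal sketch of applying the values above (no -p persists across reboots; ipqmaxlen is a reboot-only tunable, hence -r):

no -r -o ipqmaxlen=512
no -p -o rfc1323=1
no -p -o sb_max=1310720
no -p -o tcp_recvspace=262144
no -p -o tcp_sendspace=262144
no -p -o udp_recvspace=655360
no -p -o udp_sendspace=65536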

38
Automate largesend setting with mtu_bypass
(AIX6.1 TL7 sp1+ / 7.1 TL1 sp1+) (AIX 7.2 default) (Upd 2.3)

Largesend increases virtual Ethernet throughput performance and reduces processor utilization. Starting
with AIX 6.1 TL7 sp1 and AIX 7.1 TL1 sp1, the operating system supports the mtu_bypass
attribute for the shared Ethernet adapter, providing a persistent way to enable the largesend feature

To determine if the operating system supports the mtu_bypass attribute run the following lsattr
command:

# lsattr -El enX |grep by_pass

If the mtu_bypass attribute is supported, the above command will return:


mtu_bypass off Enable/Disable largesend for virtual Ethernet True

Enable largesend on all AIX en interfaces through:


# chdev -l enX -a mtu_bypass=on

In AIX 7.2 mtu_bypass is enabled by default, but partitions updated from an older release to AIX 7.2
will not have mtu_bypass enabled by default!

39
Adjust vSCSI parameters on each client partition

Load Balancing/Path priority:
# lspath -AHE -l hdisk0 -p vscsi0
attribute  value  description  user_settable
priority   1      Priority     True
# chpath -l hdisk0 -p vscsi0 -a priority=2

As a rule of thumb make odd numbered hdisks use vscsi1 as primary path (see the loop sketched below)

health_check attribute:
# chdev -l hdisk0 -a hcheck_interval=60 -P

Path Timeout and Error recovery:
# chdev -l vscsi0 -a vscsi_path_to=30 -P
# chdev -l vscsi0 -a vscsi_err_recov=fast_fail -P

Reboot the partition!

[Diagram: AIX virtual machine 1 and AIX virtual machine 2 each have a rootvg hdisk0 using the MPIO default PCM in failover-only mode over vscsi0 and vscsi1 (vhost0/vhost1 on Virtual I/O Server 1 and Virtual I/O Server 2). VM 1 keeps priority 1 on vscsi0 and priority 2 on vscsi1; VM 2 is set the other way around to spread load across the two VIOS.]
40
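Applied across all vSCSI disks of a client partition, a minimal sketch (the description match assumes an English locale, and the odd/even split simply implements the rule of thumb above; -P takes effect at the next reboot):

for D in $(lsdev -Cc disk | grep "Virtual SCSI Disk Drive" | awk '{print $1}'); do
    chdev -l "$D" -a hcheck_interval=60 -P
    N=${D#hdisk}
    if [ $((N % 2)) -eq 1 ]; then
        chpath -l "$D" -p vscsi0 -a priority=2    # demote vscsi0 so odd disks prefer vscsi1
    fi
done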
Virtual I/O - vSCSI Client Queue Depth Tuning
AIX 5.3/6.1/7.1/7.2 MANDATORY!

Increase the queue_depth for each hdisk in the virtual machine to match the queue depth of the physical adapter in the VIOS (up to 32)!

– This allows outstanding I/O on the device from the client partition and is key to achieve performance in virtualized setups.

– Example 1: IBM Flash has a default FC queue depth of 64. Make sure the vscsi default is increased from 3 to 64 on the vscsi client adapter! (in each LPAR)

– Example 2: HDS based storage devices use a default queue depth of 2. It is imperative to increase it to 32, both on the VIO physical device and the vscsi client adapter

Queue Depth (vscsi + physical device):
Storage                     Default  Recommended
Netapp                      12       12
IBM Flash                   64       64
DS8000                      20       20
XIV                         32       80
SVC                         32       32
V7000                       8        20
EMC                         16       16
HDS (HP/SUN) 9990, XP24000  2        32
vSCSI (LPAR)                3        12-64

Update each hdisk in each virtual machine. Syntax for tuning (a worked example follows below):
# chdev -l hdiskX -a queue_depth=Y
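As a worked example for one of the rows above, a minimal sketch assuming an IBM Flash back end where both the physical and the vSCSI queue depth target is 64 (adjust the value per the table for other storage):

for D in $(lsdev -Cc disk | grep "Virtual SCSI Disk Drive" | awk '{print $1}'); do
    chdev -l "$D" -a queue_depth=64 -P    # applied at the next reboot or varyoff/varyon
done
lsattr -El hdisk0 -a queue_depth          # verify afterwards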
41
PowerHA

PowerHA

42
PowerHA Recommendations (Upd 2.5)
PowerHA

PowerHA 6.1 is no longer supported. New implementations should focus on PowerHA 7.2.5 sp1 or 7.2.6 when available on AIX (review the support and service timelines)

Configure PowerHA with poll uplink instead of netmon.cf (see next page)

PowerHA 7.2.6 for AIX (available Dec 10 2021):
•Logical Volume Manager (LVM) encryption support
•Power10 software and hardware
•GUI new functionality

PowerHA V7.2.5 for AIX is enhanced with:
•Geographic Logical Volume Manager (GLVM)
• Enables clients to configure and orchestrate multiple parallel GLVM instances from a source to a target, particularly for cloud deployments
•Graphical User Interface (GUI) improvements
•Cloud tiebreaker via lock file for on premise or cloud deployments
•Smart Assist for PowerHA currency support for IBM Tivoli Directory Server, IBM Tivoli Storage Manager, Oracle, IBM Db2, and SAP NetWeaver

Important: please reference
https://www.ibm.com/support/pages/node/6507119
43
PowerHA and poll uplink (avoid netmon.cf) (Upd 1.19)
PowerHA

Netmon.cf is a traditional method for virtual machines to detect network failure by pinging safe gateways outside the environment

PowerHA could until recently not rely on the physical link status of the adapter (veth) because it is always up even if the underlying SEA has lost contact with the outside world

If some of these ping packets get discarded en route to the virtual machine it may trigger an involuntary PowerHA failover, even though there is nothing wrong with the network

Starting with VIOS 2.2.3.4+ the SEA adapter has a new functionality, "poll uplink"

On AIX 7.1 TL3, TL4, TL5, AIX 7.2 and AIX 6.1 TL9 the poll uplink attribute is introduced on the veth adapter on client partitions. This allows PowerHA to detect network failure events without pinging a default gateway

PowerHA installed in virtual machines set up in accordance with the best practices should be configured to propagate uplink status to PowerHA. Netmon.cf should be removed/not used

On the client partitions enable poll_uplink on all ent[0,1,2..] interfaces:

# chdev -l entX -a poll_uplink=yes -P

Read more background summarized by Rob McNelly here:
http://www.ibmsystemsmag.com/Blogs/AIXchange/October-2015/An-Underutilized-PowerHA-Option/
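A minimal sketch for enabling it on every virtual Ethernet adapter of a client LPAR in one pass (assumes all enX interfaces are virtual; -P means the setting becomes active after the next reboot):

for IF in $(ifconfig -l); do
    case $IF in
        en[0-9]*) chdev -l "ent${IF#en}" -a poll_uplink=yes -P ;;    # en0 -> ent0, en1 -> ent1, ...
    esac
done
lsattr -El ent0 -a poll_uplink    # verify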

44
Rootvg Failure Monitoring (Upd 2.3)
PowerHA

AIX has a new "critical volume group" capability which will monitor for the loss or failure of a volume group. You can apply this to any volume group, including rootvg. If applied to rootvg, then you can monitor for the loss of the root volume group

This feature may be useful if your AIX LPAR experiences a loss of SAN connectivity, e.g. total loss of access to SAN storage and/or all SAN switches. Typically, when this happens, AIX will continue to run, in memory, for a period and will not immediately crash. This may not be desirable behavior

This feature is not enabled by default. Enable it with chvg -r y rootvg

You want this feature enabled for your HA clusters so that they respond appropriately to loss of the root volume group and initiate a failover

PowerHA v7.2 takes advantage of this functionality specifically for rootvg. If the critical VG option is set for rootvg and it loses access to the quorum set of disks (or all disks if quorum is disabled), instead of moving the VG to an offline state, the node is crashed and a message is displayed on the console

smitty sysmirror -> Custom Cluster Configuration -> Events -> System Events -> Change/Show Event Response (smitty cm_change_show_sys_event)

* Event Name            ROOTVG                 +
* Response              Log event and reboot   +
* Active                Yes                    +

More details here:
http://gibsonnet.net/blog/cgaix/html/AIX%20rootvg%20failure%20monitoring.html
45
IO Pacing - PowerHA/HACMP Policy
PowerHA
The AIX 6.1 default value for maxpout is 8193, and minpout is 4096. To disable I/O
pacing, simply set them both to zero

Start with the AIX 6.1 default (required for Oracle RAC) also for AIX 5.3 (if you want to
update the default)

Setting minpout/maxpout to 24/33 is strongly discouraged

Contact IBM for all cases where customers or ISVs demand a different value for
verification. (24/33 is always incorrect)

https://www.ibm.com/support/knowledgecenter/ssw_aix_72/com.ibm.aix.performance/disk_io_pacing.htm
http://www.oracle.com/technetwork/database/clusterware/overview/rac-aix-system-stability-131022.pdf
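minpout/maxpout are attributes of sys0, so they can be checked and set with lsattr/chdev; a minimal sketch restoring the AIX 6.1 defaults (use 0/0 only if you deliberately want pacing disabled):

# lsattr -El sys0 -a minpout -a maxpout
# chdev -l sys0 -a maxpout=8193 -a minpout=4096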

46
Linux and IBM i notes on mtu_bypass and SEA Performance (Upd 2.5)

Newer AIX, Linux and IBM i versions can detect if the hypervisor supports the enhanced largesend feature and
allow the TCP layer to transfer large packets (Hypervisor Assisted Platform Largesend)

Linux versions are able to co-exist with manual mtu_bypass largesend and receive offload on the SEA. This is true
for SUSE Linux 11sp4+ (HANA version), Red Hat 7.2+ and Red Hat Fedora 22+. The required driver for
supporting explicit mtu_bypass is ibmveth 1.05

VIOS 3.1.3.10 is recommended; if on VIOS 2.2.3.61/60 and lower levels it is critical to install the 2.2.3.65
Minipack from IBM support. This is important for AIX, IBM i and Linux alike

If you run Linux and IBM i workloads that do not support largesend, or on a VIOS without
IV72825/VIOS 2.2.3.65, very bad network performance will be observed. If the workloads cannot
be upgraded to later releases, create a second Virtual Ethernet Switch and configure a SEA without
mtu_bypass for those VMs on private physical network ports

Use of SR_IOV can also be considered if practical in this instance for Linux and IBM i

47
FAQ

48
Optimize clock speed!
POWER7+/POWER7 (Animation)

Press Play!

[Animated screenshots: the default Power Management Mode setting (wrong) and the change to the correct favor-performance setting]
49
FAQ (1/3) (Upd 2.4)

1. On page 27 of the presentation, you usually mentioned (Avoid NPIV except where virtual TAPE is desired),
kindly advise the reason. Why did you change your mind in 2021?

The most critical aspect is systems availability over time and simplicity (which contributes to availability). With NPIV you have to install
SAN device drivers in the virtual machine. These change over time and differ between disk models/vendors. Most disk vendors now
support the use of AIX_PCM, reducing the inconvenience.

The other aspect is that there is a limitation in the number of WWN pairs that can be generated from a physical card, which makes planning
more critical. 32Gb adapters now support increasing the limit from 64 to 256 slots.

vSCSI with correct default tuning as per the presentation should equal NPIV in performance and avoid these problems. Since POWER9,
FW940 and later VIOS 3.1.2 and AIX 7.2TL5, a new multi queue acceleration is present for NPIV, hence vSCSI will now not reach the
same performance levels anymore. It is time to consider defaulting to NPIV.

2. You advise against Network Interface Backup with EtherChannel configured in each virtual machine, kindly
advise why we can't use this approach?

We should not optimize where it is not required from a performance perspective. Using NIB adds complexity on the client LPAR (Especially
with more than ONE VLAN in the solution). Using NIB can also make PowerHA more complicated to understand/operate. From a
design perspective there should be enough adapters available to satisfy the performance requirement using Shared Ethernet Adapter
Failover model.

Having a proper setup with shared CPU resources, CPU consumption in the VIOS becomes a non issue.

Using SEA failover as described in the presentation should be the default and will automatically load balance network traffic in VIOS
2.2.1.3 (FP25 service pack 1) and later.

If there is a specific reason for using NIB, motivate it and document it as a deviation. The default should be SEA Failover. Do not complicate
LPAR setup where not needed.
50
FAQ (2/3) (Upd 2.1)

How do I run nmon to collect disk service times, top process cpu consumption, etc?
STG Lab services recommends the following parameters for nmon data collection:

/usr/bin/nmon -M -^ -f -d -T -A -s 60 -c 1435 -m /tmp/nmonlog

This will invoke nmon every minute and continue for 24 hours capturing vital disk access time data along with top processes

-d includes the Disk Service Time section in the view

-T includes the top processes in the output and saves the command line arguments into the UARG section

-^ includes the Fibre Channel (FC) sections

On the HMC, there is an "Allow performance information collection" checkbox on the processor configuration tab. Select this checkbox on
the partition that you want to collect this data. If you are using IVM, you use the lssyscfg command, specifying the all_perf_collection
(permission for the partition to retrieve shared processor pool utilization) parameter. Valid values for the parameter are 0, do not allow
authority (the default) and 1, allow authority.

Shall I use AIX MPIO or vendor supplied multipathing (HDLM, PowerPath, SecurePath)?

HDS, HP, EMC and IBM provide ODM definitions for AIX default MPIO as well as selling their own solution. I would
recommend using AIX MPIO with the Default PCM for all vendor storage solutions. Since SDDPCM is not supported on
POWER9 we should start to migrate off SDDPCM. For DS4000, 5000, 3950, SDDPCM should not be used.

51
FAQ (3/3)
Unlike earlier releases, Oracle 11.2.0.3 by default uses an "unshared 1TB memory segment".
This has caused performance degradation for some customers until disabled and the partition restarted or a fix implemented.

Please reference these AIX program upgrades to fix the problem:

AIX 7.1:

http://www-01.ibm.com/support/docview.wss?uid=isg1IV23859

"Applications run slowly with High System Time.


Users running Oracle 11.2.0.3 on AIX V7 are particularly
susceptible to this."

Fix integrated in the 7.1TL1sp5 service pack, GA on 7/18/2012

AIX 6.1:

http://www-01.ibm.com/support/docview.wss?uid=isg1IV23851

Fix integrated in the 6.1TL7sp5 service pack released 7/18/2012

Workaround:

If you cannot apply these PTFs or service packs (for any reason) you can consider the following workaround:

Unshared aliases can be disabled via the "vmo" tunable:

vmo -r -o shm_1tb_unsh_enable=0 # Must do "bosboot -a" and then do "sync;sync;sync" and then "reboot -q".

Please note: "The shm_1tb_unsh_enable must not be changed unless VMM and/or performance development asks to change it."

52
Obsolete information

When implementing on previous VIOS / AIX 5.3 or older 6.1 or 7.1 releases use these slides to:

1. Automate largesend settings on each client partition

2. Hypervisor reserved memory on large POWER8 servers

3. Lock AIX 6.1 kernel in RAM to make it resilient to paging

4. Special Notice regarding FC card support in 2020
53
Automate largesend setting with a rc.ifconfig script
(Traditional Method)

If the mtu_bypass attribute in the AIX chapter isn't available:

For all en(x) interfaces in Virtual Machines (LPARs): Use "ifconfig"

For Virtual Ethernet interfaces, ifconfig enX largesend

The script on the next page will enable largesend on all virtual ethernet interfaces in a virtual machine

1. Place the script on the next page in /local/etc (or other documented location)

2. Enable execution with inittab (on all Virtual Machines / LPARs):

# mkitab -i rctcpip 'rcifconfig:2:wait:/local/etc/rc.ifconfig > /dev/console 2>&1'  # Enable largesend for virt enet adapters

Largesend will now be automatically set for all ethernet interfaces at system boot and no
further action is required.

54
rc.ifconfig script

#!/bin/ksh
#
# rc.ifconfig
# Mon Jun 9 10:40:33 CEST 2008, B.Roden
#
#-------------------------------------------------------------------
# Enable largesend for each discovered interface...
#
DEBUG=$1
for IF in $(/usr/sbin/ifconfig -a|awk -F: '/^en/{print $1}');do
  set -x
  $DEBUG /usr/sbin/ifconfig $IF largesend
  set +x
done
55
E870C/E880C Hypervisor reserved memory (Moved to obsolete information tab ver 2.3)

On E870/E880 machines the recommendation is to disable "I/O Adapter Enlarged Capacity" to free up
hypervisor memory. With PowerVM 2.2.5+ and FW860+ this is no longer required
Power off the machine, log on to the ASMI menu on the HMC -> I/O Adapter Enlarged Capacity:

1. Disable I/O Adapter Enlarged Capacity by unselecting the tick box

2. Power on the server

3. Observe Hypervisor memory consumption

[Screenshot: ASMI "I/O Adapter Enlarged Capacity" panel with the tick box cleared]
56
Lock AIX 6.1 kernel in RAM to make it resilient to paging (New 1.10)

Beginning with AIX 7.1, the AIX kernel memory is pinned by default

To improve resilience towards overload and slow response times caused by paging out the AIX kernel, please update the following in accordance with Oracle RAC on AIX best practices* on AIX 6.1 TL6 onwards:

# vmo -r -o vmm_klock_mode=2;

Modification to restricted tunable vmm_klock_mode, confirmation required yes/no yes

Setting vmm_klock_mode to 2 in nextboot file

Warning: some changes will take effect only after a bosboot and a reboot

Run bosboot now? yes/no yes

bosboot: Boot image is 45198 512 byte blocks.

Warning: changes will take effect only at next reboot


57
Special Notice 2020 February (Upd 2.4)

IJ22290 - I/O failures on LPARs using certain FC adapters

https://www-01.ibm.com/support/docview.wss?uid=isg1SSRVPOVIRTUVIOS200125-1534

I/O failures or hangs can occur when using the following Fibre Channel adapters on AIX or VIOS:

PCIe3 32Gb 2-port Fibre Channel Adapter (FC: EN1A/EN1B; CCIN: 578F)

PCIe3 16Gb 4-port Fibre Channel Adapter (FC: EN1C/EN1D; CCIN: 578E)

PCIe3 16Gb 2-port Fibre Channel Adapter (FC: EN0A/EN0B; CCIN: 577F)

Only VIOS 3.1.1.10 and AIX 7.2 TL4 are affected.

This problem is addressed by VIOS 3.1.1.21, hence if you follow best practice for VIOS patch level shared in this
presentation you are not affected. The fix for AIX 7.2TL4sp2 was released in July 2020 and this section is now
moved to the obsolete section

58
