
Lenovo® X6 Systems Solution™ for SAP HANA®

Implementation Guide for System x® X6 Servers

Lenovo Development for SAP Solutions


In cooperation with: SAP AG
Created on 9th December 2016 09:40 – Version 1.12.121-16
© Copyright Lenovo, 2016
Technical Documentation

X6 Systems Solution for SAP HANA Platform Edition


Dear Reader,
We wish to explicitly announce that this guide book is for the System X6 based servers for SAP HANA
Platform Edition (Type 6241 Model AC3/AC4/Hxx) based on Intel® Xeon® IvyBridge, Haswell or Broad-
well EX Family of Processors.
System eX5 based servers for SAP HANA Platform Edition (models 7147-H** and 7143-H**) are not
discussed in this manual.
The Lenovo Systems X6 solution for SAP HANA Platform Edition is built on System X6 architecture building blocks that provide a highly scalable infrastructure for the SAP HANA Platform Edition appliance software. The System x3850 X6 and x3950 X6 servers, together with software such as the IBM General Parallel File System™ (GPFS), are used to run the SAP HANA Platform Edition appliance software.
Lenovo has created orderable models upon which you may install and run the SAP HANA Platform Edi-
tion appliance software according to the sizing charts coordinated with SAP AG. For each workload type,
special ordering options for the System x3850 X6 and System x3950 X6 Type 6241 Models AC3/AC4/Hxx
have been approved by SAP and Lenovo to accommodate the requirements for the SAP HANA Platform
Edition appliance software.
The Lenovo – SAP HANA Development Team


Copyrights and Trademarks


© Copyright 2010-2016 Lenovo.
Lenovo may not offer the products, services, or features discussed in this document in all countries.
Consult your local Lenovo representative for information on the products and services currently available
in your area. Any reference to a Lenovo product, program, or service is not intended to state or imply
that only that Lenovo product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any Lenovo intellectual property right may be used instead.
However, it is the user’s responsibility to evaluate and verify the operation of any other product, program,
or service.
Lenovo may have patents or pending patent applications covering subject matter described in this doc-
ument. The furnishing of this document does not give you any license to these patents. You can send
license inquiries, in writing, to:
Lenovo (United States), Inc.
1009 Think Place - Building One
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EI-
THER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WAR-
RANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.
Neither this documentation nor any part of it may be copied or reproduced in any form or by any means
or translated into another language, without the prior consent of Lenovo.
This document could include technical inaccuracies or errors. The information contained in this document
is subject to change without any notice. Lenovo reserves the right to make any such changes without
obligation to notify any person of such revision or changes. Lenovo makes no commitment to keep the
information contained herein up to date.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not
in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part
of the materials for this Lenovo product, and use of those Web sites is at your own risk.
Information concerning non-Lenovo products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement
of such products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken
from publicly available information, including vendor announcements and vendor worldwide home pages.
Lenovo has not tested these products and cannot confirm the accuracy of performance, capability, or any
other claims related to non-Lenovo products. Questions on the capability of non-Lenovo products should
be addressed to the supplier of those products.

Edition Notice: 9th December 2016


This is the thirteenth published edition of this document. The online copy is the master.


Lenovo, the Lenovo logo, System x and For Those Who Do are trademarks or registered trademarks
of Lenovo in the United States, other countries, or both. Other product and service names might be
trademarks of Lenovo or other companies.
A current list of Lenovo trademarks is available on the web at:
http://www.lenovo.com/legal/copytrade.html.
IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in
the United States and/or other countries.
Adobe and PostScript are either registered trademarks or trademarks of Adobe Systems Incorporated in
the United States and/or other countries.
Fusion-io is a registered trademark of Fusion-io, in the United States.
Intel, Intel Xeon, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or
its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
SAP HANA is a trademark of SAP Corporation in the United States, other countries, or both.
Other company, product or service names may be trademarks or service marks of others.


Contents
1 Abstract 1
1.1 Preface & Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Disclaimer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 SAP HANA Platform Edition Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.4 Appliance Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4.1 Determining Appliance Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4.2 Appliance Change Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.5 Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7.1 Icons Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.7.2 Code Snippets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.8 Documentation Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Solution Overview 8
2.1 The SAP HANA Appliance Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Definition of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3 Hardware Configurations 9
3.1 Workload Optimized Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.1 Hardware Layout and Filesystem Options . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 Tailored Datacenter Integration Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2.1 Hardware Layout and Filesystem Options . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 SAP HANA Platform Edition T-Shirt Sizes . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Single Node versus Clustered Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.4.1 Network Switch Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.5 SAP HANA Optimized Hardware Configurations . . . . . . . . . . . . . . . . . . . . . . . 15
3.5.1 System x3850 X6 - 2 socket - Single Node Configurations . . . . . . . . . . . . . . 15
3.5.2 System x3850 X6 - 4 socket - Single Node Configurations . . . . . . . . . . . . . . 15
3.5.3 System x3950 X6 - 4 socket - Single Node Configurations . . . . . . . . . . . . . . 16
3.5.4 System x3950 X6 - 8 socket - Single Node Configurations . . . . . . . . . . . . . . 16
3.5.5 System x3950 X6 - 4 socket - Single Flex-Node Configurations . . . . . . . . . . . 17
3.5.6 System x3950 X6 - 8 socket - Single Flex-Node Configurations . . . . . . . . . . . 17
3.5.7 System x3850 X6 - 2 and 4 socket - Cluster Node Configurations . . . . . . . . . . 18
3.5.8 System x3950 X6 - 4 socket - Cluster Node Configurations . . . . . . . . . . . . . . 18
3.5.9 System x3950 X6 - 8 socket - Cluster Node Configurations . . . . . . . . . . . . . . 19
3.5.10 System x3950 X6 - 8 socket - Cluster Flex-Node Configurations . . . . . . . . . . . 19
3.6 All Flash Solution for SAP HANA Hardware Configurations . . . . . . . . . . . . . . . . . 20
3.6.1 System x3850 X6 - 2 socket - Single Node - All Flash Configurations . . . . . . . 20
3.6.2 System x3850 X6 - 4 socket - Single Node - All Flash Configurations . . . . . . . . 21
3.6.3 System x3950 X6 - 4 socket - Single Node - All Flash Configurations . . . . . . . . 21
3.6.4 System x3950 X6 - 8 socket - Single Node - All Flash Configurations . . . . . . . . 22
3.6.5 System x3850 X6 - 4 socket - Cluster Node - All Flash Configurations . . . . . . . 23
3.6.6 System x3950 X6 - 4 socket - Cluster Node - All Flash Configurations . . . . . . . 23
3.6.7 System x3950 X6 - 8 socket - Cluster - All Flash Configurations . . . . . . . . . . 23
3.7 Card Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.7.1 Network Interface Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.7.2 Slots for additional Network Interface Cards . . . . . . . . . . . . . . . . . . . . . . 24
3.7.3 RAID Adapter Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

4 Storage Configuration 29
4.1 RAID Setup for GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29


4.2 RAID Setup for GPFS All Flash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32


4.3 RAID Setup for XFS accelerated with bcache . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.4 RAID Setup for XFS All Flash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

5 UEFI settings 38

6 Networking 41
6.1 Networking Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.2 Customer Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.3 Network Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.3.1 Clustered Installations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.3.2 Customer Site Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.3.3 Network Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.4 Setting up the Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.4.1 Basic Switch Configuration Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.4.2 Advanced Setup of the Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.4.3 Disable Spanning Tree Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.4 Disable Default IP Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.5 Enable L4Port Hash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.6 Disable Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.7 Add Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.8 VLAN configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.9 Save changes to switch FLASH memory . . . . . . . . . . . . . . . . . . . . . . . . 56
6.4.10 Inter-Site Portchannel Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.4.11 Save and Restore Switch Configuration . . . . . . . . . . . . . . . . . . . . . . . . 58
6.4.12 Generation of Switch Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.5 Setting up Networking on the Server Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . 60
6.5.1 Jumbo Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
6.5.2 Using the wicked Framework for Network Configuration in SLES12 . . . . . . . . . 60
6.5.3 Bonding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.5.4 VLAN tagging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

7 Guided Install of the Lenovo Solution 66

8 Disaster Recovery 67
8.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.1.1 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.1.2 Architectural overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
8.1.3 Three site/Tiebreaker node architecture . . . . . . . . . . . . . . . . . . . . . . . . 70
8.2 Mixing eX5/X6 Server in a DR Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.3 Hardware Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.3.1 Site A and B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.3.2 Tiebreaker Site C (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
8.3.3 Acquire TCP/IP addresses and host names . . . . . . . . . . . . . . . . . . . . . . 71
8.3.4 Network switch setup (GPFS and SAP HANA network) . . . . . . . . . . . . . . . 72
8.3.5 Link between site A and B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
8.3.6 Network integration into customer infrastructure . . . . . . . . . . . . . . . . . . . 72
8.3.7 Setup network connection to tiebreaker node at site C (optional) . . . . . . . . . . 72
8.4 Software Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.4.1 GPFS configuration prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.4.2 GPFS Server configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.4.3 GPFS Disk configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
8.4.4 Filesystem Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
8.4.5 SAP HANA appliance installation . . . . . . . . . . . . . . . . . . . . . . . . . . . 78


8.4.6 Tiebreaker node setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80


8.4.7 Verify Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
8.5 Extending a DR-Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8.6 Mixing eX5/X6 Server in a DR Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8.6.1 Hardware Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8.6.2 GPFS Part 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
8.6.3 HANA Backup Node Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
8.6.4 GPFS Part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
8.6.5 HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
8.7 Using Non Productive Instances on Inactive DR Site . . . . . . . . . . . . . . . . . . . . . 86
8.7.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
8.7.2 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

9 Mixed eX5/X6 Environments 90


9.1 Mixed eX5/X6 HA Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
9.2 Mixed eX5/X6 DR Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

10 Special Single Node Installation Scenarios 91


10.1 Single Node with HA Installation with Side-car Quorum Solution . . . . . . . . . . . . . . 91
10.1.1 Installation of SAP HANA appliance single node with HA . . . . . . . . . . . . . . 92
10.1.2 Prepare quorum node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
10.1.3 Quorum Node Network Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
10.1.4 Adapt hosts file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
10.1.5 SSH configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
10.1.6 Quorum Node IBM GPFS setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
10.1.7 Quorum Node IBM GPFS installation . . . . . . . . . . . . . . . . . . . . . . . . . 96
10.1.8 Add quorum node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
10.1.9 Create descriptor disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
10.1.10 Add disk to file system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
10.1.11 Verify Cluster Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
10.1.12 Installation of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
10.2 Single Node with stretched HA Installation . . . . . . . . . . . . . . . . . . . . . . . . . . 99
10.2.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 100
10.2.2 Installation of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
10.3 Single Node with DR Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
10.3.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 103
10.3.2 Optional: Expansion Storage Setup for Non-Production Instance . . . . . . . . . . 103
10.4 Single Node with HA and DR Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
10.4.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 104
10.4.2 Optional: Expansion Storage Setup for Non-Production Instance . . . . . . . . . . 106
10.5 Single Node DR Installation with SAP HANA System Replication . . . . . . . . . . . . . 107
10.5.1 OS Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
10.5.2 Installation of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
10.5.3 Optional: Expansion Storage Setup for Non-Production Instance . . . . . . . . . . 109
10.6 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication 110
10.6.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 111
10.6.2 Installation of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
10.7 Expansion Storage Setup for Non-productive SAP HANA Instance . . . . . . . . . . . . . 113
10.7.1 GPFS based installations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
10.7.2 XFS accelerated with bcache based installations . . . . . . . . . . . . . . . . . . . 114

11 Virtualization 118

12 Upgrading the Hardware Configuration 119


12.1 Power Policy Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120


12.2 Reboot Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
12.3 Adding storage (GPFS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
12.3.1 Adding storage via Storage Expansion (D1024, E1024, D1224 or EXP2524) . . . . 121
12.3.2 Adding storage on second internal M5210 controller . . . . . . . . . . . . . . . . . 122
12.3.3 Configure RAID array(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
12.3.4 Deciding for a CacheCade RAID Level . . . . . . . . . . . . . . . . . . . . . . . . . 123
12.3.5 Configuring RAID array when CacheCade is not yet configured . . . . . . . . . . . 123
12.3.6 Configuring RAID array with existing CacheCade . . . . . . . . . . . . . . . . . . 124
12.3.7 Changing the CacheCade RAID Level . . . . . . . . . . . . . . . . . . . . . . . . . 124
12.3.8 Configuring GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
12.4 Adding storage (XFS accelerated with bcache) . . . . . . . . . . . . . . . . . . . . . . . . 125
12.4.1 Adding storage via Storage Expansion (D1024, E1024, D1224 or EXP2524) . . . . 125
12.4.2 Adding storage on second internal M5210 controller . . . . . . . . . . . . . . . . . 126
12.4.3 Prepare Server for Changes in bcache Layout . . . . . . . . . . . . . . . . . . . . . 126
12.4.4 Configure RAID array(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
12.4.5 Reconfigure first RAID controller SSD RAID array . . . . . . . . . . . . . . . . . . 128
12.4.6 Reconfigure Software RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
12.5 Adding memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
12.6 Adding CPU Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

13 Software Updates 134


13.1 Warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
13.2 General Update Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
13.2.1 Single Node Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
13.2.2 Cluster Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
13.2.3 Common Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
13.3 Update Firmware and Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
13.3.1 Lenovo UpdateXpress System Pack Installer . . . . . . . . . . . . . . . . . . . . . . 138
13.3.2 Update Mellanox Network Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
13.3.3 Updating ServeRAID Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
13.4 Linux Kernel Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
13.4.1 SLES Kernel Update Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
13.4.2 RHEL versionlock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
13.4.3 RHEL Kernel Update Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
13.4.4 Kernel Update Procedure for XFS with bcache . . . . . . . . . . . . . . . . . . . . 148
13.4.5 Disruptive Cluster and Single Node Kernel Update Procedure for GPFS . . . . . . 148
13.5 Updating & Upgrading GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
13.5.1 Supported Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
13.5.2 Disruptive Cluster and Single Node Upgrade . . . . . . . . . . . . . . . . . . . . . 150
13.5.3 Rolling Cluster Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
13.5.4 GPFS 3.5 Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
13.5.5 GPFS 4.1 & 4.2 Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
13.5.6 Upgrade from GPFS 4.1 to 4.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
13.5.7 Upgrade from GPFS 3.5 to 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
13.5.8 Upgrade from GPFS 3.5 to 4.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
13.6 Update SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

14 Operating System Upgrade 162


14.1 Rolling or Non-Rolling Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
14.2 Upgrade SLES for SAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
14.2.1 Upgrade Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
14.2.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
14.2.3 Shutting down SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164


14.2.4 Shutting down IBM GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164


14.2.5 Upgrade of IBM GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
14.2.6 Update Mellanox Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
14.2.7 Upgrading SLES for SAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
14.2.8 Mandatory Kernel Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
14.2.9 Reinstall Mellanox Software Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
14.2.10 Updating the ServeRAID driver if necessary . . . . . . . . . . . . . . . . . . . . . . 166
14.2.11 Installing Compatibility Packages . . . . . . . . . . . . . . . . . . . . . . . . . 166
14.2.12 Recompile Linux Kernel Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
14.2.13 Adapting Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
14.2.14 Start IBM GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
14.2.15 Start SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
14.2.16 Check Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
14.3 Upgrade RHEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
14.3.1 Upgrade Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
14.3.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
14.3.3 Shutting down SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
14.3.4 Shutting down IBM GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
14.3.5 Upgrade of IBM GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
14.3.6 Update Mellanox Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
14.3.7 Upgrading Red Hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
14.3.8 Mandatory Kernel Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
14.3.9 Updating the ServeRAID driver if necessary . . . . . . . . . . . . . . . . . . . . . . 171
14.3.10 Installing Compatibility Packages . . . . . . . . . . . . . . . . . . . . . . . . . 171
14.3.11 Recompile Linux Kernel Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
14.3.12 Adapting Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
14.3.13 Start IBM GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
14.3.14 Start SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
14.3.15 Check Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

15 System Check 173


15.1 System Login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
15.2 Basic System Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
15.2.1 Cluster-wide gathering of support data . . . . . . . . . . . . . . . . . . . . . . . . . 177
15.2.2 Automatic exchange of support script within the cluster . . . . . . . . . . . . . . . 177
15.3 Check Installation Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
15.3.1 Basic Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
15.3.2 Test Selection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
15.4 Additional Tools for System Checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
15.4.1 Lenovo Advanced Settings Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
15.4.2 ServeRAID StorCLI Utility for Storage Management . . . . . . . . . . . . . . . . . 181
15.4.3 SSD Wear Gauge CLI utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
15.4.4 Lenovo Dynamic System Analysis (DSA) Portable Edition . . . . . . . . . . . . . . 182

16 Backup and Restore of the Primary OS Partition 183

17 SAP HANA Backup and Recovery 184

18 Troubleshooting 185
18.1 Adding SAP HANA Worker/Standby Nodes in a Cluster . . . . . . . . . . . . . . . . . . . 185
18.2 GPFS mount points missing after Kernel Update . . . . . . . . . . . . . . . . . . . . . . . 185
18.3 Degrading disk I/O throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
18.4 SAP HANA will not install after a system board exchange . . . . . . . . . . . . . . . . . . 186
18.5 Installer [1.8.80-12]: Installation of RHEL 6.5 fails . . . . . . . . . . . . . . . . . . . . . . 186


18.6 Installer [1.10.102-14]: Installation Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187


18.7 SAP Note 1641148 HANA server hang caused by GPFS issue . . . . . . . . . . . . . . . . 187

Appendices 189

A GPFS Disk Descriptor Files 189

B Topology Vectors (GPFS 3.5 failure groups) 190

C Quotas 191
C.1 Quota Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
C.2 Quota Calculation Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

D Lenovo X6 Server MTM List & Model Overview 193

E Frequently Asked Questions 195


E.1 FAQ #1: SAP HANA Memory Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
E.1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
E.1.2 Recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
E.1.3 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
E.2 FAQ #2: GPFS parameter readReplicaPolicy . . . . . . . . . . . . . . . . . . . . . . . . . 196
E.3 FAQ #3: SAP HANA Memory Limit on XS sized Machines . . . . . . . . . . . . . . . . . 197
E.4 FAQ #4: Overlapping GPFS NSDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
E.5 FAQ #5: Missing RPMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
E.6 FAQ #6: CPU Governor set to ondemand . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
E.7 FAQ #7: No disk space left bug (Bug IV33610) . . . . . . . . . . . . . . . . . . . . . . . . 200
E.8 FAQ #8: Setting C-States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
E.9 FAQ #9: ServeRAID M5120 RAID Adapter FW Issues . . . . . . . . . . . . . . . . . . . 202
E.9.1 Changing Queue Depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
E.9.2 Use recommended Firmware version . . . . . . . . . . . . . . . . . . . . . . . . . . 203
E.10 FAQ #10: GPFS Parameter enableLinuxReplicatedAIO . . . . . . . . . . . . . . . . . . . 203
E.11 FAQ #11: GPFS NSD on Devices with GPT Labels . . . . . . . . . . . . . . . . . . . . . 204
E.12 FAQ #12: GPFS pagepool should be set to 4GB . . . . . . . . . . . . . . . . . . . . . . . 205
E.13 FAQ #13: Limit Page Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
E.14 FAQ #14: restripeOnDiskFailure and start-disks-on-startup . . . . . . . . . . . . . . . . . 205
E.15 FAQ #15: Rapid repair on GPFS 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
E.16 FAQ #16: Parameter changes for performance improvements . . . . . . . . . . . . . . . . 206
E.17 FAQ #17: GPFS 4.1.1-3 behaviour change . . . . . . . . . . . . . . . . . . . . . . . . . . 207
E.18 FAQ #18: Setting the HANA Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
E.19 FAQ #19: Performance Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
E.20 FAQ #20: Disks mounted by "ID" . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

F References 213
F.1 Lenovo References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
F.2 IBM References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
F.3 SAP General Help (SAP Service Marketplace ID required) . . . . . . . . . . . . . . . . . . 214
F.4 SAP Notes (SAP Service Marketplace ID required) . . . . . . . . . . . . . . . . . . . . . . 214
F.5 Novell SUSE Linux Enterprise Server References . . . . . . . . . . . . . . . . . . . . . . . 216
F.6 Red Hat Enterprise Linux References (Red Hat account required) . . . . . . . . . . . . . . 216
F.7 VMware References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217

G Changelog 218


List of Figures
1 Current SAP HANA Appliance Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Hardware Overview - Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3 Hardware Overview - Optional Storage Expansion . . . . . . . . . . . . . . . . . . . . . . 9
4 Hardware Overview - Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5 SAP HANA Multiple Single Node Example . . . . . . . . . . . . . . . . . . . . . . . . . . 13
6 SAP HANA Clustered Example with Backup . . . . . . . . . . . . . . . . . . . . . . . . . 14
7 Workload Optimized System x3850 X6 2 Socket Rear View . . . . . . . . . . . . . . . . . 25
8 Workload Optimized System x3850 X6 4 Socket Rear View . . . . . . . . . . . . . . . . . 26
9 Workload Optimized System Storage Book. This contains slots 11, 12 and slots 43, 44 on
x3950 X6 in an additional Storage Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
10 Workload Optimized System x3950 X6 8 Socket Rear View . . . . . . . . . . . . . . . . . 28
11 Rearview of Storage Expansion D1224 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
12 G8264 RackSwitch front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
13 G8124 RackSwitch front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
14 G8272 RackSwitch front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
15 G8296 RackSwitch front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
16 G8052 RackSwitch front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
17 Cluster Node Network Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
18 Cluster Switch Networking Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
19 VLAN Configuration using Yast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
20 DR Architectural Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
21 DR Data Distribution in a Four Node Cluster . . . . . . . . . . . . . . . . . . . . . . . . . 68
22 Logical DR Network Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
23 DR Networking View (with no client uplinks shown) . . . . . . . . . . . . . . . . . . . . . 70
24 SAP HANA DR using storage expansion - architectural overview . . . . . . . . . . . . . . 87
25 Single Node with High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
26 File System Layout - Single Node HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
27 Network Switch Setup for Single Node with HA . . . . . . . . . . . . . . . . . . . . . . . . 95
28 Single Node with stretched HA - Two Site Approach . . . . . . . . . . . . . . . . . . . . . 100
29 Single Node with stretched HA - Three Site Approach . . . . . . . . . . . . . . . . . . . . 100
30 File System Layout - Single Node stretched HA . . . . . . . . . . . . . . . . . . . . . . . . 101
31 Single Node with Disaster Recovery - Two Site Approach . . . . . . . . . . . . . . . . . . 102
32 Single Node with Disaster Recovery - Three Site Approach . . . . . . . . . . . . . . . . . 102
33 File System Layout - Single Node with DR with Storage Expansion . . . . . . . . . . . . . 103
34 Single Node with HADR using IBM GPFS Storage Replication . . . . . . . . . . . . . . . 104
35 File System Layout - Single Node HADR . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
36 File System Layout - Single Node HADR with Storage Expansion . . . . . . . . . . . . . . 107
37 Single Node DR with SAP System Replication . . . . . . . . . . . . . . . . . . . . . . . . 108
38 Single Node DR with SAP System Replication . . . . . . . . . . . . . . . . . . . . . . . . 108
39 File System Layout of Single Node DR with SAP System Replication . . . . . . . . . . . . 109
40 File System Layout of Single Node DR with SAP System Replication with Storage Expansion 110
41 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication 111
42 Single Node with HA using IBM GPFS Storage Replication and DR using System Repli-
cation without remote site Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
43 File System of Single Node with HA and DR with System Replication . . . . . . . . . . . 112
44 File System of Single Node with HA and DR with System Replication and Storage Expansion 113

List of Tables
1 Appliance change log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Network Switch Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14


3 System x3850 X6 - 2 socket - Single Node Configurations . . . . . . . . . . . . . . . . . . 15


4 System x3850 X6 - 4 socket - Single Node Configurations . . . . . . . . . . . . . . . . . . 15
5 System x3950 X6 - 4 socket - Single Node Configurations . . . . . . . . . . . . . . . . . . 16
6 System x3950 X6 - 8 socket - Single Node Configurations . . . . . . . . . . . . . . . . . . 16
7 System x3950 X6 - 4 socket - Single Flex-Node Configurations . . . . . . . . . . . . . . . . 17
8 System x3950 X6 - 8 socket - Single Flex-Node Configurations . . . . . . . . . . . . . . . . 17
9 System x3850 X6 - 2 and 4 socket - Cluster Node Configurations . . . . . . . . . . . . . . 18
10 System x3950 X6 - 4 socket - Cluster Node Configurations . . . . . . . . . . . . . . . . . . 18
11 System x3950 X6 - 8 socket - Cluster Node Configurations . . . . . . . . . . . . . . . . . . 19
12 System x3950 X6 - 8 socket - Cluster Flex-Node Configurations . . . . . . . . . . . . . . . 19
13 System x3850 X6 - 2 socket - All Flash Configurations <= 768GB . . . . . . . . . . . . . 20
14 System x3850 X6 - 2 socket - All Flash Configurations >= 1024GB . . . . . . . . . . . . . 20
15 System x3850 X6 - 4 socket - All Flash Configurations <= 1536GB . . . . . . . . . . . . . 21
16 System x3850 X6 - 4 socket - All Flash Configurations >= 2048GB . . . . . . . . . . . . . 21
17 System x3950 X6 - 4 socket - Single Node - All Flash Configurations <= 1536GB . . . . . 21
18 System x3950 X6 - 4 socket - Single Node - All Flash Configurations >= 2048GB . . . . . 22
19 System x3950 X6 - 8 socket - Single Node - All Flash Configurations <= 3072GB . . . . . 22
20 System x3950 X6 - 8 socket - Single Node - All Flash Configurations >= 4096GB . . . . . 22
21 System x3850 X6 - 4 socket - Cluster Node - All Flash Configurations . . . . . . . . . . . 23
22 System x3850 X6 - 4 socket - Cluster Node - All Flash Configurations . . . . . . . . . . . 23
23 System x3950 X6 - 8 socket - Cluster Node - All Flash Configurations . . . . . . . . . . . 23
24 Slots which may be used for additional NICs . . . . . . . . . . . . . . . . . . . . . . . . . 24
25 Card assignments for a two socket x3850 X6 . . . . . . . . . . . . . . . . . . . . . . . . . . 25
26 Card assignments for a four socket x3850 X6 . . . . . . . . . . . . . . . . . . . . . . . . . 26
27 Network interface card assignments for an eight socket x3950 X6 . . . . . . . . . . . . . . 27
28 Card placement for x3950 X6 four socket and eight socket . . . . . . . . . . . . . . . . . . 28
29 X6 RAID Controller Configuration for GPFS . . . . . . . . . . . . . . . . . . . . . . . . . 30
30 x3950 X6 RAID Controller Configuration for XFS . . . . . . . . . . . . . . . . . . . . . . 31
31 Partition Scheme for Single Nodes and Cluster Installations . . . . . . . . . . . . . . . . . 32
32 x3850 X6 RAID Controller Configuration for GPFS . . . . . . . . . . . . . . . . . . . . . 32
33 Partition Scheme for Single Nodes with GPFS . . . . . . . . . . . . . . . . . . . . . . . . . 33
34 x3850 X6 RAID Controller Configuration for XFS . . . . . . . . . . . . . . . . . . . . . . 34
35 x3950 X6 RAID Controller Configuration for XFS . . . . . . . . . . . . . . . . . . . . . . 35
36 Partition Scheme for Single Nodes with XFS . . . . . . . . . . . . . . . . . . . . . . . . . 36
37 x3850 X6 RAID Controller Configuration for XFS . . . . . . . . . . . . . . . . . . . . . . 36
38 Partition Scheme for Single Nodes with XFS . . . . . . . . . . . . . . . . . . . . . . . . . 37
39 Required Operation Modes UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
40 Required Power UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
41 Required Processors UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
42 Required Memory UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
43 Required GPT UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
44 Customer infrastructure addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
45 IP address configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
46 Numbering conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
47 G8264 RackSwitch port assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
48 G8124 RackSwitch port assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
49 G8272 RackSwitch port assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
50 G8296 RackSwitch port assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
51 G8052 RackSwitch port assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
52 Hostname Settings for DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
53 Extra Network Settings for DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
54 GPFS Settings for DR Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
55 Single Node with HA OS Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93


56 Single Node with HA OS Networking Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 94


57 Single Node with HA Network Switch Definitions . . . . . . . . . . . . . . . . . . . . . . . 95
58 Expansion Storage Setup for Non-productive SAP HANA Instance: XFS-based installations 115
59 RAID array and RAID controller overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
60 CacheCade RAID Level Possibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
61 x3850 X6 Memory DIMM Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
62 x3950 X6 Memory DIMM Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
63 Update Procedure for XFS with bcache on single node . . . . . . . . . . . . . . . . . . . . 135
64 Update Procedure for GPFS on single node . . . . . . . . . . . . . . . . . . . . . . . . . . 135
65 Upgrade GPFS Portability Layer Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . 135
66 Upgrade Kernel for XFS with bcache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
67 Upgrade GPFS Portability Layer Checklist for GPFS . . . . . . . . . . . . . . . . . . . . . 148
68 Supported GPFS versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
69 Upgrade GPFS Portability Layer Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . 150
70 Upgrade GPFS Portability Layer Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . 152
71 HANA SPS / OS Release – Support Matrix . . . . . . . . . . . . . . . . . . . . . . . . 160
72 Upgrade Procedure for GPFS-based installations . . . . . . . . . . . . . . . . . . . . . . 163
73 Upgrade Procedure for XFS-based installations . . . . . . . . . . . . . . . . . . . . . . 163
74 Upgrade Procedure for GPFS-based installations . . . . . . . . . . . . . . . . . . . . . . 168
75 Topology Vectors in a 8 node DR-cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
76 Lenovo MTM Mapping & Model Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 193
77 Lenovo MTM Mapping & Model Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 194
78 ServeRAID M5120 Firmware Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202

List of Listings
1 SSH login screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
2 Support script usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
3 Support script output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
4 Support script update usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5 Support script update Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178


List of Abbreviations
ASU Lenovo Advanced Settings Utility
BIOS Basic Input / Output System
DR Disaster Recovery (previously SAP Disaster Tolerance)
DT SAP Dynamic Tiering (not to be confused with Disaster Recovery (DR), previously
Disaster Tolerance (DT))
ELILO EFI Linux Loader
IBM GPFS IBM General Parallel File System
GRUB Grand Unified Bootloader
GSS GPFS Storage Server
HDD Hard Disk Drive
IMM Integrated Management Module
KPI Key Performance Indicator
LILO Linux Loader
MTM Machine Type Model
NSD Network Shared Disk
NIC Network Interface Controller
NUMA Non-Uniform Memory Access
OLAP On Line Analytical Processing
OLTP On Line Transaction Processing
OS Operating System
RAID Redundant Array of Independent Disks
RHEL Red Hat Enterprise Linux
SAP HANA SAP HANA Platform Edition
SSD Solid State Disk
SLES SUSE Linux Enterprise Server
SLES for SAP SUSE Linux Enterprise Server for SAP Applications
TDI Tailored Datacenter Integration
UEFI Unified Extensible Firmware Interface
UUID Universally Unique Identifier
VLAG Virtual Link Aggregation Group
VLAN Virtual Local Area Network
VM Virtual Machine


1 Abstract

1.1 Preface & Scope

This document provides general information specific to the Lenovo Systems Solution for SAP HANA
Platform Edition (short: Lenovo Solution). It assumes that the reader understands the basic structure
and components of the SAP HANA Platform Edition (SAP HANA) software, has a solid understanding
of Linux administration, and has been instructed in how to install the SAP HANA1 software on Lenovo
Systems hardware.
The objective of this paper is to document topics regarding the installation, configuration, and maintenance
of the SAP HANA Platform Edition (SAP HANA) on System x hardware using a managed setup rather
than manually installing each node from scratch. The major products installed here are SAP HANA,
the IBM General Parallel File System (IBM GPFS), and the operating systems SUSE Linux Enterprise
Server for SAP Applications (SLES for SAP) and Red Hat Enterprise Linux (RHEL).

1.2 Disclaimer

This document is subject to change without notification and will not cover the issues encountered in
every customer situation. It should be used only in conjunction with the official product literature. The
information contained in this document has not been submitted to any formal test and is distributed AS
IS.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without
notice, and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized
reseller for the full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a defini-
tive statement of a commitment to specific levels of performance, function or delivery schedules with
respect to any future products. Such commitments are only made in Lenovo product announcements.
The information is presented here to communicate Lenovo’s current investment and development activities
as a good faith effort to help with our customers’ future planning.
This document is for educated service personnel only. If you are not familiar with the described system,
we ask you to refrain from applying what is described herein – you could void the preloaded system
installation and the SAP certified configuration, which in turn voids the warranty and support of said
machine. Please contact sapsolutions@lenovo.com to enroll in education prior to installing a Lenovo
Solution appliance.

1.3 SAP HANA Platform Edition Versions

In this document, we refer to several different versions of the Lenovo Solution guided installation
software. The following numbering maps each release to the corresponding SAP HANA Platform Edition version.
1.7.x SAP HANA Platform Edition v 1.0 SPS07 - First release on IBM/Lenovo Systems X6 hardware
1.8.x SAP HANA Platform Edition v 1.0 SPS08
1.9.x SAP HANA Platform Edition v 1.0 SPS09
1.10.x SAP HANA Platform Edition v 1.0 SPS10
1.11.x SAP HANA Platform Edition v 1.0 SPS11
1.12.x SAP HANA Platform Edition v 1.0 SPS12
1 SAP HANA Platform Edition


1.4 Appliance Versions

During the lifetime of the appliance, changes were made to the delivered software or the appliance setup
that necessitate different handling in some appliance operations.
If parts of this guide are only valid for certain appliance versions, the affected sections, paragraphs,
and chapters are marked as follows:
• [1.3]+ denotes appliance version 1.3.x and later
• -[1.3] denotes appliance version 1.3.x and earlier
• [1.3] applies only to version 1.3.x
• [1.2]-[1.4] applies to all versions from 1.2.x to 1.4.x
• [DR] applies only to Disaster Resistance enabled clusters
• [HA] applies only to standard High-Availability clusters
• [Single] is only valid for single node installations
In general the information given here is valid for all appliances 1.7.x and later.

1.4.1 Determining Appliance Version

In appliances installed with a release between 1.3.x and 1.8.x (inclusive), the appliance
software version used to install a node can be read from the file /etc/opt/ibm/appliance-version.
In appliances installed with release 1.9.96 or later, this information can be found in
/etc/lenovo/appliance-version.
Appliance versions 1.5.53-5 and later have a version number formatted like 1.5.53-5.690. The
first four numbers are the appliance version; the last number is an internal build number.
If this file does not exist, e.g. due to a manual installation, you can obtain the appliance version by
executing
# rpm -qi ibm|lenovo-saphana-ipla...

The appliance version is part of the package name.
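
A minimal sketch of checking this in one pass (assuming a root shell; the RPM query at the end is only an illustrative fallback for manually installed systems):

# cat /etc/lenovo/appliance-version 2>/dev/null \
    || cat /etc/opt/ibm/appliance-version 2>/dev/null \
    || rpm -qa | grep -i saphana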


Different components like SAP HANA or drivers may have newer versions due to software updates and
hardware replacement, but this does not change the appliance version.

1.4.2 Appliance Change Log

Only major changes necessitating different operations are listed.


Appliance Version     Changes

1.12.121-16
  • Support of D1224
  • GPFS 4.2.1.2
  • Supported OS: SLES11 SP4, SLES12 SP1, RHEL 6.7 and RHEL 7.2

1.11.112-15
  • Support of Broadwell CPUs (Intel Xeon E7 v4 processors)
  • Support of All Flash Models
  • GPFS 4.1.7
  • Supported OS: SLES11 SP4, SLES12 SP1 and RHEL 6.7

1.10.102-14
  • Added support for bcache & XFS on Single Nodes with SLES12
  • GPFS 4.1.1
  • Supported OS: SLES12 and RHEL 6.6

1.9.96-13
  • GPFS mount point and SAP HANA installation path are configurable
  • Supported OS: SLES11 SP3 and RHEL 6.6

1.8.80-12
  • Added backup & restore functionality
  • Supported OS: SLES11 SP3 and RHEL 6.5

1.8.80-11
  • (Flex/GSS only release)

1.8.80-10
  • X6 only release, eX5 (MTM 7143/7147) not supported
  • GPFS 4.1.0
  • Added automatic RAID setup tool
  • Supported OS: SLES11 SP3 and RHEL 6.5

1.7.73-9
  • Support for X6-based servers with 8 sockets (MTM 3837-AC4)

1.7.70-8
  • Initial release of this document for X6-based servers
  • Support for X6-based servers (MTM 3837-AC3)
  • eX5-based servers are not supported by this release (MTM 7143/7147)

Table 1: Appliance change log

1.5 Feedback

We are interested in your comments and feedback. Please send them to sapsolutions@lenovo.com. The full
guidebook can be downloaded, depending on its version, from the following community (SAP HANA Support
Document section): SAP Solutions at Lenovo Community.


1.6 Support

The System x SAP HANA development team provides new images for the SAP HANA appliance at regular
intervals. These images have dependencies regarding the hardware, operating systems, and hardware
drivers. Using the latest image for maintenance and installation of the SAP HANA appliance is highly
recommended.
Whenever firmware level recommendations (which fix known firmware issues) for the Lenovo components
of the SAP HANA appliance are given by the individual System x support representatives, it is the
customer's responsibility to upgrade (or downgrade) to the recommended levels as instructed by the System
x support representatives. A list of the minimally required versions can be found in SAP Note 1880960
– Lenovo Systems Solution for SAP HANA Platform Edition FW/OS/Driver Maintenance.
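
As a hedged illustration (the grep pattern is only an example), the levels currently installed on a node can be read with standard tools and compared against the minimum levels listed in SAP Note 1880960:

# dmidecode -s bios-version
# uname -r
# rpm -qa | grep -i gpfs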
Whenever operating system recommendations (which fix known operating system issues) for the SUSE
Linux components of the SAP HANA appliance are given by SAP, SUSE, Red Hat, or IBM/Lenovo
support representatives, it is the customer's responsibility to upgrade (or downgrade) to the recommended
levels as instructed by SAP through an explicit SAP Note or a Customer OSS Message. SAP describes
its operational concept, including updating of the operating system components, in SAP Note 1599888
– SAP HANA: Operational Concept. If the Linux kernel is updated, you have to recompile the IBM GPFS
software (portability layer) as well.
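
A hedged sketch of that rebuild (mmbuildgpl ships with GPFS 4.1 and later; on GPFS 3.5 the make targets under /usr/lpp/mmfs/src are used instead – follow the kernel update procedures in Chapter 13 for the exact steps):

# /usr/lpp/mmfs/bin/mmbuildgpl

or, on GPFS 3.5:

# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages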
Whenever other hardware or software recommendations (that fix known issues) for components of the
SAP HANA appliance are given by the individual Lenovo support representatives, it is the customers’
responsibility to upgrade (or to downgrade) to the recommended levels as instructed by Lenovo support
representatives.
It is recommended that the customer continuously monitors for security and bug fixes in the software,
firmware, and OS, and applies them.
Review the documentation for the components in your environment before applying updates to ensure
the update is compatible with the software in this solution. If software and documentation updates are
available, you can download them from the respective Lenovo, IBM, SUSE, Red Hat or SAP website.
To check for updates, go to the following websites. Follow the procedure in the included documentation
to update the software.
• Firmware and drivers for System X6 Servers
– You can obtain updates for System x3850/x3950 X6 servers on the Lenovo support website at
http://support.lenovo.com/us/en/.
• IBM General Parallel File System (IBM GPFS2 ) and IBM Spectrum Scale updates
– You can obtain updates for GPFS on the IBM support website for GPFS 3.5.0, GPFS 4.1.0
and IBM Spectrum Scale/GPFS 4.1.1
• SUSE Linux Enterprise Server for SAP Applications
– You can download the installation package from the SUSE website at http://download.
novell.com/
• SUSE Linux patches and updates
– You can obtain the latest code updates for SUSE from the SUSE website at http://download.
novell.com/patch/finder/
• Red Hat Enterprise Linux
– You can download the installation package from the Red Hat website at http://www.redhat.
com/en/technologies/linux-platforms/enterprise-linux
2 IBM General Parallel File System


• VMware ESX Server patches and updates


– You can obtain the latest code updates for the vSphere ESX server from the Lenovo support website
at http://support.lenovo.com/us/en/.
• SAP HANA appliance updates
– You can obtain the latest code updates from SAP at the SAP Service Marketplace at http:
//service.sap.com/swdc

Attention
As mentioned in https://www-947.ibm.com/support/entry/portal/docdisplay?
lndocid=LNVO-CHANGE, a migration of the Lenovo-branded System x and Storage products
from IBM Fix Central to the Lenovo Support site http://support.lenovo.com/us/en/
is in progress.
Lenovo recommends that customers follow the software upgrade recommendations set out by SAP in the
SAP HANA Technical Operations Manual3 (TOM). It is important to understand that the corrections
listed in the SAP Note mentioned above are those known to be a solution to a definite problem when running
the SAP HANA appliance on the System x solutions. This knowledge was derived from internal testing or from
customers who ran into a specific problem. In parallel, the organizations owning the individual products
provide many more fixes that are unknown to the Lenovo-SAP team, yet are nevertheless recommended to be
applied. In particular, there are fixes that Lenovo recommends installing that are not listed here. It is
expected that you contact your Lenovo service contact to get a list of those fixes as well as a reasonably
current service level in general.
In case of a failure, follow these instructions:
1. Check for hardware failure: The server’s IMM (Integrated Management Module) will report hard-
ware incidents. You may also use the IMM’s Virtual Light Path Diagnostics. LEDs on various
external and internal components of the server indicate the failed HW part.
• If only a hardware replacement is necessary, take the appropriate steps with IBM (please refer
to the IBM Support Portal4 ).
2. Check the software status: Execute saphana-support-lenovo.sh -cv with the latest version of
the support script (see SAP Note 1661146 – Lenovo/IBM Check Tool for SAP HANA appliances;
a usage example is shown after this list). The script will check for common root causes of failures.
Consult the Lenovo SAP HANA Appliance Operations Guide 5 .
• Try to apply the solutions suggested by the support script and the Operations Guide.
3. If you cannot determine the root cause of the failure, or there is no solution provided by the
support script or the Operations Guide, open an SAP OSS ticket. See the Quick Start Guide6 ,
section Getting help and technical assistance, for more information.
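A minimal sketch of step 2, executed as the root user from the directory into which the latest version of the support script has been downloaded (the download location is described in SAP Note 1661146):

# ./saphana-support-lenovo.sh -cv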

1.7 Conventions

This guide uses several conventions to improve the reader’s experience and the ease of understanding.
3 http://help.sap.com/hana/SAP_HANA_Technical_Operations_Manual_en.pdf
4 https://www-947.ibm.com/support/entry/portal/support
5 SAP Note 1650046 (SAP Service Marketplace ID required)
6 http://www.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5087035


1.7.1 Icons Used

The following information boxes indicate important information you should follow according to the level
of importance.
Attention
ATTENTION – pay close attention to the instructions given

Warning
WARNING – this is something to take into consideration

Note
INFORMATION – extra information describing a topic in more detail

1.7.2 Code Snippets

When reading code snippets, note the following: lines of code that are too long to be shown in one line
are broken automatically. The line break is indicated by an arrow at the end of the first line
and an arrow at the start of the second line:
This is a code snippet that is too long to be printed in one single line, therefore ←-
,→you will see an automatic line break.

There are also line numbers at the left side of each code snippet to improve the readability.
Code examples that contain commands to be executed on a command line follow these rules (a short example is given after this list):
• Lines beginning with a # indicate commands to be executed by the root user.
• Lines beginning with a $ indicate commands to be executed by an arbitrary user.
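For illustration, a short, hypothetical example following these conventions (the commands are placeholders and assume a GPFS-based installation):

# /usr/lpp/mmfs/bin/mmgetstate -a
$ cat /etc/os-release

The first command queries the GPFS state of all cluster nodes and must be executed as the root user; the second command displays the installed operating system release and can be executed by any user.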

1.8 Documentation Overview

• General Info
– Quick Start Guide for SAP HANA Appliance - Lenovo System x3850 X6 (6241) and x3950 X6
(6241)
– In-memory Computing with SAP HANA on Lenovo X6 Systems Planning / Implementation
• Implementation
– Lenovo Systems X6 Solution for SAP HANA Implementation Guide
– Lenovo Systems X6 Solution for SAP HANA Installation Guide SLES11
– Lenovo Systems X6 Solution for SAP HANA Installation Guide SLES12
– Lenovo Systems X6 Solution for SAP HANA Installation Guide RHEL6
– Lenovo Systems X6 Solution for SAP HANA Installation Guide RHEL7
• Operation
– Lenovo Systems X6 Solution for SAP HANA Operations Guide
• Special Topic Guides:
– Special Topic Guide for System x eX5/X6 Servers: Backup and Restore


– Special Topic Guide for System x eX5/X6 Servers: Mixed Cluster


– Special Topic Guide for System x X6 Servers: Virtualization
– Special Topic Guide for System x eX5/X6 Servers: Monitoring
Please check the references section to find out where to get these documents.
For instructions on how to administer the SAP HANA Platform Edition (SAP HANA), please refer to the
SAP HANA Technical Operations Manual7 . Instructions on how to administer and maintain the other
components delivered with the System x solution can be found in SAP Note 1650046 – Lenovo Systems
Solution Hardware, Operating System & GPFS Operations Guide. The Lenovo System x solution for
SAP HANA Quick Start Guide provides an overview of the complete solution and instructions on how to
find service and support for your Lenovo Solution.

7 http://help.sap.com/hana_platform


2 Solution Overview
This document provides general information specific to the Lenovo Solution. It assumes
that the reader understands the basic structure and components of the SAP HANA Platform Edition.
SAP HANA should be installed only on hardware that has been specifically certified for SAP HANA by SAP.
This hardware must not be configured from individual parts; rather, it is to be ordered and delivered as a
single unit using a Lenovo manufacturer type/model number specified later.

2.1 The SAP HANA Appliance Software

The Lenovo Solution is based on building blocks that provide a highly scalable infrastructure for SAP HANA:
the System x3850/x3950 X6 architecture as well as software, such as IBM GPFS, that will
be used to run SAP HANA.
Lenovo has created several system models upon which you may install and run SAP HANA according to
the sizing charts coordinated with SAP. For each workload type a special System x type/model has been
approved by SAP and Lenovo to accommodate the requirements for the SAP HANA Platform Edition.

2.2 Definition of SAP HANA

The following picture defines the current SAP HANA scenarios that can be leveraged through the System
x solution for the SAP HANA Platform Edition.

(Figure: SAP Business Warehouse for corporate business intelligence running on a SAP HANA DB appliance (1.0 SPS 03), alongside local BI data marts fed from SAP ERP (CRM, SRM, SCM) systems and customer applications, each running on its own SAP HANA appliance (1.0 SPS 05).)

Figure 1: Current SAP HANA Appliance Scenarios


3 Hardware Configurations

3.1 Workload Optimized Models

The System X6 Workload Optimized servers for SAP HANA are based upon two building blocks that
can be used to fulfill the hardware requirements for SAP HANA. The SAP HANA appliance software
must be installed only on a certified and tested hardware configuration based on one of these two models.
Lenovo provides a model/type number for four (4) socket and eight (8) socket systems that are to be
set up for each model certified by SAP. A customer only needs to choose the model and the extra options
to fulfill their requirements. Models created manually will be supported neither by Lenovo nor by SAP due
to the high-performance criteria set out by SAP during certification.

(a) System x3850 X6 (b) System x3950 X6

Figure 2: Hardware Overview - Server

(a) System Storage EXP2524 (b) System Storage D1024 / E1024 (c) System Storage D1224

Figure 3: Hardware Overview - Optional Storage Expansion

System x3850 X6 Workload Optimized Server for SAP HANA (Figure 2a)
• 2×–4×Intel Xeon E7-8880v28,9 , E7-8880v310 or E7-8880v411 Family of Processors
• 128-2048GB DDR3 Memory
• Internal Storage:
– 6×1.2TB 2.5" HDD for RAID1 and RAID5
– 2×400GB SSD for SSD based HDD acceleration
• One (1) External Storage (EXP2524, D1024, E1024(for People’s Republic of China) or D1224) for
systems > 512GB (stand-alone configurations) or ≥ 512GB (cluster configurations)
• 2 ×Dual-Port 10GbE NICs
• 1 ×Quad-Port 1GbE NICs
8 Forimproved performance, E7-8890v2 is supported as an optional feature.
9 Forcustomers who confirm that an upgrade to an 8 socket system will never be desired, the Intel processors E7-4880v2
or E7-4890v2 will also be supported as optional alternate features.
10 E7-8890v3 (for improved performance) or E7-8880Lv3 (for improved efficiency) are supported as optional feature
11 For improved performance, E7-8890v4 is supported as an optional feature.


• Certified for SLES for SAP12 OS and SAP HANA appliance software
• IBM General Parallel File System (optional for standalone servers, mandatory for cluster nodes)
Optional System Storage EXP2524 (Figure 3a)
• Up to 20×1.2TB 2.5" HDD RAID513
• Up to 4×400GB SSD for SSD based HDD acceleration
Optional System Storage D1024 (or E1024 for People’s Republic of China) (Figure 3b)
• Up to 20×1.2TB 2.5" HDD RAID514
• Up to 4×400GB SSD for SSD based HDD acceleration
Optional System Storage D1224 (Figure 3c)
• Up to 20×1.2TB 2.5" HDD RAID515
• Up to 4×400GB SSD for SSD based HDD acceleration
System x3950 X6 Workload Optimized Server for SAP HANA (Figure 2b)
• 4×–8×Intel Xeon E7-8880v216,17 , E7-8880v318 or E7-8880v419 Family of Processors
• 256GB–6TB DDR3 Memory
• Internal Storage:
– 12×1.2TB 2.5" HDD for RAID1 and RAID5
– 4×400GB SSD for SSD based HDD acceleration
• One (1) External Storage (EXP2524, D1024, E1024(for People’s Republic of China) or D1224) for
systems ≥ 3TB (stand-alone configurations) or > 1024GB (cluster configurations)
• 2 ×Dual-Port 10GbE NICs
• 1 ×Quad-Port 1GbE NICs
• IBM General Parallel File System
• Certified for SLES for SAP OS and SAP HANA appliance software
Note
For the SSD based HDD acceleration two different implementations are used. Currently GPFS
based installations will use CacheCade while XFS based installations will use bcache, which
is an integrated part of the Linux kernel.

3.1.1 Hardware Layout and Filesystem Options

Starting with appliance version 1.10.102-14, either XFS or IBM GPFS (which has been rebranded to IBM Spectrum
Scale) can be used as the filesystem. You have the following options:
12 SUSE Linux Enterprise Server for SAP Applications
13 RAID6 optional
14 RAID6 optional
15 RAID6 optional
16 For improved performance, E7-8890v2 is supported as an optional feature.
17 For customers who confirm that an upgrade to an 8 socket system will never be desired, the Intel processors E7-4880v2

or E7-4890v2 will also be supported as optional alternate features.


18 E7-8890v3 (for improved performance) or E7-8880Lv3 (for improved efficiency) are supported as optional feature
19 For improved performance, E7-8890v4 is supported as an optional feature.


1. XFS accelerated with bcache setup


• Utilizes HDDs using XFS as filesystem and SSDs as caching device using bcache
• Supports single node installation and single node DR installation with SAP HANA System
Replication.
• Operating System option: SLES for SAP 12 only
2. IBM GPFS with CacheCade setup
• Utilizes HDDs using GPFS as filesystem. SSDs are used for CacheCade acceleration.
• Supports single node and cluster installation (all described HA and DR setups)
• Operating System options: SLES for SAP 11 and 12 or RHEL20 6

3.2 Tailored Datacenter Integration Models

The System X6 TDI21 servers for SAP HANA are based upon two building blocks that can be used to
fulfill the hardware requirements for SAP HANA TDI.
Lenovo provides a model/type number for four (4) socket and eight (8) socket systems that are to be
set up for each model certified by SAP. A customer only needs to choose the model and the extra options
to fulfill their requirements.

(a) System x3850 X6 (b) System x3950 X6

Figure 4: Hardware Overview - Server

System x3850 X6 Server for SAP HANA TDI (Figure 4a)


• 2×–4×Intel Xeon E7-8880v222,23 , E7-8880v324 or E7-8880v425 Family of Processors
• 128–2048GB DDR3 Memory
• Internal Storage:
– 3-6×3.84TB 2.5" SSD for RAID5
– 2×400GB SSD for operating system
• 2 ×Dual-Port 10GbE NICs
• 1 ×Quad-Port 1GbE NICs
20 Red Hat Enterprise Linux
21 Tailored Datacenter Integration
22 For improved performance, E7-8890v2 is supported as an optional feature.
23 For customers who confirm that an upgrade to an 8 socket system will never be desired, the Intel processors E7-4880v2

or E7-4890v2 will also be supported as optional alternate features.


24 E7-8890v3 (for improved performance) or E7-8880Lv3 (for improved efficiency) are supported as optional feature
25 For improved performance, E7-8890v4 is supported as an optional feature.


• Certified for SLES for SAP and RHEL OS and SAP HANA appliance software
• IBM General Parallel File System (optional for standalone servers, mandatory for cluster nodes)
System x3950 X6 Server for SAP HANA TDI (Figure 4b)
• 4×–8×Intel Xeon E7-8880v226,27 , E7-8880v328 or E7-8880v429 Family of Processors
• 256GB–4TB DDR3 Memory
• Internal Storage:
– 3–8×3.84TB 2.5" SSD for RAID5
– 2×400GB SSD for operating system
• 2 ×Dual-Port 10GbE NICs
• 1 ×Quad-Port 1GbE NICs
• IBM General Parallel File System
• Certified for SLES for SAP and RHEL OS and SAP HANA appliance software

3.2.1 Hardware Layout and Filesystem Options

Starting with appliance version 1.10.102-14, either XFS or IBM GPFS (which has been rebranded to IBM Spectrum
Scale) can be used as the filesystem. You have the following options:
1. XFS All Flash setup
• Utilizes only SSDs using XFS as filesystem
• Supports single node installation and single node DR installation with SAP HANA System
Replication.
• Operating System option: SLES for SAP 11 SP4 and 12 SP1 or RHEL 6.7
Note
This is a TDI solution. Therefore it is not described in detail in this specification.
Only the base setup is described in detail.
2. IBM GPFS All Flash setup
• Utilizes only SSDs with GPFS as the filesystem.
• Supports single node and cluster installation (all described HA setups)
• Operating System options: SLES for SAP 11 and 12 or RHEL 6 and 7

3.3 SAP HANA Platform Edition T-Shirt Sizes

Lenovo and SAP have certified a set of configurations to be used with the SAP HANA Platform Edition
that are based on the Intel Xeon IvyBridge EX E7-4880v2, E7-4890v2, E7-8880v2, E7-8890v2, the Intel
Xeon Haswell EX E7-8880v3, E7-8880Lv3, E7-8890v3, or the Intel Xeon Broadwell EX E7-8880v4, E7-8890v4 processor families.
26 For improved performance, E7-8890v2 is supported as an optional feature.
27 For customers who confirm that an upgrade to an 8 socket system will never be desired, the Intel processors E7-4880v2
or E7-4890v2 will also be supported as optional alternate features.
28 E7-8890v3 (for improved performance) or E7-8880Lv3 (for improved efficiency) are supported as optional feature
29 For improved performance, E7-8890v4 is supported as an optional feature.


3.4 Single Node versus Clustered Configuration

The Systems X6 Solution servers can be configured in two ways:


1. As a single node configuration with separate, independent HANA installations (example: produc-
tion, test, development). These servers all have individual GPFS clusters or XFS filesystems that
are independent from each other. These should be installed as single servers.

(Figure: three independent single-node systems – Server 1 (Production), Server 2 (Test) and Server 3 (Development) – each serving its own SAP ERP clients and each running its own SAP HANA database on its own GPFS or XFS filesystem and internal storage.)

Figure 5: SAP HANA Multiple Single Node Example

2. As a clustered configuration with a distributed HANA instance across servers. All servers (nodes)
form one HANA cluster. All servers (nodes) form one GPFS cluster. These should be installed as
clustered servers.


(Figure: a SAP HANA cluster of three servers – a master node, a worker node and a standby node – serving SAP BW and SAP ERP clients. The internal storage of all nodes, holding the SAP HANA data and log, forms one GPFS cluster, and SAN storage is attached for backup/recovery.)

Figure 6: SAP HANA Clustered Example with Backup

The terms scale-out and cluster are used interchangeably in this document. What is meant is the use
of multiple Lenovo workload optimized servers connected via one or more configuration-specific
network switches in such a way that all servers act as one single high-performance SAP HANA instance.
These servers need to be configured differently from a single node system and are therefore defined
here explicitly. Further documentation will differentiate between non-clustered (single or consolidated)
and clustered installations.

3.4.1 Network Switch Options

For clustered configurations, extra hardware such as network switches and adapters need to be pur-
chased in addition to the clustered appliances. Currently, the supported network switches for the Lenovo
Workload Optimized server in a clustered configuration are:

Network         Description                          Part Number

10Gb Ethernet   RackSwitch G8296 (Rear-to-Front)     7159GR6
                RackSwitch G8296 (Front-to-Rear)     7159GF5
                RackSwitch G8272 (Rear-to-Front)     7159CRW
                RackSwitch G8272 (Front-to-Rear)     7159CFV
                RackSwitch G8264 (Rear-to-Front)     7159G64
                RackSwitch G8264 (Front-to-Rear)     715964F
                RackSwitch G8124E (Rear-to-Front)    7159BR6
                RackSwitch G8124E (Front-to-Rear)    7159BF7
1Gb Ethernet    RackSwitch G8052 (Rear-to-Front)     7159G52
                RackSwitch G8052 (Front-to-Rear)     715952F

Table 2: Network Switch Options


Note
These configurations may change over time, so please contact sapsolutions@lenovo.com for
any update.

3.5 SAP HANA Optimized Hardware Configurations

SEO models exist for certain configurations; please see D: Lenovo X6 Server MTM List & Model
Overview on page 193 for more details.

3.5.1 System x3850 X6 - 2 socket - Single Node Configurations

SAP Models 128 256 384 512 768 1024* 1536* 2048**
Product x3850 X6
Type/Model 6241-AC3
† 2 ×E7-

CPU 2 ×Intel Xeon® E7-8880v2, v3 or v4 2 ×E7-8880v3 or v4
8890v4
Memory 128GB 256GB 384GB 512GB 768GB 1024GB 1536GB 2048GB
Disk 6×1.2TB HDD 2×400GB SSD 15×1.2TB HDD 4×400GB SSD
Controller 1 ×M5210 1 ×M5210 and 1 ×M5225
Disk Layout 3.6TB RAID5 13.2TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 3: System x3850 X6 - 2 socket - Single Node Configurations


* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types

3.5.2 System x3850 X6 - 4 socket - Single Node Configurations

SAP Models 256 512 768 1024 1536 2048†† 3072* 4096**
Product x3850 X6
Type/Model 6241-AC3
4 ×E7-
† 4 ×E7-
CPU 4 ×Intel Xeon® E7-8880v2, v3 or v4 8880v3 or
† 8890v4
v4
Memory 256GB 512GB 768GB 1024GB 1536GB 2048GB 3072GB 4096GB
6×1.2TB HDD 15×1.2TB HDD 24×1.2TB HDD
Disk
2×400GB SSD 4×400GB SSD 4×400GB SSD
Controller 1 ×M5210 1 ×M5210 and 1 ×M5225
Disk Layout 3.6TB RAID5 13.2TB RAID5 22.8TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 4: System x3850 X6 - 4 socket - Single Node Configurations


* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart


3.5.3 System x3950 X6 - 4 socket - Single Node Configurations

SAP Models 256 512 768 1024 1536 2048†† 3072* 4096**
Product x3950 X6
Type/Model 6241-AC4
4 ×E7-
† 4 ×E7-
CPU 4 ×Intel Xeon® E7-8880v2, v3 or v4 8880v3 or
8890v4
v4 †
Memory 256GB 512GB 768GB 1024GB 1536GB 2048GB 3072GB 4096GB
6×1.2TB HDD 12×1.2TB HDD 21×1.2TB HDD
Disk
2×400GB SSD 4×400GB SSD 6×400GB SSD
1 ×M5210 2 ×M5210 2 ×M5210
Controller
1 ×M5225
Disk Layout 3.6TB RAID5 9.6TB RAID5 19.2TB RAID5
2 × Dual-Port 10GbE
Network
2 × Quad-Port 1GbE

Table 5: System x3950 X6 - 4 socket - Single Node Configurations


* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart

3.5.4 System x3950 X6 - 8 socket - Single Node Configurations

SAP Models 512 1024 1536 2048 3072 4096†† 6144* 8192**
Product x3950 X6
Type/Model 6241-AC4
CPU 8 ×Intel Xeon® E7-8880v2, v3 or v4 † 8 ×E7-8880v4
Memory 512GB 1024GB 1536GB 2048GB 3072GB 4096GB 6144GB 8192GB
6×1.2TB 30×1.2TB 39×1.2TB
12×1.2TB HDD 21×1.2TB HDD
Disk HDD HDD HDD
2×400GB 6×400GB 8×400GB
4×400GB SSD 6×400GB SSD
SSD SSD SSD
1
2 ×M5210 2 ×M5210 2 ×M5210
Controller ×M5210
1 ×M5225 2 ×M5225
3.6TB 28.8TB 38.4TB
Disk Layout 9.6TB RAID5 19.2TB RAID5
RAID5 RAID5 RAID5
2 × Dual-Port 10GbE
Network
2 × Quad-Port 1GbE

Table 6: System x3950 X6 - 8 socket - Single Node Configurations


* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart


3.5.5 System x3950 X6 - 4 socket - Single Flex-Node Configurations

SAP Models 2×128 2×256 2×384 2×512 2×768 2×1024* 2×1536* 2×2048*
Product x3950 X6
Type/Model 6241-AC4
4 ×E7-
† 4 ×E7-
CPU 4 ×Intel Xeon® E7-8880v2, v3 or v4 8880v3 or
8890v4
v4 †
Memory 256GB 512GB 768GB 1024GB 1536GB 2048GB 3072GB 4096GB
12×1.2TB HDD 30×1.2TB HDD
Disk
4×400GB SSD 8×400GB SSD
2 ×M5210 2 ×M5210
Controller
2 ×M5225
Disk Layout 7.2TB RAID5 26.4TB RAID5
2 × Dual-Port 10GbE
Network
2 × Quad-Port 1GbE

Table 7: System x3950 X6 - 4 socket - Single Flex-Node Configurations


* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart

3.5.6 System x3950 X6 - 8 socket - Single Flex-Node Configurations

SAP Models 2×256 2×512 2×768 2×1024 2×1536 2×2048†† 2×3072* 2×4096**
Product x3950 X6
Type/Model 6241-AC4
CPU 8 ×Intel Xeon® E7-8880v2, v3 or v4 † 8 ×E7-8880v4
Memory 512GB 1024GB 1536GB 2048GB 3072GB 4096GB 6144GB 8192GB
12×1.2TB 48×1.2TB 66×1.2TB 84×1.2TB
30×1.2TB HDD
Disk HDD HDD HDD HDD
4×400GB 8×400GB 12×400GB 12×400GB
8×400GB SSD
SSD SSD SSD SSD
2 ×M5210 2×M5210 2×M5210
Controller
2×M5225 4×M5225
7.2TB 45.6TB 64.8TB 84.0TB
Disk Layout 26.4TB RAID5
RAID5 RAID5 RAID5 RAID5
2 × Dual-Port 10GbE
Network
2 × Quad-Port 1GbE

Table 8: System x3950 X6 - 8 socket - Single Flex-Node Configurations


* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart


3.5.7 System x3850 X6 - 2 and 4 socket - Cluster Node Configurations

SAP Models 256 512 768 1024 1536 2048††


Product x3850 X6
Type/Model 6241-AC3
4 ×E7-8880 4 ×E7-8880
2 ×Intel Xeon® 4 ×E7-8880 †
CPU v2, v3 or v4 v2, v3 or v4 4 ×E7-8880v3 or v4
E7-8880v2 † † v2 †

Memory 256GB 512GB 768GB 1024GB 1536GB 2048GB


6×1.2TB HDD & 24×1.2TB HDD
Disk 15×1.2TB HDD & 4×400GB SSD
2×400GB SSD & 4×400GB SSD
Controller 1 ×M5210 1 ×M5210 & 1 ×M5120/M5225
Disk Layout 3.6 TB RAID5 13.2 TB RAID5 for SAP HANA data/log 22.8 TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 9: System x3850 X6 - 2 and 4 socket - Cluster Node Configurations


† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart

3.5.8 System x3950 X6 - 4 socket - Cluster Node Configurations

SAP Models 512 1024 1536 2048††


Product x3950 X6
Type/Model 6241-AC4
CPU 4 ×Intel Xeon® E7-8880v2, v3 or v4 † 4 ×E7-8880v3 or v4 †
Memory 512GB 1024GB 1536GB 2048GB
Disk 12×1.2TB HDD & 4×400GB SSD 21×1.2TB HDD & 6×400GB SSD
Controller 2 ×M5210 2 ×M5210 & 1 ×M5225
Disk Layout 9.6 TB RAID5 19.2 TB RAID5
2 × Dual-Port 10GbE
Network
2 × Quad-Port 1GbE

Table 10: System x3950 X6 - 4 socket - Cluster Node Configurations


† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart


3.5.9 System x3950 X6 - 8 socket - Cluster Node Configurations

SAP Models 512 1024 2048 3072 4096 ††


Product x3950 X6
Type/Model 6241-AC4
CPU 8 ×Intel Xeon® E7-8880v2, v3 or v4 † 8 ×Intel Xeon® E7-8880v3 or v4 †
Memory 512GB 1024GB 2048GB 3072GB 4096GB
21×1.2TB 30×1.2TB 39×1.2TB
12×1.2TB HDD
Disk HDD HDD HDD
& 6×400GB & 6×400GB & 8×400GB
& 4×400GB SSD
SSD SSD SSD
2 ×M5210 2 ×M5210 2 ×M5210 2 ×M5210
Controller
1 ×M5225 1 ×M5225 2 ×M5225
Disk Layout 9.6 TB RAID5 19.2 TB RAID5 28.8 TB RAID5 38.4 TB RAID5
2 × Dual-Port 10GbE
Network
2 × Quad-Port 1GbE

Table 11: System x3950 X6 - 8 socket - Cluster Node Configurations


† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart

3.5.10 System x3950 X6 - 8 socket - Cluster Flex-Node Configurations

SAP Models 2×512 2×1024 2×1536 2×2048 ††


Product x3950 X6
Type/Model 6241-AC4
CPU 8 ×Intel Xeon® E7-8880v2, v3 or v4 † 8 ×Intel Xeon® E7-8880v3 or v4 †
Memory 1024GB 2048GB 3072GB 4096GB
30×1.2TB HDD 48×1.2TB HDD 84×1.2TB HDD
Disk
& 12×400GB
& 8×400GB SSD & 8×400GB SSD
SSD
2×M5210 2×M5210
Controller
2×M5225 4×M5225
Disk Layout 26.4 TB RAID5 45.6 TB RAID5 84.5 TB RAID5
4 × Dual-Port 10GbE
Network
2 × Quad-Port 1GbE

Table 12: System x3950 X6 - 8 socket - Cluster Flex-Node Configurations


† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart


3.6 All Flash Solution for SAP HANA Hardware Configurations

All Flash Solution for SAP HANA HW configurations get full Lenovo solution support but are treated as
TDI (SAP HANA Tailored Datacenter Integration) by SAP AG. All listed configurations fulfill the TDI
performance KPIs.

Note
Either two HDDs or two SSDs in RAID1 setup can be used to host the Operating System.
Following tables depict RAID1 setup using two SSDs.

3.6.1 System x3850 X6 - 2 socket - Single Node - All Flash Configurations

SAP Models 128 256 384 512 768


Product x3850 X6
Type/Model 6241-AC3
CPU 2 ×Intel Xeon® E7-8880v3 or v4 †
Memory 128GB 256GB 384GB 512GB 768GB
2×400GB SSD
Disk
3×3.8TB SSD
Controller 1 ×M5210
Disk Layout 7.6TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 13: System x3850 X6 - 2 socket - All Flash Configurations <= 768GB
† or alternative processor types

SAP Models 1024* 1536* 2048**


Product x3850 X6
Type/Model 6241-AC3
CPU 2 ×Intel Xeon® E7-8880v3 or v4 †
Memory 1024GB 1536GB 2048GB
2×400GB SSD 2×400GB SSD
Disk
3×3.8TB SSD 4×3.8TB SSD
Controller 1 ×M5210
Disk Layout 7.6TB RAID5 11.5TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 14: System x3850 X6 - 2 socket - All Flash Configurations >= 1024GB
* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types


3.6.2 System x3850 X6 - 4 socket - Single Node - All Flash Configurations

SAP Models 256 512 768 1024 1536


Product x3850 X6
Type/Model 6241-AC3
CPU 4 ×Intel Xeon® E7-8880v3 or v4 †
Memory 256GB 512GB 768GB 1024GB 1536GB
2×400GB SSD
Disk
3×3.8TB SSD
Controller 1 ×M5210
Disk Layout 7.6TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 15: System x3850 X6 - 4 socket - All Flash Configurations <= 1536GB
† or alternative processor types

SAP Models 2048†† 3072* 4096**


Product x3850 X6
Type/Model 6241-AC3
CPU 4 ×Intel Xeon® E7-8880v3 or v4 †
Memory 2048GB 3072GB 4096GB
2×400GB SSD 2×400GB SSD 2×400GB SSD
Disk
4×3.8TB SSD 5×3.8TB SSD 6×3.8TB SSD
Controller 1 ×M5210
Disk Layout 11.5TB RAID5 15.3TB RAID5 19.2TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 16: System x3850 X6 - 4 socket - All Flash Configurations >= 2048GB
* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart
‡ Manual RAID configuration required

3.6.3 System x3950 X6 - 4 socket - Single Node - All Flash Configurations

SAP Models 256 512 768 1024 1536


Product x3850 X6
Type/Model 6241-AC3
CPU 4 ×Intel Xeon® E7-8880v3 or v4 †
Memory 256GB 512GB 768GB 1024GB 1536GB
2×400GB SSD
Disk
3×3.8TB SSD
Controller 1 ×M5210
Disk Layout 7.6TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 17: System x3950 X6 - 4 socket - Single Node - All Flash Configurations <= 1536GB
† or alternative processor types


SAP Models 2048†† 3072* 4096**


Product x3850 X6
Type/Model 6241-AC3
CPU 4 ×Intel Xeon® E7-8880v3 or v4 †
Memory 2048GB 3072GB 4096GB
2×400GB SSD 2×400GB SSD 2×400GB SSD
Disk
4×3.8TB SSD 5×3.8TB SSD 6×3.8TB SSD
Controller 1 ×M5210
Disk Layout 11.5TB RAID5 15.3TB RAID5 19.2TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 18: System x3950 X6 - 4 socket - Single Node - All Flash Configurations >= 2048GB
* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart

3.6.4 System x3950 X6 - 8 socket - Single Node - All Flash Configurations

SAP Models 512 1024 1536 2048 3072


Product x3950 X6
Type/Model 6241-AC4
CPU 8 ×Intel Xeon® E7-8880v3 or v4 †
Memory 512GB 1024GB 1536GB 2048GB 3072GB
2×400GB SSD 2×400GB SSD 2×400GB SSD
Disk
3×3.8TB SSD 4×3.8TB SSD 5×3.8TB SSD
Controller 1 ×M5210
Disk Layout 7.6TB RAID5 11.5TB RAID5 15.3TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 19: System x3950 X6 - 8 socket - Single Node - All Flash Configurations <= 3072GB
† or alternative processor types

SAP Models 4096†† 6144* 8192**


Product x3950 X6
Type/Model 6241-AC4
CPU 8 ×Intel Xeon® E7-8880v3 or v4 †
Memory 4096GB 6144GB 8192GB
2×400GB SSD 2×400GB SSD 2×400GB SSD
Disk
6×3.8TB SSD 9×3.8TB SSD 12×3.8TB SSD
Controller 1 ×M5210 2 ×M5210
Disk Layout 19.2TB RAID5 26.8TB RAID5 38.4TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 20: System x3950 X6 - 8 socket - Single Node - All Flash Configurations >= 4096GB
* For Suite on HANA only, not for Datamart and BW
** SAP HANA SPS12 or later required for Suite on HANA, not for Datamart and BW
† or alternative processor types
†† SAP HANA SPS11 or later and Haswell CPUs (v3) or later required for BW or Datamart
‡ Manual RAID configuration required


3.6.5 System x3850 X6 - 4 socket - Cluster Node - All Flash Configurations

SAP Models 512 1024 1536 2048


Product x3850 X6
Type/Model 6241-AC3
CPU 4 ×Intel Xeon® E7-8880v4
Memory 512GB 1024GB 1536GB 2048GB
2×400GB SSD 2×400GB SSD 2×400GB SSD 2×400GB SSD
Disk
3×3.8TB SSD 4×3.8TB SSD 5×3.8TB SSD 6×3.8TB SSD
Controller 1 ×M5210
Disk Layout 7.6TB RAID5 11.5TB RAID5 15.3TB RAID5 19.2TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 21: System x3850 X6 - 4 socket - Cluster Node - All Flash Configurations

3.6.6 System x3950 X6 - 4 socket - Cluster Node - All Flash Configurations

SAP Models 512 1024 1536 2048


Product x3950 X6
Type/Model 6241-AC3
CPU 4 ×Intel Xeon® E7-8880v4
Memory 512GB 1024GB 1536GB 2048GB
2×400GB SSD 2×400GB SSD 2×400GB SSD 2×400GB SSD
Disk
3×3.8TB SSD 4×3.8TB SSD 5×3.8TB SSD 6×3.8TB SSD
Controller 1 ×M5210
Disk Layout 7.6TB RAID5 11.5TB RAID5 15.3TB RAID5 19.2TB RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 22: System x3950 X6 - 4 socket - Cluster Node - All Flash Configurations

3.6.7 System x3950 X6 - 8 socket - Cluster - All Flash Configurations

SAP Models 512 1024 2048 3072 4096


Product x3950 X6
Type/Model 6241-AC4
CPU 8 ×Intel Xeon® E7-8880v3 or v4 †
Memory 512GB 1024GB 1536GB 2048GB 3072GB
2×400GB 2×400GB 2×400GB
2×400GB SSD 2×400GB SSD
Disk SSD SSD SSD
3×3.8TB SSD 4×3.8TB SSD 5×3.8TB SSD 6×3.8TB SSD 8×3.8TB SSD
Controller 1 ×M5210 2 ×M5210
11.5TB 15.3TB
Disk Layout 7.6TB RAID5 19.2TB RAID5 26.8TB RAID5
RAID5 RAID5
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GbE

Table 23: System x3950 X6 - 8 socket - Cluster Node - All Flash Configurations
† or alternative processor types


3.7 Card Placement

Attention
You need to make sure that the cards are placed in the correct PCI slots. Please refer to the
tables below for the assignment of cards to slots. This step
must be done before the installation. Please be aware that your machine is supported by Lenovo
only with the correct card layout.
Depending on the socket count of a machine, the correct card placement may differ. Please refer to figure
7 and table 25 regarding two socket machines, figure 8 and table 26 on page 26 regarding four socket
machines, and figure 10 and table 27 on page 27 regarding eight socket machines. Concerning the numbering
of the slots, please note that PCI slots 11 and 12 are located in the Storage Book, see figure 9. An x3950 X6
machine has an additional Storage Book containing PCI slots 43 and 44. The Storage Books are accessible from the front.

3.7.1 Network Interface Cards

The x3850 X6 machine comes with two Mellanox ConnectX-3 10GbE adapters that each provide two 10GbE
ports, or two Mellanox ConnectX-3 FDR IB VPI adapters that each provide two QSFP ports. With QSA
adapters the QSFP ports support SFP+ transceivers for 10GbE connectivity. A quad-port Intel I-350
provides four 1GbE ports and is placed in slot 10. In an x3950 X6 an additional I-350 card can be placed
in slot 42. Intel I-340 PCI cards are optionally available if more 1GbE ports are needed.
Please see the tables and figures below regarding the correct card placement, depending on your machine
type and configuration.

3.7.2 Slots for additional Network Interface Cards

If the customer needs more network ports, the PCI slots shown in table 24: Slots which may be used for
additional NICs on page 24 may be used for additional NICs.

Machine PCI Slots


x3850 X6 two sockets 9, 10
x3850 X6 four sockets 2, 3, 5, 6, 10
x3950 X6 four sockets 9, 10, 41, 42
x3950 X6 eight sockets 5, 6, 10, 37, 38, 42

Table 24: Slots which may be used for additional NICs

3.7.3 RAID Adapter Cards

The internal RAID adapter is a ServeRAID M5210 which resides in slot 12 in the Storage Book. In the
x3950 X6, two internal RAID adapters are used, residing in slots 12 and 44.
External RAID adapters (ServeRAID M5120 or M5225) in an x3850 X6 have to be placed in slot 8, slot
7 and slot 9, in that order. Two socket servers have fewer available slots; therefore the RAID
adapter has to be placed in slot 9. Regarding an x3950 X6 machine, placement starts in slot 40, then
39, then 41 and finally 7 and 8; refer to table 28 for details.


Card                                          Port Label   Ethernet Device   Slot
ServeRAID M5210 (internal)                    –            –                 12
Intel I-350 1GbE quad port                    E            eth4              10
                                              F            eth5
                                              G            eth6
                                              H            eth7
Intel I-340 1GbE quad port *                  –            eth8              9
                                              –            eth9
                                              –            eth10
                                              –            eth11
or ServeRAID M5120/M5225 (external) *         –            –                 9
Mellanox ConnectX-3 (10GbE or FDR IB VPI)     A            eth0              8
                                              B            eth1
Mellanox ConnectX-3 (10GbE or FDR IB VPI)     C            eth2              7
                                              D            eth3
100MbE internal Ethernet Adapter for System   I            –                 –
Management via the IMM

Table 25: Card assignments for a two socket x3850 X6


* This card is optional

Figure 7: Workload Optimized System x3850 X6 2 Socket Rear View


Card                                          Port Label   Ethernet Device   Slot
ServeRAID M5210 (internal)                    –            –                 12
Intel I-350 1GbE quad port                    E            eth4              10
                                              F            eth5
                                              G            eth6
                                              H            eth7
ServeRAID M5120/M5225 (external) *            –            –                 9
ServeRAID M5120/M5225 (external) *            –            –                 8
ServeRAID M5120/M5225 (external) *            –            –                 7
Intel I-340 1GbE quad port *                  –            eth8              5
                                              –            eth9
                                              –            eth10
                                              –            eth11
Mellanox ConnectX-3 (10GbE or FDR IB VPI)     C            eth2              4
                                              D            eth3
Mellanox ConnectX-3 (10GbE or FDR IB VPI)     A            eth0              1
                                              B            eth1
100MbE internal Ethernet Adapter for System   I            –                 –
Management via the IMM

Table 26: Card assignments for a four socket x3850 X6


* These cards are only used in certain configurations; please refer to section 3.7.3 for details

Figure 8: Workload Optimized System x3850 X6 4 Socket Rear View


Figure 9: Workload Optimized System Storage Book. It contains slots 11 and 12; on the x3950 X6, an
additional Storage Book contains slots 43 and 44.

Card                                          Port Label   Ethernet Device   Slot
Intel I-350 1GbE quad port                    E            eth4              10
                                              F            eth5
                                              G            eth6
                                              H            eth7
Mellanox ConnectX-3 (10GbE or FDR IB VPI)     C            eth2              36
                                              D            eth3
Mellanox ConnectX-3 (10GbE or FDR IB VPI)     A            eth0              4
                                              B            eth1
100MbE internal Ethernet Adapter for System   I            –                 –
Management via the IMM
100MbE internal Ethernet Adapter for System   J            –                 –
Management via the IMM
Intel I-350 1GbE quad port *                  K            e.g. eth8         42
                                              L            e.g. eth9
                                              M            e.g. eth10
                                              N            e.g. eth11
Intel I-340 1GbE quad port *                  –            e.g. eth8         5
                                              –            e.g. eth9
                                              –            e.g. eth10
                                              –            e.g. eth11

Table 27: Network interface card assignments for an eight socket x3950 X6
* These cards are optional; please refer to table 28 for details


Sockets RAM Size S/C* Mellanox I350 M5210 M5120/M5225


4 ≤ 512GB S 7, 39 42, 10 44
4 ≤ 2TB S 7, 39 42, 10 44, 12
4 ≤ 6TB S 7, 39 42, 10 44, 12 40
4 ≤ 1TB C 7, 39 42, 10 44, 12
4 ≤ 2TB C 7, 39 42, 10 44, 12 40
8 ≤ 512GB S 4, 36 42, 10 44
8 ≤ 2TB S 4, 36 42, 10 44, 12
8 ≤ 6TB S 4, 36 42, 10 44, 12 40
8 8TB S 4, 36 42, 10 44, 12 40, 39
8 12TB S 4, 36 42, 10 44, 12 40, 39, 41
8 ≤ 1TB C 4, 36 42, 10 44, 12
8 ≤ 3TB C 4, 36 42, 10 44, 12 40
8 4TB C 4, 36 42, 10 44, 12 40, 39
8 8TB C 4, 36 42, 10 44, 12 40, 39, 41

Table 28: Card placement for x3950 X6 four socket and eight socket
* S = Single Node / C = Cluster

Figure 10: Workload Optimized System x3950 X6 8 Socket Rear View


4 Storage Configuration
The drives in the x3850 X6 and x3950 X6 Storage Books are internally connected to the ServeRAID M5210
RAID adapter. If the HW setup includes a Storage Expansion EXP2524, D1024 or E1024 (People's Republic of China only),
the SAS port labeled ’In’ has to be connected to one of the external ServeRAID M5120 or M5225 RAID
adapter ports. In case of the D1224 Storage Expansion, the SAS port labeled ’A’ on Environmental Service
Module (ESM) ’A’ has to be connected to the RAID adapter.

Figure 11: Rearview of Storage Expansion D1224

4.1 RAID Setup for GPFS

The RAID configuration of all non-OS RAID arrays is executed by the automated installer starting with
release 1.8.80-10. Except for the configuration of a RAID1 for the operating system, no manual steps have
to be executed regarding the RAID configuration.
The following tables are provided as an overview and a reference in case the automated RAID
configuration fails.
The HDD and SSD RAID arrays are configured with the settings WriteBack, ReadAhead, Cached, No
Write Cache if Bad BBU, pdcache=off, and a strip size of 64 (a hedged example of creating such an array manually is shown below).
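If an array ever has to be re-created manually, the following hedged sketch shows how these settings map to the StorCLI utility. The controller index (/c0), the enclosure and slot IDs (252:2-5) and the installation path /opt/MegaRAID/storcli are placeholders and assumptions that have to be adapted to the actual hardware; the CacheCade configuration itself is described in the Operations Guide:

# /opt/MegaRAID/storcli/storcli64 /c0 add vd type=raid5 drives=252:2-5 wb ra cached pdcache=off strip=64
# /opt/MegaRAID/storcli/storcli64 /c0/vall show

The options wb, ra, cached, pdcache=off and strip=64 correspond to WriteBack, ReadAhead, Cached, pdcache=off and the strip size of 64 listed above; wb (as opposed to awb) keeps the "No Write Cache if Bad BBU" behavior. The second command lists all virtual drives on the controller so the result can be verified.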
Tables 29: X6 RAID Controller Configuration for GPFS on page 30 and 30: x3950 X6 RAID Controller
Configuration on page 31 describe possible configurations of the RAID controllers. There are
different possible setups for the RAID controllers with different numbers of SSDs and HDDs:
• M5210 (on x3950 X6: first internal)
– 2 SSDs + 6 HDDs: 1 × RAID1 for OS, 1 × RAID5 for GPFS
• M5210 (only x3950 X6, second internal)
– 2 SSDs + 6 HDDs: 1 × RAID5 for GPFS
• M5120/M5225
– 2 SSDs + 9 HDDs: 1 × RAID5 for GPFS
– 2 SSDs + 10 HDDs: 1 × RAID6 for GPFS
– 2 SSDs + 18 HDDs: 2 × RAID5 for GPFS
– 2 SSDs + 20 HDDs: 2 × RAID6 for GPFS
– Optionally: +2 SSDs30
30 Optionally: +2 SSDs for CacheCade RAID1. For details on hardware configuration and setup see Operations Guide

for X6 based models section CacheCade RAID1 Configuration


Controller     Models                  VD ID   Type   Physical Drives   Config                        Comment
M5210          all                     0       HDD    2                 RAID1                         VD for OS
                                       1       HDD    4                 RAID5 (3+p)                   GPFS, CacheCade enabled
                                       2       SSD    2                 RAID0/1†                      CacheCade of VD1
M5120/M5225    Single node: ≥ 768GB,   0*      HDD    9 or 10           RAID5 (8+p) or RAID6 (8+2p)   GPFS, CacheCade enabled
               Cluster: ≥ 512GB        1       SSD    2                 RAID0/1†                      CacheCade of VD0

Table 29: X6 RAID Controller Configuration for GPFS

* There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to
the controller.
† RAID1 for all CacheCade arrays is possible, but may require additional hardware. See section CacheCade RAID1
Configuration in the Operations Guide for X6 based models for more details.


Physical
Controller Models VD ID Type Config Comment
Drives
0 HDD 2 RAID1 VD for OS
1st M5210 all GPFS,
(3+p)
1 HDD 4 CacheCade
RAID5
enabled
CacheCade
2 SSD 2 RAID0/1†
for VD1
GPFS,
(5+p)
Single node: 0 HDD 6 CacheCade
2nd M5210 RAID5
≥ 768GB, enabled
Cluster: CacheCade
1 SSD 2 RAID0/1†
≥ 512GB for VD0
(8+p)
Single node: 9 GPFS,
0* HDD RAID5
≥ 3072GB, CacheCade
1st M5120/ (8+2p)
Cluster: 10 enabled
M5225 RAID6
≥ 2048GB
(8+p)
Single node: 9 GPFS,
1* HDD RAID5
≥ 6144GB, CacheCade
Cluster: (8+2p) enabled
10
≥ 3072GB RAID6
CacheCade
1/2** SSD 2/4* RAID0/1†
for VD0&1
(8+p)
Single node: HDD 9 GPFS,
0* RAID5
nd ≥ 12.288, CacheCade
2 M5120/ (8+2p)
Cluster: 10 enabled
M5225 RAID6
≥ 4096GB
(8+p)
Single node: 9 GPFS,
1* HDD RAID5
≥ 12.288GB, CacheCade
Cluster: (8+2p) enabled
10
≥ 6144GB RAID6
CacheCade
1/2* SSD 2 RAID0/1†
for VD0&1
(8+p)
Single node: 9 GPFS,
3rd M5120/ 0* HDD RAID5
≥ 12.288GB, CacheCade
M5225 (8+2p)
Cluster: 10 enabled
RAID6
≥ 6144GB
CacheCade
1 SSD 2 RAID0/1†
for VD0

Table 30: x3950 X6 RAID Controller Configuration.


* There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to
the controller.
** This number will depend on the availability of VD1
† RAID1 for all CacheCade arrays is possible, but may require additonal hardware. See section CacheCade RAID1
Configuration in the Operations Guide for X6 based models for more details.


Device         Partition #   Partition Name*                Size    File system   Mount Point
/dev/sda       1             /dev/sda1                      148MB   vfat          /boot/efi
               2             /dev/sda2                      128GB   ext3/4        /
               3             /dev/sda3                      32GB    swap          (none)
               4             /dev/sda4                      148MB   vfat          /var/backup/boot/efi
               5             /dev/sda5                      128GB   ext3/4        /var/backup
/dev/sd[b-z]   Unpartitioned (whole device)                 100%    GPFS          /sapmnt (sapmntdata)

Table 31: Partition Scheme for Single Node and Cluster Installations
* The actual partition numbers may vary depending on whether you use RHEL or SLES for SAP.
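A brief, hedged sketch of how the resulting GPFS layout can be verified after installation, assuming the file system device name sapmntdata shown in Table 31:

# /usr/lpp/mmfs/bin/mmlsnsd
# /usr/lpp/mmfs/bin/mmlsfs sapmntdata
# df -h /sapmnt

mmlsnsd lists the NSDs created on the /dev/sd[b-z] devices, mmlsfs shows the attributes of the shared file system, and df confirms that it is mounted at /sapmnt.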

4.2 RAID Setup for GPFS All Flash

The RAID configuration of all non-OS RAID arrays is executed by the automated installer. Except for the
configuration of a RAID1 for the operating system, no manual steps have to be executed regarding the RAID
configuration.
The following tables are provided as an overview and a reference in case the automated RAID
configuration fails.
Depending on the intended use as cluster node or standalone node, the SSD arrays have different caching
policies:
• Single nodes currently use WriteBack, ReadAhead, Cached, No Write Cache if Bad BBU, pdcache=off,
and a strip size of 64.
• Cluster nodes currently use WriteThrough, ReadAhead, Cached, No Write Cache if Bad BBU,
pdcache=off, and a strip size of 64.
Table 32: x3850 X6 RAID Controller Configuration for GPFS All Flash on page 32 describes possible configurations
of the RAID controllers. There are different possible setups for the RAID controllers with different
numbers of SSDs:
• M5210 (on x3950 X6: first internal)
– 3-6 SSDs + 2 SSDs: 1 × RAID1 for OS, 1 × RAID5 for GPFS
• M5210 (only x3950 X6: second internal)
– 3 SSDs: 1 × RAID5 for GPFS

Controller   Models              VD ID   Type   Physical Drives   Config   Comment
M5210        all                 0       SSD    2                 RAID1    VD for OS
                                 1       SSD    3-6               RAID5    GPFS
M5210        x3950 4TB Cluster   0       SSD    3                 RAID5    GPFS

Table 32: x3850 X6 RAID Controller Configuration for GPFS All Flash


Device         Partition #   Partition Name*   Size    File system   Mount Point
/dev/sda       1             /dev/sda1         148MB   vfat          /boot/efi
               2             /dev/sda2         128GB   ext3/4        /
               3             /dev/sda3         32GB    swap          (none)
               4             /dev/sda4         148MB   vfat          /var/backup/boot/efi
               5             /dev/sda5         128GB   ext3/4        /var/backup
/dev/sd[b-z]   none          /dev/sd[b-z]      >6TB    GPFS NSD      /hana (configurable)

Table 33: Partition Scheme for Single Nodes with GPFS All Flash
* The actual partition numbers may vary depending on whether you use RHEL or SLES for SAP.

4.3 RAID Setup for XFS accelerated with bcache

The RAID configuration of all non-OS RAID arrays is executed by the automated installer. Except for the
configuration of a RAID1 for the operating system, no manual steps have to be executed regarding the RAID
configuration.
The following tables are provided as an overview and a reference in case the automated RAID
configuration fails.
The HDD and SSD RAID arrays are configured with the settings WriteBack, ReadAhead, Cached, No
Write Cache if Bad BBU, pdcache=off, and a strip size of 64.
Tables 34: x3850 X6 RAID Controller Configuration for XFS on page 34 and 35: x3950 X6 RAID
Controller Configuration for XFS on page 35 describe possible configurations of the RAID controllers.
There are different possible setups for the RAID controllers with different numbers of SSDs and HDDs:
• M5210 (on x3950 X6: first internal)
– 2 SSDs + 6 HDDs: 1 × RAID1 for OS, 1 × RAID5 for bcache backing device, 1 × RAID1 31
for bcache cache set
• M5210 (only x3950 X6, second internal)
– 2 SSDs + 6 HDDs: 1 × RAID5 for bcache backing device, 1 × RAID1 for bcache cache set
• M5120/M5225
– 2 SSDs + 9 HDDs: 1 × RAID5 for bcache backing device, 1 × RAID1 for bcache cache set
– 2 SSDs + 10 HDDs: 1 × RAID6 for bcache backing device, 1 × RAID1 for bcache cache set
– 2 SSDs + 18 HDDs: 2 × RAID5 for bcache backing device, 1 × RAID1 for bcache cache set
– 2 SSDs + 20 HDDs: 2 × RAID6 for bcache backing device, 1 × RAID1 for bcache cache set
– Optionally: +2 SSDs32
31 RAID0 in systems with only one single RAID controller with SSDs
32 Optionally: 2 additional SSDs are required for CacheCade RAID1 in some hardware configurations. When using XFS,
these additional SSDs will create an additional RAID1 array for the bcache cache set


Controller     Models                  VD ID   Type   Physical Drives   Config                        Comment
M5210          all                     0       HDD    2                 RAID1                         VD for OS
                                       1       HDD    4                 RAID5 (3+p)                   bcache backing device
                                       2       SSD    2                 RAID0 or RAID1†               bcache cache set
M5120/M5225    Single node: ≥ 768GB,   0*      HDD    9 or 10           RAID5 (8+p) or RAID6 (8+2p)   bcache backing device
               Cluster: ≥ 512GB        1*      SSD    2                 RAID1                         bcache caching set

Table 34: x3850 X6 RAID Controller Configuration for XFS.

* There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to
the controller.
† RAID1 for the SSDs is only possible if at least 2 sets of 2 SSDs are installed in the server, which means that more
than one properly installed RAID controller is present


Physical
Controller Models VD ID Type Config Comment
Drives
0 HDD 2 RAID1 VD for OS
1st M5210 all (3+p) bcache backing de-
1 HDD 4
RAID5 vice
2 SSD 2 RAID1 bcache cache set
bcache backing
(5+p) device
Single node: 0 HDD 6
2nd M5210 RAID5
≥ 768GB,
Cluster: bcache cache set
≥ 512GB 1 SSD 2 RAID1
(8+p)
Single node: 9 bcache
0* HDD RAID5
≥ 3072GB, backing
1st M5120/ (8+2p)
Cluster: 10 device
M5225 RAID6
≥ 2048GB
(8+p)
Single node: 9 bcache
1* HDD RAID5
≥ 6144GB, backing
Cluster: (8+2p) device
10
≥ 3072GB RAID6
1/2** SSD 2* RAID1 bcache cache set
(8+p)
Single node: HDD 9 bcache
0* RAID5
≥ 12.288, backing
2nd M5120/ (8+2p)
Cluster: 10 device
M5225 RAID6
≥ 4096GB
(8+p)
Single node: 9 bcache
1* HDD RAID5
≥ 12.288GB, backing
Cluster: (8+2p) device
10
≥ 6144GB RAID6
1/2* SSD 2 RAID1 bcache cache set
(8+p)
Single node: 9 bcache
3rd M5120/ 0* HDD RAID5
≥ 12.288GB, backing
M5225 (8+2p)
Cluster: 10 device
RAID6
≥ 6144GB
1* SSD 2 RAID1 bcache cache set

Table 35: x3950 X6 RAID Controller Configuration for XFS.


* There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to
the controller.
** This number will depend on the availability of VD1


Device         Partition #   Partition Name*    Size     File system             Mount Point
/dev/sda       1             /dev/sda1          148MB    vfat                    /boot/efi
               2             /dev/sda2          128GB    ext3/4                  /
               3             /dev/sda3          32GB     swap                    (none)
               4             /dev/sda4          148MB    vfat                    /var/backup/boot/efi
               5             /dev/sda5          128GB    ext3/4                  /var/backup
/dev/sd[b-z]   127           /dev/sd[b-z]127    >800GB   bcache backing device   /hana (configurable)
/dev/sd[b-z]   128           /dev/sd[b-z]128    800GB    bcache cache device     /hana (configurable)

Table 36: Partition Scheme for Single Nodes with XFS

* The actual partition numbers may vary depending on whether you use RHEL or SLES for SAP.
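A short, hedged sketch of how the bcache setup can be inspected after installation, assuming the first (and usually only) bcache device is bcache0:

$ lsblk
# cat /sys/block/bcache0/bcache/state
# cat /sys/block/bcache0/bcache/cache_mode

lsblk shows the bcache device stacked on top of the backing and caching partitions, state reports whether the cache is attached and clean, and cache_mode shows the active caching mode (the current value is marked with brackets).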

4.4 RAID Setup for XFS All Flash

The RAID configuration of all non-OS RAID arrays is executed by the automated installer. The only
manual step that has to be performed is the configuration of the RAID1 for the OS.
The following tables are meant as an overview and a reference in case the automated RAID configuration
does not work properly.
The SSD RAID arrays are configured with the settings WriteBack, ReadAhead, Cached, No
Write Cache if Bad BBU, pdcache=off, and a strip size of 64.
Table 37: x3850 X6 RAID Controller Configuration for XFS All Flash on page 36 describes possible configurations
of the RAID controllers. There are different possible setups for the RAID controllers with different
numbers of SSDs:
• M5210 (on x3950 X6: first internal)
– 3-6 SSDs + 2 SSDs: 1 × RAID1 for OS, 1 × RAID5 for XFS

Controller   Models   VD ID   Type   Physical Drives   Config   Comment
M5210        all      0       SSD    2                 RAID1    VD for OS
                      1       SSD    3-6               RAID5    XFS

Table 37: x3850 X6 RAID Controller Configuration for XFS All Flash


Device         Partition #   Partition Name*    Size    File system            Mount Point
/dev/sda       1             /dev/sda1          148MB   vfat                   /boot/efi
               2             /dev/sda2          128GB   ext3/4                 /
               3             /dev/sda3          32GB    swap                   (none)
               4             /dev/sda4          148MB   vfat                   /var/backup/boot/efi
               5             /dev/sda5          128GB   ext3/4                 /var/backup
/dev/sd[b-z]   127           /dev/sd[b-z]127    >6TB    XFS on software raid   /hana (configurable)

Table 38: Partition Scheme for Single Nodes with XFS All Flash
* The actual partition numbers may vary depending on whether you use RHEL or SLES for SAP.


5 UEFI settings

Note
Please be aware that not every setting is available on every platform.
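The settings in the tables below can be read and changed with the Lenovo Advanced Settings Utility (ASU), using the names given in the "ASU tool setting" column. A minimal, hedged sketch, assuming the 64-bit ASU binary has been unpacked into the current directory:

# ./asu64 show Processors.TurboMode
# ./asu64 set Processors.TurboMode Enable
# ./asu64 set OperatingModes.ChooseOperatingMode "Custom Mode"

Changed UEFI settings take effect after the next reboot of the server.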
Section Operation Modes:

Setting                      Value                 Pr.33   ASU tool setting

Choose Operating Mode        Custom Mode           H,B     OperatingModes.ChooseOperatingMode
Memory Speed                 Max Performance       H,B     Memory.MemorySpeed
Memory Power Management      Automatic             H,B     Memory.MemoryPowerManagement
Proc Performance States      Enable                H       Processors.ProcessorPerformanceStates
CPU P-State Control          Legacy                B       Processors.CPUP-StateControl
C1 Enhanced Mode             Disable               H,B     Processors.C1EnhancedMode
QPI Link Frequency           Max Performance       H,B     Processors.QPILinkFrequency
Turbo Mode                   Enable                H,B     Processors.TurboMode
C-States                     Enable                H       Processors.C-States or Processors.Processors_C-States34
C-States                     Legacy                B       Processors.Processors_C-States
Package ACPI C-State Limit   ACPI C3               H,B     Processors.PackageACPIC-StateLimit
Power/Performance Bias       Platform Controlled   H,B     Power.PowerPerformanceBias
Platform Controlled Type     Max Performance       H,B     Power.PlatformControlledType

Table 39: Required Operation Modes UEFI settings

Section Power:

Setting                        Value                 Pr.35   ASU tool setting

Active Energy Manager          Capping Disable       H,B     Power.ActiveEnergyManager
Power/Performance Bias         Platform Controlled   H,B     Power.PowerPerformanceBias
Platform Controlled Type       Max Performance       H,B     Power.PlatformControlledType
Workload Configuration         I/O sensitive         H,B     Power.WorkloadConfiguration
10Gb Mezz Card Standby Power   Disable               H,B     Power.10GbMezzCardStandbyPower

Table 40: Required Power UEFI settings

33 Processor Architecture: Haswell, Ivy Bridge=H, Broadwell=B
34 Depends on UEFI version
35 Processor Architecture: Haswell, Ivy Bridge=H, Broadwell=B


Section Processors:

Setting                           Value               Pr.36   ASU tool setting

Turbo Mode                        Enable              H,B     Processors.TurboMode
Proc Performance States           Enable              H       Processors.ProcessorPerformanceStates
CPU P-State Control               Legacy              B       Processors.CPUP-StateControl
C-States                          Enable              H       Processors.C-States or Processors.Processors_C-States37
C-States                          Legacy              B       Processors.Processors_C-States
Package ACPI C-State Limit        ACPI C3             H,B     Processors.PackageACPIC-StateLimit
C1 Enhanced Mode                  Disable             H,B     Processors.C1EnhancedMode
Hyper Threading                   Enable              H,B     Processors.Hyper-Threading
Execute Disable Bit               Enable              H,B     Processors.ExecuteDisableBit
Intel Virtualization Technology   Enable              H,B     Processors.IntelVirtualizationTechnology
Enable SMX                        Disable             H,B     Processors.EnableSMX
Hardware Prefetcher               Enable              H,B     Processors.HardwarePrefetcher
Adjacent Cache Prefetch           Enable              H,B     Processors.AdjacentCachePrefetch
DCU Streamer Prefetcher           Enable              H,B     Processors.DCUStreamerPrefetcher
DCU IP Prefetcher                 Enable              H,B     Processors.DCUIPPrefetcher
Direct Cache Access (DCA)         Enable              H,B     Processors.DirectCacheAccessDCA
Cores in CPU Package              All                 H,B     Processors.CoresinCPUPackage
QPI Link Frequency                Max Performance     H,B     Processors.QPILinkFrequency
Energy Efficient Turbo            Enable              H,B     Processors.EnergyEfficientTurbo
Uncore Frequency Scaling          Enable              H,B     Processors.UncoreFrequencyScaling
MWAIT/MMONITOR                    Enable              H,B     Processors.MWAITMMONITOR
Per Core P-state                  Disable             H,B     Processors.PerCoreP-state
CPU Frequency Limits              Full turbo uplift   H,B     Processors.CPUFrequencyLimits
AES-NI                            Enable              H,B     Processors.AES-NI

Table 41: Required Processors UEFI settings

36 Processor Architecture: Haswell, Ivy Bridge=H, Broadwell=B
37 Depends on UEFI version


Section Memory:

Setting                     Value             Pr.38   ASU tool setting

Memory Mode                 Independent       H,B     Memory.MemoryMode
Memory Speed                Max Performance   H,B     Memory.MemorySpeed
Memory Power Management     Automatic         H,B     Memory.MemoryPowerManagement
Socket Interleave           NUMA              H,B     Memory.SocketInterleave
Patrol Scrub                Enable            H,B     Memory.PatrolScrub
Memory Data Scrambling      Enable            H,B     Memory.MemoryDataScrambling
Mirroring                   Disable           H,B     Memory.Mirroring
Mirroring Type              Full              B       Memory.MirroringType
Sparing                     Disable           H,B     Memory.Sparing

Table 42: Required Memory UEFI settings

Section Recovery & RAS:

Setting             Value   ASU tool setting

Disk GPT Recovery   None    DiskGPTRecovery.DiskGPTRecovery

Table 43: Required GPT UEFI settings

38 Processor Architecture: Haswell, Ivy Bridge=H, Broadwell=B
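The settings listed above can be changed interactively in the UEFI setup or scripted with the Advanced
Settings Utility (ASU), using the names from the "ASU tool setting" columns. A minimal sketch, assuming
the 64-bit asu64 binary is available on the node and that a reboot follows so the settings take effect; the
last line shows the assumed remote variant via the IMM with placeholder credentials:
# ./asu64 show Processors.TurboMode
# ./asu64 set Processors.TurboMode Enable
# ./asu64 set OperatingModes.ChooseOperatingMode "Custom Mode"
# ./asu64 set Processors.TurboMode Enable --host <IMM IP> --user <IMM user> --password <IMM password>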


6 Networking

6.1 Networking Requirements

The networking for the Lenovo Solution, the IMM39 and the corresponding switches should be set up
and integrated into the customer network environment according to the customer’s requirements and
the recommendations from SAP. SAP currently recommends that individual workloads are separated by
either physical or virtual LANs (VLANs) or subnets.
The individual workloads described by SAP are:
• SAP HANA internal communication via SAP HANA private networking
• Customer access to the SAP HANA appliance via:
– SAP Landscape Transformation Replication (LT)
– Sybase Replication (SR)
– SAP Business Objects Data Services (DS)
– Business Objects XI, Microsoft Excel, etc.
– Server data management tools for:
∗ System/DB backup and restore operations
– Logical server application management (can be partially accomplished via Integrated Manage-
ment Module)
∗ SSH access, VNC access, SAP Support access
We strongly recommend that the following SAP workloads use dedicated, distinct subnets with separate
Ethernet adapters (NICs); otherwise the network setup becomes more complicated.
• SAP HANA client access
• Server data management
• Server application management
In addition to the SAP workloads, the Lenovo Solution defines two further workloads:
• IBM clustered file system communication for GPFS (if GPFS is used)
• Physical server management via the Integrated Management Module
– Hardware support, console web access and SSH access
It is necessary to separate the IBM GPFS and SAP HANA internal networks from all other networks as
well as from each other. Servers being configured in a clustered scenario require two dedicated high speed
NICs (e.g. 10GbE) with separate physical private LANs for the internal communication of GPFS and SAP
HANA. In addition, external networks, e.g. for SAP Client/BW and SAP management communication,
should be separated as well. Otherwise, SAP HANA performance may be compromised and the system
is supported by neither SAP nor Lenovo.

6.2 Customer Query

Before you configure the server and install the Lenovo Solution, please gather the following network
information from your network administrator where indicated with the b symbol. Please use only IPv4
addresses.
39 Integrated Management Module


Note
In case the customer plans to install a single node configuration, but would like to scale it out
to a cluster by adding more servers: plan the network configuration for the GPFS and HANA
networks as if the cluster already existed, to simplify a later scale-out.

Note
In a single node configuration using XFS, a GPFS network is not necessary.

IP Address b
Default Network Prefix b
Default Netmask b
Default Gateway b
Primary DNS IP b
Secondary DNS IP b
Domain Search b
NTP Server b

Table 44: Customer infrastructure addresses


Network                                  Port Label                       IP Address                      Hostname                   Netmask                          Gateway
                                         (Single / Cluster)

Server Node 01 (Worker/Stand-By/Single)
IBM GPFS Private Network (predefined)    any / A/C                        127.0.1.1 (default),            gpfsnode01 (mandatory) b   255.255.255.0 (recommended) b    None (recommended)
                                                                          192.168.10.101 (example) b
SAP HANA Private Network (predefined)    any / B/D                        127.0.2.1 (default),            hananode01 (mandatory) b   255.255.255.0 (recommended) b    None (recommended)
                                                                          192.168.20.101 (example) b
Customer Network                         Any of the remaining NIC ports   b                               b                          b                                b
IMM                                      I                                b                               b                          b                                b

Server Node 02 (Worker/Stand-By)
IBM GPFS Private Network (predefined)    any / A/C                        127.0.1.1 (default),            gpfsnode02 (mandatory)     255.255.255.0 (recommended)      None (recommended)
                                                                          192.168.10.102 (example)
SAP HANA Private Network (predefined)    any / B/D                        127.0.2.1 (default),            hananode02 (mandatory)     255.255.255.0 (recommended)      None (recommended)
                                                                          192.168.20.102 (example)
Customer Network                         Any of the remaining NIC ports   b                               b                          b                                b
IMM                                      I                                b                               b                          b                                b

... (repeat accordingly for all other nodes) ...

Table 45: IP address configuration

6.3 Network Configuration

6.3.1 Clustered Installations

In a clustered configuration with high availability, the internal networks of the appliance for GPFS and
HANA are set up with redundant links (refer to chapter 6.4.2: Advanced Setup of the Switches on page
52). These connect to redundant G8264, G8272, G8296 or G8124E 10GigE switches. Both switches are
connected with a minimum of two ISL ports. It is recommended to use the 40GbE ports for the ISLs.
On the host side, the two corresponding ports of each network are configured as Linux bond devices. The
data replication connection to the primary data source can also be set up in a redundant fashion and
connects directly to the appliance-internal 10GigE HANA network. The details of this setup depend
strongly on the customer's network infrastructure and need to be planned accordingly. Details of the
exact configuration can be found in chapter 6.3.3.7: Network Configurations in a Clustered Environment
on page 50.


Network         IP-Interf.   VLAN      LACP-Key     VLAG-Key   Tier-ID   Subnet

MGMT (G8264)    128          4095*     -            -          -         -
MGMT (G8272)    128          4095*     -            -          -         -
MGMT (G8296)    128          4095*     -            -          -         -
MGMT (G8124)    128          4095*     -            -          -         -
MGMT (G8052)    128          4092**    -            -          -         -
ISL             -            4094      VLAN+1000    LACP-Key   10        -
GPFS            -            100(++)   port#+1000   LACP-Key   -         192.168.10.0/24
HANA            -            200(++)   port#+1000   LACP-Key   -         192.168.20.0/24
IMM (BMC)       -            300(++)   port#+1000   LACP-Key   -         192.168.30.0/24

* VLAN 4095 is internally assigned to the management port(s) and cannot be changed.
** VLAN 4092 is a suggestion for the management VLAN.

Table 46: Numbering conventions

Warning
When connecting the data replication network directly to the internal 10GigE network, an
access control list needs to be configured on the uplink port to isolate the internal networks
(e.g. 127.0.n.24) from the customer network.
If a network adapter or one of the switches fails, the SAP HANA network and the GPFS network are
taken over by the remaining switch and network adapter.
It is recommended to establish redundant network connections for the other networks (e.g. client network)
as well. This setup is similar to the internal networks and requires two identical 1GigE or 10GigE switches
(e.g. G8052 1GigE or G8264 10GigE). As long as there is one redundant path to each server the remaining
appliance and data management networks can be implemented with a single link. Each of the networks
will then connect to one of the two switches.
To implement network redundancy on the switch level, a Virtual Link Aggregation Group (VLAG) needs
to be created on the two network switches. A VLAG requires a dedicated inter-switch link (ISL) for
synchronization. More details can be found in Chapter 6.3.3.7: Network Configurations in a Clustered
Environment on page 50.
Note
For more details on VLAGs please obtain the Application Guide respective to the RackSwitch
model and N/OS you have installed and consult chapter "Virtual Link Aggregation Groups"
(e.g. "RackSwitch G8272 Application Guide").

6.3.2 Customer Site Networks

We allow the customer to define and use their own networks and connect them to the dedicated customer
network NICs using their own switch infrastructure. Please ensure the proper IP address setup on the
Lenovo Solution server. This guide does not go into detail regarding the customer's switch configuration,
nor the configuration in the cluster.
There are several options for connecting SAP HANA to the client network; these depend highly on the
setup of the customer network. A good overview of the possibilities is given in SAP Note 1906381 –
Network setup for external communication.

6.3.3 Network Definitions

6.3.3.1 Numbering conventions


Note
The "(++)" in the table indicates that +1 should be added for every new network in case of
multiple GPFS, HANA or IMM LANs.

6.3.3.2 Internal Networks – Option 1 G8264 RackSwitch 10Gbit This option is defined to
use the G8264 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and
SAP HANA. This allows up to 24 Lenovo Solution servers (or 26 servers with "40G -> 4x 10G" breakout
cable on ports 9 or 13) to be connected. The setup is as follows:
18,20,22,24,26,28...64 (HANA)
.----------------------,5_____
MGMT| G8264 Switch |1_____\__ Inter-Switch 40Gb Link (ISL)
‘----------------------’ \_\_____Port 5 bonded ISL
17,19,21,23,25,27...63 (GPFS) / \
18,20,22,24,26,28...64 (HANA)/ \___Port 1 bonded ISL
.----------------------,5____/ /
MGMT| G8264 Switch |1_________/
‘----------------------’
17,19,21,23,25,27...63 (GPFS)

Figure 12: G8264 RackSwitch front view

This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network
to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it
should be used consistently as the internal (private) network within this guide.


Switch Port VLAN IP Address Hostname Server NIC


g8264-1 MGMT 4095 <customer-mgmt IP1> <switch1> n/a
g8264-1 17 100 192.168.10.101 gpfsnode01 bond0
g8264-1 18 200 192.168.20.101 hananode01 bond1
g8264-1 19 100 192.168.10.102 gpfsnode02 bond0
g8264-1 20 200 192.168.20.102 hananode02 bond1
.. .. .. .. .. ..
. . . . . .
g8264-1 63 100 192.168.10.124 gpfsnode24 bond0
g8264-1 64 200 192.168.20.124 hananode24 bond1
g8264-2 MGMT 4095 <customer-mgmt IP2> <switch2> n/a
g8264-2 17 100 192.168.10.101 gpfsnode01 bond0
g8264-2 18 200 192.168.20.101 hananode01 bond1
g8264-2 19 100 192.168.10.102 gpfsnode02 bond0
g8264-2 20 200 192.168.20.102 hananode02 bond1
.. .. .. .. .. ..
. . . . . .
g8264-2 63 100 192.168.10.124 gpfsnode24 bond0
g8264-2 64 200 192.168.20.124 hananode24 bond1

Table 47: G8264 RackSwitch port assignments

Note
There is no public network attached to these switches.

6.3.3.3 Internal Networks – Option 2 G8124 RackSwitch 10Gbit This option is defined to
use the G8124 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and
SAP HANA. This allows up to 7 Lenovo Solution servers to be connected. The setup is as follows:
2,4,6,8,10,12,14 (HANA)
.----------------------,24____
MGMT| G8124 Switch |23____\__ Inter-Switch 10Gb Link (ISL)
‘----------------------’ \_\_____Port 24 bonded ISL
1,3,5,7,9,11,13 (GPFS) / \
2,4,6,8,10,12,14 (HANA)/ \___Port 23 bonded ISL
.----------------------,24___/ /
MGMT| G8124 Switch |23________/
‘----------------------’
1,3,5,7,9,11,13 (GPFS)

Figure 13: G8124 RackSwitch front view


This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network
to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it
should be used consistently as the internal (private) network within this guide.

Switch Port VLAN IP Address Hostname Server NIC


g8124-1 MGMT-b 4095 <customer-mgmt IP1> <switch1> n/a
g8124-1 1 100 192.168.10.101 gpfsnode01 bond0
g8124-1 2 200 192.168.20.101 hananode01 bond1
g8124-1 3 100 192.168.10.102 gpfsnode02 bond0
g8124-1 4 200 192.168.20.102 hananode02 bond1
g8124-1 5 100 192.168.10.103 gpfsnode03 bond0
g8124-1 6 200 192.168.20.103 hananode03 bond1
.. .. .. .. .. ..
. . . . . .
g8124-1 13 100 192.168.10.107 gpfsnode07 bond0
g8124-1 14 200 192.168.20.107 hananode07 bond1
g8124-2 MGMT-b 4095 <customer-mgmt IP2> <switch2> n/a
g8124-2 1 100 192.168.10.101 gpfsnode01 bond0
g8124-2 2 200 192.168.20.101 hananode01 bond1
g8124-2 3 100 192.168.10.102 gpfsnode02 bond0
g8124-2 4 200 192.168.20.102 hananode02 bond1
g8124-2 5 100 192.168.10.103 gpfsnode03 bond0
g8124-2 6 200 192.168.20.103 hananode03 bond1
.. .. .. .. .. ..
. . . . . .
g8124-2 13 100 192.168.10.107 gpfsnode07 bond0
g8124-2 14 200 192.168.20.107 hananode07 bond1

Table 48: G8124 RackSwitch port assignments

6.3.3.4 Internal Networks – Option 3 G8272 RackSwitch 10Gbit This option is defined to
use the G8272 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and
SAP HANA. This allows up to 24 Lenovo Solution servers (or 32 servers with "40G -> 4x 10G" breakout
cables on ports 49,50,51 or 52) to be connected. The setup is as follows:
2,4,6,8,10,12...48 (HANA)
.----------------------,54_____
MGMT| G8272 Switch |53_____\__ Inter-Switch 40Gb Link (ISL)
‘----------------------’ \_\_____Port 54 bonded ISL
1,3,5,7,9,11...47 (GPFS) / \
2,4,6,8,10,12...48 (HANA) / \___Port 53 bonded ISL
.----------------------,54____/ /
MGMT| G8272 Switch |53_________/
‘----------------------’
1,3,5,7,9,11...47 (GPFS)


Figure 14: G8272 RackSwitch front view

This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network
to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it
should be used consistently as the internal (private) network within this guide.

Switch Port VLAN IP Address Hostname Server NIC


g8272-1 MGMT 4095 <customer-mgmt IP1> <switch1> n/a
g8272-1 1 100 192.168.10.101 gpfsnode01 bond0
g8272-1 2 200 192.168.20.101 hananode01 bond1
g8272-1 3 100 192.168.10.102 gpfsnode02 bond0
g8272-1 4 200 192.168.20.102 hananode02 bond1
.. .. .. .. .. ..
. . . . . .
g8272-1 47 100 192.168.10.124 gpfsnode24 bond0
g8272-1 48 200 192.168.20.124 hananode24 bond1
g8272-2 MGMT 4095 <customer-mgmt IP2> <switch2> n/a
g8272-2 1 100 192.168.10.101 gpfsnode01 bond0
g8272-2 2 200 192.168.20.101 hananode01 bond1
g8272-2 3 100 192.168.10.102 gpfsnode02 bond0
g8272-2 4 200 192.168.20.102 hananode02 bond1
.. .. .. .. .. ..
. . . . . .
g8272-2 47 100 192.168.10.124 gpfsnode24 bond0
g8272-2 48 200 192.168.20.124 hananode24 bond1

Table 49: G8272 RackSwitch port assignments

Note
There is no public network attached to these switches.

6.3.3.5 Internal Networks – Option 4 G8296 RackSwitch 10Gbit This option is defined to
use the G8296 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and
SAP HANA. This allows up to 43 Lenovo Solution servers (or 47 servers with "40G -> 4x 10G" breakout
cables on ports 87 and 88) to be connected. The setup is as follows:
2,4...48/50,52...86 (HANA)
.----------------------,96_____
MGMT| G8296 Switch |95_____\__ Inter-Switch 40Gb Link (ISL)
‘----------------------’ \_\_____Port 96 bonded ISL
1,3...47/49,51...85 (GPFS) / \
2,4...48/50,52...86 (HANA) / \___Port 95 bonded ISL
.----------------------,96____/ /
MGMT| G8296 Switch |95_________/
‘----------------------’
1,3...47/49,51...85 (GPFS)


Figure 15: G8296 RackSwitch front view

This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network
to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it
should be used consistently as the internal (private) network within this guide.

Switch Port VLAN IP Address Hostname Server NIC


g8296-1 MGMT 4095 <customer-mgmt IP1> <switch1> n/a
g8296-1 1 100 192.168.10.101 gpfsnode01 bond0
g8296-1 2 200 192.168.20.101 hananode01 bond1
g8296-1 3 100 192.168.10.102 gpfsnode02 bond0
g8296-1 4 200 192.168.20.102 hananode02 bond1
.. .. .. .. .. ..
. . . . . .
g8296-1 85 100 192.168.10.143 gpfsnode43 bond0
g8296-1 86 200 192.168.20.143 hananode43 bond1
g8296-2 MGMT 4095 <customer-mgmt IP2> <switch2> n/a
g8296-2 1 100 192.168.10.101 gpfsnode01 bond0
g8296-2 2 200 192.168.20.101 hananode01 bond1
g8296-2 3 100 192.168.10.102 gpfsnode02 bond0
g8296-2 4 200 192.168.20.102 hananode02 bond1
.. .. .. .. .. ..
. . . . . .
g8296-2 85 100 192.168.10.143 gpfsnode43 bond0
g8296-2 86 200 192.168.20.143 hananode43 bond1

Table 50: G8296 RackSwitch port assignments

Note
There is no public network attached to these switches.

6.3.3.6 Admin., SAP Access and Backup Networks – Option G8052 RackSwitch 1Gbit
The G8052 RackSwitch 1Gbit Ethernet switch is mainly used for the administrative networks. It can
also be used for SAP access, backup, or other client-specific networks. These networks are both
public and private and need to be carefully separated with VLANs. The landscape is as follows:


2,4,6,8,10,12,14,...48
52.----------------------,50______
51| G8052 Switch |49______\__ Inter-Switch 1Gb Link (ISL)
‘----------------------’ \_\_____Port 50 bonded ISL
1,3,5,7,9,11,13,....47 (IMM) / \
2,4,6,8,10,12,14,...48 / \___Port 49 bonded ISL
52.----------------------,50_____/ /
51| G8052 Switch |49__________/
‘----------------------’
1,3,5,7,9,11,13,....47 (IMM)

Figure 16: G8052 RackSwitch front view

This guide defines the Integrated Management Module (IMM) Network to be 192.168.30.0/24. If the
customer wants to use a different IP range for the Integrated Management Module (IMM) he may do so,
but it should be used consistently within this guide.

Switch Port VLAN IP Address Hostname Server NIC


g8052-1 52 4092 <customer-mgmt IP1> <switch1> n/a
g8052-1 1 300 192.168.30.101 cust-imm01.site.net sys-mgmt
g8052-1 3 300 192.168.30.102 cust-imm02.site.net sys-mgmt
.. .. .. .. ..
g8052-1 . . . . .
g8052-1 47 300 192.168.30.124 cust-imm24.site.net sys-mgmt
g8052-2 52 4092 <customer-mgmt IP2> <switch2> n/a
g8052-2 1 300 192.168.30.125 cust-imm25.site.net sys-mgmt
g8052-2 3 300 192.168.30.126 cust-imm26.site.net sys-mgmt
.. .. .. .. ..
g8052-2 . . . . .
g8052-2 47 300 192.168.30.148 cust-imm48.site.net sys-mgmt

Table 51: G8052 RackSwitch port assignments

6.3.3.7 Network Configurations in a Clustered Environment The networking in the clustered
environment is an essential part of the Lenovo Solution. Therefore it is important to ensure that the
network (switches, cables, etc.) has been set up before starting the installation of the servers. Below
is one example of how to connect the customer's network infrastructure with the clustered environment,
see figure 17.
Please read section 6.4: Setting up the Switches on page 51 for the RackSwitch setup.


[Figure: each cluster node connects its GPFS bond (bond0) and HANA bond (bond1) via 10GigE to a
redundant switch pair carrying the GPFS and HANA VLANs (joined by 40GbE inter-switch links), its
IMM to the 1GbE system management network, and its remaining ports to optional customer switches
(1 or 10GigE) for SAP Business Suite and other customer networks.]

Figure 17: Cluster Node Network Overview

6.4 Setting up the Switches

6.4.1 Basic Switch Configuration Setup

6.4.1.1 Configuring SSH/SCP Features on the Switch SSH and SCP features are disabled by
default. To change the SSH/SCP settings, use the following procedure. Connect to the switch via a serial
console and execute the following commands:
RS 8XXX> enable
RS 8XXX# configure terminal
RS 8XXX(config)# ssh enable
RS 8XXX(config)# ssh scp-enable
RS 8XXX(config)# interface ip 128
RS 8XXX(config-ip-if)# ip address <customer-mgmt IP> <customer-subnetmask>
RS 8XXX(config-ip-if)# enable
RS 8XXX(config-ip-if)# exit
Example: Configuring gateway
RS 8XXX(config)# ip gateway 4 address <customer-gateway>
RS 8XXX(config)# ip gateway 4 enable
Save changes to switch FLASH memory
RS 8XXX# copy running-config startup-config

6.4.1.2 Simple Network Management Protocol Version 3 SNMP version 3 (SNMPv3) is an
enhanced version of the Simple Network Management Protocol, approved by the Internet Engineering
Steering Group in March, 2002. SNMPv3 contains additional security and authentication features that
provide data origin authentication, data integrity checks, timeliness indicators and encryption to protect


against threats such as masquerade, modification of information, message stream modification and dis-
closure. SNMPv3 allows clients to query the MIBs securely. SNMPv3 configuration is managed using
the following command path menu:
RS 8XXX(config)# snmp-server ?
The default configuration of N/OS includes two SNMPv3 users. Both of the following users have
access to all the MIBs supported by the switch:
• User name is adminmd5 (password adminmd5). Authentication used is MD5
• User name is adminsha (password adminsha). Authentication used is SHA
You can try to connect to the switch using the following command.
# snmpwalk -v 3 -c Public -u adminmd5 -a md5 -A adminmd5 -x des -X adminmd5 -l authPriv
<hostname> sysDescr.0

6.4.2 Advanced Setup of the Switches

For every switch in the cluster, do the following:
It is mandatory to set up a Virtual Link Aggregation Group (VLAG) between the switches as well as a
Virtual Local Area Network (VLAN) for each private network. The following illustration shows the setup
for an M-sized cluster using G8264 RackSwitches.

[Figure: two G8264 switches (MGMT 1 IP 192.168.255.253/24, MGMT 2 IP 192.168.255.252/24, both
VLAN 4095) joined by an ISL on VLAN 4094 with Tier-ID 10 over ports 1 and 5; each node connects
bond0 (GPFS) to one odd port and bond1 (HANA) to one even port on each switch (e.g. node 1 on
ports 17/18, node 2 on ports 29/30).]

Figure 18: Cluster Switch Networking Example

Note
Please make sure that you pick the same port of each of the two Mellanox adapters for each
of the internal networks. This reduces complexity.

Note
The management IP addresses are examples and need to be customized according to the
customer’s network.


These instructions are for RackSwitch N/OS Version 8.2. Newer versions may have different commands.
Please check the RackSwitch Industry-Standard CLI Reference for the version of the CLI that correlates
to the switch N/OS version.

6.4.3 Disable Spanning Tree Protocol

RS 8XXX (config)# spanning-tree mode disable


RS 8XXX (config)# no spanning-tree stg-auto

Note
Spanning-Tree is disabled globally with "spanning-tree mode disable". The setting "no
spanning-tree stg-auto" prevents the switch from automatically creating STG groups when
defining VLANs.

6.4.4 Disable Default IP Address

RS 8XXX (config)# no system default-ip data

6.4.5 Enable L4Port Hash

RS 82XX (config)# portchannel thash l4port


This can be skipped if the command is not available on the switch.

6.4.6 Disable Routing

RS 8XXX (config)# no ip routing

6.4.7 Add Networking

For each subnetwork, you should create the following VLANs and Trunk VLAG configurations as de-
scribed.

6.4.8 VLAN configurations

6.4.8.1 IBM GPFS Storage Network


• Create IP interface for the GPFS storage network
# Define Switch 1,2
RS 8XXX (config)# vlan 100
RS 8XXX (config)# interface ip 10
# next line for the 1st switch:
RS 8XXX (config-ip-if)# ip address 192.168.10.249 255.255.255.0
# next line for the 2nd switch:
RS 8XXX (config-ip-if)# ip address 192.168.10.248 255.255.255.0
RS 8XXX (config-ip-if)# vlan 100
RS 8XXX (config-ip-if)# enable
RS 8XXX (config-ip-if)# exit
• Define LACP Trunk for each VLAN


# Define on Switches 1,2


# RS 8264 ports 9-63, odd (bottom) ports
# RS 8272 ports 1-47, odd (bottom) ports
# RS 8296 ports 1-78, odd ports
# RS 8124 ports 1-21, odd ports
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport access vlan 100
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 1000+<port>
RS 8XXX (config-if)# bpdu-guard
RS 8XXX (config-if)# spanning-tree portfast
RS 8XXX (config-if)# exit
Repeat this for every port that needs to be configured.
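Since the per-port commands differ only in the port number and LACP key, they can also be generated
with a small shell loop and then pasted into the switch console, in line with the copy-and-paste approach
used elsewhere in this guide. A minimal sketch, assuming a G8272 whose GPFS ports are the odd ports
1-47:
for port in $(seq 1 2 47); do
  echo "interface port ${port}"
  echo " switchport access vlan 100"
  echo " lacp mode active"
  echo " lacp key $((1000 + port))"
  echo " bpdu-guard"
  echo " spanning-tree portfast"
  echo " exit"
done
The same loop with even port numbers and VLAN 200 produces the commands for the SAP HANA
network ports described in the next section.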

6.4.8.2 SAP HANA Network


• Create IP interface for the HANA network
# Define Switch 1, 2
RS 8XXX (config)# vlan 200
RS 8XXX (config)# interface ip 20
# next line for the 1st switch:
RS 8XXX (config-ip-if)# ip address 192.168.20.249 255.255.255.0
# next line for the 2nd switch:
RS 8XXX (config-ip-if)# ip address 192.168.20.248 255.255.255.0
RS 8XXX (config-ip-if)# vlan 200
RS 8XXX (config-ip-if)# enable
RS 8XXX (config-ip-if)# exit
• Define LACP Trunk for each VLAN
# Define on Switches 1,2
# RS 8264 ports 10-64, even (top) ports
# RS 8272 ports 2-48, even (top) ports
# RS 8296 ports 2-88, even ports
# RS 8124 ports 2-22, even ports
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport access vlan 200
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 1000+<port>
RS 8XXX (config-if)# bpdu-guard
RS 8XXX (config-if)# spanning-tree portfast
RS 8XXX (config-if)# exit
Repeat this for every port that needs to be configured.

6.4.8.3 Integrated Management Module (IMM) Network


• Create IP interface for the IMM network
# Define Switch 1,2
RS 8052 (config)# vlan 300
RS 8052 (config)# interface ip 30
# next line for the 1st switch:
RS 8052 (config-ip-if)# ip address 192.168.30.249 255.255.255.0


# next line for the 2nd switch:


RS 8052 (config-ip-if)# ip address 192.168.30.248 255.255.255.0
RS 8052 (config-ip-if)# vlan 300
RS 8052 (config-ip-if)# enable
RS 8052 (config-ip-if)# exit
• Set access VLAN for switchports
# Define Switch 1,2
# RS 8052 ports 1-47
RS 8052 (config)# interface port <port>
RS 8052 (config-if)# switchport access vlan 300
RS 8052 (config-if)# bpdu-guard
RS 8052 (config-if)# exit
# RS 8052 port 48 as management port
RS 8052 (config)# interface port 48
RS 8052 (config-if)# description MGMTPort
RS 8052 (config-if)# switchport access vlan 4092
RS 8052 (config-if)# bpdu-guard
RS 8052 (config-if)# exit

6.4.8.4 Enabling VLAG Setup


• Create trunk (dynamic or static) used as ISL
# one of the next five lines is valid according to the switch type
RS 8264 (config)# interface port 1,5
RS 8272 (config)# interface port 53,54
RS 8296 (config)# interface port 95,96
RS 8124 (config)# interface port 23,24
RS 8052 (config)# interface port 49,50
RS 8XXX (config-if)# switchport mode trunk
# next line defines the VLANs needed on the ISL on the HANA/GPFS-switches
RS 82XX (config-if)# switchport trunk allowed vlan 4094,[HANA VLAN(S),GPFS VLAN(S)]
# next line defines the VLANs needed for the ISL on the IMM-switches
RS 8052 (config-if)# switchport trunk allowed vlan 4094,[IMM VLAN(S)]
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 5094
RS 8XXX (config-if)# enable
RS 8XXX (config-if)# exit
RS 8XXX (config)# vlag enable
• Define VLAG peer relationship for each VLAN
Note
If you have more than one switch-pair make sure to use a different VLAG tier-id for
each switch-pair.
# Define Switch 1
RS 8XXX (config)# vlag tier-id 10
RS 8XXX (config)# vlag hlthchk peer-ip <customer-mgmt IP2>
RS 8XXX (config)# vlag isl adminkey 5094
# For each <VLAN port> in <VLAN ports>
RS 8XXX (config)# vlag adminkey 1000+<VLAN port> enable

# Define Switch 2


RS 8XXX (config)# vlag tier-id 10


RS 8XXX (config)# vlag hlthchk peer-ip <customer-mgmt IP>
RS 8XXX (config)# vlag isl adminkey 5094
# For each <VLAN port> in <VLAN ports>
RS 8XXX (config)# vlag adminkey 1000+<VLAN port> enable
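After both switches have been configured, the VLAG and port channel state should be verified on each
switch; a minimal sketch, assuming the N/OS 8.2 ISCLI show commands:
RS 8XXX# show vlag information
RS 8XXX# show portchannel information
The VLAG instances should be reported as up before continuing.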

6.4.9 Save changes to switch FLASH memory

RS 8XXX# copy running-config startup-config

6.4.10 Inter-Site Portchannel Configuration

In a stretched HA or DR scenario an inter-site port channel needs to be configured. The inter-site port
channel configuration depends on the customer premise equipment and infrastructure. This chapter
describes various options for implementing this configuration. The following examples are based
on the G8264 port layout. For other supported RackSwitch types the following ports should be used:
• G8124 solution: depending on the connection type, the switch ports 22, or 21-22 respectively
• G8272 solution: depending on the connection type, the switch ports 48, or 47-48 respectively
• G8296 solution: depending on the connection type, the switch ports 86, or 86-87 respectively
If the port channel configuration is needed for a stretched HA setup, the HANA and the GPFS VLANs
have to be enabled on the trunk interfaces. If the port channel trunk is for a DR setup, only GPFS
VLANs have to be enabled on the trunk interfaces.

6.4.10.1 Static Trunk over one Inter-Site Link If there is just a single site interconnect avail-
able - as shown in the drawing below - the following configuration has to be applied to the switches
to establish a static inter-site connection.
Single Inter-Site Link
.------------------------------------------------.
| |
HANA 18,20,22,24,26,28...64 HANA 18,20,22,24,26,28...64
.----------------------,5_____ .----------------------,5_____
MGMT| G8264 Switch 1a |1_____\ MGMT | G8264 Switch 2a |1_____\
‘----------------------’ \ ‘----------------------’ \
GPFS 17,19,21,23,25,27...63 / ISL GPFS 17,19,21,23,25,27...63 / ISL
HANA 18,20,22,24,26,28...64 / HANA 18,20,22,24,26,28...64 /
.----------------------,5____/ .----------------------,5____/
MGMT| G8264 Switch 1b |1___/ MGMT | G8264 Switch 2b |1___/
‘----------------------’ ‘----------------------’
GPFS 17,19,21,23,25,27...63 GPFS 17,19,21,23,25,27...63

• Switchport Portchannel Configuration


# Define Switch 1a,2a
# RS 8264 port 64
# RS 8272 port 48
# RS 8296 port 86
# RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk


# The next 2 configuration statements are valid in case of a stretched HA solution. In a


# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of DR solution. Only GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit

6.4.10.2 Portchannel over two Inter-Site Links If there are two site-interconnect fibres - as
shown in the drawing below - each cable should be connected to a different switch on each site, instead
of connecting both cables to just one switch pair. The following configuration has to be applied to the
switches to establish one logical static inter-site connection over 2 cables.

Redundant Inter-Site Link (one on each switch)


.------------------------------------------------.
| |
HANA 18,20,22,24,26,28...64 HANA 18,20,22,24,26,28...64
HANA 18,20,22,24,26,28...64 HANA 18,20,22,24,26,28...64
.----------------------,5_____ .----------------------,5_____
MGMT| G8264 Switch 1a |1_____\ MGMT | G8264 Switch 2a |1_____\
‘----------------------’ \ ‘----------------------’ \
GPFS 17,19,21,23,25,27...63 / ISL GPFS 17,19,21,23,25,27...63 / ISL
HANA 18,20,22,24,26,28...64 / HANA 18,20,22,24,26,28...64 /
.----------------------,5____/ .----------------------,5____/
MGMT| G8264 Switch 1b |1___/ MGMT | G8264 Switch 2b |1___/
‘----------------------’ ‘----------------------’
GPFS 17,19,21,23,25,27...63(64) GPFS 17,19,21,23,25,27...63(64)
| |
‘-----------------------------------------------’
Redundant Inter-Site Link (one on each switch)
• Switchport Portchannel Configuration
# Define Switch 1a,2a,1b,2b
# RS 8264 port 64
# RS 8272 port 48
# RS 8296 port 86
# RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of DR solution. Only GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable
RS 8XXX (config)# vlag portchannel 63 enable


6.4.10.3 Portchannel over four Inter-Site Links If there are four site-interconnect fibres - as
shown in the drawing below - two of them should be connected to each switch, on ports 63 and 64.
The following configuration has to be applied to the switches to establish one logical static
inter-site connection over 4 cables.
Portchannel over four inter-site links (two on each switch)
.------------------------------------------------.
| .--------------------------------------------+---.
| | | |
HANA 18,20,22,24,26,28...64(+63) HANA 18,20,22,24,26,28...64(+63)
.----------------------,5_____ .----------------------,5_____
MGMT| G8264 Switch 1a |1_____\ MGMT | G8264 Switch 2a |1_____\
‘----------------------’ \ ‘----------------------’ \
GPFS 17,19,21,23,25,27...63 / ISL GPFS 17,19,21,23,25,27...63 / ISL
HANA 18,20,22,24,26,28...64 / HANA 18,20,22,24,26,28...64 /
.----------------------,5____/ .----------------------,5____/
MGMT| G8264 Switch 1b |1___/ MGMT | G8264 Switch 2b |1___/
‘----------------------’ ‘----------------------’
GPFS 17,19,21,23,25,27...63(+64) GPFS 17,19,21,23,25,27...63(+64)
| | | |
| ‘--------------------------------------------+---’
‘------------------------------------------------’
Portchannel over four inter-site links (two on each switch)

• Switchport Portchannel Configuration


# Define Switch 1a,1b,2a,2b
# RS 8264 port 63,64
# RS 8272 port 47,48
# RS 8296 port 85,86
# RS 8124 port 21,22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of DR solution. Only GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable
RS 8XXX (config)# vlag portchannel 63 enable

6.4.11 Save and Restore Switch Configuration

6.4.11.1 Save Switch Configuration Locally Execute:


# scp admin@switch.example.com:getcfg .

6.4.11.2 Restore Switch Configuration Execute:


# scp getcfg admin@switch.example.com:putcfg

6.4.12 Generation of Switch Configurations

The script SwitchAutoConfig.sh can be used to create a basic configuration for the switch models
G8124 and G8264. We recommend copying and pasting the created configuration into the serial console of
the switches.
SwitchAutoConfig.sh can be found in /opt/lenovo/saphana/bin/.
As a prerequisite for SwitchAutoConfig.sh, the switches must have the basic configuration applied as
described in chapter 6.4.1: Basic Switch Configuration Setup on page 51, and must be reachable via SSH
over the network.

6.4.12.1 Script Usage


./SwitchAutoConfig.sh -h

usage: ./SwitchAutoConfig.sh [-c type] [-d type] styletypes=[G8264|G8052|G8124]

-c just creates switch configurations for the chosen switch type.


-d creates and also deploys the switch configurations for the chosen switch type

Example: SwitchAutoConfig.sh -d G8264

Note
The current version of the script does not support the automated creation of G8272 and G8296
RackSwitch configurations. To obtain such configuration files, generate the configuration for
the G8264 RackSwitch (SwitchAutoConfig.sh -c G8264) and adapt the port numbers according
to table 49: G8272 RackSwitch port assignments on page 48 or table 50: G8296 RackSwitch
port assignments on page 49. Therefore it is also not possible to use the -d option for these
switch types.

6.4.12.2 Examples The following command will create the configurations for a G8264 switch pair.
You will be asked to enter configuration details such as IP addresses. After the configuration part you
have to enter the SSH password of the switches, twice per switch: the first time, the script checks the
firmware version of the switches; the second time, the password is needed for the deployment process.
./SwitchAutoConfig.sh -c G8264
The following command will create and deploy the configurations for a G8264 switch pair.
./SwitchAutoConfig.sh -d G8264

Attention
Please be very careful if you create the configuration for a switch connected to the customer
network. In this case make sure that the switch is disconnected during the setup. Only bring
up the connection to the customer network once the configuration is complete and matches
the customer requirements.
After the configuration deployment the switches should be checked manually. Afterwards
the configuration can be saved as described in chapter 6.4.9: Save changes to switch FLASH
memory on page 56.


6.4.12.3 Input Values All the default values are based on the Networking Guide standards, but
can be changed if needed. Most input values, such as hostname or IP address, need to be provided by the
customer. A portchannel is only needed in case of a DR or HA cluster. If a portchannel should be configured,
the script will ask for the type of port channel to configure; there are two port channel
options, HA or DR. The GPFS, HANA, xCat and IMM VLAN IPs are IPs that reside within those
VLANs; their purpose is to be able to ping server addresses within these VLANs from the switch. For
the G8052 the script will ask for a MGMT port, because the G8052 has no dedicated management port.

6.5 Setting up Networking on the Server Nodes

6.5.1 Jumbo Frames

It is possible and allowed to activate so-called jumbo frames for the HANA and IBM GPFS
networks. Jumbo frames are Ethernet frames with a Maximum Transmission Unit (MTU) of up to 9000
bytes; the standard MTU is 1500.
The advantage of jumbo frames is less overhead for headers and checksum computation, which can lead
to better network performance on the HANA and IBM GPFS networks.
Attention
Jumbo frames can only be used if all network components (for example network adapters
and switches) that have to process these jumbo frames support them.
If activated erroneously, jumbo frames cause the loss of network connectivity.
The switches G8264, G8272, G8296 and G8124E are certified for use with jumbo frames in the Lenovo
Solution appliance. In a standard cluster setup jumbo frames can be activated. In DR40 or
High Availability setups the HANA and IBM GPFS networks may communicate via non-Lenovo customer
switches that cannot handle jumbo frames; it is therefore recommended not to use jumbo frames in these
setups.
To change this behaviour, you have to change the MTU size. This can be done as follows:
• SLES41 : in the YaST module for networking: General tab in the configuration of the network
device/bond
• RHEL: changing the MTU size in the file /etc/sysconfig/network-scripts/ifcfg-* of the
interface/bond adding the line
MTU=9000
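A minimal sketch for activating and verifying the change on RHEL, assuming bond1 carries the HANA
network and 192.168.20.102 (hananode02) is a peer that also has jumbo frames enabled; the second
command should report mtu 9000, and the ping sends unfragmented 9000-byte frames (8972 bytes of
payload plus IP/ICMP headers):
# service network restart
# ip link show bond1
# ping -M do -s 8972 192.168.20.102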

Warning
Jumbo frames are activated during the installation phase for bond0 and bond1. You may have
to deactivate the usage of jumbo frames in certain scenarios.

6.5.2 Using the wicked Framework for Network Configuration in SLES12

In SLES 12, DHCP startup or static IP address configuration is only done if a link/carrier is available
on the respective port, i.e. the cable is plugged in. This might lead to problems in some cases.
You can avoid trouble by switching off this feature per interface in /etc/sysconfig/network/ifcfg-*
with the setting:
LINK_REQUIRED='no'
40 Disaster Recovery (previously SAP Disaster Tolerance)
41 SUSE Linux Enterprise Server


In case of a bonding device, the bond master needs the setting:


LINK_REQUIRED='no'

and the bond slave devices (e.g. eth0, eth2) need the setting:
LINK_RETRY_WAIT=5

It is not possible to set this flag using YaST. For the standard ports this flag is set by the installer
during installation. For this setting to work, the network nanny daemon needs to be activated; this is
done with the kernel boot parameter

which is set automatically by the installer. One can verify that setting with:
# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.12.28-4-default root=UUID=054df255-d0ad-4d5d-aff6-←-
,→f56af8b130d2 ro resume=/dev/sda3 splash=silent transparent_hugepage=never ←-
,→intel_idle.max_cstate=0 processor.max_cstate=0 numa_balancing=disabled nohz=←-
,→off nanny=1 instmode=cd showopts

It is set in the file /etc/default/grub:


GRUB_CMDLINE_LINUX_DEFAULT=" resume=/dev/sda3 splash=silent transparent_hugepage=←-
,→never intel_idle.max_cstate=0 processor.max_cstate=0 numa_balancing=disabled ←-
,→nohz=off nanny=1 instmode=cd showopts"

To activate a changed setting:


# grub2-install
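Note that grub2-install reinstalls the boot loader itself; depending on the SLES 12 setup, the GRUB2
configuration usually also has to be regenerated so that changes in /etc/default/grub become active. A
minimal sketch, assuming the default grub.cfg location, followed by a reboot and the /proc/cmdline check
shown above:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot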

6.5.3 Bonding

6.5.3.1 SLES With SLES, bonding is configured easily using the YaST module for networking. The
bonds for the GPFS and HANA networks should have been created automatically during installation. To add
additional bonds, use Add with ’device type’ ’Bond’. Under Address provide the IP address and network
mask, under Bond Slaves choose the slaves, and set the ’Bond Driver Options’. For LACP enter:
mode=4 xmit_hash_policy=layer3+4 miimon=100

(mode=4 is the same as mode=802.3ad) and for active-backup bonding choose:


mode=active-backup miimon=100

For more information check /usr/src/linux/Documentation/networking/bonding.txt on your server.
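For reference, YaST stores the bond definition in /etc/sysconfig/network/ifcfg-bond<N>. A minimal
sketch of such a file for an LACP bond, assuming bond1 is the HANA bond with the hypothetical slave
interfaces eth1 and eth3 and the example address from this guide:
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='192.168.20.101/24'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=4 xmit_hash_policy=layer3+4 miimon=100'
BONDING_SLAVE0='eth1'
BONDING_SLAVE1='eth3'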

Warning
LACP like other bonding modes require configuring the switch to aggregate the links (Please
check 6.4.2: Advanced Setup of the Switches on page 52). The active-backup, balance-tlb and
balance-alb modes do not require any specific configuration of the switch.


6.5.3.2 RHEL In RHEL, you have to do this at the file level:


First, you have to create a file called ifcfg-bondN in /etc/sysconfig/network-scripts for bondN to
define the bond itself:
DEVICE=bond[N]
IPADDR=[IP address]
NETMASK=[netmask]
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BONDING_OPTS="mode=802.3ad xmit_hash_policy=layer3+4 miimon=100"
IPV6INIT=no

In RHEL interface-specific parameters for the bonding kernel module must be specified as a space-
separated list in the BONDING_OPTS="bonding parameters" directive in the ifcfg-bondN interface
file. Do not specify options specific to a bond in /etc/modprobe.d/bonding.conf, or in the deprecated
/etc/modprobe.conf file.
The mode is explained in the part Bonding Module Directives42 of the RHEL online documentation.
The BONDING_OPTS in the example above is using LACP.
Warning
LACP like other bonding modes require configuring the switch to aggregate the links (Please
check 6.4.2: Advanced Setup of the Switches on page 52). The active-backup, balance-tlb and
balance-alb modes do not require any specific configuration of the switch.
For active-passive bonding, you can use for example:
BONDING_OPTS="mode=active-backup miimon=100"

In the next step you have to add interfaces to the bond. For this, you have to edit the file ifcfg-ethx in
/etc/sysconfig/network-scripts in the following way:
DEVICE=ethx
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
NM_CONTROLLED=no
IPV6INIT=no

The bond can be activated using:


# service network restart

6.5.3.3 Checking the configuration You can check the bond using:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation


42 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-Specific_Kernel_Module_Capabilities.html#s3-modules-bonding-directives


Transmit Hash Policy: layer3+4 (1)


MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 33
Partner Key: 1032
Partner Mac Address: 08:17:f4:c3:dd:48

Slave Interface: eth0


MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:52:14:81:07:c1
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth2


MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:52:14:81:07:a1
Aggregator ID: 1
Slave queue ID: 0

6.5.4 VLAN tagging

It might be useful (especially in a 10Gb scenario) to handle more than one VLAN per interface.
In that case the VLAN information has to be exchanged between switch and server (VLAN tagging).

Warning
The interfaces on the switch have to be configured accordingly.
You can use this, for example, to separate traffic for PROD and/or backup subnets with tagged VLAN frames.

6.5.4.1 SLES In SLES this is configured easily using the YaST module for networking.
First you have to create a bond without an IP address using the option ’No Link and IP Setup (Bonding
Slaves)’. Assign the bond slaves as usual.
Then create a VLAN for this bond with IP and FQDN information. Use Add with ’device type’ VLAN. In
’Configuration Name’ enter the VLAN ID. Click ’Next’.
In the next menu adapt ’Real Interface for VLAN’ and ’VLAN ID’ to the bond and VLAN ID you want
to use. Enter ’IP Address’, ’Subnet Mask’ and ’Hostname’. Click ’Next’.
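For reference, the YaST steps above result in a VLAN configuration file under /etc/sysconfig/network/,
e.g. ifcfg-vlan500 for VLAN 500 on bond2. A minimal sketch of such a file, with placeholder address
values:
BOOTPROTO='static'
STARTMODE='auto'
ETHERDEVICE='bond2'
VLAN_ID='500'
IPADDR='[IP address]/[prefix]'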


Figure 19: VLAN Configuration using Yast

You may add additional VLANs for the same bond or other bonds – with different IPs and hostnames.
You can check the result using ifconfig, ip a or using:
# cat /proc/net/vlan/vlan500
vlan500 VID: 500 REORDER_HDR: 1 dev->priv_flags: 1
total frames received 86
total bytes received 3974
Broadcast/Multicast Rcvd 1

total frames transmitted 7


total bytes transmitted 426
Device: bond2
INGRESS priority mappings: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0
EGRESS priority mappings:

6.5.4.2 RHEL In RHEL, you have to do this at the file level:


First, you have to create a file for the interface in /etc/sysconfig/network-scripts e.g. for bond2 to
define the bond itself without IP address and Network Mask:
DEVICE=bond2
USERCTL=no


BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BONDING_OPTS="mode=active-backup miimon=100"

Please add the Bond slaves as described above in the Bonding chapter.
Now create a subinterface for e.g. VLAN 500 /etc/sysconfig/network-scripts/ifcfg-bond2.500:
DEVICE=bond2.500
IPADDR=[IP address]
NETMASK=[netmask]
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
VLAN=yes
IPV6INIT=no

The VLAN can be activated using:


# service network restart

You can check the result using ifconfig, ip a or using cat /proc/net/vlan/bond2.500.
You may add additional VLANs for the same bond or other bonds – with different IPs and hostnames.


7 Guided Install of the Lenovo Solution


For information about this topic please refer to our new Installation Guides.


8 Disaster Recovery
The scope of this section is to provide a guide for the Lenovo Disaster Recovery (previously SAP Disaster
Tolerance) solution for SAP HANA. The solution is implemented in two physically independent locations,
with one location used as the production site and the second serving as the backup or disaster site.
An optional third location is possible for the tie-breaking (quorum) feature of GPFS.
The goal of DR is to enable a secondary data center to take over production services with exactly the
same set of data as stored in the primary site data center. Synchronous data replication between the
primary and secondary site ensures zero data loss (RPO=0). This allows the protection of a data center
against events like power outage, fire, flood or hurricane. The time required to recover the services (RTO)
is different for each installation depending upon the exact client implementation.

8.1 Architecture

This section briefly explains the architecture of the Lenovo DR solution for SAP HANA and provides
examples of how it can be installed in a standard two-tier or three-tier data center environment.

[Figure: Site A and Site B each hold the file system sapmntdata and replicate it synchronously to each
other; an optional quorum node is located at Site C.]

Figure 20: DR Architectural Overview

8.1.1 Terminology

The terms site A, primary site, and active site are used interchangeably in this document to refer to the
location where the productive SAP HANA HA is initially set up and used.
Similarly, site B, backup site, and passive site all refer to the second location where the productive SAP
HANA HA system is copied to in the case of a disaster.
After a failover the naming of these two sites may be swapped, depending on whether the customer wants
to switch back as soon as possible or keep using the former backup site as the primary site.
Site C will refer to the quorum or tiebreaker site.
SAP also uses the terms Disaster Recovery (DR) and Disaster Tolerant (DT) interchangeably. We will
try to be consistent and use DR in this document.


8.1.2 Architectural overview

The Lenovo DR solution for SAP HANA can be thought of as two standard Lenovo HA clusters in two
different sites combined into one large cluster. Each site can be planned as a standard Lenovo HA
cluster with the same hardware requirements as the standard solution. Currently, the only architectural
requirement is that both sites have the same number of server nodes and each site has the same number
of network switches as the existing Lenovo HA cluster offering.
The idea of the Lenovo DR solution for SAP HANA is to have one stretched IBM GPFS cluster spanning
both sites and providing one file system for SAP HANA. There are two separate SAP HANA clusters, one
per site, that can access data in this single shared file system. Synchronous data replication built into
the file system ensures that at any given point in time the exact same data exists in both data centers.
Figure 21: DR Data Distribution in a Four Node Cluster on page 68 shows the high-level architecture.
Warning
As of December 2012, SAP has published an end-to-end value of 320µs latency between the
synchronous sites of a DR cluster. It is known by both SAP and Lenovo that this number
by itself is not enough to describe whether the SAP HANA database can recover from a disaster
or not.
Latency is a term that can be split into many different categories, such as network latency
or application latency, each of which has its own value necessary for a proper DR setup.
It is also dependent on whether you use Online Analytical Processing (OLAP) or Online
Transaction Processing (OLTP) workloads.
Currently SAP is considering this value on a case-by-case basis, and it is important that you
discuss these values with your customer and the SAP consultant on site.
The Lenovo DR solution for SAP HANA works with a total number of three data copies. The first copy
is kept local to the writing node. The second copy is stored on any other node except the writing node
and the third copy is always stored on any node on the remote site. Depending on the file size and actual
disk space usage of a certain node, GPFS tends to either cluster blocks on a node or stripe them across
multiple nodes. The same applies to distribution over disks within a node.

Figure 21: DR Data Distribution in a Four Node Cluster (four nodes per site: node1-node4 at Site A in
failure groups 1,0,x and 2,0,x, node5-node8 at Site B in failure groups 1,1,x and 2,1,x; the first replica is
written to the local node, the second replica to another node at the same site, and the third replica and
the metadata are replicated synchronously to the remote site)


The details of the network setup are not strictly defined. It is up to the project team to develop a solution
that is suitable to the customer’s existing network infrastructure. This must be discussed thoroughly in
advance together with the customer networking team.
The basic requirement is to have at least two sites; a third network site is needed if a so-called tiebreaker
node will be part of the Disaster Tolerance architecture.
Each site will use a standard HA setup with its own dedicated GPFS and SAP HANA network. This
can be provided by using the standard Lenovo RackSwitch G8264 10 Gbps Ethernet switches, which are
part of the standard SAP HANA HA offering of Lenovo. The standard network requirements of a HA
solution regarding the customer's uplink connectivity also apply to DR.
For the tiebreaker node at site C, there are no special network requirements as it only consists of
a single server.
For the connectivity between the two main sites, at least one dedicated optical fibre connection end-to-end
between both sites is recommended. A routed or non-dedicated connection may be used, but no guarantees
about performance or operation can be made. Using redundant optical fibres end-to-end may improve
performance and reliability. The project team is responsible for working out a solution with respect to the
customer's infrastructure and requirements. A dedicated Ethernet network needs to be provided for the
GPFS network. For the configuration of the inter-site portchannel see
Section 6.4.10: Inter-Site Portchannel Configuration on page 56.

Figure 22: Logical DR Network Setup (the SAP HANA networks remain local to each site, while the
GPFS network spans both sites and optionally reaches the quorum node at Site C)
Figure 23 on page 70 shows a scenario with four nodes on each site. Only the HANA internal network
and the GPFS network are shown; the uplinks connecting the HANA cluster to the client network are
left out for clarity.
In a solution with a quorum site, the tiebreaker node must be reachable from within the internal GPFS
network, each node must be able to reach the tiebreaker node and vice versa. There are no particu-
lar requirements regarding bandwidth or latency for this connection. It is acceptable to use a routed
connection through the customer’s internal network as long as it is reliable.


Figure 23: DR Networking View (with no client uplinks shown) — each site uses two RackSwitch G8264
switches carrying the HANA-internal and GPFS networks, joined by a 40 Gbit ISL within each site and
10 Gbit links between the sites; each node connects with four ports (2x GPFS, 2x HANA internal)

8.1.3 Three site/Tiebreaker node architecture

If the customer decides to use a tiebreaker node in a third site, an additional server with an appropriate
GPFS license is required. Although the use of any server is possible, we recommend using the Side-Car
Quorum Node x3550 M3/M4 defined in section 10.1.2: Prepare quorum node on page 93. This definition
includes the necessary licenses and services required for the tiebreaker node. This node is optional but
recommended for increased reliability and simplicity in the case of a disaster.
The solution has been tested in setups with and without this additional node. The rationale for this node
is the split-brain scenario where the connection between the two main sites is lost. The tiebreaker node
helps decide which site is the active site and thus prevents the primary site from going down for
data integrity reasons. Additionally, this server eases some operational procedures by reducing both the
time needed for recovery and the likelihood of operating errors.
This document will describe the use of the tiebreaker node and explain the deviations when it is not
necessary.

8.2 Mixing eX5/X6 Server in a DR Cluster

Please read our ’Special Topic Guide for System x eX5/X6 Servers: Mixed Cluster’. Information given
there takes precedence over the instructions below.

8.3 Hardware Setup

This section describes how to physically install the System x machines and how to prepare UEFI for HANA.
It also provides information about how the network has to be set up.

8.3.1 Site A and B

The hardware setup of the nodes at each site has to be performed as described in our Installation Guides.
The following list summarises these steps.
• Ensure certified hardware is available and connected to power


• Verify firmware levels. They must be identical on all nodes


• Modify / Check UEFI settings. They must be identical on all nodes
• Configure storage (RAID setup)

8.3.2 Tiebreaker Site C (optional)

It is recommended to setup the tiebreaker node according to the description in section 10.1.2: Prepare
quorum node on page 93.
The tiebreaker node must have a small partition (50 MB is sufficient) to hold a replica of the GPFS file
system descriptors. It will not contain any data or metadata information. The node must be able to
reach all other nodes at both site A and site B of the GPFS cluster. The partition can reside on a logical
volume (LVM) if desired. However, GPFS must be able to recognize the partition, so, when using LVM,
the name /dev/dm-X must be used instead of the logical volume name. Performance is not critical for
this partition.
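If the descriptor partition is placed on an LVM logical volume, the underlying /dev/dm-X name can be
determined, for example, as follows (a sketch only; the volume group and logical volume names are
assumptions and must be adapted to the actual setup):
# readlink -f /dev/vgquorum/lvgpfsdesc     # resolves the LV symlink to its /dev/dm-X device
# lsblk -o NAME,KNAME,SIZE,TYPE            # alternatively, lists the kernel device names of all block devices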

8.3.3 Acquire TCP/IP addresses and host names

Refer to section 6.2: Customer Query on page 41 which contains a template that can be used to gather
all the required networking parameters. Ideally, this is done before the installation starts at the customer
location.

8.3.3.1 Tiebreaker node The following parameters must be available for the installation of the
cluster:

Parameter Value
Hostname
IP address for Hostname
IP address for GPFS Network

Table 52: Hostname Settings for DR

In case of a new installation these additional parameters are required. See table 53 on page 71

Parameter Value
Netmask
Default gateway
Additional routes
DNS server
NTP server

Table 53: Extra Network Settings for DR

The tiebreaker node must be able to reach all cluster nodes on both sites with the IP addresses and
hostnames used for GPFS (gpfsnodeXX), which the GPFS cluster uses to communicate internally.
Conversely, the cluster nodes must reach the tiebreaker node with the same host name and IP address.
This can be achieved, for example, via routing, tunneling, a VPN connection, or through a dedicated
physical network.


8.3.4 Network switch setup (GPFS and SAP HANA network)

The setup of the switches used for the GPFS and SAP HANA network is described in section 6.3.1:
Clustered Installations on page 43. For the link between the switches on both sites refer to the next
sections.

8.3.5 Link between site A and B

The GPFS network will be stretched over sites A and B, while the SAP HANA network must not be. This
means that the GPFS network on both sites will be one subnet and each node can reach all other nodes
on both sites, whereas the SAP HANA networks on site A and B are isolated from each other.
The GPFS network on both sites should be connected with at least a dedicated 10GBit connection. A
routed network is not recommended as it may have severe impact on the synchronous replication of the
data.
The SAP HANA network is separated on both sites. This is due to SAP HANA being operated in a cold
standby mode. For this reason, both sites will use the same hostnames and IP addresses for SAP HANA.
This requires a strict isolation of these two networks.

8.3.6 Network integration into customer infrastructure

The network connections in the customer network for SAP HANA access, management, backup and other
purposes depend very much on the customer's network and requirements. General guidance can be
found in our Installation Guides.

8.3.7 Setup network connection to tiebreaker node at site C (optional)

The tiebreaker node at site C needs to be integrated as well into the GPFS cluster. Every node in the
cluster must be able to contact the tiebreaker node and vice versa.
This depends on the configuration of the tiebreaker node (one or more network interfaces), the subnet
used for GPFS traffic (private or public) and other parameters. It is up to the project team to come to
an agreed solution with the customer.
Possible setups include:
• an address in the same L2 network segment,
• an address that can be reached via routing, e.g. via the default gateway,
• static host routes when the private address range can only be reached through another IP address
(see the example below for this setup), or
• an address that can be reached through VPN or NAT.
The following is an example for a setup with a GPFS subnet of 192.168.10.x and a tiebreaker node with
one network adapter and a public IP address in a 10.x.x.x range:
1. On the tiebreaker node add the GPFS address as an alias to the NIC attached to the public network
e.g.
# ifconfig eth0:1 192.168.10.199 netmask 255.255.255.0

To make this permanent, add an entry like this to the respective ifcfg-ethX file in /etc/sysconfig/network:
IPADDR_1='192.168.10.199/24'


2. Add host routes on every node in the GPFS cluster to this IP alias.
# route add -host 192.168.10.199 gw <tiebreaker external ip>

3. Add host routes on the tiebreaker node for every node in the cluster.
# route add -host 192.168.10.101 gw <external IP node1>
# route add -host 192.168.10.102 gw <external IP node2>
...
# route add -host 192.168.10.10X gw <external IP nodeN>

4. Verify that the newly created alias is reachable throughout the cluster and all nodes can be pinged
from the tiebreaker node via the internal GPFS network addresses.
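A quick way to perform this check from the tiebreaker node is a small ping loop (a sketch, assuming
eight cluster nodes with the GPFS addresses used in this example):
# for ip in 192.168.10.10{1..8} ; do ping -c1 -W2 $ip >/dev/null && echo "$ip ok" || echo "$ip FAILED" ; done
Run the equivalent loop on the cluster nodes against the tiebreaker alias (192.168.10.199 in this example)
to confirm the reverse direction.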

8.4 Software Setup

Note
The base installation changed with the advent of the new text based installer which also allows
the installation on Red Hat Enterprise Linux. This replaces the manual installation described
here in earlier releases.

Note
Starting with appliance version 1.9.96-13 the mount point for the GPFS file system sapmntdata
is user configurable during installation. SAP HANA will be also installed into this path.
Lenovo recommends /hana for new installations.
The following commands and code snippets use /sapmnt. For any other path please replace
/sapmnt with the chosen path.
Install all standard DR servers as described in our Installation Guides. In phase 3 choose the role Cluster
Node (Worker) for all servers. Please note that in the interim check defined in our Installation Guides
each site is expected to see only the site-local nodes in the HANA network test.
For the optional quorum node, please follow the instructions given in section 10.1.2: Prepare quorum
node on page 93 and following to install the base operating system and software.

8.4.1 GPFS configuration prerequisites

Create /etc/hosts entries for GPFS. To ensure communication over the correct network interfaces,
define the host entries manually on each node (including the tiebreaker node if available) for the GPFS
and SAP HANA networks. Ensure that the entry for the local machine is always the first entry in the
list. This is required for the installer scripts. Do not copy this file from one node to the other as it will
break other installation scripts.
Each node in the cluster (except the tiebreaker node) has the following two names associated with it
192.168.10.1XX gpfsnodeXX
192.168.20.1XX hananodeXX

The tiebreaker node only has a gpfsnode name as it is used solely for GPFS communication
192.168.10.1XX gpfsnodeXX

The GPFS network spans both sites, which means in an example with four nodes per site you have
gpfsnode01 up to gpfsnode08 (gpfsnode01-04 at site A, gpfsnode05-08 at site B).


The SAP HANA network is restricted to only one site, which in turn means you should use each hanan-
odeXX entry twice (once per site). This effectively couples any active SAP HANA node to a backup node
on the second site. In the example with four nodes on each site you have hananode01 to hananode04 at
site A and hananode01 to hananode04 at site B.

8.4.1.1 Example: Two sites with four nodes each


...
# Second node on first site:
192.168.10.102 gpfsnode02
192.168.20.102 hananode02
192.168.10.101 gpfsnode01
192.168.20.101 hananode01
192.168.10.103 gpfsnode03
192.168.20.103 hananode03
192.168.10.104 gpfsnode04
192.168.20.104 hananode04
...
# Second node on second site (physically the sixth node)
192.168.10.106 gpfsnode06
192.168.20.102 hananode02
192.168.10.105 gpfsnode05
192.168.20.101 hananode01
192.168.10.107 gpfsnode07
192.168.20.103 hananode03
192.168.10.108 gpfsnode08
192.168.20.104 hananode04
...

The optional tiebreaker node only has GPFS addresses. This has two consequences: the tiebreaker
node only has gpfsnodeXX entries in the /etc/hosts file for all nodes; and, all other nodes have no
hananodeXX entry for this special node. In our example above, a tiebreaker node would get allocated
gpfsnode99.
After editing the /etc/hosts entries it is a good idea to verify network connectivity. To do so, execute
the following command to list all nodes of the DR clusters attached to the GPFS network:
# nmap -sP 192.168.10.0/24

and execute this command at each site to confirm the SAP HANA network:
# nmap -sP 192.168.20.0/24

Only the nodes of the local site should be listed by the second command. Verify that you got the correct
machines by comparing the displayed MAC addresses with the MAC addresses of the bond1 device on
each respective node.
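The MAC address of the bond1 device can be read on each node, for example with:
# cat /sys/class/net/bond1/address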

8.4.1.2 SSH key exchange As GPFS uses SSH, the root ssh keys on all nodes need to be exchanged
to allow for password-less SSH connectivity within the cluster. This is a general GPFS requirement.
Please note that the following commands will overwrite any additional SSH key authorizations you may
have installed yourself.
Run the following commands all from the first node in the GPFS cluster.
Generate the known_hosts file on the first node


# for node in gpfsnode0{1..8} ; do ssh-keygen -R $node ; ssh-keyscan -t rsa $node >> /root/.ssh/known_hosts ; done
# for ip in 192.168.10.10{1..8} ; do ssh-keygen -R $ip ; ssh-keyscan -t rsa $ip >> /root/.ssh/known_hosts ; done

Generate a new SSH key for passwordless ssh access, authorize it and distribute it to the other nodes:
# ssh-keygen -q -b 4096 -N "" -C "Unique SSH key for root on DR Cluster" -f /root/.ssh/id_rsa
# cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
# for node in gpfsnode0{1..8} ; do scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys root@$node:.ssh/ ; done

Distribute the known_hosts file to the other nodes:


# for node in gpfsnode0{2..8} ; do scp /root/.ssh/known_hosts root@${node}:/root/.ssh/ ; done

A short explanation of the gpfsnode0{1..8} expression: it generates a list of names from gpfsnode01 to
gpfsnode08. If the host names are non-successive, replace it with a space-separated list of the hostnames.
The distribution of the known_hosts file omits the first node, as the files are already prepared on that node.
Note
In previous releases of this document the shipped SSH root key was used and distributed
among the nodes in the DR-enabled cluster. This imposes a security risk and you should consider
replacing this key with a new unique key. Please contact support.

8.4.2 GPFS Server configuration

Create the necessary configuration files. On the first node (which will be the primary configuration
server), create a file /var/mmfs/config/nodes.cluster and add one line per node containing its GPFS
network hostname. If applicable, add the tiebreaker node as the last node.
Next append ":quorum" (no spaces) to the end of the line for some hosts, according to the following rules:
a) Distribute all available nodes (except the tiebreaker) into four equally sized groups and append ":quorum" to
the first node of each group.
b) If a quorum node is available, mark it as quorum.
c) Without a quorum node, mark the second node of the first group as a quorum node.
With an example of 8 nodes, you should have 5 nodes marked as quorum nodes. See the following example
for an 8 node DR cluster without and with a dedicated tiebreaker node (gpfsnode99):
The nodes.cluster file for an eight node setup without separate quorum node (i.e. tiebreaker node) should
look like this:
gpfsnode01:quorum-manager
gpfsnode02:quorum-manager
gpfsnode03:quorum-manager
gpfsnode04:
gpfsnode05:quorum-manager
gpfsnode06:
gpfsnode07:quorum-manager
gpfsnode08:


                               Topology   nodes.cluster file     nodes.cluster file
                               Vector     with Quorum Node       without Quorum Node
Failure group 1                1,0,x      gpfsnode01:quorum      gpfsnode01:quorum
                                          gpfsnode02             gpfsnode02:quorum
Failure group 2                2,0,x      gpfsnode03:quorum      gpfsnode03:quorum
                                          gpfsnode04             gpfsnode04
Failure group 3                1,1,x      gpfsnode05:quorum      gpfsnode05:quorum
                                          gpfsnode06             gpfsnode06
Failure group 4                2,1,x      gpfsnode07:quorum      gpfsnode07:quorum
                                          gpfsnode08             gpfsnode08
Failure group 5 (tie breaker)  3,0,1      gpfsnode99:quorum      (not applicable)

Table 54: GPFS Settings for DR Cluster

Note
Adding node designation ’manager’ is optional as quorum nodes are automatically eligible to
be chosen as cluster manager.
A comment regarding the topology vectors, as they will be used in a later step: the value of x has to
be replaced with the number of the node within its failure group. If you have 3 nodes in each failure
group, numbered from 1 to 3, then the second node in the first failure group will be 1,0,2 and the second
node in the third failure group will be 1,1,2.
Create the GPFS cluster with the first node of each site as the primary (-p) and secondary (-s)
configuration server, respectively:
# mmcrcluster -n /var/mmfs/config/nodes.cluster -p gpfsnode01 -s gpfsnode05 -C HANADR1 -A -r /usr/bin/ssh -R /usr/bin/scp

Assign licenses to all nodes. Mark all quorum nodes (including the optional tiebreaker node) and the
configuration servers with a server license and all other nodes as FPO licensed.
# mmchlicense server --accept -N gpfsnode01,gpfsnode02,..,gpfsnode99
# mmchlicense fpo --accept -N gpfsnode03,gpfsnode04,...

Please adjust this to your actual licensing.


Start the GPFS daemon on all nodes
# mmstartup -a

Apply the following cluster configuration changes


# mmchconfig unmountOnDiskFail=meta -i
# mmchconfig panicOnDiskFail=meta -i
# /usr/bin/yes 999 | /usr/lpp/mmfs/bin/mmchconfig dataStructureDump=/tmp/GPFSdump,pagepool=4G,maxMBpS=2048,maxFilesToCache=4000,skipDioWriteLogWrites=1,nsdInlineWriteMax=1M,prefetchAggressivenessWrite=2,readReplicaPolicy=local,enableRepWriteStream=false,enableLinuxReplicatedAIO=yes,nsdThreadsPerDisk=24

After this last command you need to restart GPFS with


# mmshutdown -a
# mmstartup -a


8.4.3 GPFS Disk configuration

On the first node, create a file /var/mmfs/config/disk.list.data.fs. For each node add entries as
described in the following section, but replace the failureGroup with the correct topology vector for the
particular node. Make sure that the pool definitions appear only once in this file.

8.4.3.1 GPFS 3.5 Disk Definitions For every HDD RAID device /dev/sdb and subsequent devices
add a NSD definition like the following template:
%nsd: device=/dev/sdb
nsd=data01node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1,0,1
pool=system

Please don’t forget to increment the first number in the nsd line, e.g. data02node01 for the second HDD
block device. You can get a device list with lsscsi.
Then after adding als device stanzas add these lines unaltered:
%pool:
pool=system
blockSize=1M
usage=dataAndMetadata
layoutMap=cluster
allowWriteAffinity=yes
writeAffinityDepth=1
blockGroupFactor=1

When using a tiebreaker node add the following lines to the stanza file:
%nsd: device=/dev/sda3
nsd=desc01node99
servers=gpfsnode99
usage=descOnly
failureGroup=3,0,1
pool=system

Replace device, NSD name and server with the correct values where necessary.
If your setup includes a tiebreaker node, determine the device name of the partition allocated for the
descriptor-only NSD and change the line in disk.list.data.fs starting with %nsd: device= accordingly.

8.4.4 Filesystem Creation

Create the NSDs


# mmcrnsd -F /var/mmfs/config/disk.list.data.fs -v no

Create the filesystem


# mmcrfs sapmntdata -F /var/mmfs/config/disk.list.data.fs -A no -B 512k -N 3000000 -v no -m 3 -M 3 -r 3 -R 3 -j cluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor 1 -Q yes -T /sapmnt


Create filesets
# mmcrfileset sapmntdata hanadata -t "Data Volume for HANA database"
# mmcrfileset sapmntdata hanalog -t "Log Volume for HANA database"
# mmcrfileset sapmntdata hanashared -t "Shared Directory for HANA database"

Mount the filesystem on all nodes


# mmmount sapmntdata -a

To verify the file system is successfully mounted execute


# mmlsmount sapmntdata -L

Link the filesets in the filesystem


# mmlinkfileset sapmntdata hanadata -J /sapmnt/data
# chmod 755 /sapmnt/data
# mmlinkfileset sapmntdata hanalog -J /sapmnt/log
# chmod 755 /sapmnt/log
# mmlinkfileset sapmntdata hanashared -J /sapmnt/shared
# chmod 755 /sapmnt/shared

Set a quota on the hanalog fileset


The formula for the log quota in a DR scenario is:
<# of active nodes> * RAM * <# of GPFS replicas>
Example: in a 7+7 scenario with L-size (1 TB) nodes using 6 worker nodes and 1 standby:
6 * 1024G * 3 = 18432G
Set the quota
# mmsetquota -j hanalog -h 18432G -s 18432G /dev/sapmntdata
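As an illustrative sketch for the 7+7 example above (the values are assumptions and must be adapted to
the actual cluster), the quota can also be calculated and set in one step:
# ACTIVE_NODES=6 ; RAM_GIB=1024 ; REPLICAS=3
# QUOTA=$(( ACTIVE_NODES * RAM_GIB * REPLICAS ))
# mmsetquota -j hanalog -h ${QUOTA}G -s ${QUOTA}G /dev/sapmntdata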

8.4.5 SAP HANA appliance installation

Warning
SAP HANA in this DR solution must be installed using the hostname of the HANA-internal
network (usually on bond1, hostname hananodeXX). The host based routing used in the HA
solution is not applicable for the DR solution.
We recommend installing SAP HANA on the backup site first and thereafter on the primary site. This
order is safer because the backup site installation cannot accidentally make changes to your production
environment.

8.4.5.1 Install HANA on backup site Before continuing with the installation make sure that the
GPFS file system sapmntdata is mounted at /sapmnt. In order to prepare the backup site, it is necessary
to do a standard HANA installation and then delete the installed content on the shared filesystem.

8.4.5.1.1 Install SAP HANA software on backup site Please install SAP HANA on the backup
site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance.
The location of the SAP HANA installation files is /var/tmp/saphana.
The roles (worker or standby) are not important, except that the first one needs to be a worker. We
recommend installing all other nodes as standby, as this installation type is faster.


8.4.5.1.2 Stop HANA and SAP Host agent on backup site Log in as <SID>adm on one node
and stop SAP HANA:
$ HDB stop

Then log in as root and stop SAP Host agent and other services:
# /etc/init.d/sapinit stop

Afterwards disable the autostart of the sapinit service


# chkconfig sapinit off

Do the last two steps on all backup nodes.

8.4.5.1.3 Delete SAP HANA shared content The purpose of this installation is to install the
node local parts of a SAP HANA system. After installing SAP HANA on all backup site nodes the data
in /sapmnt must be deleted:
# rm -r /sapmnt/data/<SID>
# rm -r /sapmnt/log/<SID>
# rm -r /sapmnt/shared/<SID>

8.4.5.1.4 Disable mmfsup script on backup site nodes An installation with the Recovery Image
will install an mmfsup script which will automatically start SAP HANA after the file system comes up.
This must be deactivated as it may start SAP HANA on both sites (using the same hostnames).
The script resides in /var/mmfs/etc. Disable it on all cluster nodes.
# chmod 644 /var/mmfs/etc/mmfsup

Note
In previous releases of this document the mmfsup script was deleted. This is not necessary as
disabling the script is sufficient and will keep the file for future use.

8.4.5.2 Install HANA on primary site Now install SAP HANA again on the primary site as
described in the official SAP documentation available here: http://help.sap.com/hana_appliance.
The location of the SAP HANA installation files is /var/tmp/saphana. Install SAP HANA with the
same parameters as on the backup site. This is very important for DR to work properly. Please make
sure that you install the individual HANA nodes with the correct roles, for example, five worker and one
standby node in a six node per site solution.
After the installation finished deactivate the autostart of SAP Services
# chkconfig sapinit off

Please verify that the user <SID>adm and the group sapsys have the same UID and GID, respectively,
on all nodes. Use the command
# id <SID>adm
and compare the numerical IDs of <SID>adm and group sapsys. You can specify the IDs during the SAP
HANA installation either via a configuration file or a command-line parameter; details can be found
in the SAP documentation: SAP HANA Server Installation and Update Guide.
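A small sketch to compare these IDs across all nodes in one pass (host names as used in this example;
replace <SID>adm with the actual administration user):
# for node in gpfsnode0{1..8} ; do echo -n "$node: " ; ssh $node 'id <SID>adm' ; done
All lines of the output should show identical uid and gid values.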


8.4.5.3 Disable mmfsup script on production site nodes An installation with the Recovery
Image will install an mmfsup script which will automatically start HANA after the file system comes up.
This must be deactivated as it may start SAP HANA on both sites (using the same hostnames).
The script resides in /var/mmfs/etc. Disable it on all cluster nodes.
# chmod 644 /var/mmfs/etc/mmfsup

Note
In previous releases of this document the mmfsup script was deleted. This is not necessary as
disabling the script is sufficient and will keep the file for future use.

8.4.6 Tiebreaker node setup

8.4.6.1 Quorum node setup using a new node The setup of a new server can be done by following
the instructions in section 10.1.2: Prepare quorum node on page 93 excluding the setup of the switches
which does not apply to a DR configuration.

8.4.6.2 Tiebreaker node setup using an existing node If an existing node will be used as the
tiebreaker node, please consult the system administrator and ask them to:
• Provide a partition which will be used to hold the GPFS file descriptor information
• Install GPFS
• Build the GPFS portability layer. Note: This may require the installation of the kernel header files
/ sources and some development tools (compiler, make...)
• Setup network access to all other GPFS cluster nodes in the GPFS network
• Exchange ssh keys so that the tiebreaker node root account can be accessed without a password
from the other GPFS cluster nodes.
Follow the instructions in sections 10.1.6: Quorum Node IBM GPFS setup on page 96 and 10.1.7: Quorum
Node IBM GPFS installation on page 96.
General information how to install and setup GPFS can be found online in the Information Center section
Installing GPFS on Linux nodes.

8.4.7 Verify Installation

8.4.7.1 GPFS Cluster configuration


• Verify that all nodes are up and running
# mmgetstate -a

• Verify distribution of the configuration servers


The primary and secondary GPFS configuration servers must be located on different sites (one on each
site). Otherwise, failover to the standby site will not work.
This is checked with
# mmlscluster

• Verify distribution of quorum nodes


The current active quorum setup can be checked with


# mmgetstate -aLs

The cluster configuration is listed with


# mmlscluster

When using the tiebreaker node check that the tiebreaker node is a quorum node and that the
remaining quorum nodes are distributed evenly among the other file system failure groups. You see
the failure groups with
# mmlsdisk sapmntdata

Information about the failure group setting can be found in section 8.4.2: GPFS Server configuration
on page 75. If not using the tiebreaker make sure that the active site has at least one more quorum
node than the passive site. In general, try to keep an odd number of quorum nodes.
• Verify cluster manager location
Warning
GPFS 4.1.1-3 changed an important behaviour in DR setups. Starting with GPFS 4.1.1-
3 the cluster manager must be on the active/primary site. For all implementations using
4.1.1-2 and older the cluster manager must be set to the passive/backup site!
Only DR installations with a tiebreaker node are affected.
Verify the location of the cluster manager:
# mmlsmgr

and set the cluster manager node based on these conditions:


– In an implementation with a tiebreaker node and GPFS 4.1.1-3 or later, the cluster
manager must be on the active/primary site.
– In an implementation with a tiebreaker node and GPFS 4.1.1-2 or earlier, the cluster
manager must be on the passive/backup site.
– In an implementation without a tiebreaker node and any GPFS version, the cluster man-
ager must be on the active/primary site.
To change the cluster manager issue
# mmchmgr -c <node>

• Verify replication factor 3 (= three copies, two local and one remote copy)
# mmlsfs sapmntdata

Verify that the following values are all set to 3:


-m Default number of metadata replicas
-M Maximum number of metadata replicas
-r Default number of data replicas
-R Maximum number of data replicas

• Test replication factor 3


Write a new file to the shared filesystem and verify replication level applied to this file:
# mmlsattr <path to file>


All values must be set to 3 and no flags (like illbalanced, metaupdatemiss, etc.) must be shown.
Please check the GPFS documentation or ask IBM GPFS support if there are flags shown after
restripe.
• Check failure groups
You should have four failure groups: 1,0,x, 2,0,x, 1,1,x and 2,1,x. If you are using the tiebreaker node,
a fifth failure group 3,0,1 should be in the file system. Get the list of failure groups from the disk
list
# mmlsdisk sapmntdata

Make sure that the server nodes are distributed evenly among the failure groups.
• Disk availability: all GPFS disks must be online.
# mmlsdisk sapmntdata -e
All disks up and ready

If there are disks down or suspended, check the reason (e.g. hardware failure, system reboot, ...)
and restart them once the problem has been resolved.
The following command will try to start all disks in the file system. This has no effect on already
started disks.
# mmchdisk sapmntdata start -a

If disks are suspended you can resume them all with the following command:
# mmchdisk sapmntdata resume -a

Note
Follow the instructions in our Installation Guides.

8.5 Extending a DR-Cluster

This section describes how to grow a DR cluster. Growing a DR-enabled cluster requires that both sites
grow by the same number of nodes. In general, the installation of each active/backup server pair need
not be done at the same time, but it is highly recommended. The overcautious technician may also
decide to install the backup node prior to the active node.
The following sections will only explain the differences from the basic DR installation in the sections
before.

8.6 Mixing eX5/X6 Server in a DR Cluster

Please read our ’Special Topic Guide for System x eX5/X6 Servers: Mixed Cluster’. Information given
there takes precedence over the instructions below.

8.6.1 Hardware Setup

Please refer to 8.3: Hardware Setup on page 70 and follow the instructions there. Ping the new machine
on the GPFS network from all machines to test whether the network configuration is correct. Ping the new
machine on the HANA network from all servers; it should be reachable only from nodes on the
same site.


8.6.2 GPFS Part 1

1. The first step is to add /etc/hosts entries on every machine. Let's assume that the new nodes are the
9th and 10th nodes, with node09 going to the active site and node10 to the backup site. Distribute any
new nodes evenly into the existing failure groups (topology), so that a failure group has at most
one more node than the others, and put the backup server into the corresponding failure group on the
backup site. In this example, the 9th node will go into failure group 1 (1,0,x), getting the topology vector
1,0,3, and the 10th node will go into failure group 3 (1,1,x) with topology vector 1,1,3.
On all existing nodes, add host entries for the GPFS network, e.g.:
192.168.10.109 gpfsnode09
192.168.10.110 gpfsnode10

On the new nodes add entries for all other nodes. Copying the entries from one of the existing
nodes is the easiest way.
First add host keys for the new nodes to the existing machines. Run on any existing node
# for srcnode in gpfsnode0{1..8} ; do echo node $srcnode ; ssh $srcnode 'for target in gpfsnode{09,10} ; do echo -n $target ; ssh-keygen -R $target ; ssh-keyscan -t rsa $target >> /root/.ssh/known_hosts ; done' ; done

The value gpfsnode0{1..8} generates a list from gpfsnode01 to gpfsnode08; if the host names differ
or are not consecutive, replace this with a space-separated list of host names. The same applies to
gpfsnode{09,10}, which are the new nodes in this example.
Then copy the root SSH keys to the new nodes. Issue these commands on one of the existing cluster
nodes:
# scp /root/.ssh/authorized_keys /root/.ssh/id_rsa /root/.ssh/id_rsa.pub root@gpfsnode09:/root/.ssh/
# scp /root/.ssh/authorized_keys /root/.ssh/id_rsa /root/.ssh/id_rsa.pub root@gpfsnode10:/root/.ssh/

On all new cluster nodes run this command


# for node in gpfsnode{01..10} ; do echo -n $node ; ssh-keygen -R $node ; ssh-keyscan -t rsa $node >> /root/.ssh/known_hosts ; done

Test the SSH key exchange by running this command on any node
# for srcnode in gpfsnode{01..10} ; do echo From node $srcnode ; ssh $srcnode 'for target in gpfsnode{01..10} ; do echo To node $target ; ssh $target hostname ; done' ; done

The command should run without interaction and errors.


2. Install GPFS (base package):
# cd /var/tmp/install/gpfs-<GPFS-RELEASE>
# rpm -ivh gpfs.base-<GPFS-RELEASE>-0.x86_64.rpm

3. Update to the latest GPFS Maintenance Release Install the following three packages for the latest
(X) maintenance release:
# rpm -ivh gpfs.docs-<GPFS-RELEASE>-X.noarch.rpm
# rpm -ivh gpfs.gpl-<GPFS-RELEASE>-X.noarch.rpm
# rpm -ivh gpfs.msg.en_US-<GPFS-RELEASE>-X.noarch.rpm


4. Verify your GPFS installation:


# rpm -qa | grep gpfs

The installed packages from above should be listed here.


5. Build the GPFS Portability Layer
Follow the instructions in /usr/lpp/mmfs/src/README:
# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages

6. To add the new nodes to the cluster run on any running node
# mmaddnode -N gpfsnode09,gpfsnode10

7. Mark the servers as licensed:


# mmchlicense fpo --accept -N gpfsnode09,gpfsnode10

Please use the correct license type for the nodes; server and FPO are just examples.
8. Start the new nodes
# mmstartup -N gpfsnode09,gpfsnode10

9. Create the disk descriptor files. Before adding the disks to the shared file system, you must create
the disk descriptor or stanza files. You can create them on any node in the cluster, but it is
preferably done on the node where the files for the initial cluster creation are located. Please see
chapter 8.4.3: GPFS Disk configuration on page 77 for a description of the stanza files. You only
need to create entries for the drives on the new nodes and you can omit the pool configuration
entries. Let us assume the new file is /var/mmfs/config/disk.list.data.gpfsnode0910; a minimal
example for the two new nodes is sketched after this list.
10. Create NSDs
# mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode0910
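As an illustration, the stanza file referenced in steps 9 and 10 could look like the following for the two
new nodes (the device name /dev/sdb is an assumption; check with lsscsi and adjust the NSD names and
topology vectors to your setup):
%nsd: device=/dev/sdb
nsd=data01node09
servers=gpfsnode09
usage=dataAndMetadata
failureGroup=1,0,3
pool=system

%nsd: device=/dev/sdb
nsd=data01node10
servers=gpfsnode10
usage=dataAndMetadata
failureGroup=1,1,3
pool=system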

8.6.3 HANA Backup Node Installation

Skip this for a node on the active site. For the HANA installation on the backup site, we need a temporary
filesystem which must satisfy some requirements. RAM-based filesystems are not sufficient, so we use the
freshly created NSDs for a temporary filesystem, install the backup instance, and destroy the temporary
filesystem afterwards before continuing with the installation.
1. Create a temporary filesystem
# /usr/lpp/mmfs/bin/mmcrfs sapmnttmp -F /var/mmfs/config/disk.list.data.gpfsnode0910 -A no -B 1M -N 3000000 -v no -m 1 -M 3 -r 1 -R 3 -j cluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor 1 -Q yes

Before continuing with the installation make sure that the GPFS file system sapmntdata is not
mounted at /sapmnt on the new nodes.
Mount this filesystem on all new backup nodes
mmmount sapmnttmp /sapmnt -N <new backup nodes>


2. Install HANA on backup site


In order to prepare the backup site, it is necessary to do a standard HANA installation and then
delete the installed content on the shared filesystem. A tool to automate this procedure is currently
in development by SAP.
Install SAP HANA on the backup site as described in the official SAP documentation available
here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is
/var/tmp/saphana. Do a single node installation on each node. Make sure to use exactly the same
SAP SID, SAP instance number, user names, user IDs, group names, group IDs and paths as in the
original DR HANA installation. You can use the command id to query user and group information.
3. Stop HANA and SAP Host agent on backup site
Log in as <SID>adm on one node and stop SAP HANA:
$ HDB stop

Then log in as root and stop SAP Host agent and other services:
# /etc/init.d/sapinit stop

Afterwards disable the autostart of the sapinit service


# chkconfig sapinit off

Do the last two steps on all backup nodes.


4. Delete SAP HANA shared content as described in section 8.4.5.1.3: Delete SAP HANA shared content on page 79.
5. Disable the mmfsup script on the backup site nodes. An installation with the Recovery Image will install
an mmfsup script which will automatically start SAP HANA after the file system comes up. This
must be deactivated as it may start SAP HANA on both sites (using the same hostnames).
The script resides in /var/mmfs/etc. Disable it on all cluster nodes.
# chmod 644 /var/mmfs/etc/mmfsup

6. Delete the temporary filesystem. After installing all new backup nodes, unmount the temporary filesystem
on all nodes
# mmumount sapmnttmp -a

and delete it
# mmdelfs sapmnttmp

This will delete all shared HANA content and will leave the node-specific HANA parts installed.

8.6.4 GPFS Part 2

1. Add disks to sapmntdata filesystem


# mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnode0910

2. Verify NSD status


Verify that all NSDs are up and running
# mmlsdisk sapmntdata


3. Mount GPFS on the active site


On the new active nodes and only on these, mount the GPFS file system
# mmmount sapmntdata -N gpfsnode09,gpfsnode10

GPFS setup is now complete.

8.6.5 HANA

8.6.5.1 Install HANA on active site


1. Please make sure that you have mounted the shared file system on the new nodes.
# mmlsmount sapmntdata -L

2. If not already installed, install the SAP host agent


# cd /var/tmp/install/saphana/DATA_UNITS/SAP_HOST_AGENT_LINUX_X64
# rpm -ihv saphostagent.rpm

As recommended by the RPM installation, a password for sapadm may be set.


3. Deactivate automatic startup through sapinit at system boot.
Running SAP's startup script during system boot must be deactivated as it will be executed
by a GPFS startup script after cluster start. Execute:
# chkconfig sapinit off

4. Install SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration
Guide".
Warning
SAP HANA in this DR solution must be installed using the hostname of the HANA-
internal network (usually on bond1, hostname hananodeXX). The host based routing
used in the HA solution is not applicable for the DR solution.

8.7 Using Non Productive Instances on Inactive DR Site

Lenovo supports the installation of storage expansions in a DR scenario to allow clients to run a non-
productive SAP HANA instance on idling DR-site nodes. During normal operation in a DR scenario, all
nodes at one of the two sites only receive data from the active site and store it on their local
disks.
SAP tolerates running a non-productive SAP HANA instance on those nodes. The local disks of the
nodes are used for production data. A storage expansion is used to provide enough local storage for those
non-productive instances.
In the event of a disaster, when the backup site becomes the active site, all non-productive SAP HANA
instances have to be shut down to allow production to continue to run.

8.7.1 Architecture

This section briefly explains how Lenovo enables the use of idling DR-site nodes to run non-productive
SAP HANA instances.


8.7.1.1 Prerequisites All nodes on the DR-site must have a storage expansion connected. Having
only a subset of the DR-site nodes equipped with storage expansions is not a supported environment.
Furthermore, all expansions must have identical disk drives installed.
If the customer considers both participating data centers to be equal (which means that after a fail-over
of the production instances to the DR site, production will not be manually failed back to the site A data
center), then storage expansions must also be connected to all primary site nodes. These storage
expansions will remain unused until data actually needs to be moved away from DR-site nodes which are
then being used to host SAP HANA production instances.

8.7.1.2 Architectural overview The following illustration shows how Lenovo's solution for
SAP HANA DR with storage expansions looks:

Figure 24: SAP HANA DR using storage expansion - architectural overview (the production file system
keeps using the internal HDDs of all nodes at both sites with three replicas as before; a second file system
holding data and metadata with two replicas spans only the expansion box drives attached via RAID
controllers to the DR-site nodes)

The expansion storage is visible as local storage only and connected via the SAS interface. The storage
is not shared by multiple nodes.
Attention
The external storage can only be used to host data of non-productive SAP HANA instances.
The storage must not be used to expand space of the production file system or to store
backups.

8.7.1.3 Architectural comments Lenovo only supports running GPFS with a replication factor of
2 for the non-productive instance. This means outages of a single node can be handled and no data
is lost. We do not support a replication factor of 3 because the scope of non-productive SAP HANA
environments does not include disaster recovery.
There will be exactly one new file system spanning all DR-site expansion box drives. While we do
not support a multi-SID configuration, it is a valid scenario to run, e.g., a QA environment on some
DR-site nodes and development on other DR-site nodes. This, however, has to be done on the same file
system.


Lenovo does not enable quotas on the new expansion box file system. Make sure to have either a valid
backup procedure in place or to regularly delete old backups.
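Because no quota is enforced on this file system, a scheduled cleanup is one simple option; the following
is only a sketch under the assumption that backups are written to a directory such as /sapmntext/backup
(hypothetical path) and may be removed after 14 days:
# find /sapmntext/backup -type f -mtime +14 -print -delete
Adapt the path and the retention period to the customer's backup policy.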

8.7.2 Setup

This section assumes that the nodes have been successfully installed with an operating system already
(as required for a backup DR site).

8.7.2.1 Hardware setup Connect the Storage Expansion (EXP2524, D1024 or E1024 (in PCR only))
SAS port labeled ’In’ to one of the M5120 or M5225 ports. On D1224 Storage Expansion the SAS Port
labeled ’A’ on Environment Service Module (ESM) ’A’ should be used. For details, see the EXP2524,
D1024, D1224 or E1024 Installation Guide. Configure the drives as described in our Installation Guides.
Either reboot or rescan the SCSI bus and verify that Linux recognizes the new drives.
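One possible way to rescan without a reboot is sketched below (host numbers vary per system, and
rescan-scsi-bus.sh is only available if the sg3_utils package is installed):
# for host in /sys/class/scsi_host/host* ; do echo "- - -" > $host/scan ; done
# rescan-scsi-bus.sh        # alternative, if sg3_utils is installed
# lsscsi                    # verify that the new expansion drives are listed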

8.7.2.2 GPFS configuration You reuse the existing GPFS cluster and create a second file system
spanning only the expansion drives of the DR-site nodes.
Even if your setup includes expansions on the primary site, execute the procedure only on the DR-site
expansions. The primary site expansion drives will not be used in the beginning.
1. On each DR-site node, collect the device names of all expansion drives. When using the M5225
controller you can get the drive names with this command:
# lsscsi |grep "M5225" |grep -o -E "/dev/sd[a-z]+"

or execute the following command in case the M5120 controller is used:


# lsscsi |grep "M5120" |grep -o -E "/dev/sd[a-z]+"

You will end up with something like:


/dev/sde
/dev/sdf
/dev/sdg
/dev/sdh

for each DR-site node. Note: after sdz, Linux wraps around and continues with sdaa, sdab, ...
2. Create additional NSDs
For all new expansion drives, create NSDs according to the following rules:
(a) all NSDs will be dataAndMetadata
(b) all NSDs go into the system pool
(c) naming scheme is extXXgpfsnodeYY with XX being the two-digit drive number and YY being
the node number
(d) One failure group for all drives within one expansion box
Example: three M-size nodes with 32-drive expansion (gpfsnode01-03 are primary site nodes, 04-06
are secondary site/DR-site nodes)
/dev/sde:gpfsnode04::dataAndMetadata:4:ext01gpfsnode04:system
/dev/sdf:gpfsnode04::dataAndMetadata:4:ext02gpfsnode04:system
/dev/sdg:gpfsnode04::dataAndMetadata:4:ext03gpfsnode04:system
/dev/sdh:gpfsnode04::dataAndMetadata:4:ext04gpfsnode04:system


/dev/sde:gpfsnode05::dataAndMetadata:5:ext01gpfsnode05:system
/dev/sdf:gpfsnode05::dataAndMetadata:5:ext02gpfsnode05:system
/dev/sdg:gpfsnode05::dataAndMetadata:5:ext03gpfsnode05:system
/dev/sdh:gpfsnode05::dataAndMetadata:5:ext04gpfsnode05:system
/dev/sde:gpfsnode06::dataAndMetadata:6:ext01gpfsnode06:system
/dev/sdf:gpfsnode06::dataAndMetadata:6:ext02gpfsnode06:system
/dev/sdg:gpfsnode06::dataAndMetadata:6:ext03gpfsnode06:system
/dev/sdh:gpfsnode06::dataAndMetadata:6:ext04gpfsnode06:system

Store as /tmp/nsdlistexp.txt. Then create NSDs using those disks


# mmcrnsd -F /tmp/nsdlistexp.txt

3. Create file system


# mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 512k -N 3000000 -v no -m 2 -M 2 -r 2 -R 2 -j cluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor=1 -T /sapmntext

Warning
Be sure to use nsdlistexp.txt and not your list with internal drives! Using the wrong
drives can destroy your production data!
4. Mount file system on DR-site nodes only.
# mmmount sapmntext -N [list of DR-site nodes]

5. Install SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration
Guide". Take care to install HANA on /sapmntext and not on /sapmnt.
Also take care that you don’t use the UID (user id) and GID (group id) of the DR HANA instance
especially when installing non-productive HANA instances before installing the DR instance.
If you have expansion boxes connected also to your primary site nodes, they get activated only when you
need to migrate non-productive SAP HANA instances' data away from DR-site nodes. See the Lenovo
SAP HANA Appliance Operations Guide (SAP Note 1650046, SAP Service Marketplace ID required) for details.
When configuring a clustered configuration by hand, install SAP HANA worker and standby nodes as
described in the guide "SAP HANA Administration Guide".


9 Mixed eX5/X6 Environments

9.1 Mixed eX5/X6 HA Clusters

For information about this topic please refer to our new Special Topic Guide: Mixed Cluster.

9.2 Mixed eX5/X6 DR Clusters

For information about this topic please refer to our new Special Topic Guide: Mixed Cluster.


10 Special Single Node Installation Scenarios


This section covers installations that consist of just one single node in production and need to have HA
or DR features using SAP System Replication or IBM GPFS Storage replication.
Note
For non-GPFS installations only "Single Node DR Installation with SAP HANA System Repli-
cation" is supported. With GPFS based installations all scenarios can be implemented.

10.1 Single Node with HA Installation with Side-car Quorum Solution

A single node with high availability (HA) describes the smallest possible configuration for a highly
available Lenovo solution for a SAP HANA system. In principle, this can be described as a cluster where
only a single node is highly available, since there is only one SAP HANA worker node. There is no
distribution of information across the nodes as there is no secondary worker node attached. Figure 25:
Single Node with High Availability on page 91 shows a high level overview of the system landscape with
two SAP HANA appliances and an IBM GPFS Quorum node.

Figure 25: Single Node with High Availability (a worker node and a standby node connect their GPFS
and SAP HANA links to two G8264 switches joined by an inter-switch link (ISL); an additional quorum
node is attached via the GPFS links only)

The major difference between a single node HA configuration and larger scale-out clusters is the require-
ment to have a third node to build a quorum for the IBM GPFS file system. Therefore, the smallest
possible setup needs to contain three nodes: two Lenovo Workload Optimized Systems for SAP HANA
and one quorum node. The third node can, for example, be a plain Lenovo System x3550 M4 system. The
described solution implements a simple 1U server as quorum node for IBM GPFS. This node does not
contribute to the file system with any data disks, but does contribute to the IBM GPFS cluster. The file
system layout is shown in Figure 26: File System Layout - Single Node HA on page 92.


Figure 26: File System Layout - Single Node HA (node1 and node2 hold the first and second data and
metadata replicas of the shared file system on their HDDs in failure groups FG1 and FG2; the quorum
node, node3, contributes only a small file system descriptor disk in FG3 and no data or metadata)

10.1.1 Installation of SAP HANA appliance single node with HA

To begin the installation, you need to install both Lenovo Workload Optimized Systems using the steps in
our Installation Guides. Configure the network interfaces (internal and external) and the NTP server(s)
as described there.
1. Start the text based installer as follows on each of the two nodes:
saphana-setup-saphana.sh -H

The switch -H prevents SAP HANA from being installed automatically. This needs to be done
manually later. Refer to the steps for cluster installation as stated in our Installation Guides
together with the steps described below.
2. Select Cluster (worker) . This does a basic installation as a cluster node.
3. Start the installer again as above with the option -H, this time only on the future master node.
Select this time Cluster (Master) . See again the cluster installation in our Installation Guides.
4. Then the quorum node will be manually installed and configured to include its own IBM GPFS
NSD to the file system cluster.


10.1.2 Prepare quorum node

The quorum node used can be, for example, a Lenovo System x3550 M5 with a single CPU, 6x 8GB (48GB)
memory, three local disks configured in a RAID5 configuration and network adapters providing two 10
Gigabit Ethernet ports (e.g. Emulex Virtual Fabric Adapter II or Mellanox ConnectX-3 Adapter).
Please contact your Lenovo sales representative to get an offering for a suitable quorum node setup with
the best price/performance ratio.

10.1.2.1 Install the Operating System You may use SLES 12 to install the OS on this machine
using the default settings. While installing Linux, please select the pattern "C/C++ Compiler and
Tools" in addition to the default selection of software. If you do not do this at install time, then open
the YaST software panel and install the above pattern before installing and compiling GPFS.
Note
We recommend to always use the latest version of SLES for the quorum node.
After installation of the OS you should:
• upgrade the drivers for ServeRAID as described in chapter 13.3.3: Updating ServeRAID Driver
on page 143
• upgrade the drivers for the Mellanox cards as described in chapter 13.3.2: Update Mellanox
Network Cards on page 141
• upgrade the wicked packages from SLES 12
• activate the network nanny as described in chapter 6.5.2: Using the wicked Framework for Network
Configuration in SLES12 on page 60
• check and correct the udev rules in /etc/udev/rules.d/70-persistent-net.rules in case the
ethX naming is not fixed for some ports. There should be an entry like this for every Mellanox port:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
ATTR{address}=="<MAC address>", ATTR{dev_id}=="<0x0 for first port/0x1 for
second port>", ATTR{type}=="1", KERNEL=="eth*", NAME="<choosen eth>"

10.1.2.2 Disk partitioning We recommend manually configuring the partitions as described in
Table 55: Single Node with HA OS Partitioning on page 93.

Device      Size       Mount point                        FS type
/dev/sda1   156,88 MB  /boot/efi                          FAT
/dev/sda2   rest       /                                  ext4
/dev/sda3   10 GB      swap                               swap
/dev/sda4   10 GB      not mounted - used for GPFS NSD    not formatted

Table 55: Single Node with HA OS Partitioning
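The resulting layout can be verified after installation, for example with lsblk; /dev/sda4 must remain
unformatted as it will later be used as a GPFS NSD:
# lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda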

10.1.2.3 Firewall Disable the integrated firewall during the network configuration steps or else you
won’t be able to connect to the server until the firewall has been configured correctly. This may be turned
on and configured according to the SAP HANA Security Guidelines.


10.1.3 Quorum Node Network Setup

Follow the information in Table 56: Single Node with HA OS Networking Setup on page 94 to set up the
networking during the SLES for SAP Applications OS installation. Please also check chapter 6.5.2:
Using the wicked Framework for Network Configuration in SLES12 on page 60 for SLES 12 specific hints.

Network          Description
10GbE port 0     Connect 10GigE port to the first G8264 switch
10GbE port 1     Connect 10GigE port to the second G8264 switch
bond0            Bond port 0 and port 1 together. Set the bonding options to:
                 mode=4 xmit_hash_policy=layer3+4 miimon=100
Host Name        gpfsnode99
GPFS IP address  Place at the end of the range (e.g. 192.168.10.253)
HANA IP address  Not needed as this node will not run SAP HANA.

Table 56: Single Node with HA OS Networking Setup
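As an illustration, the bonding configuration in /etc/sysconfig/network/ifcfg-bond0 on the quorum node could
look similar to the following sketch. The slave device names eth2 and eth3 and the IP address are assumptions;
adapt them to your environment:
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.10.253/24'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=4 xmit_hash_policy=layer3+4 miimon=100'
BONDING_SLAVE0='eth2'
BONDING_SLAVE1='eth3'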

Figure 27 on page 95 shows the typical network setup for a single node with HA cluster. Deviations are
possible for the management, client access and ERP replication networks depending on the real customer
requirements.


[Figure: Node1, Node2 and the quorum node of the SAP HANA single node with HA appliance, each connected
with bonded 10 GbE SAP HANA and GPFS interfaces to the two G8264 switches (with inter-switch links), plus
1 GbE IMM ports and customer-chosen switches for SAP client access, system management and SAP Business
Suite in the customer interface zone.]
Figure 27: Network Switch Setup for Single Node with HA

10.1.3.1 Switch configuration The network switches need to be configured in the standard scale-
out configuration, described in section 6.3.3.7: Network Configurations in a Clustered Environment on
page 50. The 10GigE connections of the additional quorum node will be configured as an extension to
the existing vLAG configuration. The ports of the new network links need to be added to the correct
VLANs, and the vLAG and LACP settings need to be applied.

Description G8264 Switch #1 G8264 Switch #2


ports 22 22
vLAG - LACP key 1002 1002
PVID 101 101

Table 57: Single Node with HA Network Switch Definitions

10.1.4 Adapt hosts file

The hosts file /etc/hosts on all three cluster nodes needs to contain the following entries. Change the
IP addresses to the ones used in your scenario. Add any missing entries, for instance for external
hostnames.
192.168.10.101 gpfsnode01 gpfsnode01
192.168.10.102 gpfsnode02 gpfsnode02
192.168.10.253 gpfsnode99 gpfsnode99
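As a quick sanity check (not part of the original procedure), you can verify on each node that all three
hostnames resolve and are reachable over the GPFS network:
for h in gpfsnode01 gpfsnode02 gpfsnode99; do ping -c 1 $h; done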

10.1.5 SSH configuration

The ssh configuration also needs to be extended to the third node. Each node needs to have the public
ssh keys of every other node so that passwordless communication between the GPFS nodes is possible.


10.1.5.1 Generate the ssh key on the quorum node Run the following command to generate
the set of ssh keys on the quorum node:
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''

The key needs to be copied to all cluster nodes. Run the following commands on the quorum node, once
for each host:
ssh-copy-id gpfsnode01
ssh-copy-id gpfsnode02

Run the following command on each of the first two nodes with the GPFS private network hostname of
the new quorum node:
ssh-copy-id gpfsnode99
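To confirm that passwordless ssh works in all directions, a simple loop such as the following can be run on
each of the three nodes (hostnames as defined in the hosts file above):
for h in gpfsnode01 gpfsnode02 gpfsnode99; do ssh $h hostname; done
Each command should print the respective hostname without prompting for a password.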

10.1.6 Quorum Node IBM GPFS setup

Update the file /var/mmfs/config/nodes.cluster on the first node (gpfsnode01) to the following
content, as it may be needed later:
gpfsnode01:quorum
gpfsnode02:quorum
gpfsnode99:quorum

Besides the necessary number of quorum nodes it is also required to have a quorum on the file system
descriptor. The number of copies of the file system descriptor depends on the number of disks in different
failure groups. To maintain file system operations GPFS requires a quorum of the majority of the replicas
of the file system descriptor. For a two-node HA cluster it is therefore necessary to also have a copy of
the descriptor on the quorum node. A disk needs to be made available to GPFS on the additional quorum
node which will only hold a copy of the file system descriptor. It does not hold any data or metadata.

10.1.7 Quorum Node IBM GPFS installation

This chapter describes the IBM GPFS installation up to version 4.1.1-7. Please check SAP Note 1880960 –
Lenovo Systems Solution for SAP HANA Platform Edition FW/OS/Driver Maintenance for any further
updates released in the meantime.
Perform the following commands as user root.
Copy the GPFS installer files from the master node:
mkdir -p /var/tmp/install/gpfs-4.1
scp gpfsnode01:/var/tmp/install/gpfs-4.1.0-cluster/GPFS-4.1* /var/tmp/install/gpfs-4.1
scp gpfsnode01:/var/tmp/install/gpfs-4.1.1-7/*-update /var/tmp/install/gpfs-4.1

This should give you the base installer archive GPFS_4.1_STD_LSX_QSG.tar.gz and the PTF
lnvgy_Spectrum_Scale_Standard-<version>-x86_64-Linux-update.
Extract the IBM GPFS archives and start the installer:
cd /var/tmp/install/gpfs-4.1
tar xvf GPFS_4.1_STD_LSX_QSG.tar.gz
./gpfs_install-4.1.0-0_x86_64 --dir . --text-only

Accept the license by pressing "1". Then install the RPMs:


rpm -ivh *.rpm

Install the update:


./lnvgy_Spectrum_Scale_Standard-4.1.1.7-x86_64-Linux-update
cd /usr/lpp/mmfs/4.1.1.7
rpm -Fvh gpfs*.rpm

Copy the license:


mkdir -p /usr/lpp/mmfs/4.1/
cp -pr license /usr/lpp/mmfs/4.1/

10.1.7.1 Build the IBM GPFS Portability Layer Follow the instructions in
/usr/lpp/mmfs/src/README. In general, you may build the IBM GPFS portability layer as follows:
cd /usr/lpp/mmfs/src
make Autoconfig
make World
make InstallImages
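As a quick check (an assumption about the default installation path, which is not spelled out here), the freshly
built kernel modules should now be present for the running kernel:
ls /lib/modules/$(uname -r)/extra | grep -E 'mmfs|tracedev'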

10.1.7.2 Change SUSE Linux local settings


1. Create /etc/profile.d/saphana-profile.sh with the following content:
PATH=$PATH:/usr/lpp/mmfs/bin

2. Change file permissions:


chmod 644 /etc/profile.d/saphana-profile.sh

3. Activate the new PATH variable


source /etc/profile.d/saphana-profile.sh

4. Create a dump-directory for IBM GPFS


mkdir /tmp/GPFSdump

5. Create a configuration-directory for IBM GPFS


mkdir /var/mmfs/config

10.1.8 Add quorum node

Execute the next commands on the primary node:


1. Add the additional node to the cluster.
mmaddnode gpfsnode99

2. Mark the new node as correctly licensed:


mmchlicense server --accept -N gpfsnode99

3. Mark the backup node and the quorum node as quorum nodes for the cluster:


mmchnode --quorum -N gpfsnode02,gpfsnode99

4. Start IBM GPFS on the new node:


mmstartup
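To verify that the GPFS daemon is now running on all three nodes, you can additionally check the daemon
state from any node (a quick check in addition to the documented steps):
mmgetstate -a
After a short while all three nodes should be reported in the state active.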

10.1.9 Create descriptor disk

Create a disk descriptor file /var/mmfs/config/disk.list.quorum.gpfsnode99 in the configuration
directory of the quorum node. It should contain the following line, which defines the disk partition on the
quorum node as an NSD with the explicit function to hold the file system descriptor:
/dev/sda3:gpfsnode99::descOnly:1099:quorum01node99

Create the NSD by running the mmcrnsd command on the quorum node:
mmcrnsd -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no

10.1.10 Add disk to file system

After creating the NSD the disk needs to be added to the file system by running the mmadddisk command:
mmadddisk sapmntdata -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no

10.1.11 Verify Cluster Setup

Execute the command mmlscluster on one of the cluster nodes. The output should look similar to this:
GPFS cluster information
========================
GPFS cluster name: HANAcluster.gpfsnode01
GPFS cluster id: 12394192078945061775
GPFS UID domain: HANAcluster.gpfsnode01
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:


-----------------------------------
Primary server: gpfsnode01
Secondary server: gpfsnode02

Node Daemon node name IP address Admin node name Designation


---------------------------------------------------------------------
1 gpfsnode01 192.168.10.101 gpfsnode01 quorum
2 gpfsnode02 192.168.10.102 gpfsnode02 quorum
3 gpfsnode99 192.168.10.253 gpfsnode99 quorum

10.1.11.1 List the IBM GPFS Disks Check the disks in the cluster. There are two data NSDs on each
of the NSD servers and one descriptor-only NSD on the quorum node. The listing of the command
mmlsdisk sapmntdata -L shows that there is one disk per failure group which contains a file system
descriptor. This ensures that a quorum may be reached if a node fails.


disk driver sector failure holds holds storage disk pool remarks
name type size group metadata data status availability id
-------------- ------ ------ ------- -------- ----- ------ ------------ ---- ------ --------
data01node01 nsd 512 1001 yes yes ready up 1 system desc
data02node01 nsd 512 1001 yes yes ready up 2 system
data01node02 nsd 512 1002 yes yes ready up 3 system desc
data02node02 nsd 512 1002 yes yes ready up 4 system
quorum01node99 nsd 512 1003 no no ready up 5 system desc
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

10.1.12 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance.


The location of the SAP HANA installation files is /var/tmp/saphana.

10.2 Single Node with stretched HA Installation

This solution is designed to provide improved high-availability capabilities for a single node SAP HANA
installation. It can be applied to any SAP HANA configuration size. There is one active SAP HANA
instance running on the primary node and database data gets replicated by IBM GPFS to the secondary
node. The secondary node is running in hot-standby, ready to take over operation if the primary node
experiences any failure. In such a 1+1 stretched HA scenario the secondary node is usually located at some
distance from the primary node. Examples are a different fire compartment zone or the other end of the
campus. Depending on the distance it can also be on a different campus in the same city. No non-production
SAP HANA instance is allowed to run in this scenario.
Because of the importance of the quorum node it is recommended to place it at a third site. We
understand, however, that this is not always feasible. This leads to the following two designs. In the first
figure 28: Single Node with stretched HA - Two Site Approach on page 100 the quorum node is placed at
the primary site.
This ensures that IBM GPFS on the primary site node stays up and running even if the link to the
DR-site node gets interrupted.


[Figure: worker node and quorum node at the primary site, standby node at site B; GPFS links, SAP HANA
links and an inter-switch link (ISL) connect the G8264 switches of both sites.]

Figure 28: Single Node with stretched HA - Two Site Approach

The second approach places the quorum node at a third site. The network architecture can be seen in
figure 29: Single Node with stretched HA - Three Site Approach on page 100.

[Figure: quorum node at site C, worker node at the primary site, standby node at site B; GPFS links, SAP
HANA links and an inter-switch link (ISL) connect the G8264 switches.]

Figure 29: Single Node with stretched HA - Three Site Approach

10.2.1 Installation and configuration of SLES and IBM GPFS

This scenario must be installed like a conventional 1+1 HA scenario as shown above in 10.1.1: Installation
of SAP HANA appliance single node with HA on page 92. The major difference is the network setup. It
can be either routed or switched, depending on the client’s environment (in conventional 1+1 HA scenarios
there is only one IBM-provided switch between the hops). Usually, clients have different types of links
spanning the two sites and they use different network equipment technologies. The client is allowed to
use their own network equipment (i.e. switches) on the secondary site. Ensure that the separation of


network interfaces is kept across both nodes (distinct switches or VLANs44 for each IBM GPFS and
HANA network port per node). This is to guarantee high-availability of the solution. The file system
layout is shown in Figure 30: File System Layout - Single Node stretched HA on page 101.
44 Virtual Local Area Network

[Figure: shared file system spanning node1 (failure group FG1), node2 (FG2) and the quorum node node3
(FG3); node1 and node2 hold the first and second data replica plus metadata, all three nodes hold a file
system descriptor, and each node boots its OS from sda1/sda2.]

Figure 30: File System Layout - Single Node stretched HA

10.2.2 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance.


The location of the SAP HANA installation files is /var/tmp/saphana.

10.3 Single Node with DR Installation

This solution is designed to provide disaster recovery capabilities for a single node SAP HANA instal-
lation. It can be applied to any SAP HANA machine size. There is one active SAP HANA instance
running on the primary site node and a standby node on the backup site is ready to take over operation
in case of a disaster. The difference between a single node with stretched HA and a single node with
DR installation is the fact that automatic failover is sacrificed for the possibility to run a non-production
SAP HANA instance on the DR-site node. Otherwise, the two setups are identical. The setup of this
solution is a manual process after SLES has been installed.
Because of the importance of the quorum node it is recommended to place it at a third site. We
understand, however, that this is not always feasible. This leads to the following two designs. In the
first figure 31: Single Node with Disaster Recovery - Two Site Approach on page 102 the quorum node is
placed at the primary site. This ensures that IBM GPFS on the primary site node stays up and running
even if the link to the DR-site node gets interrupted.

[Figure: worker node and quorum node at the primary site, DR node with storage expansion for a non-prod
DB instance at site B; GPFS links, SAP HANA links and an inter-switch link (ISL) connect the G8264
switches of both sites.]

Figure 31: Single Node with Disaster Recovery - Two Site Approach

The second approach places the quorum node at a third site. The network architecture can be seen in
figure 32: Single Node with Disaster Recovery - Three Site Approach on page 102.

[Figure: quorum node at site C, worker node at the primary site, standby node at site B; GPFS links, SAP
HANA links and an inter-switch link (ISL) connect the G8264 switches.]

Figure 32: Single Node with Disaster Recovery - Three Site Approach


10.3.1 Installation and configuration of SLES and IBM GPFS

This scenario has to be installed in the exact same way as described in 10.1.1: Installation of SAP
HANA appliance single node with HA on page 92. IBM GPFS replicates data to the backup site node.
The difference is in the configuration of SAP HANA.

10.3.2 Optional: Expansion Storage Setup for Non-Production Instance

This solution supports the additional use of the DR-site node to host a non-production SAP HANA
instance. Follow instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance
on page 113 to setup the additional disk drives. The overall file system architecture is illustrated in figure
33: File System Layout - Single Node with DR with Storage Expansion on page 103.

[Figure: shared file system spanning node1 (FG1), node2 (FG2) and the quorum node node3 (FG3) with two
data replicas, metadata and file system descriptors as in the HA layout; in addition, an M5120-attached
storage expansion provides a second file system for the non-prod instance on the DR-site node.]

Figure 33: File System Layout - Single Node with DR with Storage Expansion

10.4 Single Node with HA and DR Installation

This solution is designed to provide the maximum level of redundancy for a single node SAP HANA
installation. It can be applied to any SAP HANA configuration size. High availability concepts ensure
that the database stays up if the primary node has an issue. Disaster recovery concepts ensure that
the database stays up if the first two SAP HANA nodes (residing in the primary customer data center)


become unavailable. Figure 34: Single Node with HADR using IBM GPFS Storage Replication on page
104 illustrates the overall architecture of the solution.

[Figure: worker node and standby node at the primary site, DR node with storage expansion for a non-prod
DB instance at site B; GPFS links, SAP HANA links and an inter-switch link (ISL) connect the G8264
switches of both sites.]

Figure 34: Single Node with HADR using IBM GPFS Storage Replication

10.4.1 Installation and configuration of SLES and IBM GPFS

Install the latest supported IBM Systems Solution for SAP HANA on all three nodes using the latest
supported SLES for SAP Applications DVD, the latest non-OS component DVD, and the latest
compatibility DVD.
The procedure is similar to the one described in Installation of SAP HANA appliance single node with HA.
The final file system layout is shown in figure 35 on page 105.


[Figure: shared file system spanning node1 (FG1), node2 (FG2) and node3 (FG3); each node holds one of the
three data replicas, metadata and a file system descriptor, and boots its OS from sda1/sda2.]

Figure 35: File System Layout - Single Node HADR

To begin the installation, you need to install all three Lenovo Workload Optimized Systems using the steps in
our Installation Guides. Configure the network interfaces (internal and external) and the NTP server(s)
as described there. The IP addresses can be in different subnets as long as proper routing between the
subnets is in place. Make sure that all three SAP HANA nodes can ping each other on all interfaces.
1. Start the text based installer as follows on each of the two nodes:
saphana-setup-saphana.sh -H

The switch -H prevents SAP HANA from being installed automatically. This needs to be done
manually later. Refer to the steps for cluster installation as stated in our Installation Guides
together with the steps described below.
2. Select Cluster (worker). This performs a basic installation as a cluster node.
3. Start the installer again as above with the option -H, this time only on the future master node.
This time select Cluster (Master). Enter details for SID, Instance ID and a HANA password. Enter
3 for the number of nodes and 1 for the number of standby nodes (this does not matter, as it would be used
only for HANA, which is not installed automatically anyway). Ensure that the IP addresses for the IBM
GPFS and HANA network are correct. Accept the IBM GPFS license and wait for the installation
process to continue successfully. See again our Installation Guides.
4. Change the replication level of the IBM GPFS file system:
mmchfs sapmntdata -m 3 -r 3


5. Check the replication level set:


mmlsfs sapmntdata
...
-m 3 Default number of metadata replicas
-M 3 Maximum number of metadata replicas
-r 3 Default number of data replicas
-R 3 Maximum number of data replicas
...

6. Restripe the data on the IBM GPFS file system so that all data has the required three replicas:
mmrestripefs sapmntdata -R

7. Set the following IBM GPFS configuration parameters:


mmchconfig unmountOnDiskFail=meta
mmchconfig panicOnDiskFail=meta

8. Adjust the quotas on the file system. The log quota is set to 1 TB regardless of memory size.
mmsetquota -j hanalog -h 1024G -s 1024G /dev/sapmntdata

The data quota for this HADR scenario is set to 9 * RAM. In case of a 1 TB server this means a
quota of 9 TB.
mmsetquota -j hanadata -h 9216G -s 9216G /dev/sapmntdata

Allocate the remaining space to HANA shared and execute mmsetquota accordingly (a sketch for
deriving the remaining capacity follows after this list).
mmsetquota -j hanashared -h <REMAINING>G -s <REMAINING>G /dev/sapmntdata

9. Install SAP HANA similarly as described in section 8.4.5: SAP HANA appliance installation on
page 78.
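As referenced in step 8, the remaining capacity for the hanashared quota can be derived from the total file
system size minus the data and log quotas. A minimal sketch, assuming the sapmntdata file system is mounted
at /sapmnt and using the 1 TB example values from above (adapt the numbers to your sizing):
TOTAL_GB=$(df -BG /sapmnt | awk 'NR==2 {gsub("G","",$2); print $2}')
REMAINING=$((TOTAL_GB - 9216 - 1024))
echo "Set the hanashared quota to ${REMAINING}G"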

10.4.2 Optional: Expansion Storage Setup for Non-Production Instance

This solution supports the additional use of the DR-site node to host a non-production SAP HANA
instance. Follow instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance
on page 113 to setup the additional disk drives. The overall file system architecture is illustrated in figure
36: File System Layout - Single Node HADR with Storage Expansion on page 107.


[Figure: shared file system spanning node1 (FG1), node2 (FG2) and node3 (FG3) with three data replicas,
metadata and file system descriptors; in addition, an M5120-attached storage expansion on the DR-site node
provides a second file system for the non-prod instance.]

Figure 36: File System Layout - Single Node HADR with Storage Expansion

10.5 Single Node DR Installation with SAP HANA System Replication

Note
SAP HANA System Replication is supported with GPFS and XFS filesystems. For servers
installed with XFS, please ignore any reference to GPFS in this section.
This solution provides redundancy at the application layer. It can be applied to any SAP HANA config-
uration size. For details, see official SAP HANA documentation on http://help.sap.com/hana. There
are two ways to design the network for such a DR solution based on System Replication. As the IBM
GPFS interfaces on the DR-site node are not connected to the primary site, a set of redundant switches
is optional. This leads to one architecture with switches and one architecture without switches between
the SAP HANA nodes. Figure 37: Single Node DR with SAP System Replication on page 108 shows the
solution with switches.


[Figure: worker node at the primary site and DR node with storage expansion for a non-prod DB instance at
site B; only SAP HANA links and an inter-switch link (ISL) connect the G8264 switches of both sites.]

Figure 37: Single Node DR with SAP System Replication

Because the two SAP HANA nodes do not use their IBM GPFS network interfaces you can also opt
for a solution without intermediate network switches. In this case you have to connect the two 10 Gbit
interfaces used for SAP HANA communication on the two nodes directly without an intermediate switch.
This architecture is illustrated in figure 38: Single Node DR with SAP System Replication on page 108.

[Figure: worker node at the primary site and DR node with storage expansion for a non-prod DB instance at
site B, connected directly via the SAP HANA links without intermediate switches.]

Figure 38: Single Node DR with SAP System Replication

10.5.1 OS Installation

Each site is considered to be a single node, as far as SLES and IBM GPFS are concerned. The final
file system layout can be seen in figure 39: File System Layout of Single Node DR with SAP System
Replication on page 109.


[Figure: node1 with file system A and node2 with file system B, each holding a single data replica, metadata
and its own file system descriptor (FG1, OS on sda1/sda2); SAP HANA System Replication runs between the
two nodes.]

Figure 39: File System Layout of Single Node DR with SAP System Replication

Perform a single node installation on both nodes as described in our Installation Guides but start the
installer with the -H option:
saphana-setup-saphana.sh -H

In the option list select Single Node .


The switch -H prevents HANA from being installed automatically. This needs to be done manually later.
Data replication will be taken care of at the SAP HANA application level. Replication can happen
synchronously or asynchronously. Configure the network connection for SAP HANA and ensure the
connectivity.

10.5.2 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance.


The location of the SAP HANA installation files is /var/tmp/saphana.

10.5.3 Optional: Expansion Storage Setup for Non-Production Instance

This setup supports the additional use of the DR-site node to host a non-production SAP HANA instance.
The layout of the two file systems (production and non-production) is illustrated in figure 40: File System
Layout of Single Node DR with SAP System Replication with Storage Expansion on page 110.


[Figure: node1 with file system A and node2 with file system B as in the previous layout; node2 additionally
has an M5120-attached storage expansion providing an extra file system for the non-prod instance; SAP HANA
System Replication runs between the two nodes.]

Figure 40: File System Layout of Single Node DR with SAP System Replication with Storage Expansion

On the remote site node (receiving the replication data from primary SAP HANA instance) you will
have two file systems configured. The primary file system spans local disks only and is to be configured
in the exact same way as the primary site file system. This file system will host the replicated data
coming in from the active production SAP HANA instance. The second file system only consists of
storage expansion box drives attached to the remote site node. This file system will host the data
of the non-production SAP HANA instance. Follow instructions in 10.7: Expansion Storage Setup for
Non-productive SAP HANA Instance on page 113 to setup these additional disk drives.

10.6 Single Node with HA using IBM GPFS Storage Replication and DR
using System Replication

This approach also provides maximum redundancy for single node SAP HANA installations. We use
the term 1+1/1 to describe this style of single node installation. It can be applied to any SAP HANA
configuration size. 1+1/1 uses the IBM GPFS storage replication feature and SAP HANA System
Replication feature. For HA (1+1) it uses IBM GPFS storage replication. To achieve this, the active
and the standby node are in the same IBM GPFS cluster and have access to the same file system.
Whenever the active node writes data to disk IBM GPFS replicates it to the standby node.
In addition to that, SAP HANA System Replication transfers data to a DR node on a remote site. In
case of a disaster in the primary site data center the DR node can be used to host SAP HANA. SAP
HANA System Replication can either run in synchronous or in asynchronous replication mode. The DR
node creates a separate IBM GPFS cluster consisting just of itself. It has its own file system on local
disk. There is no logical connection to the primary site IBM GPFS cluster. As a consequence, the IBM
GPFS network adapter on the DR node is to be left unconnected. This leads to two possible network
architectures. The first one provides redundant switches on both sites. Figure 41: Single Node with HA
using IBM GPFS Storage Replication and DR using System Replication on page 111 shows this design.


[Figure: worker node, standby node and quorum node at the primary site forming the HA pair, DR node with
storage expansion for a non-prod DB instance at site B; GPFS links, SAP HANA links and an inter-switch
link (ISL) connect the G8264 switches of both sites.]

Figure 41: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication

The second architecture drops the switches on the DR site and instead connects the only required network
interfaces (the 10 Gbit connection for SAP HANA communication) directly to the primary site switches.
This is illustrated in figure 42: Single Node with HA using IBM GPFS Storage Replication and DR using
System Replication without remote site Switches on page 111.

[Figure: worker node, standby node and quorum node at the primary site, DR node with storage expansion for
a non-prod DB instance at site B; the DR node's SAP HANA links connect directly to the primary site G8264
switches, without switches at the remote site.]

Figure 42: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication
without remote site Switches

10.6.1 Installation and configuration of SLES and IBM GPFS

The two nodes on the primary site are to be installed in the exact same way as a 1+1 HA environment
described in 10.1.1: Installation of SAP HANA appliance single node with HA on page 92. There is one


IBM GPFS cluster and one file system spanning both nodes with IBM GPFS taking care of replicating
the data to the standby node (r=2, m=2).
To install the DR node follow all steps of a standard SAP HANA single node installation apart from
installing SAP HANA itself (use the -H option). Please refer to 10.5: Single Node DR Installation with
SAP HANA System Replication on page 107 for details.
On the DR node, the OS and IBM GPFS have no logical dependency on the primary site nodes. The connection
between the sites is established at the application level with SAP HANA in the next step.
The final file system layout is shown in figure 43: File System of Single Node with HA and DR with
System Replication on page 112 and it illustrates the use of the two technologies, IBM GPFS storage
replication and SAP HANA system replication.

[Figure: GPFS cluster A with node1 (FG1), node2 (FG2) and the quorum node node3 (FG3) sharing file system
A with two data replicas, metadata and file system descriptors; GPFS cluster B consisting of the DR node with
its own file system B (single replica, FG1); SAP HANA System Replication runs between the two clusters.]

Figure 43: File System of Single Node with HA and DR with System Replication

10.6.2 Installation of SAP HANA

Install two separate instances of SAP HANA, one in each site. For the primary site please follow the
according steps for a clustered HA installation.
On the DR node you have to follow all steps of a standard SAP HANA single node installation. This
includes installing all components of SAP HANA and making sure that it runs self contained. You then
have to follow official SAP HANA documentation to enable SAP HANA System Replication between the
instance on the primary site node and the instance on the DR node.
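This is only a rough illustration; the exact options differ between SAP HANA revisions, and the site names,
host name and instance number below are placeholders. Enabling System Replication typically involves
commands along these lines, executed as the <sid>adm user:
hdbnsutil -sr_enable --name=SITEA                # on the primary node, SAP HANA running
hdbnsutil -sr_register --remoteHost=<primary-hostname> --remoteInstance=<instance-no> --mode=sync --name=SITEB   # on the DR node, SAP HANA stopped
Always follow the official SAP HANA documentation for the exact procedure and the desired replication mode.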


[Figure: the same layout as in figure 43, with GPFS cluster A (shared file system A) and GPFS cluster B
(file system B on the DR node); the DR node additionally has an M5120-attached storage expansion for the
non-prod file system.]

Figure 44: File System of Single Node with HA and DR with System Replication and Storage Expansion

10.7 Expansion Storage Setup for Non-productive SAP HANA Instance

10.7.1 GPFS based installations

This section describes how to set up the disks in an expansion storage enclosure that hosts a non-productive SAP
HANA instance. Expansion storage is supported in environments where the nodes at a DR site would
otherwise be idle.
Depending on the memory size of the nodes you have a different number of drives in the expansions.
Create as many (8+p) RAID5 arrays as possible and declare the remaining drives as hot spares. For details on
how to use the RAID configuration utility see 4.1: RAID Setup for GPFS on page 29. Each RAID5
device will be given to IBM GPFS as an NSD.
Collect the device names of all newly created virtual drives. Then create NSDs on them according to the
following rules:
1. all NSDs will be dataAndMetadata
2. all NSDs go into the system pool
3. naming scheme is extXXnodeYY with XX being the two-digit drive number and YY the node number
4. one single failure group for all expansion box drives; make sure it is unique within your cluster
Store a disk descriptor file similar to the following as /tmp/nsdlistexp.txt:
%nsd: device=/dev/sdd
nsd=ext01node02
servers=gpfsnode02
usage=dataAndMetadata
failureGroup=2
pool=system
%nsd: device=/dev/sde


nsd=ext02node02
servers=gpfsnode02
usage=dataAndMetadata
failureGroup=2
pool=system
%pool:
pool=system
blockSize=1M
usage=dataAndMetadata
layoutMap=cluster
allowWriteAffinity=yes
writeAffinityDepth=1
blockGroupFactor=1

Create NSDs
# mmcrnsd -F /tmp/nsdlistexp.txt

Create the file system


# mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 1M -N 3000000 -v no -m 1 -M 2 -r 1 -R 2 -s failureGroupRoundRobin -T /sapmntext

Mount the file system on the backup site node


# mmmount sapmntext

If your client has a storage expansion connected to both nodes, primary site and backup site, then you
need to apply the above procedure twice, once for each node. Each expansion box file system is to be
handled separately. Do not create a single file system that spans over both expansion box disks!
This scenario is used if both data centers – thus both nodes – are to be considered equal and you want to
be able to run production SAP HANA out of both data centers. In this case non-production SAP HANA
instances must also be able to run on both nodes, hence the need for a dedicated /sapmntext file system
on both sides.

10.7.2 XFS accelerated with bcache based installations

The following table gives an overview:


Step  Title
1     Install required hardware
      Perform firmware or driver upgrades
2     Stop SAP HANA
      Unmount XFS filesystem(s)
      Wait until all backing devices are in state clean or no cache
      If SSDs were added: flush all dirty data from the cache back to the HDDs
      Shut down bcache
      Stop the SSD cache software RAID
3     Create RAID5 HDD arrays and partitions
4     Clear left-over headers from newly created devices
      Make sure that all software RAIDs are detected correctly by the OS
      New SSDs: recreate software RAID0 and bcache cache set
5     Check that bcache is still not active
6     Get the names of the newly created partitions
7     Create the software RAID
8     Create new bcache backing device
9     Start the bcache and XFS layers
10    Create XFS filesystem
11    Mount filesystem and attach cache
12    Create subdirectories
13    Install SAP HANA

Table 58: Expansion Storage Setup for Non-productive SAP HANA Instance: XFS-based installations

After the installation of the new hardware, the required reconfiguration can be done on a live system.
Still, a backup is recommended.
The instructions below use /sapmntext as the mount point for the new filesystem. You can use a different
mount point, just adapt all the paths accordingly.
1. Install the required hardware such as RAID controllers, SAS enclosures and drives. Follow the
instructions concerning the hardware installation in chapter 12: Upgrading the Hardware Configuration
on page 119. Keep in mind that newly attached devices may require firmware or driver
upgrades.
2. Follow all steps in section 12.4.3: Prepare Server for Changes in bcache Layout on page 126.
3. Execute
saphana-raid-config.py -u -l bcachexfs -n

This will create RAID5 HDD arrays and partitions on these arrays. The partitions will have the
partition number 126. If the HDDs are not new, you may need to clear any configuration data from
the disks using any tools you are comfortable with.
saphana-raid-config.py -l bcachexfs -c

This will recreate all SSD RAID arrays and will (if required) change the RAID level from RAID0
to RAID1.
4. Follow the steps in section 12.4.6: Reconfigure Software RAID on page 129 except the "grow linear",
the "Run init script", the "xfs_growfs", and the quota calculator part.
5. At this point bcache must still not be active. You can check this with
# /etc/init.d/lenovo-bcachexfs status


6. Get the names of the newly created partitions


# ls -al /dev/sd?126
brw-rw---- 1 root disk 259, 5 Oct 28 16:15 /dev/sdf126

Depending on the number of drives you have installed, you will get one or more devices.
7. Create the software RAID
Use mdadm to create the new software RAID array and append the newly created partition names
to the command:
# mdadm --create /dev/md/external -l linear -c 64 --force -n 1 /dev/sdX126
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/external started.

This will create the new software RAID /dev/md/external. The --force parameter is needed if
the number of RAID members is one.
Clear potential left-overs from the RAID device:
dd if=/dev/urandom of=/dev/md/external bs=512 count=65536

8. Create new bcache backing device


Run
# make-bcache -B /dev/md/external --writeback
UUID: 4c5ff2a0-6296-45a1-956f-5dfe199e6ee9
Set UUID: 28125f54-5d9a-43fd-86f7-93fb8dc9bcee
version: 1
block_size: 1
data_offset: 16

9. Start the bcache and XFS layers with the command


# /etc/init.d/lenovo-bcachexfs start -d -t

This command will initialize only uninitialized components and is safe to run multiple times.
A second bcache device will be created:
# ls -al /dev/bcache?
brw-rw---- 1 root disk 252, 0 Oct 28 16:21 /dev/bcache0
brw-rw---- 1 root disk 252, 1 Oct 28 16:21 /dev/bcache1

10. Create XFS filesystem


First create a mount point for the new filesystem, e.g.: /sapmntext:
# mkdir /sapmntext

Format the block device:


# mkfs.xfs /dev/bcache1
meta-data=/dev/bcache1 isize=256 agcount=9, agsize=268435455 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=2341762809, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0


log =internal log bsize=4096 blocks=521728, version=2


= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

and get the UUID of the new filesystem:


# blkid /dev/bcache1
/dev/bcache1: UUID="c2a9e03d-35bb-4ad0-a62c-5dd5416b340e" TYPE="xfs"

Your UUID will be different.


Edit /etc/fstab and add a line for the new filesystem similar to
UUID=<UUID> /sapmntext xfs nofail,noatime,nodiratime,allocsize=16M,pquota 0 0

and make sure you are using the correct UUID and the desired mount point. You can use the entry
for the main XFS filesystem as a template.
Make the new /etc/fstab entry known to Systemd:
# systemctl daemon-reload

11. Mount filesystem, and attach cache


Mount the filesystem
# mount /sapmntext

Check the state of the new bcache device:


# cat /sys/block/bcache1/bcache/state
dirty

12. Create subdirectories and quotas


Create the required subdirectories
# mkdir /sapmntext/{data,log,shared}

Note
In previous versions, XFS filesystems for non-productive HANA instances had quotas
enabled. These quotas are no longer set.
13. Install SAP HANA to /sapmntext/shared and, when asked for the location of the HANA data and log
files, use the directories /sapmntext/data and /sapmntext/log, respectively.
The new filesystem will be automatically started during the server startup and can be used like the
primary filesystem for HANA. No special handling is required.


11 Virtualization
For information about this topic please refer to our new Special Topic Guide: VMware.


12 Upgrading the Hardware Configuration

Note
Please note that this chapter may differ for special setups like DR. This chapter is about
standard appliance configurations.
There are several possibilities to upgrade IBM and Lenovo appliances. You can either upgrade the RAM
of your appliance (scale-up) or you can add servers to create or increase the size of a cluster (scale-out).
Table 59: RAID array and RAID controller overview on page 120 lists defined models according to
number of CPUs, memory, and number of RAID arrays.
An upgrade from the 4U chassis (x3850 X6) to the 8U chassis (x3950 X6) is possible – with some extra effort.
Upgrades from 2 CPU sockets to 4, and from 4 to 8 sockets are possible. Please note that changes to the PCIe
slot assignment (section 3.7: Card Placement on page 24) are required.
When scaling out a stand-alone installation (single server) to a cluster without changing the RAM it might
be necessary to add additional storage to the servers. Please note the different lines for stand-alone and
scale-out that might list different numbers of RAID arrays. Additional storage can mean either to add 9
HDDs to an existing storage expansion, or to add a new storage expansion, or (only for 8U chassis) to
add a second internal M5210 RAID controller. If your upgrade path requires new RAID controllers, please
follow the instructions in section 3.7: Card Placement on page 24. Scaling out a stand-alone installation
with XFS may require purchasing a GPFS license and always requires a reinstallation.


Chassis    CPUs  Usage       Memory      IA45  EA46  M5120/M5225

x3850 X6   2     Standalone  128-512GB   1     0     0
                             768GB       1     1     1
                             1TB-3TB     1     1     1
                 Scaleout    256GB       1     0     0
           4     Standalone  256-512GB   1     0     0
                             768-2048GB  1     1     1
                             3-4TB       1     2     1
                             6TB         1     3     2
                 Scaleout    512-1536GB  1     1     1
                             2TB         1     2     1
                             3TB         1     3     2
                             4TB         1     4     2
                             6TB         1     5     3
x3950 X6   4     Standalone  256-512GB   1     0     0
                             768-2048GB  2     0     0
                             3TB-4TB     2     1     1
                             6TB         2     2     1
                 Scaleout    512-1536GB  2     0     0
           8     Standalone  512GB       1     0     0
                             1-2TB       2     0     0
                             3-4TB       2     1     1
                             6TB         2     2     1
                             8TB         2     2     1
                             12TB        2     5     3
                 Scaleout    512-1024GB  2     0     0
                             2TB         2     1     1
                             3TB         2     2     1
                             4TB         2     3     2
                             6TB         2     5     3
                             8TB         2     6     3
                             12TB        2     10    5

Table 59: RAID array and RAID controller overview

12.1 Power Policy Configuration

Unless specified to manufacturing, systems shipped from the factory have default settings that may not
meet the customer's desired settings. It is strongly recommended that during pre-installation setup, or
after installing additional hardware options, the power policy and power management selections be
checked to ensure:
• Sufficient power is available for the configuration
• The desired correct power redundancy and throttling settings have been selected
Note
Failure to properly set values can prevent the system from booting or log error events.
45 IA = Number of RAID arrays on Internal M5210 RAID controllers (excluding the RAID array for the OS).
46 EA = Number of RAID5 arrays on External M5120/M5225 RAID controllers.


For more information on how to perform this task, refer to section ’Setting power supply power policy
and system power configurations’ of the System x3850 X6 and x3950 X6 Installation and Service Guide47 .

12.2 Reboot Behavior

When installing or performing upgrades, the operator should be prepared to expect multiple reboots
during the POST process as the system performs the required configuration and setting changes. A lack
of understanding of the reboot behavior could cause the operator to suspect bad or misbehaving hardware or
firmware and result in interrupting the required process. Interrupting the process will result in increased
time to complete the installation and may require service depending on what actions the operator has
performed improperly.
The number of reboots will vary depending upon the type (HW vs FW) and number of changes. Firmware
changes (primary bank, secondary bank, both, option) have the greatest effect, and the number of reboots
may be as high as seven. The number and size of installed memory DIMMs affect the time between reboots,
not their number.
Note
Before adding or removing any hardware, remove AC power and wait for the LCD display
and all Light Emitting Diodes (LEDs) to turn off.
For more information on this topic and to see a reboot guideline chart, refer to RETAIN tip
MIGR-509687348.

12.3 Adding storage (GPFS)

12.3.1 Adding storage via Storage Expansion (D1024, E1024, D1224 or EXP2524)

Depending on your upgrade path, you have the following options:


• Add 9 HDDs to an already attached expansion.
• Attach a new expansion to the server and insert 9 (for 1 RAID5) or 18 HDDs (for 2 RAID5s) and
2 SSDs.
• Install 2 additional SSDs into 1st expansion for CacheCade RAID149 .
Please note: you can also configure RAID6 on the expansions; you then need one more HDD per RAID
array, i.e. 10 or 20 HDDs per expansion, respectively.

Note
All steps – except the installation of a new RAID controller – can be executed without
downtime.
1. Install the M5120/M5225 in the server. (Skip this step when just adding storage to an existing
expansion.)
2. Install the HDDs and SSDs in the expansion. (When just adding storage, you will only add HDDs
and no SSDs).
3. Connect the expansion to power and via SAS cable to the RAID controller. (Skip this step when
just adding storage to an existing expansion.)
47 http://publib.boulder.ibm.com/infocenter/systemx/documentation/topic/com.ibm.sysx.3837.doc/nn1hu_install_and_service_guide.pdf
48 http://www.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5096873
49 For details on hardware configuration and setup see Operations Guide for X6 based models, section CacheCade RAID1 Configuration


4. 12.3.3: Configure RAID array(s) on page 122.


5. 12.3.8: Configuring GPFS on page 124.

12.3.2 Adding storage on second internal M5210 controller

The second M5210 will be connected to 6 HDDs for a RAID5 and 2 SSDs for CacheCade.
1. Install the M5210 in the server.
2. Install the HDDs and SSDs.
3. 12.3.3: Configure RAID array(s) on page 122.
4. 12.3.8: Configuring GPFS on page 124.

12.3.3 Configure RAID array(s)

Note
Appliance version 1.8.80-12 (and later) come with the tool saphana-raid-config.py. Use the
following three commands instead of the manual configuration described in the next chapters.
Execute this command to adjust the CacheCade settings:
saphana-raid-config.py -c
Execute this command to configure the unconfigured HDDs into RAID arrays:
saphana-raid-config.py -u
Execute this command to activate the CacheCade also on the newly created RAID arrays:
saphana-raid-config.py -c
Now continue with 12.3.8: Configuring GPFS on page 124.
The command line tool storcli is installed on your appliance. It will be used to configure the RAIDs.
Note
All commands were tested with storcli version 1.07.07. Other versions’ syntax may vary.
Look in the output of storcli64 /call show for the controller with the unconfigured drives (UGood).
The actual enclosure IDs (EID), slot numbers (Slt), and ID of the controller may vary in your setup.
:
Controller = 1
Status = Success
Description = None

Product Name = ServeRAID M5120


:
-------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
-------------------------------------------------------------------------
8:1 18 UGood - 371.597 GB SAS SSD N Y 512B TXA2D20400GA6I U
8:2 19 UGood - 371.597 GB SAS SSD N Y 512B TXA2D20400GA6I U
8:3 9 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
8:4 10 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
8:5 11 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
8:6 12 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
8:7 13 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
8:8 14 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U


8:9 15 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U


8:10 16 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
8:11 17 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
-------------------------------------------------------------------------
:

Create the RAID5, where 8:3-11 is an example list of the HDDs used. It is following the scheme
<Enclosure Device ID>:<Slot Number range>. /c1 stands for controller 1.
storcli64 /c1 add vd type=raid5 drives=8:3-11 wb ra cached pdcache=off strip=64

If you have to configure a second RAID5 array, configure it accordingly.
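For example, assuming a second set of nine HDDs sits in slots 12 to 20 of the same enclosure (hypothetical
slot numbers; check the storcli64 output for the real ones), the command would be:
storcli64 /c1 add vd type=raid5 drives=8:12-20 wb ra cached pdcache=off strip=64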

12.3.4 Deciding for a CacheCade RAID Level

The CacheCade RAID arrays can be configured either with RAID1 or RAID0.
Depending on the hardware setup you have the option to decide which RAID level you want to use. In
general, running RAID1 is recommended, but smaller setups with fewer RAID controllers and SSDs may
not have sufficient performance to run CacheCade with RAID1 and achieve the KPI50 requirements SAP
set for HANA appliances. The SAP HANA TDI KPIs are met with all setups and Lenovo will provide
full Lenovo HANA Solution support for these configurations.
The automated installation will automatically choose RAID1 if sufficient SSDs are available for achieving
the SAP HANA Appliance KPIs, otherwise RAID0 will be chosen. The CacheCade RAID level can be
later changed manually.

                                  RAID 0           RAID 1
                                  SAP TDI &        SAP Appliance KPIs     SAP TDI
Hardware                          Appliance KPIs   Standalone   Cluster   KPIs
1 M5210                           ✓                ✗            ✗         ✓
1 M5210 + 1 M5120/M5225 (2 SSDs)  ✓                ✗            ✓         ✓
1 M5210 + 1 M5120/M5225 (4 SSDs)  ✓                ✓            ✓         ✓
1 M5210 + 2 or more M5120/M5225   ✓                ✓            ✓         ✓
2 M5210                           ✓                ✓            ✓         ✓

Table 60: CacheCade RAID Level Possibilities

This table is only valid for systems that have all the storage required for the respective server size; all of
the mentioned hardware must be part of the standard main filesystem. Hardware used for non-prod
filesystems does not factor in for these hardware requirements.
Keep in mind that all CacheCade virtual drives must have the same RAID level. This means when adding
hardware and switching RAID levels you may have to recreate existing CacheCade arrays that now have
the wrong RAID level.

12.3.5 Configuring RAID array when CacheCade is not yet configured

Create the CacheCade device, where assignvds=X assigns it to the RAID5 array (with X being the Logical/Virtual
Drive ID). If you created two RAID5 arrays, use assignvds=X,Y to assign the CacheCade VD to both arrays.
8:1-2 is an example list of the SSDs used.
For choosing the RAID level (raidX) see the previous section.
50 Key Performance Indicator


storcli64 /c1 add vd cachecade type=raidX drives=8:1-2 wb assignvds=0

Adjust settings of the CacheCade device, where /vX is the CacheCade VD (with X as the Logical/Virtual
Drive ID):
storcli64 /c1/v1 set rdcache=ra iopolicy=cached wrcache=wb

12.3.6 Configuring RAID array with existing CacheCade

If you added storage to an existing expansion, the CacheCade VD is already configured.
Assign the CacheCade VD to the newly created RAID5 array, where /cX is the controller, and /vX the
RAID5 array:
storcli64 /c1/v2 set ssdcaching=on

12.3.7 Changing the CacheCade RAID Level

To change the RAID level of an existing CacheCade VD you have to delete and recreate the CacheCade
VD.
First, find the CacheCade VD ID and the slots of the SSDs. Use the following command, where /cX
is the RAID controller.
storcli64 /c0 show

Now delete the CacheCade VD, where /cX is the RAID controller and /vX is the ID of the CacheCade
VD.
storcli64 /c0/v1 delete cachecade

Recreate the deleted CacheCade VD, where /cX is the RAID controller and drives=252:2-3 is an example
list of SSD drives used.
storcli64 /c0 add vd cachecade type=raid1 drives=252:2-3 wb

and use the desired RAID level.


Adjust the settings of the CacheCade VD, where /cX is the RAID controller and /vX is the ID of the
newly created CacheCade VD.
storcli64 /c0/v2 set rdcache=ra iopolicy=cached wrcache=wb

12.3.8 Configuring GPFS

Find the block device that belongs to the newly created RAID array. mmlsnsd -X, lsscsi, and lsblk
may be helpful.
Determine a name for the new NSD(s). For example, if you are on gpfsnode01, execute mmlsnsd | grep
gpfsnode01 to find out which names are already in use by the existing NSDs.
Create a stanza file (/var/mmfs/config/disk.list.data.gpfsnodeZZ.new) containing the information
about the new GPFS NSD(s). Repeat this block for all newly created RAID arrays accordingly. ZZ is
the node number (e.g. 01 in gpfsnode01).


%nsd: device=/dev/sdX
nsd=dataYYnodeZZ
servers=gpfsnodeZZ
usage=dataAndMetadata
failureGroup=10ZZ
pool=system

Execute
mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no
mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no

Attention
The following command must only be executed on stand-alone configurations. Do not execute
it in a cluster environment!
mmrestripefs sapmntdata -b
This will balance the data between the used and unused disks equally.
Change the GPFS quotas to match the new requirements. Run the quota calculator and you will see a
result like this:
# saphana-quota-calculator.sh
Please set the Shared quota to 8187 GB
Please set the Data quota to 3072 GB
Please set the Log quota to 1024 GB

Use the following command(s) to set the quota(s)


mmsetquota sapmntdata:hanadata --block 3072G:3072G
mmsetquota sapmntdata:hanalog --block 1024G:1024G
mmsetquota sapmntdata:hanashared --block 8187G:8187G

12.4 Adding storage (XFS accelerated with bcache)

If you want to configure an external filesystem for a non-prod SAP HANA instance, please refer to section
10.7.2: XFS accelerated with bcache based installations on page 114.

12.4.1 Adding storage via Storage Expansion (D1024, E1024, D1224 or EXP2524)

Depending on your upgrade path, you have the following options:


• Add 9 HDDs to an already attached expansion.
• Install an additional M5120 or M5225 RAID controller and attach a new expansion to the server
and insert 9 HDDs (for 1 RAID5) or 18 HDDs (for 2 RAID5s) and 2 SSDs.
Please note: you can also configure RAID6 on the expansions, you then need 1 HDD more per RAID
array, i.e. 10 respectively 20 HDDs per expansion.
Note
When reusing existing hardware, you may end with 4 SSDs in the storage expansion. You
can use all four SSDs or remove two of them.
1. Install the M5120/M5225 in the server. (Skip this step when just adding storage to an existing
expansion.)


2. Install the HDDs and SSDs in the expansion. (When just adding storage, you will only add HDDs
and no SSDs).
3. Connect the expansion to power and via SAS cable to the RAID controller. (Skip this step when
just adding storage to an existing expansion.)
4. Prepare server (chapter 12.4.3)
5. Configure hardware RAID arrays (chapters 12.4.4 and 12.4.5)
6. Reconfigure software RAID arrays (chapter 12.4.6 )
7. Resize XFS filesystem

12.4.2 Adding storage on second internal M5210 controller

The second M5210 will be connected to 6 HDDs for a RAID5 and 2 SSDs for bcache.
1. Install the M5210 in the server.
2. Install the HDDs and SSDs.
3. Prepare server (chapter 12.4.3)
4. Configure hardware RAID arrays (chapters 12.4.4 and 12.4.5)
5. Reconfigure software RAID arrays (chapter 12.4.6)
6. Resize XFS filesystem

12.4.3 Prepare Server for Changes in bcache Layout

After adding the additional HDDs and/or SSDs to the server you have to ensure that the server is ready
for changes in the bcache layout.
1. Stop SAP HANA:
service sapinit stop

2. Make sure that no process is accessing the XFS filesystem(s), then unmount the XFS filesystem(s):
umount /dev/bcache?

3. Wait until all backing devices are in state clean or no cache:


# cat /sys/block/bcache*/bcache/state

Depending on the amount of dirty data, it can take a while to flush the writeback cache. To see the
progress, check the file dirty_data in the same directory.
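For example, a small loop like the following (a sketch; it only reads the sysfs files mentioned above) prints
the current state and the remaining dirty data per backing device:
for dev in /sys/block/bcache*/bcache; do
    # print state and remaining dirty data for each backing device
    echo "$dev: state=$(cat $dev/state), dirty_data=$(cat $dev/dirty_data)"
done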

Warning
Do not continue until the state of all devices is clean or no cache!
4. If you added only HDDs and no SSDs, continue with 12.4.4: Configure RAID array(s) on page 127.
If you added SSDs, continue with the next steps.
5. The bcache cache set must be recreated and this requires flushing all dirty data from the cache
back to the HDDs, which can take a while. Detach the cache set from the backing devices:
# echo 1 > /sys/block/bcache0/bcache/detach


Repeat this for other bcache devices if you have additional filesystems.
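If there are several backing devices, they can also be detached with a short loop (a sketch, assuming all
backing devices appear under /sys/block/bcache*):
for dev in /sys/block/bcache*/bcache; do
    echo 1 > "$dev/detach"   # detach the cache set from this backing device
done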
6. Wait until
# cat /sys/block/bcache*/bcache/state

shows only no cache


7. Shutdown bcache
# /etc/init.d/lenovo-bcachexfs stop -d -t

8. If the bcache module was unloaded, everything was shut down cleanly. Check whether the
module was unloaded:
lsmod | grep ^bcache

If there is no output, the module was unloaded.


9. Stop the SSD cache software RAID:
# mdadm --stop /dev/md/cache
mdadm: stopped /dev/md/cache

12.4.4 Configure RAID array(s)

Note
Appliance version 1.8.80-12 (and later) come with the tool saphana-raid-config.py. Use the
following three commands instead of the manual configuration described in the next chapters.
Execute this command to auto-configure the unconfigured HDDs into RAID arrays:
saphana-raid-config.py -l bcachexfs -u
If SSDs were added you also need to run
saphana-raid-config.py -l bcachexfs -c
Continue with 12.4.6: Reconfigure Software RAID on page 129.

Note
All commands were tested with storcli version 1.07.07. Other versions’ syntax may vary.
The command line tool storcli is installed on your appliance. It will be used to configure the RAIDs.
1. Look in the output of storcli64 /call show for the controller with the unconfigured drives
(UGood). The actual enclosure IDs (EID), slot numbers (Slt), and ID of the controller may vary
in your setup.
Controller = 1
Status = Success
Description = None

Product Name = ServeRAID M5120


-------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
-------------------------------------------------------------------------
8:1 18 UGood - 371.597 GB SAS SSD N Y 512B TXA2D20400GA6I U
8:2 19 UGood - 371.597 GB SAS SSD N Y 512B TXA2D20400GA6I U
8:3 9 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
8:4 10 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
8:5 11 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U


8:6 12 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U


8:7 13 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
8:8 14 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
8:9 15 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
8:10 16 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
8:11 17 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
-------------------------------------------------------------------------

2. Create the HDD RAID5 array, where 8:3-11 is an example list of the HDDs used. It is following
the scheme <Enclosure Device ID>:<Slot Number range>. /c1 stands for controller 1.
# storcli64 /c1 add vd type=raid5 drives=8:3-11 wb ra cached pdcache=off strip←-
,→=64

After the RAID array has been created, partition the new drive:
# sgdisk -g --new=127:1MiB:0 --change-name=127:bcache-storage --typecode=127:←-
,→fd00 /dev/sdX

and in case of a non-prod filesystem use


# sgdisk -g --new=126:1MiB:0 --change-name=126:bcache-external --typecode=126:←-
,→fd00 /dev/sdX

To get the device name, have a look at the end of the output of dmesg; newly created devices
will be logged there.
If you have to configure a second RAID5 array, configure it accordingly.
3. If SSDs were added with the upgrade, the SSD RAID1 arrays must be created as well. Take two
SSDs and create a RAID1 array:
# storcli64 /c1 add vd type=raid1 drives=8:1-2 wb ra cached pdcache=off strip←-
,→=64

After the RAID array has been created, partition the new drive:
# sgdisk -g --new=128:1MiB:0 --change-name=128:bcache-cache --typecode=128:fd00←-
,→ /dev/sdX

To get the device name, have a look at the end of the output of dmesg; newly created devices
will be logged there.
If you have four SSDs, please create two RAID1 arrays with two SSDs each.
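For example, assuming the two additional SSDs show up as enclosure 8, slots 12 and 13 (hypothetical values;
take the actual EID:Slt values from the output of storcli64 /call show), the second RAID1 array would be
created and partitioned in the same way as the first one:
# storcli64 /c1 add vd type=raid1 drives=8:12-13 wb ra cached pdcache=off strip=64
# sgdisk -g --new=128:1MiB:0 --change-name=128:bcache-cache --typecode=128:fd00 /dev/sdX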

12.4.5 Reconfigure first RAID controller SSD RAID array

Note
Skip this section, if you used saphana-raid-config.py -l bcachexfs -c in the section
before. Use this section only, if you do the configuration by hand (not recommended).
If SSDs were added as part of the upgrade and the system previously had only one RAID controller with
only two SSDs installed, reconfigure the existing SSD RAID0 array as a RAID1 array.
1. Destroy the SSD RAID array
If set up correctly, the SSD RAID array should be at /c0/v2, but to verify this, run


# storcli64 /c0/dall show


Controller = 0
:
--------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type State BT Size PDC PI SED DS3 FSpace
--------------------------------------------------------------------------
0 - - - - RAID1 Optl N 1.089 TB dflt N N dflt N
0 0 - - - RAID1 Optl N 1.089 TB dflt N N dflt N
0 0 0 252:0 9 DRIVE Onln N 1.089 TB dflt N N dflt -
0 0 1 252:1 10 DRIVE Onln N 1.089 TB dflt N N dflt -
1 - - - - RAID5 Optl N 3.270 TB dsbl N N dflt N
1 0 - - - RAID5 Optl N 3.270 TB dsbl N N dflt N
1 0 0 252:4 1 DRIVE Onln N 1.089 TB dsbl N N dflt -
1 0 1 252:5 2 DRIVE Onln N 1.089 TB dsbl N N dflt -
1 0 2 252:6 3 DRIVE Onln N 1.089 TB dsbl N N dflt -
1 0 3 252:7 4 DRIVE Onln N 1.089 TB dsbl N N dflt -
2 - - - - RAID0 Optl N 743.195 GB dflt N N dflt N
2 0 - - - RAID0 Optl N 743.195 GB dflt N N dflt N
2 0 0 252:2 7 DRIVE Onln N 371.597 GB dflt N N dflt -
2 0 1 252:3 8 DRIVE Onln N 371.597 GB dflt N N dflt -
--------------------------------------------------------------------------
:

The RAID0 has two 371 GiB (400 GB) members, so this is the SSD array, and disk group index 2
corresponds to /v2 as expected.
Delete the RAID array
# storcli64 /c0/v2 del force
Controller = 0
Status = Success
Description = Delete VD succeeded

2. Create the new RAID1 array. The easiest way is to use the RAID configuration script:
# saphana-raid-config.py -u -c -l bcachexfs

To create the RAID1 manually run these commands


# storcli64 /c0 add vd type=raid1 drives=252:2,252:3 wb ra cached pdcache=off ←-
,→strip=64
# dmesg | tail -n 10
# sgdisk -g --new=128:1MiB:0 --change-name=128:bcache-cache --typecode=128:fd00←-
,→ /dev/sdX

The correct drive slot numbers can be found in the storcli output above. To get the new device
name for sgdisk, have a look at the end of the output of dmesg; newly created devices will be
logged there.

12.4.6 Reconfigure Software RAID

At this point bcache is still shut down.


1. Make sure that all software RAIDs are detected correctly by the OS. Run
# udevadm trigger


Now the OS should be able to see the storage device. If you added SSDs, the cache device is not
present yet.
# ls /dev/md/
storage

2. Clear left-over headers from newly created devices.


When reusing HDDs or SSDs, you should wipe out the start of the newly created partitions:
# dd if=/dev/urandom of=/dev/sdX127 bs=512 count=65536
# dd if=/dev/urandom of=/dev/sdX128 bs=512 count=65536

Make sure that you wipe all partitions, except those on sda or appearing in /proc/mdstat:
# cat /proc/mdstat
Personalities : [linear]
md127 : active linear sdb127[0]
3512561647 blocks super 1.2 0k rounding

unused devices: <none>
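A small check like the following (a sketch, assuming the bcache partitions are numbered 127 and 128 as
above) lists the partitions that are not part of an active software RAID and are therefore candidates for
wiping:
for part in /dev/sd*127 /dev/sd*128; do
    [ -e "$part" ] || continue                        # skip non-existing globs
    grep -q "$(basename "$part")" /proc/mdstat || echo "candidate for wiping: $part"
done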

3. Grow linear
To add the newly created HDD RAID array to the software linear RAID array, run
# mdadm --grow --add /dev/md/storage /dev/sdX127

and give the partition device you created in the previous steps. If you created multiple devices, you
can either repeat the command for every device or add all at once. For extending the non-prod
filesystem, use /dev/md/external as the third parameter.
4. This step is only necessary if you added SSDs: Recreate software RAID0 and bcache cache set
Create the new RAID array:
# mdadm --create /dev/md/cache -l 0 -c 64 -n X /dev/sd*128
mdadm: /dev/sdc128 appears to be part of a raid array:
level=raid0 devices=2 ctime=Wed Oct 21 13:06:48 2015
mdadm: /dev/sde128 appears to be part of a raid array:
level=raid0 devices=2 ctime=Wed Oct 21 13:06:48 2015
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/cache started.

The parameter -n X gives the number of RAID members. Get the number of /dev/sd*128 devices
from ls /dev/sd*128. mdadm will complain that the old drives are part of a RAID array. Confirm
that this is correct.
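For example, the number of members can be determined by counting the partitions:
# ls /dev/sd*128 | wc -l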
Wipe potential left-overs from previous usage:
# dd if=/dev/urandom of=/dev/md/cache bs=512 count=65536

Create a new cache set on top of the RAID array:


# make-bcache -C /dev/md/cache

5. Run init script


Now use the init script to start bcache and XFS:


# /etc/init.d/lenovo-bcachexfs start -d -t

The script will automatically assign the cache set to the backing device.
6. xfs_growfs
Grow the XFS filesystem
# xfs_growfs /dev/bcache0

The operation takes only a few seconds. The command must be run on the mounted filesystem.
You can either use the device name or the mount point.
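For example, if the filesystem is mounted under /hana, the equivalent call using the mount point would be:
# xfs_growfs /hana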
7. Change the quotas to match the new requirements. Run the quota calculator:
# saphana-quota-calculator.sh -a
Please set the Shared quota to 10697 GB
Please set the Data quota to 6144 GB
Please set the Log quota to 1024 GB

Use the following command(s) to set the quota(s)


xfs_quota -x -c 'limit -p bhard=6144g hanadata' /hana
xfs_quota -x -c 'limit -p bhard=1024g hanalog' /hana
xfs_quota -x -c 'limit -p bhard=10697g hanashared' /hana
Quotas set.

12.5 Adding memory

Note
The installation of additional memory requires a system downtime.
When the customer decides on a scale-up, i.e. adding RAM to the server(s), you have to follow the
memory DIMM placement rules for IBM X6 servers to get the best performance. The DIMMs must be
distributed equally over all CPU books: each CPU book must contain the same number of DIMMs in the
same slots.
Tables 61: x3850 X6 Memory DIMM Placement on page 132, and 62: x3950 X6 Memory DIMM Place-
ment on page 132 show which slots must be populated for specific configurations. The number of memory
DIMMs can be computed as "RAM size"/"DIMM size", e.g. 1024 GB of RAM with 32 GB DIMMs results in 32 DIMMs.
After the installation of additional memory, SAP HANA's global allocation limit must be reconfigured.
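A minimal sketch of the corresponding entry in SAP HANA's global.ini (the value is given in MB and must
be chosen according to the SAP sizing rules for your installation; consult the SAP HANA administration
documentation for details):
[memorymanager]
global_allocation_limit = <new value in MB>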

12.6 Adding CPU Books

Note
The installation of additional CPU books requires a system downtime.
The following upgrade paths are possible:
• x3850 X6, 2 sockets → x3850 X6, 4 sockets
• x3950 X6, 4 sockets → x3950 X6, 8 sockets
• x3850 X6, 4 sockets → x3950 X6, 8 sockets, including the exchange of the 4U chassis to a 8U chassis
• x3850 X6, 4 sockets → x3950 X6, 4 sockets, including the exchange of the 4U chassis to a 8U chassis


                      2 Sockets                    4 Sockets
DIMMs per server      8   16  24  32  48           16  32  48  64  96
DIMM Slots 9, 6       ✓   ✓   ✓   ✓   ✓            ✓   ✓   ✓   ✓   ✓
DIMM Slots 1, 10      ✓   ✓   ✓   ✓   ✓            ✓   ✓   ✓   ✓   ✓
DIMM Slots 15, 24     ✗   ✓   ✓   ✓   ✓            ✗   ✓   ✓   ✓   ✓
DIMM Slots 19, 16     ✗   ✓   ✓   ✓   ✓            ✗   ✓   ✓   ✓   ✓
DIMM Slots 8, 5       ✗   ✗   ✓   ✓   ✓            ✗   ✗   ✓   ✓   ✓
DIMM Slots 2, 11      ✗   ✗   ✓   ✓   ✓            ✗   ✗   ✓   ✓   ✓
DIMM Slots 14, 23     ✗   ✗   ✗   ✓   ✓            ✗   ✗   ✗   ✓   ✓
DIMM Slots 20, 17     ✗   ✗   ✗   ✓   ✓            ✗   ✗   ✗   ✓   ✓
DIMM Slots 7, 4       ✗   ✗   ✗   ✗   ✓            ✗   ✗   ✗   ✗   ✓
DIMM Slots 3, 12      ✗   ✗   ✗   ✗   ✓            ✗   ✗   ✗   ✗   ✓
DIMM Slots 13, 22     ✗   ✗   ✗   ✗   ✓            ✗   ✗   ✗   ✗   ✓
DIMM Slots 21, 18     ✗   ✗   ✗   ✗   ✓            ✗   ✗   ✗   ✗   ✓

Table 61: x3850 X6 Memory DIMM Placement (✓ = slots populated, ✗ = slots empty)

                      4 Sockets                    8 Sockets
DIMMs per server      16  32  48  64  96           32  64  96  128 192
DIMM Slots 9, 6       ✓   ✓   ✓   ✓   ✓            ✓   ✓   ✓   ✓   ✓
DIMM Slots 1, 10      ✓   ✓   ✓   ✓   ✓            ✓   ✓   ✓   ✓   ✓
DIMM Slots 15, 24     ✗   ✓   ✓   ✓   ✓            ✗   ✓   ✓   ✓   ✓
DIMM Slots 19, 16     ✗   ✓   ✓   ✓   ✓            ✗   ✓   ✓   ✓   ✓
DIMM Slots 8, 5       ✗   ✗   ✓   ✓   ✓            ✗   ✗   ✓   ✓   ✓
DIMM Slots 2, 11      ✗   ✗   ✓   ✓   ✓            ✗   ✗   ✓   ✓   ✓
DIMM Slots 14, 23     ✗   ✗   ✗   ✓   ✓            ✗   ✗   ✗   ✓   ✓
DIMM Slots 20, 17     ✗   ✗   ✗   ✓   ✓            ✗   ✗   ✗   ✓   ✓
DIMM Slots 7, 4       ✗   ✗   ✗   ✗   ✓            ✗   ✗   ✗   ✗   ✓
DIMM Slots 3, 12      ✗   ✗   ✗   ✗   ✓            ✗   ✗   ✗   ✗   ✓
DIMM Slots 13, 22     ✗   ✗   ✗   ✗   ✓            ✗   ✗   ✗   ✗   ✓
DIMM Slots 21, 18     ✗   ✗   ✗   ✗   ✓            ✗   ✗   ✗   ✗   ✓

Table 62: x3950 X6 Memory DIMM Placement (✓ = slots populated, ✗ = slots empty)


Follow these steps to add additional CPU books to a server:


1. If running GPFS: Disable the GPFS auto-mount for your GPFS filesystems. If you only have the
standard GPFS filesystem the following command is enough. If you have more GPFS filesystems,
change the configuration for them accordingly.
mmchfs sapmntdata -A no

2. If running XFS with bcache: Stop HANA, and bcache and XFS. Disable the automated start of
bcache and XFS, and HANA:
service sapinit stop
service lenovo-bcachexfs stop
chkconfig lenovo-bcachexfs off
chkconfig sapinit off

3. Power off the machine.


4. Place the new CPU books in the server. Please make sure that the memory DIMMs are placed
correctly in the CPU books. (See 12.5: Adding memory on page 131.)
5. Adapt the PCIe card placement according to the tables in section 3.7: Card Placement on page 24.
6. Power on the machine.
7. Save the file /etc/udev/rules.d/99-lenovo-saphana-persistent-net.rules to another loca-
tion.
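For example (the backup location is arbitrary):
# cp /etc/udev/rules.d/99-lenovo-saphana-persistent-net.rules /root/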
8. Execute
saphana-udev-config.sh -sw

9. Reboot the machine.


10. Review the network settings.
11. If running GPFS:
Mount the GPFS filesystem by hand.
mmmount sapmntdata

Enable the GPFS auto-mount option for your GPFS filesystems again.
mmchfs sapmntdata -A yes

12. If running XFS with bcache: Start bcache and XFS. Enable automated start of bcache and XFS,
and HANA:
service lenovo-bcachexfs start
chkconfig lenovo-bcachexfs on
chkconfig sapinit on

13. Finally, start the HANA database.
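If HANA does not come up automatically, it can be started via the sapinit service, for example:
# service sapinit start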


13 Software Updates

Note
Starting with appliance version 1.9.96-13 the mount point for the GPFS file system sapmntdata
is user-configurable during installation. SAP HANA will also be installed into this path.
Lenovo recommends /hana for new installations.
The following commands and code snippets use /sapmnt. For any other path please replace
/sapmnt with the chosen path.

Note
Starting with appliance version 1.10.102-14 XFS is supported on appliances running
SLES for SAP 12. The mount point for this XFS filesystem is also configurable during instal-
lation.
The following commands and code snippets may use /sapmnt when referring to this mount
point. You may have to adapt your commands accordingly.

Attention
As mentioned at https://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-CHANGE,
a migration of the Lenovo-branded System x and Storage products from IBM Fix Central to the Lenovo
Support site http://support.lenovo.com/us/en/ is in progress.

13.1 Warning

Please be careful with updates of the software stack. Update software and driver components only for a
good reason, either because you are affected by a bug or have a security concern, and only after Lenovo or
SAP support has advised you to upgrade or you have requested approval from support. Be conservative
with updates, as they may affect the proper operation of your SAP HANA appliance, and the System x
SAP HANA development team does not test every released patch or update.

13.2 General Update Procedure

There are basically two update variants: disruptive and rolling. Disruptive upgrades always require a
downtime.
This subsection gives a general overview of how updates should be applied. It then presents two ways to
update a cluster environment: disruptive, with a downtime, or rolling, where one node is updated at a
time and then re-added to the cluster.
Before performing a rolling update (a non-disruptive, one-node-at-a-time update) in a cluster environment,
make sure that your cluster is in good health and all server nodes and storage devices are running.

13.2.1 Single Node Update

The update of a single node is always disruptive. Please plan for downtime.

13.2.1.1 Single Node with XFS Filesystem and bcache The XFS and bcache kernel modules
are part of the SLES release. The update procedure is as follows:


Step Title ✓
1 Stop SAP HANA
2 Perform Update
3 Start SAP HANA

Table 63: Update Procedure for XFS with bcache on single node

13.2.1.2 Single Node with GPFS Filesystem The update procedure is as follows:

Step Title ✓
1 Stop SAP HANA
2 Unmount GPFS filesystems & stop GPFS
3 Perform Update
4 Restart GPFS & mount filesystem
5 Start SAP HANA

Table 64: Update Procedure for GPFS on single node

13.2.2 Cluster Update

Clusters are only available with GPFS filesystem. Update is possible either as rolling or disruptive update.

13.2.2.1 Disruptive Cluster Update In the disruptive cluster update scenario, one would shut
down the whole cluster and apply all updates on all nodes in parallel: step 1 on all nodes, then step 2 on
all nodes and then step 3 on all nodes and so on.
This will cause a downtime as for the single node update.

13.2.2.2 Rolling Cluster Update The update procedure is as follows:

Step Title ✓
1 Check GPFS cluster health
2 (Optional) Deactivate GPFS autorecovery
3 Stop SAP HANA
4 Unmount GPFS filesystems & stop GPFS
5 Perform Update
6 Restart GPFS & mount GPFS filesystem
7 Verify GPFS disks and restripe filesystem
8 Start SAP HANA
9 Continue with step 1 on next node
10 Activate GPFS autorecovery
11 Restore accurate filesystem usage

Table 65: Upgrade GPFS Portability Layer Checklist


13.2.3 Common Steps

13.2.3.1 (on any node) Deactivate GPFS Autorecovery This step is optional. Advanced users
may deactivate GPFS autorecovery. See chapter Temporary Deactivation of GPFS Autorecovery in the
Lenovo SAP HANA Appliance Operations Guide 51 .

13.2.3.2 (on the target node) Stop SAP HANA Cleanly stop SAP HANA and all other running SAP
software. Log in as root and execute
# service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP
software manually.
Stopping SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help Portal52
or SAP Service Marketplace53 .
Verify that SAP HANA and sapstartsrv are not running anymore:
# ps ax | grep sapstart
# ps ax | grep hdb

On SAP HANA database revision 90 or later, a process hdbrsutil might still be running:
# ps ax | grep hdb
5671 ? Ssl 0:00 hdbrsutil -f -D -p <port number (e.g. 3xx03)> -i ←-
,→1439801877
6110 pts/1 S+ 0:00 grep hdb

In that case, you can stop the process with (as <sid>adm user):
# hdbrsutil -f -k -p <port number (e.g. 3xx03)>

For further information, please check SAP Note 2191221 – hdbrsutil can block unmount of the filesystem.
No processes should be found; if any processes are found, please retry stopping SAP HANA.

13.2.3.3 (on the target node) Start SAP HANA Start SAP HANA
# service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP
software manually as documented in the SAP HANA administration guidelines at the SAP Help Portal
or SAP Service Marketplace.

13.2.3.4 (on the target node) Check GPFS cluster health Before performing any updates on
any node, verify that the cluster is in a sane state. First check that all nodes are running with the
command
# mmgetstate -a

and check that all nodes are active, then verify that all disks are active
# mmlsdisk sapmntdata -e
51 SAP Note 1650046 (SAP Service Marketplace ID required)
52 https://help.sap.com/hana
53 https://service.sap.com/hana


The disks on the node to be taken down do not need to be in the up state, but make sure that all other
disks are up.
If you have more than one GPFS filesystem (e.g. sapmntext) make sure to check the health of every
filesystem.
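For example, for an additional filesystem named sapmntext:
# mmlsdisk sapmntext -e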
Warning
If disks of more than one server node are down, the filesystem will be shut down causing all
other SAP HANA nodes to fail.

13.2.3.5 (on the target node) Unmount the GPFS filesystem and stop GPFS
1. Unmount the GPFS filesystem
Take care that no open process is preventing the filesystem from unmounting. Use
# lsof /sapmnt

to find processes still accessing the filesystem, e.g. running shells (root, <SID>adm, etc.). Other
nodes within the cluster can still mount the shared file system.
Unmount the shared filesystem locally using
# mmumount all

2. Stop GPFS
# mmshutdown

GPFS should unload its kernel modules during its shutdown, so check the output of this command.

13.2.3.6 (on the target node) Restart GPFS and mount the filesystem

Note
Instead of these manual steps, you can also restart the server. GPFS and SAP HANA should
start automatically during reboot. You can then also skip the next step. Please check that
GPFS and SAP HANA started successfully.
1. Start GPFS
# mmstartup

Verify that the node started up correctly


# mmgetstate

During the startup phase the node is shown in the state arbitrating; this changes to active when
GPFS has completed its startup.
2. Mount the filesystem if not already mounted.
You may mount the filesystem after starting GPFS
# mmmount all

You can check the result using


# mmlsmount all -L


13.2.3.7 (on any node) Verify GPFS disks and restripe the filesystem
1. Verify GPFS disks
Verify all GPFS disks are active again
# mmlsdisk sapmntdata -e

If any disks are down, restart them with the command


# mmchdisk sapmntdata start -a

If disks are suspended you can resume them all with the command
# mmchdisk sapmntdata resume -a

Afterwards check the disk status again.


2. GPFS Restripe
Start a restripe so that all data is properly replicated again
# mmrestripefs sapmntdata -r

Warning
Currently the FPO feature used in the appliance is not compatible with filesystem rebalancing.
Do not use the -b parameter!

13.2.3.8 (on any node) Enable GPFS autorecovery If GPFS autorecovery was deactivated in
step 2, enable it again.

13.2.3.9 (on any node) Restore accurate usage count If a filesystem was ill-replicated, the used
block counts reported by the quota system may not be accurate. Therefore it is recommended that you run
mmcheckquota to restore the accurate usage count after the filesystem is no longer ill-replicated.
# mmcheckquota -a

13.3 Update Firmware and Drivers

13.3.1 Lenovo UpdateXpress System Pack Installer

We recommend using UpdateXpress packages to update your installation. An UpdateXpress System
Pack (UXSP) is an integration-tested bundle of online firmware and driver updates for System x servers.
UpdateXpress System Packs are generally released semiannually for the first three years and annually for
the final three years of support.
UpdateXpress System Packs simplify the process of downloading and installing all of the online driver
and firmware updates for a given system, ensuring that you are always working with a complete and
current set of updates that have been tested together and bundled by Lenovo. UpdateXpress System
Packs are created for a machine type and operating system combination. Separate UpdateXpress System
Packs are provided for each of the Linux distributions.
The Lenovo UpdateXpress System Pack Installer acquires and deploys UpdateXpress System Pack update
packages. UpdateXpress System Pack update packages can also be downloaded from Lenovo Support54 .
54 http://support.lenovo.com/us/en/


Go to https://support.lenovo.com/de/en/documents/lnvo-xpress and download the latest binary


of the UpdateXpress System pack Installer (lnvgy_utl_uxspi_-<version>_<os>.bin). Copy it to the
machine.
Note
X6 based servers come preinstalled with this utility starting DVD version 1.11.112-15. In this
case you can find an installed version in /opt/lenovo/uxspi.
This chapter describes only the firmware and driver update itself. Please check 13.2: General
Update Procedure on page 134 for complete update scenarios.
The easiest way to use the tool is to start the binary in a console window. In that case you will
be guided through the update process using a GUI. You can also use the command-line version, adding
the directory with the downloaded updates using the option -l (for additional options of the Lenovo
UpdateXpress System Pack Installer, check the documentation provided at
https://support.lenovo.com/de/en/documents/lnvo-xpress):
# ./lnvgy_utl_uxspi_10.1_rhel6_32-64.bin update -l /tmp/uxsp/
Extracting...
Executing...

This Program is licensed under the terms of the agreement available by invoking
this utility with the --license option. By extracting, copying, accessing, or
using the Program, you agree to the terms of this agreement. If you do not
agree to the terms, please uninstall the software and return it to Lenovo or
the reseller from whom you acquired the software for a refund, if any.
Initializing, Please wait...

UpdateXpress System Pack Installer Version 10.1.15A


(C) Copyright Lenovo 2015. Portions (C) Copyright IBM Corporation.
Active Machine Type: 6241, OS: RHEL 6 (64 bit)
Gathering Inventory ........................... done
Querying Updates done
Comparing Updates ...
(1) [ ] Broadcom NetXtreme tg3 Driver for RHEL6
Category : Network
Severity : Not Required
Reboot : Reboot Required to take effect
Update ID : brcm-lnvgy_dd_nic_tg3-3.137n.a_rhel6_32-64
Requisites : None

Update : tg3.ko (stopped)


New Version : 3.137n
Installed Version : 3.137
[...]
(17) [*] Integrated Management Module 2 (IMM2) Update
Category : IMM2
Severity : Critical
Reboot : Not Required
Update ID : lnvgy_fw_imm2_tcoo16p-2.50_anyos_noarch
Requisites : None
New Version : 2.50 (tcoo16p)
Installed Version : 1.95 (tcoo11r)


Legend:
Type the item number to toggle selected [*] or not selected [ ]
Type 'a' to accept the menu
Type 'f' to select all entries
Type 'q' to quit without processing the entries
[1-17,a,q,f]> 7
[...]
[1-17,a,q,f]> 11

(1) [ ] Broadcom NetXtreme tg3 Driver for RHEL6


(2) [ ] Emulex FC/FCoE (lpfc) Device Driver for RHEL6 - 10.6.228.31 - Release 15C
(3) [ ] Emulex NIC (be2net) Device Driver for RHEL6 - 10.6.228.32 - Release 15C
(4) [ ] Emulex iSCSI (be2iscsi) Device Driver for RHEL6 - 10.6.228.31 - Release 15C
(5) [ ] Qlogic BNX2 RHEL6 Driver
(6) [ ] ServeRAID M Series SAS/SATA Controller Driver for RHEL 6
(7) [ ] Lenovo uEFI Flash Update
(8) [ ] BIOS and Firmware Update for ServeRAID M5200 Series SAS/SATA Controllers
(9) [ ] Lenovo Dynamic System Analysis (DSA)
(10) [ ] Mellanox Firmware Update
(11) [*] Mellanox OFED update for Red Hat Enterprise Linux 6 x86_64
(12) [ ] ServeRAID 6gb SAS/SATA Controller Firmware Update
(13) [ ] Online Broadcom NetXtreme I Linux Firmware Utility
(14) [ ] Online Qlogic NetXtreme II Firmware Utility
(15) [ ] QLogic 10 GbE Converged Network Adapter MultiFlash Update for System x
(16) [ ] Qlogic Update for 8G FC - Multiboot Update for System x
(17) [ ] Integrated Management Module 2 (IMM2) Update

Press enter to show the full list again.


Legend:
Type the item number to toggle selected [*] or not selected [ ]
Type 'a' to accept the menu
Type 'f' to select all entries
Type 'q' to quit without processing the entries
[1-17,a,q,f]> a

Copying update files to temporary directory...

(1 of 1) Running Mellanox OFED update for Red Hat Enterprise Linux 6 x86_64←-
,→...............................................................................................
,→ done
Finished applying selected updates

(1) Mellanox OFED update for Red Hat Enterprise Linux 6 x86_64
Name: Mellanox OFED update for Red Hat Enterprise Linux 6 x86_64
New Version: 3.1-1.0.3.1 (BUILDNM)
Reboot: Reboot Required to take effect
Requisites: None
Status: Successfully Installed


[root@x633 tmp]#

Please check for errors and for necessary reboots and perform them.
Check the system afterwards using the support script.
Attention
The Mellanox driver will be installed without the option '--enable-affinity'. Therefore the
driver has to be reinstalled manually using '--enable-affinity' if it was installed during this
process. Please see chapter 13.3.2: Update Mellanox Network Cards on page 141 for details.

13.3.2 Update Mellanox Network Cards

13.3.2.1 Updating the Driver You should have received a binary update package, e.g. mlnx-lnvgy_
fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin. Please note that the version number given here might differ.
This package needs to be copied to all nodes you wish to update. It might be necessary to make the file
executable:
chmod +x mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin

Then you can start the installation with:


./mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin --enable-affinity

If this step fails, you may have to install the python-devel package from the official SLES or RHEL
repositories.
This will upgrade your driver and firmware of the Mellanox network cards. Please review the output of
the above program for possible errors. After a successful upgrade, a reboot will be necessary.
If you have performed a kernel upgrade before, this might not work. In this case, additionally try the option
--add-kernel-support:
./mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin --add-kernel-support --enable-←-
,→affinity

You might have to perform the steps of SAP-note 2281578 – UDEV rules update necessary after Mellanox
driver update directly afterwards. Only one final reboot after both steps is necessary.
You can check the Mellanox version using:
# ethtool -i ethx

This chapter describes only the update of the Mellanox Network Card itself. Please check 13.2: General
Update Procedure on page 134 for complete update scenarios.

13.3.2.2 Checking Mellanox Port Configuration Sometimes some ports might be in Infiniband-
mode. You can check this using:
# connectx_port_config -s
--------------------------------
Port configuration for PCI device: 0000:95:00.0 is:
eth
eth
--------------------------------
--------------------------------


Port configuration for PCI device: 0000:d5:00.0 is:


eth
eth
--------------------------------

Using connectx_port_config it is also possible to change the port configuration to 'Ethernet'.

13.3.2.3 Checking Mellanox IRQ Affinity For optimal network performance the IRQs of the
Mellanox network adapters need to be aligned properly. To see the current alignment, use the following
command:
# show_irq_affinity.sh <interface>

where interface is one of [eth0, eth1, eth2, eth3] in our setup. Below is some example output;
please note that the output for eth0 and eth1 is the same, since those ports reside on
the same physical card. The output for eth2 and eth3 is the same too, for the same reason. Please note
that the exact output depends on the number of CPUs and the physical placement of the NICs, so this
example should just give an overview.
# show_irq_affinity.sh eth0 / eth1
544: 0000,00000000,00000000,00400000,00000000
545: 0000,00000000,00000000,00800000,00000000
546: 0000,00000000,00000000,01000000,00000000
547: 0000,00000000,00000000,02000000,00000000
548: 0000,00000000,00000000,04000000,00000000
549: 0000,00000000,00000000,08000000,00000000
550: 0000,00000000,00000000,10000000,00000000
598: 0000,00000000,00000000,00400000,00000000
599: 0000,00000000,00000000,00800000,00000000
600: 0000,00000000,00000000,01000000,00000000
601: 0000,00000000,00000000,02000000,00000000
602: 0000,00000000,00000000,04000000,00000000
603: 0000,00000000,00000000,08000000,00000000
604: 0000,00000000,00000000,10000000,00000000
605: 0000,00000000,00000000,20000000,00000000
606: 0000,00000000,00000000,40000000,00000000

# show_irq_affinity.sh eth2 / eth3


452: 0000,00000000,00000000,00000010,00000000
453: 0000,00000000,00000000,00000020,00000000
454: 0000,00000000,00000000,00000040,00000000
455: 0000,00000000,00000000,00000080,00000000
456: 0000,00000000,00000000,00000100,00000000
457: 0000,00000000,00000000,00000200,00000000
458: 0000,00000000,00000000,00000400,00000000
459: 0000,00000000,00000000,00000800,00000000
460: 0000,00000000,00000000,00001000,00000000
461: 0000,00000000,00000000,00002000,00000000
462: 0000,00000000,00000000,00004000,00000000
463: 0000,00000000,00000000,00008000,00000000
464: 0000,00000000,00000000,00010000,00000000
465: 0000,00000000,00000000,00020000,00000000
466: 0000,00000000,00000000,00040000,00000000
467: 0000,00000000,00000000,00080000,00000000


Make sure the Mellanox IRQ affinity tuner is set to run. Edit the file /etc/infiniband/openib.conf
and check for:
RUN_AFFINITY_TUNER=yes

In case you changed the above setting, make sure you either reboot the server to activate the settings, or
enable them manually:
# service irqbalance stop
# mlnx_affinity start

13.3.3 Updating ServeRAID Driver

13.3.3.1 Linux Stock Driver & Lenovo Provided Driver As part of the driver package, Lenovo
provides precompiled drivers for certain kernel versions, mostly for the base kernel version of each SLES
11/SLES 12 Service Pack and Red Hat version. If a different kernel version is installed on the server, e.g.
when the kernel was updated, and no precompiled driver is available, the driver needs to be compiled before
installation. If more than one server must be updated, it is safe to compile the driver on one server and
then distribute the compiled driver to other servers with the same kernel version.
Installation instructions for both driver variants, shipped and self-compiled, are provided below.
Please note that updating the Linux Kernel may revert the ServeRAID driver to the kernel’s default
driver, if the new kernel is incompatible. If this happens and this behavior is not desired, reinstallation
of the Lenovo driver is required, following the same steps as installing the driver for the first time.
The following instructions use the driver version 6.808.14.00. Please insert the correct version number at
the appropriate places.
This chapter describes only the update of the ServeRAID driver itself. Please check 13.2: General Update
Procedure on page 134 for complete update scenarios.
You can find the driver installed with the appliance installation on the Lenovo non-OS components DVD
delivered with the appliance, under the directory software.

13.3.3.2 Determining available shipped drivers To see the Linux versions supported by any
precompiled driver in a Lenovo driver package, execute e.g. for sles11:
# tar -tvf lnvgy_dd_sraidmr_6.808.14.00_sles11_32-64.tgz sles11/x86_64/update/SUSE-←-
,→SLES/11/rpm

or when you extracted the archive, check the subdirectory sles11/x86_64/update/SUSE-SLES/11/rpm.


The RPM files should be named like lsi-megaraid_sas-kmp-default-06.808.14.00_3.0.76_0.11-30.
x86_64.rpm where kmp-default is the name of the variant required for the HANA servers, 6.808.14.00
is the driver version and 3.0.76_0.11-30 is the Linux kernel version.
You can get the current Linux kernel version using
# uname -r

13.3.3.3 Installing a shipped driver


1. Upload driver package to the server
The downloaded driver package will be named like lnvgy_dd_sraidmr_6.808.14.00_sles11_
32-64.tgz. Upload this zip file to the server, e.g. to root’s home directory.


2. Unzip driver package


Create a new directory and extract the driver package
# mkdir serveraid
# cd serveraid
# tar -xvf ../lnvgy_dd_sraidmr_6.808.14.00_sles11_32-64.tgz

3. Install the driver Install the new driver with the command
# ./install.sh --update --add-initrd

If no precompiled driver is available, the install script will abort showing the message NOTE: No
installable rpm found. In this case you must compile the driver yourself (see the next section).
4. Reboot Reboot the server to activate the new driver and check the driver version afterwards:
# modinfo megaraid_sas

You should see the new driver version.

13.3.3.4 Installing a self-compiled driver


1. Upload driver package to the server
The downloaded driver package will be named like lnvgy_dd_sraidmr_6.808.14.00_sles11_
32-64.tgz. Upload this zip file to the server, e.g. to root’s home directory.
2. Unzip driver package
Create a new directory and extract the driver package
# mkdir serveraid
# cd serveraid
# tar -xvf ../lnvgy_dd_sraidmr_6.808.14.00_sles11_32-64.tgz

3. Compile the driver


# rpmbuild --rebuild sles12/noarch/update/SUSE-SLES/12/sources/lsi-megaraid_sas←-
,→-06.808.14.00-30.src.rpm

The output will show the generated driver RPM files at the end, e.g.
...
Wrote: /usr/src/packages/RPMS/x86_64/lsi-megaraid_sas-kmp-default-06.808.14.00←-
,→_3.0.76_0.11-30.x86_64.rpm
Wrote: /usr/src/packages/RPMS/x86_64/lsi-megaraid_sas-kmp-trace-06.808.14.00_3←-
,→.0.76_0.11-30.x86_64.rpm
Wrote: /usr/src/packages/RPMS/x86_64/lsi-megaraid_sas-kmp-xen-06.808.14.00_3←-
,→.0.76_0.11-30.x86_64.rpm
Executing(--clean): /bin/sh -e /var/tmp/rpm-tmp.10801
+ umask 022
+ cd /usr/src/packages/BUILD
+ rm -rf lsi-megaraid_sas-06.808.14.00
+ exit 0

4. Install the driver Install the new driver with the command, e.g.


# rpm -Uvh /usr/src/packages/RPMS/x86_64/lsi-megaraid_sas-kmp-default←-


,→-06.808.14.00_3.0.76_0.11-30.x86_64.rpm

5. Reboot
Reboot the server to activate the new driver and check the driver version:
# modinfo megaraid_sas

You should see the new driver version.

13.4 Linux Kernel Update

Warning
If the Linux kernel is updated, it is mandatory to recompile the GPFS portability layer kernel
module. Otherwise the system will not work anymore!

13.4.1 SLES Kernel Update Methods

There are multiple methods to update a SLES for SAP installation. Possible update sources include
kernel RPMs copied onto the target server, a corporate-internal SLES update server/repository, or
Novell's update server via the Internet (requires registration of the installation). Possible methods
include command-line tools like zypper install or CLI/X11-based GUI tools like SUSE's YaST2.
Please refer to SUSE’s official SLES documentation. A good starting point is the chapter "Installing or
Removing Software" in the SLES 12 deployment guide55 , or the SLES 11 deployment guide56 respectively.
If you decide to update from RPM files, you need to update at least the following files depending on your
version of SLES.
SLES12:
• kernel-default-<kernelversion>.x86_64.rpm
• kernel-default-devel-<kernelversion>.x86_64.rpm
• kernel-devel-<kernelversion>.noarch.rpm
• kernel-macros-<kernelversion>.noarch.rpm
• kernel-source-<kernelversion>.noarch.rpm
If you decide to update more RPMs from the kernel patch, you may need dependencies like mozilla-nss-
tools-3.16.4-5.2, pesign-0.109-4.34 and pesign-obs-integration-10.0-26.2. These RPMs are available on the
original SLES 12 DVD delivered with the system.
Do not attempt to install the kernel-default-base package on a HANA server.
SLES11 SP4:
• kernel-default-<kernelversion>.x86_64.rpm
• kernel-default-base-<kernelversion>.x86_64.rpm
• kernel-default-devel-<kernelversion>.x86_64.rpm
• kernel-source-<kernelversion>.x86_64.rpm
55 https://www.suse.com/documentation/sles-12/
56 https://www.suse.com/documentation/sles11/


• kernel-syms-<kernelversion>.x86_64.rpm
• kernel-trace-devel-<kernelversion>.x86_64.rpm
• kernel-xen-devel-<kernelversion>.x86_64.rpm
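A minimal sketch of an update from local RPM files on SLES 12 using zypper (the file names are
placeholders; adjust the list to the package versions you actually downloaded):
# zypper install ./kernel-default-<kernelversion>.x86_64.rpm \
    ./kernel-default-devel-<kernelversion>.x86_64.rpm \
    ./kernel-devel-<kernelversion>.noarch.rpm \
    ./kernel-macros-<kernelversion>.noarch.rpm \
    ./kernel-source-<kernelversion>.noarch.rpm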

13.4.2 RHEL versionlock

RHEL has a mechanism to lock the versions of specified packages. Without this mechanism you would
update without further notice from RHEL 6.5 to RHEL 6.6 by executing a yum update.
SAP HANA is only released for dedicated RHEL versions. Therefore it is advisable to restrict the update
for the kernel version and the redhat-release package. You can find examples for RHEL 6.5, 6.6, 6.7,
and 7.2 below.
If not already done, this mechanism can be activated by installing two packages and creating the file
/etc/yum/pluginconf.d/versionlock.list in the following way:
yum -y install yum-versionlock yum-security

For RHEL 6.5 the file /etc/yum/pluginconf.d/versionlock.list should look like this
# Keep packages for RHEL 6.5 (begin)
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-431.*
kernel-firmware-2.6.32-431.*
kernel-headers-2.6.32-431.*
kernel-devel-2.6.32-431.*
redhat-release-*
# Keep packages for RHEL 6.5 (end)

for RHEL 6.6 like this


# Keep packages for RHEL 6.6 (begin)
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-504*
kernel-firmware-2.6.32-504.*
kernel-headers-2.6.32-504.*
kernel-devel-2.6.32-504.*
redhat-release-*
# Keep packages for RHEL 6.6 (end)

and for RHEL 6.7 like this


# Keep packages for RHEL 6.7 (begin)
kernel-2.6.32-573.*
kernel-firmware-2.6.32-573.*
kernel-headers-2.6.32-573.*
kernel-devel-2.6.32-573.*
redhat-release-server-6Server-6.7.0.3.el6.x86_64
# Keep packages for RHEL 6.7 (end)

For RHEL 7.2 it looks like this


# Keep packages for RHEL 7.2 (begin)
kernel-3.10.0-327.*
kernel-devel-3.10.0-327.*
kernel-headers-3.10.0-327.*


kernel-tools-3.10.0-327.*
kernel-tools-libs-3.10.0-327.*
redhat-release-server-7.2-9.*
# Keep packages for RHEL 7.2 (end)

To allow later updates (like kernel updates), you have to comment out all lines containing restrictions for that
update case in the file versionlock.list. After the update it is necessary to create similar restrictions
for the updated packages using the new package versions.
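A minimal sketch of temporarily commenting out the kernel entries before a kernel update (back up the
file first; the sed pattern assumes the entries start with "kernel" as in the examples above):
# cp /etc/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/versionlock.list.bak
# sed -i 's/^kernel/#kernel/' /etc/yum/pluginconf.d/versionlock.list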
Please refer also to the following SAP notes 2013638 – SAP HANA DB: Recommended OS settings for
RHEL 6.5 and 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6

13.4.3 RHEL Kernel Update Methods

There are multiple methods to update a RHEL installation. Possible update sources include kernel
RPMs copied onto the target server, a corporate-internal RHEL update server/repository, or Red Hat's
update server via the Internet (requires registration of the installation).
Please refer to Red Hat’s official RHEL documentation. A good starting point is the Red Hat Deployment
Guide57 (chapter 27 "Manually Upgrading The Kernel").
If you decide to update from RPM files, you need to update at least the following files:
• kernel-<kernelversion>.el6.x86_64.rpm
• kernel-devel-<kernelversion>.el6.x86_64.rpm
• kernel-firmware-<kernelversion>.el6.noarch.rpm
• kernel-headers-<kernelversion>.el6.x86_64.rpm
There are two sources for Kernel upgrades on Red Hat Linux: http://www.redhat.com/security/
updates/, and http://www.redhat.com/docs/manuals/RHNetwork/
Download the kernel RPMs necessary for your system. Red Hat recommends keeping the old kernel
packages as a fallback in case there are problems with the new kernel.
Updating using repositories is recommended over updating from files.
Please refer to chapter 13.4.2: RHEL versionlock on page 146 for how to check whether a versionlock
mechanism is implemented and how to allow kernel updates.
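A minimal sketch of an update from local RPM files (the file names are placeholders); installing the kernel
package with rpm -ivh instead of upgrading it keeps the old kernel available as a fallback:
# rpm -ivh kernel-<kernelversion>.el6.x86_64.rpm
# rpm -Uvh kernel-firmware-<kernelversion>.el6.noarch.rpm \
    kernel-devel-<kernelversion>.el6.x86_64.rpm \
    kernel-headers-<kernelversion>.el6.x86_64.rpm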
57 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.html


13.4.4 Kernel Update Procedure for XFS with bcache

Step Title ✓
1 Stop SAP HANA
2 Update the ServeRAID driver
3 Update the Mellanox driver
4 Update Kernel Packages
5 Reinstalling the ServeRAID driver if necessary
6 Reinstalling the Mellanox driver if necessary
7 Start SAP HANA

Table 66: Upgrade Kernel for XFS with bcache

13.4.5 Disruptive Cluster and Single Node Kernel Update Procedure for GPFS

Step Title ✓
1 Stop SAP HANA
2 Unmount GPFS filesystems & stop GPFS
3 Update the ServeRAID driver
4 Update the Mellanox driver
5 Update Kernel Packages
6 Reinstalling the ServeRAID driver if necessary
7 Reinstalling the Mellanox driver if necessary
8 Build new GPFS portability layer
9 Restart GPFS & mount filesystem
10 Start SAP HANA

Table 67: Upgrade GPFS Portability Layer Checklist for GPFS

1. Stop SAP HANA: chapter 13.2.3.2: (on the target node) Stop SAP HANA on page 136
2. Unmount GPFS filesystems & stop GPFS: chapter 13.2.3.5: (on the target node) Unmount the
GPFS filesystem and stop GPFS on page 137
3. Update the ServeRAID driver, see chapter 13.3.3: Updating ServeRAID Driver on page 143
4. Update the Mellanox driver, see chapter 13.3.2: Update Mellanox Network Cards on page 141
5. Update Kernel Packages: Update the kernel by your preferred method and restart the server to
boot into the new kernel.
6. Reinstalling the ServeRAID driver if necessary
It might be that the ServeRAID driver is reverted to the kernel’s default driver after the upgrade.
Please check chapter 13.3.3: Updating ServeRAID Driver on page 143 for more information on how
to reinstall the drivers if necessary.
7. Reinstalling the Mellanox driver if necessary
It might be that the Mellanox driver is reverted to the kernel’s default driver after the upgrade.
Please check chapter 13.3.2: Update Mellanox Network Cards on page 141 for more information on
how to reinstall the drivers if necessary.


8. Build new portability layer:


• If GPFS 4.1.0-4 or higher is installed, run:
mmbuildgpl

• If a version before 4.1.0-4 is installed, run:


# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

9. Restart GPFS & mount filesystem: chapter 13.2.3.6: (on the target node) Restart GPFS and mount
the filesystem on page 137
10. Start SAP HANA: chapter 13.2.3.3: (on the target node) Start SAP HANA on page 136

13.5 Updating & Upgrading GPFS

Note
Updating & upgrading GPFS requires a rebuild of the portability layer. The same applies if
the Linux kernel was upgraded.

Note
IBM GPFS has been rebranded to IBM Spectrum Scale since version 4.1.0. This chapter uses
"GPFS" to refer to IBM GPFS and IBM Spectrum Scale.

Warning
The update packages for versions 4.1.1-0 and later do not come as compressed TAR files anymore.
They come as self-extracting files. Execute
# bash <GPFS software update file>
to extract it.
The contained files are extracted to /usr/lpp/mmfs/<version>, where <version> is for ex-
ample 4.1.1.3.
For the cluster operations, we use the following terms:
any node This step or command must be done once on any arbitrary node within the cluster.
every node This step or command must be run on every node. The order of the nodes is not important.
target node The node which is currently being updated, either while the remaining nodes are running, or while
the whole cluster is shut down.
For a standalone server, these designations do not matter; all steps have to be done on the sole node.


13.5.1 Supported Versions

GPFS Version Comment

3.4 Not supported, upgrade to 3.5 or 4.1.1.8 mandatory


3.5 PTF 26 or higher required
4.1.0 Not supported, upgrade to 4.1.1 mandatory
4.1.1 PTF 8 or higher required
4.2 PTF 2 or higher required

Table 68: Supported GPFS versions

13.5.2 Disruptive Cluster and Single Node Upgrade

Note
In the disruptive cluster update scenario, one would shutdown the whole cluster and apply
all updates. This will cause a downtime.

Note
A single node update is always disruptive.

Step Title ✓
1 Check GPFS cluster health
2 Stop SAP HANA
3 Unmount GPFS filesystems & stop GPFS
4 Upgrade to new GPFS Version
5 Update to new GPFS Version
6 Build new GPFS portability layer
7 Restart GPFS & mount GPFS filesystem
8 Apply upgrade changes
9 Apply update changes
10 Start SAP HANA

Table 69: Upgrade GPFS Portability Layer Checklist

1. Check GPFS cluster health: chapter 13.2.3.4: (on the target node) Check GPFS cluster health on
page 136.
2. (on all nodes) Stop SAP HANA The following command can shut down HANA on all nodes at the
same time.
# mmdsh service sapinit stop

If mmdsh is not configured or for other methods or for more details see chapter 13.2.3.2: (on the
target node) Stop SAP HANA on page 136.
3. Unmount the GPFS filesystem and stop GPFS on all nodes
You can use the -a option to perform the commands on the whole cluster:
# mmumount all -a
# mmshutdown -a


If on any node any GPFS file system is still in use, the mmumount command will fail to unmount
these affected filesystems on these nodes. One common stopper is the hdbrsutil feature of HANA.
Read more in SAP Notes 2191221 – hdbrsutil can block unmount of the filesystem and 2159435 –
How-to: Keeping SAP HANA row store in memory when restarting and chapter 13.2.3.5: (on the
target node) Unmount the GPFS filesystem and stop GPFS on page 137.
4. (on all nodes) When doing a version upgrade (e.g. 4.1 -> 4.2), install new GPFS version
(a) (on all nodes) Remove the old GPFS software
To find all packages belonging to the current version execute:
# rpm -qa | grep gpfs

Remove all GPFS packages returned from above command, e.g.:


# rpm -e gpfs.base gpfs.docs gpfs.gpl gpfs.msg.en_US

(b) (on all nodes) Install the new GPFS software When upgrading to GPFS 4.1.1 run
# rpm -ivh /usr/lpp/mmfs/4.1/gpfs*.rpm

When upgrading to GPFS 4.2.1 run


# rpm -ivh /usr/lpp/mmfs/4.2.1.0/gpfs*.rpm

In any case, please also install the update as described in the following step, as the appliance
never uses zero-releases ("PTF 0").
5. (on all nodes) Install GPFS update For GPFS 4.1 and higher:
The update RPMs are extracted into the path /usr/lpp/mmfs/<version>/ where version is the
version of the GPFS update, so if you want to update to GPFS version 4.2.0.4 you would have the
update path /usr/lpp/mmfs/4.2.0.4/.
So, to install this update run the command
# rpm -Fvh /usr/lpp/mmfs/4.2.0.4/*.rpm

and use the path to the target version.


For GPFS 3.5 update navigate to the folder where you extracted the update RPMs and use the
command
# rpm -Fvh *.rpm

6. (on all nodes) Build portability layer


• If you updated to a GPFS version 4.1 or higher, run
# mmbuildgpl

• If you are running GPFS 3.5.0 execute these commands:


# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

7. (on any node) Restart GPFS and mount the filesystems on all nodes
Repeat the mmgetstate step until all nodes are active; this may take a while, and a node may
transition from the state down via the state arbitrating to the state active


# mmstartup -a
# mmgetstate -a
# mmmount all -a
# mmlsmount all -L

8. Apply Upgrade Changes


If you did a release upgrade, follow the remaining instructions for the respective version.
9. Apply Update Changes
If you did only an update within a release, run the following general steps:
# mmchconfig release=LATEST
# mmchfs sapmntdata -V full

This updates the filesystem structure and the configuration data structure to new versions if available.
You can check the new minimum GPFS version with the following commands; this is just informational:
# mmlsconfig | grep minReleaseLevel
# mmlsfs sapmntdata -V

10. Start SAP HANA on all nodes


Depending on configuration and software versions, HANA may have already been started by the
successful GPFS start; if HANA needs to be started manually, see chapter 13.2.3.3: (on the target
node) Start SAP HANA on page 136.

13.5.3 Rolling Cluster Update

The idea of a rolling update is to update only one server at a time and after the server is back online
in the cluster, proceed with the next node in the same way. By doing so, you can avoid a full cluster
downtime. The downside is that this operation takes longer and needs more work.
For updating the SAP HANA software in a SAP HANA cluster, please refer to the SAP HANA Technical
Operations Manual. This can be done independent of other updates.

Step Title ✓
1 Check GPFS cluster health
2 Restripe Filesystems
3 Process each node individually
3.1 Unmount GPFS filesystems & stop GPFS
3.2 Upgrade to new GPFS version
3.3 Update to new GPFS version
3.4 Build new GPFS portability layer
3.5 Start GPFS & mount filesystems
3.6 Check GPFS disks are up
3.7 Restripe filesystems
3.8 Start HANA
4 Apply upgrade changes
5 Apply update changes

Table 70: Upgrade GPFS Portability Layer Checklist

Note
These instructions assume that your GPFS filesystems span over all nodes.


1. Check GPFS cluster health: chapter 13.2.3.4: (on the target node) Check GPFS cluster health on
page 136.
2. (on any node) Restripe Filesystems
# mmrestripefs sapmntdata -r

Only one filesystem can be restriped per command call. If you have additional filesystems, also
restripe these. Obtain a list of filesystems with the command mmlsconfig.
3. Repeat these steps for every node within the cluster
(a) (on the target node) Stop SAP HANA The following command shuts down HANA locally.
# service sapinit stop

For more details see chapter 13.2.3.2: (on the target node) Stop SAP HANA on page 136.
(b) (on target node) Unmount the GPFS filesystem and stop GPFS
# mmumount all
# mmshutdown

If any GPFS file system is still in use, the mmumount command will fail to unmount these
affected filesystems. One common stopper is the hdbrsutil feature of HANA. Read more in
SAP Notes 2191221 – hdbrsutil can block unmount of the filesystem and 2159435 – How-to:
Keeping SAP HANA row store in memory when restarting and chapter 13.2.3.5: (on the target
node) Unmount the GPFS filesystem and stop GPFS on page 137.
(c) (on the target node) When doing a version upgrade (e.g. 4.1 -> 4.2), install new GPFS version
This step is only necessary when upgrading from releases 3.5 to 4.1.1 or 4.2.x where a new
GPFS base package must be installed.
i. Remove the old GPFS software
To find all packages belonging to the current GPFS version execute:
# rpm -qa | grep gpfs

Remove all GPFS packages returned from above command, e.g.:


# rpm -e gpfs.base gpfs.docs gpfs.gpl gpfs.msg.en_US

ii. Install the new GPFS software


When upgrading to GPFS 4.1.1 run
# rpm -ivh /usr/lpp/mmfs/4.1/gpfs*.rpm

When upgrading to GPFS 4.2.1 run


# rpm -ivh /usr/lpp/mmfs/4.2.1.0/gpfs*.rpm

In any case, please also install the update in the following step, as the appliance never uses
zero-releases ("PTF 0").
(d) Install GPFS update
For GPFS 4.1 and higher:
The update RPMs are extracted into the path /usr/lpp/mmfs/<version>/ where version is
the version of the GPFS update, so if you want to update to GPFS version 4.2.0.4 you would
have the update path /usr/lpp/mmfs/4.2.0.4/.


So, to install this update run the command


# rpm -Fvh /usr/lpp/mmfs/4.2.0.4/*.rpm

and use the path to the target version.


For GPFS 3.5 update navigate to the folder where you extracted the update RPMs and use
the command
# rpm -Fvh *.rpm

(e) Build portability layer


• If you are updating to a GPFS version 4.1 or higher run
# mmbuildgpl

• If you are running GPFS 3.5.0 execute these commands:


# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

(f) Restart GPFS and mount the filesystems


# mmstartup
# mmgetstate
# mmmount all
# mmlsmount all

(g) Check that all disks are up


# mmlsdisk sapmntdata

Should any disk not be ready & up, start and resume all disks:
# mmchdisk sapmntdata start -a
# mmchdisk sapmntdata resume -a

Again, repeat this step for all filesystems.


(h) (on any node) Restripe Filesystems
# mmrestripefs sapmntdata -r

Only one filesystem can be restriped per command call. If you have additional filesystems,
also restripe these. Obtain a list of filesystems with the command mmlsconfig. The shorter
the downtime of the updated node, the shorter the restripe time.
(i) Start SAP HANA on all nodes
Depending on configuration and software versions, HANA may have already been started by
the successful GPFS start, if HANA needs to be started manually, see chapter 13.2.3.3: (on
the target node) Start SAP HANA on page 136. One common command to start HANA is
# service sapinit start

(j) Continue with the next node


4. Apply upgrade changes
If you did a release upgrade, follow the remaining instructions for the respective version.


5. Apply update changes


If you did only an update within a release, run the following general steps:
# mmchconfig release=LATEST
# mmchfs sapmntdata -V full

This updates the filesystem structure and the configuration data structure to new versions if available.
You can check the new minimum GPFS version with the following commands; this is just informational:
# mmlsconfig | grep minReleaseLevel
# mmlsfs sapmntdata -V

13.5.4 GPFS 3.5 Updates

In GPFS version 3.5 updates are distributed as compressed tar archives with a naming convention like
GPFS-3.5.0.<PTFversion>-x86_64-Linux.tar.gz. Before starting with the update upload this archive
to the servers you intend to update and extract the software packages (*.rpm). Given that you uploaded
the archive to /tmp/update and the intended update is GPFS 3.5.0-21, the update preparation steps are:
# cd /tmp/update
# tar -xvf GPFS-3.5.0.21-x86_64-Linux.tar.gz

Now you can start with the actual update procedure. For single nodes and the disruptive cluster update please
refer to 13.5.2: Disruptive Cluster and Single Node Upgrade on page 150; for the rolling cluster update
see 13.5.3: Rolling Cluster Update on page 152.

13.5.5 GPFS 4.1 & 4.2 Updates

Warning
Starting with GPFS 4.2.1.1 on RHEL 7.2, GPFS fails to install the boot-up scripts necessary
for the automatic start of GPFS at boot time. After every GPFS update installation, verify
with the command
# service gpfs status
that the Loaded state is not not-found. In case it is not-found, enable the service:
# systemctl enable /usr/lpp/mmfs/lib/systemd/gpfs.service
To minimize the downtime the update package should be distributed and extracted on all servers before
the update. The update packages for GPFS version 4.1.1.1 and later are distributed as self-extracting files
named like lnvgy-Spectrum_Scale_Standard-<GPFSversion>-x86_64-Linux-install, e.g. lnvgy-Spectrum_
Scale_Standard-4.2.0.4-x86_64-Linux-install. Upload this file to any directory on the server, e.g.
/tmp, mark it as executable, run it and accept the license.
Example:
# cd /tmp
# chmod +x lnvgy-Spectrum_Scale_Standard-4.2.1.1-x86_64-Linux-install
# ./lnvgy-Spectrum_Scale_Standard-4.2.1.1-x86_64-Linux-install

This procedure will extract update packages to /usr/lpp/mmfs/<version>, where <version> is in this
example 4.2.1.1. During the update these files will be used.
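If the installer is not yet present on the other cluster nodes, a minimal sketch for distributing it with scp is shown here; the node names gpfsnode02 to gpfsnode04 and the file name are placeholders and must be adjusted to your environment. Run the installer on every node afterwards as described above.
# copy the self-extracting installer to /tmp on the remaining cluster nodes
for node in gpfsnode02 gpfsnode03 gpfsnode04; do
    scp /tmp/lnvgy-Spectrum_Scale_Standard-4.2.1.1-x86_64-Linux-install ${node}:/tmp/
done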
Now you can start with the actual update procedure, for single nodes and the disruptive cluster update
procedure please refer to 13.5.2: Disruptive Cluster and Single Node Upgrade on page 150, for the rolling
cluster update see 13.5.3: Rolling Cluster Update on page 152.


13.5.6 Upgrade from GPFS 4.1 to 4.2.1

The upgrade procedure differs depending on which update packages you are using. For update packages
obtained from the Lenovo support site (file names starting with lnvgy-*) it is required to obtain and
install the GPFS 4.2.1.0 base package before the PTF update package can be installed. For update
packages obtained from IBM Fix Central the base package is not required and the update package is
sufficient to perform the version upgrade from 4.1 to 4.2.1.
There are three valid combinations:
• You have the Lenovo base (4.2.1.0) and Lenovo update (4.2.1.x) packages
Please follow the general upgrade procedure described in 13.5.3: Rolling Cluster Update on page 152
and 13.5.2: Disruptive Cluster and Single Node Upgrade on page 150 and install base and update
package as instructed.
• You have the IBM base (4.2.1.0) and IBM update package (4.2.1.x) packages
Please follow the general upgrade procedure described in 13.5.3: Rolling Cluster Update on page 152
and 13.5.2: Disruptive Cluster and Single Node Upgrade on page 150 and install base and update
package as instructed.
• You have only the IBM update (4.2.1.0) package.
Please follow the general update procedure described in 13.5.3: Rolling Cluster Update on page 152
and 13.5.2: Disruptive Cluster and Single Node Upgrade on page 150 and skip the step "Install new
GPFS version".

13.5.6.1 Configuration Changes


Warning
Starting with GPFS 4.2.1.1 on RHEL 7.2, GPFS fails to install the boot-up scripts necessary
for the automatic start of GPFS at boot time. After every GPFS update installation, verify
with the command
# service gpfs status
that the Loaded state is not not-found. In case it is not-found, enable the service:
# systemctl enable /usr/lpp/mmfs/lib/systemd/gpfs.service
In a well-maintained environment no special changes are necessary as all of them should already have been
implemented. The health checks contained in the support script can report any missing configuration changes.
# mmchconfig release=LATEST
# mmchfs sapmntdata -V full

If you have defined additional GPFS filesystems, it is recommended to run the mmchfs command also
for these filesystems.
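A minimal sketch for applying the command to several filesystems in one go; the device names are placeholders (sapmntlog stands for a hypothetical additional filesystem), take the actual list from the output of mmlsconfig:
# upgrade the on-disk format of every listed GPFS filesystem
for fs in sapmntdata sapmntlog; do
    mmchfs ${fs} -V full
done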

13.5.7 Upgrade from GPFS 3.5 to 4.1

Note
Since IBM GPFS 4.1.0 IBM has been offering three different editions of GPFS: Express Edi-
tion, Standard Edition, Advanced Edition.
For the Lenovo Systems Solution for SAP HANA Platform Edition, the Standard Edition is
required. If you upgrade from GPFS 3.5 to 4.1, you are entitled to use the Standard Edition
(after a migration of the license). For more information please refer to the GPFS FAQ.


This chapter contains information about special steps for the upgrade from GPFS 3.5 to 4.1. Detailed
information can also be found in the IBM Knowledge Center58 .
In a clustered environment you have the option to do a rolling upgrade avoiding a full HANA downtime.
The following steps are required:
1. Obtain the GPFS 4.1.0 software package and the PTF update package for GPFS 4.1.1.8 or higher
Obtain the base installation package from IBM or the appliance DVD and the latest available PTFs
from IBM FixCentral or the Lenovo support web site.
2. Distribute both packages to all servers and prepare the upgrade. Upload both software packages to
all servers and extract them. Most likely the base package will be a tar archive, and the update file
will have a -install suffix. Given both files have been uploaded to /tmp, commands to extract the
necessary RPM files look like
Example:
# cd /tmp
# tar -xvf GPFS_4.1_STD_LSX_QSG.tar.gz
# ./gpfs_install-4.1.0-0_x86_64
# chmod +x lnvgy-Spectrum_Scale_Standard-4.1.1.8-x86_64-Linux-install
# ./lnvgy-Spectrum_Scale_Standard-4.1.1.8-x86_64-Linux-install

The names may differ. GPFS_4.1_STD_LSX_QSG.tar.gz is the base installation tar archive from
which gpfs_install-4.1.0-0_x86_64 is extracted, which must be run afterwards. lnvgy-Spectrum_
Scale_Standard-4.1.1.8-x86_64-Linux-install is the update file downloaded from the Lenovo
support web site, it must be marked as executable and run.
3. Follow the general upgrade instructions in the chapters 13.5.2: Disruptive Cluster and Single Node
Upgrade on page 150 or 13.5.3: Rolling Cluster Update on page 152.
4. The upgrade-specific actions can be found at 13.5.7.1: Configuration Changes on page 157

13.5.7.1 Configuration Changes In a GPFS cluster after all servers have been updated to GPFS
4.1.0 or later the following changes must be implemented:
• Secure installation
# mmauth genkey new
# mmauth update . -l AUTHONLY

• Switch to Cluster Configuration Repository (CCR)


If you have not already switched to the new cluster configuration repository, which removes the need to
designate primary and secondary configuration servers, switch to this new feature now:
# mmchcluster --ccr-enable

• Update configuration
# mmchconfig release=LATEST

• Apply various configuration changes


# mmchconfig ignorePrefetchLUNCount=yes
# mmchconfig pagePool=4G
# mmchconfig restripeOnDiskFailure=yes -N all
58 http://www.ibm.com/support/knowledgecenter/SSFKCN_4.1.0/com.ibm.cluster.gpfs.v4r1.gpfs300.doc/bl1ins_mig41from35.htm


• Update GPFS filesystems to the latest on disk format, enable Rapid Repair and increase the GPFS
log file (filesystem journal) size
# mmchfs sapmntdata -V full
# mmchfs sapmntdata --rapid-repair
# mmchfs sapmntdata -L 512M

If you have defined additional GPFS filesystems, it is recommended to run these commands also
for these filesystems. The change of the log file size requires a GPFS daemon restart. This can be
done again one node at a time so a complete cluster downtime can be prevented.
• Last change is to delete an obsolete callback:
# mmdelcallback start-disks-on-startup

13.5.8 Upgrade from GPFS 3.5 to 4.2

Note
Since IBM GPFS 4.1.0 IBM has been offering three different editions of GPFS: Express Edi-
tion, Standard Edition, Advanced Edition.
For the Lenovo Systems Solution for SAP HANA Platform Edition, the Standard Edition is
required. If you upgrade from GPFS 3.5 to 4.1, you are entitled to use the Standard Edition
(after a migration of the license). For more information please refer to the GPFS FAQ.
This chapter contains information about special steps for the upgrade from GPFS 3.5 to 4.2.
A direct upgrade from GPFS 3.5 to 4.2 or higher is only possible in a disruptive process requiring the
shutdown of GPFS on all cluster nodes at the same time. If a rolling upgrade is necessary, the upgrade
to GPFS 4.2 must be performed in a two-phase process: first a rolling upgrade of the whole cluster to
GPFS 4.1 must be completed, then a rolling upgrade to 4.2 can be done. For a single node the disruptive
process is the only possible update process.
After the GPFS software on all nodes has been upgraded to version 4.2 or higher, the configuration
changes described in 13.5.8.3: Configuration Changes on page 159 must be implemented.
This upgrade can also be done using a GPFS 4.1 base installation package and GPFS 4.2 update package.

13.5.8.1 Disruptive Cluster Upgrade and Single Node Upgrade This upgrade procedure is the
only possible upgrade procedure for single nodes and the easier upgrade procedure for cluster installations.
HANA must be stopped on all nodes and GPFS must also be shut down on all nodes at the same time.
1. Obtain the GPFS 4.2.0 software package and the PTF update package for GPFS 4.2.0.4 or higher
Obtain the base installation package from IBM or the appliance DVD and the latest available PTFs
from IBM FixCentral or Lenovo Support.
2. Distribute both packages to all servers and prepare the upgrade. Upload both software packages to
all servers and extract them. Most likely the base package will be a tar archive, and the update file
will have a -install suffix. Given both files have been uploaded to /tmp, commands to extract the
necessary RPM files look like
Example:
# cd /tmp
# tar -xvf GPFS_4.1_STD_LSX_QSG.tar.gz
# ./gpfs_install-4.1.0-0_x86_64
# chmod +x lnvgy-Spectrum_Scale_Standard-4.2.0.4-x86_64-Linux-install
# ./lnvgy-Spectrum_Scale_Standard-4.2.0.4-x86_64-Linux-install


The names may differ. GPFS_4.1_STD_LSX_QSG.tar.gz is the base installation tar archive from
which gpfs_install-4.1.0-0_x86_64 is extracted, which must be run afterwards. lnvgy-Spectrum_
Scale_Standard-4.2.0.4-x86_64-Linux-install is the update file downloaded from the Lenovo
support web site, it must be marked as executable and run.
3. Upgrade from GPFS 3.5 to GPFS 4.2.0.4 or higher, follow the general instructions in 13.5.2: Dis-
ruptive Cluster and Single Node Upgrade on page 150
4. Implement configuration changes described in 13.5.8.3: Configuration Changes on page 159

13.5.8.2 Two-Phase Rolling Upgrade from GPFS 3.5 to 4.2 GPFS releases 3.5.0 and 4.2.0
are not runtime compatible, which means a cluster cannot run a mixed environment of both versions.
The last compatible versions are 3.5.0-x and 4.1.1.x. Releases 4.1.1 and 4.2.0 are compatible. So in
a two-phase process, a cluster can be upgraded from 3.5.0 to 4.1.1 one node at a time, and after the
whole cluster has been upgraded, each server can be upgraded to 4.2 one node at a time, minimizing the
downtime to individual nodes.
1. Obtain the GPFS 4.1.0 software package and the update package for GPFS 4.1.1.8 or higher
2. Distribute both packages to all servers and prepare the upgrade
3. Upgrade from GPFS 3.5 to GPFS 4.1.1.8 or higher (these 3 steps are described in 13.5.7: Upgrade
from GPFS 3.5 to 4.1 on page 156)
4. Complete upgrade to 4.1.1
5. Obtain GPFS update 4.2.0.1 or any later update.
6. Update from GPFS 4.1.1.x to GPFS 4.2.0.1 or higher, see general GPFS update instructions in
13.5.3: Rolling Cluster Update on page 152
It is important that the upgrade to 4.1.1 is completed for the whole cluster before any node is updated
to 4.2 or higher.
No configuration changes are necessary as these are implemented as part of the upgrade to GPFS 4.1.
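Before starting phase two you can quickly verify that phase one is really complete: the cluster minimum release level (checked on any node) and the installed GPFS base package (checked on every node) should already be at the 4.1.1 level. Both commands are used elsewhere in this guide:
# mmlsconfig | grep minReleaseLevel
# rpm -q gpfs.base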

13.5.8.3 Configuration Changes In a GPFS cluster after all servers have been updated to GPFS
4.2.1 or later the following changes must be implemented:
• Secure installation
# mmauth genkey new
# mmauth update . -l AUTHONLY

• Switch to Cluster Configuration Repository (CCR)


If you have not already switched to the new cluster configuration repository, which removes the need to
designate primary and secondary configuration servers, switch to this new feature now:
# mmchcluster --ccr-enable

• Update configuration
# mmchconfig release=LATEST

• Apply various configuration changes


# mmchconfig ignorePrefetchLUNCount=yes
# mmchconfig pagePool=4G
# mmchconfig restripeOnDiskFailure=yes -N all


• Update GPFS filesystems to the latest on disk format, enable Rapid Repair and increase the GPFS
log file (filesystem journal) size
# mmchfs sapmntdata -V full
# mmchfs sapmntdata --rapid-repair
# mmchfs sapmntdata -L 512M

If you have defined additional GPFS filesystems, it is recommended to run these commands also
for these filesystems. The change of the log file size requires a GPFS daemon restart. This can be
done again one node at a time so a complete cluster downtime can be prevented.
• Last change is to delete an obsolete callback:
# mmdelcallback start-disks-on-startup

Warning
Starting with GPFS 4.2.1.1 on RHEL 7.2, GPFS fails to install the boot-up scripts necessary
for the automatic start of GPFS at boot time. After every GPFS update installation, verify
with the command
# service gpfs status
that the Loaded state is not not-found. In case it is not-found, enable the service:
# systemctl enable /usr/lpp/mmfs/lib/systemd/gpfs.service

13.6 Update SAP HANA

Warning
Make sure that the packages listed in Appendix E.5: FAQ #5: Missing RPMs on page 197
are installed on your appliance. An upgrade may fail without them.
Please refer to the official SAP HANA documentation for further steps.
Please check if the version of SAP HANA you plan to update to is supported by your current OS:

            SLES 11  SLES 11  SLES 12  SLES 12  RHEL  RHEL  RHEL  RHEL
            SP3      SP4               SP1      6.5   6.6   6.7   7.2
HANA 1.0
SPS09       ✓        ✗        ✗        ✗        ✓     ✓     ✗     ✗
SPS10       ✓        ✓        ✓        ✗        ✓     ✓     ✗     ✗
SPS11       ✓        ✓        ✓        ✗        ✓     ✓     ✓     ✗
SPS12       ✓        ✓        ✓        ✓        ✗     ✗     ✓     ✓
HANA 2.0
SPS00       ✗        ✗        ✗        ✓        ✗     ✗     ✗     ✓

Table 71: HANA SPS / OS Release – Support Matrix

The current table is attached to SAP Note 2235581 – SAP HANA: Supported Operating Systems. More
information can be found in Understand HANA SPS and supported Operating Systems59 .
Attention
Starting with SAP HANA 2.0 at least a Haswell CPU or later is mandatory; see SAP Note
2399995 – Hardware requirement for SAP HANA 2.0.
59 http://scn.sap.com/docs/DOC-72631


Please perform the mandatory update of the GCC runtime environment if necessary. See
• 2001528 – Linux: SAP HANA Database SPS 08, SPS 09 and SPS 10 on RHEL 6 or SLES 11
• 2228351 – Linux: SAP HANA Database SPS 11 revision 110 (or higher) on RHEL 6 or SLES 11
for details.
After the update of SAP HANA to SPS 10 or higher, make sure that the HANA parameters are set to
the right values; see section E.18: FAQ #18: Setting the HANA Parameters on page 208.
Once HANA is updated, sapinit autostart might be enabled. In case of an IBM GPFS based installation
you need to disable sapinit autostart, because it starts SAP HANA too early, before the filesystem is
ready:
# chkconfig sapinit off
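To verify afterwards that the autostart is really disabled, a quick check (a sketch; chkconfig can list SysV-style services such as sapinit on both SLES and RHEL):
# chkconfig --list sapinit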


14 Operating System Upgrade


The following operating system upgrades are supported and described:
• Upgrade of RHEL 6.5 or 6.6 to RHEL 6.6 or 6.7
• Upgrade of the SLES for SAP 11 SP3 operating system to SP4
• Upgrade of the SLES for SAP 12 operating system to SP1
Attention
An upgrade from SLES for SAP 11 to SLES for SAP 12 and from RHEL 6 to RHEL 7 is
generally not supported.
For machine type 3837 an upgrade from SLES for SAP 11 SP3 to SLES for SAP 11 SP4 and
from RHEL 6.6 to RHEL 6.7 is not possible without using an IBM to Lenovo (ITL) HW
Conversion Kit.
For the upgrade a maintenance downtime is needed with at least one reboot of the servers. If you have
installed software that was not part of the initial installation from Lenovo, please make sure that this
software is compatible with the target version.
Note
Testing in a non-productive environment before upgrading productive systems is highly rec-
ommended. As always, backing up the system before performing changes is also highly rec-
ommended.
The following tested and recommended upgrade steps require one reboot. The tasks are mostly the same
for cluster and single node systems; if there is an operational difference between these two types, it will
be noted. This list shows the upgrade steps.
Please check chapter 13.6: Update SAP HANA on page 160 for information on whether, due to the new OS
release, an update of SAP HANA itself is also necessary.

14.1 Rolling or Non-Rolling Upgrade

In a cluster environment a rolling upgrade (one node at a time) is possible as long as you are running an
HA environment with IBM GPFS 3.5 or higher and with at least one standby node.
See section 13.5: Updating & Upgrading GPFS on page 149 for information on the IBM GPFS upgrade.
In any case you can perform a non-rolling upgrade, taking all nodes down for maintenance.
When doing a rolling upgrade or the upgrade of a single node, do the steps described in this section only
on the server currently being updated.
When updating all nodes in a cluster at the same time, you can perform the steps on all nodes in parallel:
step 1 on all nodes, then step 2 on all nodes and then step 3 on all nodes and so on.

14.2 Upgrade SLES for SAP

14.2.1 Upgrade Overview

The following table gives an overview for a GPFS based installation:


Step Title ✓
1 Stop SAP HANA
2 Stop IBM GPFS
3 Upgrade IBM GPFS if necessary
4 Update Mellanox Drivers if necessary
5 Upgrade from SLES for SAP SP3 to SP4
6 Kernel upgrade if necessary
7 Reinstall Mellanox Software Stack
8 Updating the ServeRAID driver if necessary
9 Install Compatibility Pack
10 Recompile kernel module for IBM GPFS
11 Adapt Configuration if necessary
12 Start IBM GPFS
13 Start SAP HANA
14 Check Installation

Table 72: Upgrade Procedure for GPFS-based installations

If you use an XFS-based installation, you can skip all GPFS-related steps:

Step Title ✓
1 Stop SAP HANA
2 Update Mellanox Drivers if necessary
3 Upgrade from SLES for SAP SP3 to SP4
4 Kernel upgrade if necessary
5 Reinstall Mellanox Software Stack
6 Install Compatibility Pack
7 Updating the ServeRAID driver if necessary
8 Adapt Configuration if necessary
9 Start SAP HANA
10 Check Installation

Table 73: Upgrade Procedure for XFS-based installations

14.2.2 Prerequisites

You are running an IBM Systems Solution for SAP HANA appliance and want to upgrade
• SUSE Linux Enterprise Server for SAP Applications (SLES for SAP) 11 Service Pack 3 (SP3)
operating system to SLES for SAP 11 Service Pack 4 (SP4)
• SUSE Linux Enterprise Server for SAP Applications (SLES for SAP) 12 operating system to
SLES for SAP 12 Service Pack 1 (SP1)
You should run at least IBM GPFS version 4.1.1.8. If your system is running an IBM GPFS version below
that, you should upgrade IBM GPFS.
You can find out your IBM GPFS version with the command
# rpm -q gpfs.base

At least the version 3.3-2.0.0.3 of the Mellanox driver should be used.


You can check the version using:


# ethtool -i eth0

At least the version 06.810.09.00 of the ServeRAID driver should be used.


You can check the version using:
# modinfo megaraid_sas | grep version

For the upgrade the following DVDs or images are needed:


• OS DVD of the target version
– SLES for SAP 11 Service Pack 4
– SLES for SAP 12 Service Pack 1
• Compatibility DVD of the target version
– SLES 12 SP1 compatibility pack:
∗ libwicked-0-6-0.6.30-23.1.x86_64.rpm
∗ wicked-service-0.6.30-23.1.x86_64.rpm
∗ wicked-0.6.30-23.1.x86_64.rpm
Other ways of providing the images to the server (e.g. locally, FTP, SFTP, etc.) are possible but not
explained as part of this guide.

14.2.3 Shutting down SAP HANA

For further information check chapter 13.2.3.2: (on the target node) Stop SAP HANA on page 136.
Shut down HANA and all other SAP software running in the whole cluster or on the single node cleanly.
Log in as root on each node and execute
# service sapinit stop
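Before stopping IBM GPFS in the next step, it is worth verifying that no SAP HANA processes are left running. A minimal sketch (replace 10 with your instance number; sapcontrol is part of the SAP host agent):
# /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetProcessList
# ps -ef | grep -i hdb
The first command should no longer list any running HANA processes, and the second should not return any hdb* processes apart from the grep itself.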

14.2.4 Shutting down IBM GPFS

For further information check chapter 13.2.3.5: (on the target node) Unmount the GPFS filesystem and
stop GPFS on page 137.
1. Unmount the IBM GPFS file system by issuing
# mmumount all

2. Shutdown IBM GPFS


# mmshutdown <-a>

Use option -a to shut down the IBM GPFS software on all cluster nodes.

14.2.5 Upgrade of IBM GPFS

You should run at least IBM GPFS version 4.1.1.8. If your system is running an IBM GPFS version below
that, you should upgrade IBM GPFS first, see chapter 13.5: Updating & Upgrading GPFS on page 149.


14.2.6 Update Mellanox Drivers

At least the version 3.3-2.0.0.3 of the Mellanox driver should be used.


If you have a version below that, you should upgrade the Mellanox drivers first, see chapter 13.3.2: Update
Mellanox Network Cards on page 141.

14.2.7 Upgrading SLES for SAP

In order to upgrade your existing SLES installation the following steps need to be performed:

14.2.7.1 SLES 11
1. To boot the machine from the SLES for SAP installation media, insert the media either into the
DVD drive or mount the ISO image via the IMM.
2. Reboot the machine by issuing the command
# reboot

3. When the prompt <F1> Setup is displayed, press F1 . Choose StartOptions CD/DVD Rom .

4. When the boot menu shows up select SLES for SAP Applications - Installation .
5. Agree to the License Terms, select your keyboard layout and language if needed and continue with
Next .

6. The media check can be skipped via Next .


7. Select Update an Existing System as installation mode. Continue by selecting Next .
8. There should only be one SLES installation found, go on with Next .
9. Previously used repositories will be reported in this screen and suggested for removal. By selecting
Next , the old repository will be removed.

10. An overview of the changes that are going to be performed is shown. Confirm and continue by
selecting Update .
11. Start the update by selecting Start Update in the pop-up dialog. The update process will run and
the machine will automatically reboot when it is done.
12. Skip the Internet connection test, select No, Skip This Test and select Next .
13. You can review the release notes of SLES 11. Continue with Next and then Finish .

14.2.7.2 SLES 12
1. To boot the machine from the SLES for SAP installation media, insert the media either into the
DVD drive or mount the ISO image via the IMM.
2. Reboot the machine by issuing the command
# reboot

3. When the prompt <F1> Setup is displayed, press F1 . Choose StartOptions CD/DVD Rom .

4. When the boot menu shows up select Upgrade .


5. Agree to the License Terms, select your keyboard layout and language if needed and continue with
Next .


6. The media check can be skipped via Next .


7. There should only be one SLES installation found, go on with Next .
8. Previously used repositories will be reported in this screen and suggested for removal. By selecting
Next , the old repository will be removed.

9. Register or skip the registration.


10. It is not necessary to check I would like to install an additional Add On product .
11. An overview of the changes that are going to be performed is shown. Confirm and continue by
selecting Update .
12. Start the update by selecting Start Update in the pop-up dialog. The update process will run and
the machine will automatically reboot when it is done.

14.2.8 Mandatory Kernel Update

Please check
• 2240716 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications
11 SP4
• 2205917 – SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP Applications
12
for necessary kernel updates.
Please check also chapter 13.4.1: SLES Kernel Update Methods on page 145.

14.2.9 Reinstall Mellanox Software Stack

Perform again
./mlnx-lnvgy_fw_nic_<version>_<os>_x86-64.bin --enable-affinity

to reinstall the Mellanox software stack using SLES SP4 specific packages.
Afterwards, please perform the steps of SAP Note 2281578 – UDEV rules update necessary after Mellanox
driver update. Only one final reboot after both steps is necessary.

14.2.10 Updating the ServeRAID driver if necessary

Now the ServeRAID driver provided by SLES SP4 is installed. At least the version 06.810.09.00 of the
ServeRAID driver should be used.
You can check the version using:
# modinfo megaraid_sas | grep version

Please check chapter 13.3.3: Updating ServeRAID Driver on page 143 for more information.

14.2.11 Installing Compatibility Packages

14.2.11.1 SLES 12 SP1: Update wicked packages An update of the wicked packages is necessary.
zypper install [path to packages]/*.rpm


14.2.12 Recompile Linux Kernel Modules

• If you are using GPFS 4.1.0-4 or higher, run:


mmbuildgpl

• If you are using a version before GPFS 4.1.0-4, run:


# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages

14.2.13 Adapting Configuration

Please review the performance settings in Appendix E.19: FAQ #19: Performance Settings on page 208
because they might have changed.

14.2.14 Start IBM GPFS

Start IBM GPFS and HANA by either rebooting the machine (recommended) or starting the daemons
manually:
For further information check chapter 13.2.3.6: (on the target node) Restart GPFS and mount the filesys-
tem on page 137.
1. Restart GPFS
# mmstartup <-a>
# mmmount all <-a>

Verify status of IBM GPFS and if the file system is mounted:


# mmgetstate <-a>
# mmlsmount all -L

Use option -a to perform the command on all cluster nodes.

14.2.15 Start SAP HANA

# service sapinit start

For further information check chapter 13.2.3.3: (on the target node) Start SAP HANA on page 136.
14.2.16 Check Installation

Download the latest Lenovo Support Script from SAP Note 1661146 – Lenovo/IBM Check Tool for SAP
HANA appliances and execute it with the -ce option. It will give you useful hints about errors and
optimizations.


14.3 Upgrade RHEL

14.3.1 Upgrade Overview

Step Title ✓
1 Stop SAP HANA
2 Stop IBM GPFS
3 Upgrade IBM GPFS if necessary
4 Update Mellanox Drivers if necessary
5 Upgrade RHEL
6 Kernel upgrade if necessary
7 Updating the ServeRAID driver if necessary
8 Install Compatibility Pack
9 Recompile kernel module for IBM GPFS
10 Adapt Configuration if necessary
11 Start IBM GPFS
12 Start SAP HANA
13 Check Installation

Table 74: Upgrade Procedure for GPFS-based installations

14.3.2 Prerequisites

You are running a Lenovo Systems Solution for SAP HANA appliance and want to upgrade the
RHEL operating system (minimum RHEL 6.5) to RHEL 6.6 or RHEL 6.7.
You should run at least IBM GPFS version 4.1.1.8. If your system is running an IBM GPFS version below
that, you should upgrade IBM GPFS.
You can find out your IBM GPFS version with the command
# rpm -q gpfs.base

At least the version 3.3-2.0.0.3 of the Mellanox driver should be used.


You can check the version using:
# ethtool -i eth0

At least the version 06.810.09.00 of the ServeRAID driver should be used.


You can check the version using:
# modinfo megaraid_sas | grep version

For the upgrade the following DVDs or images are needed:


• OS DVD of the target version
– RHEL 6.6-DVD
– RHEL 6.7-DVD
• Compatibility DVD of the target version
– RHEL 6.6 compatibility pack:
∗ nss-softokn-freebl-3.14.3-19.el6.x86_64


∗ nss-softokn-freebl-3.14.3-19.el6.i686
– RHEL 6.7 compatibility pack:
∗ tmpwatch-2.9.16-5.el6_7.x86_64
∗ libssh2-1.4.2-1.el6.i686.rpm (downgrade)
∗ libssh2-1.4.2-1.el6.x86_64.rpm (downgrade)
Other ways of providing the images to the server (e.g. locally, FTP, SFTP, etc.) are possible but not
explained as part of this guide.
Other upgrade mechanisms, e.g. using a satellite server, are also out of scope of this guide.

14.3.3 Shutting down SAP HANA

For further information check chapter 13.2.3.2: (on the target node) Stop SAP HANA on page 136.
Shut down HANA and all other SAP software running in the whole cluster or on the single node cleanly.
Log in as root on each node and execute
# service sapinit stop

14.3.4 Shutting down IBM GPFS

For further information check chapter 13.2.3.5: (on the target node) Unmount the GPFS filesystem and
stop GPFS on page 137.
1. Unmount the IBM GPFS file system by issuing
# mmumount all

2. Shutdown IBM GPFS


# mmshutdown <-a>

Use option -a to shut down the IBM GPFS software on all cluster nodes.

14.3.5 Upgrade of IBM GPFS

You should run at least IBM GPFS version 4.1.1.8. If your system is running an IBM GPFS version below
that, you should upgrade IBM GPFS first, see 13.5: Updating & Upgrading GPFS on page 149.

14.3.6 Update Mellanox Drivers

At least the version 3.3-2.0.0.3 of the Mellanox driver should be used.


If you have a version below that, you should upgrade the Mellanox drivers first, see 13.3.2: Update
Mellanox Network Cards on page 141.


14.3.7 Upgrading Red Hat

1. Allow updates to RHEL 6.6 or RHEL 6.7


To allow these updates, you have to check the file /etc/yum/pluginconf.d/versionlock.list. If
this file does not exist, please create it.
For an update to RHEL 6.6 the content of the file should be
# Keep packages for RHEL 6.6 (begin)
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-504.*
kernel-firmware-2.6.32-504.*
kernel-headers-2.6.32-504.*
kernel-devel-2.6.32-504.*
redhat-release-*
# Keep packages for RHEL 6.6 (end)

and for an upgrade to RHEL 6.7


# Keep packages for RHEL 6.7 (begin)
kernel-2.6.32-573.*
kernel-firmware-2.6.32-573.*
kernel-headers-2.6.32-573.*
kernel-devel-2.6.32-573.*
redhat-release-server-6Server-6.7.0.3.el6.x86_64
# Keep packages for RHEL 6.7 (end)

2. Create a repository from your RHEL 6.6 DVD


Check where the RHEL DVD is mounted, e.g.:
# ls /media/
RHEL-6.6 Server.x86_64

This information is needed for the baseurl line below. Now create a repository file /etc/yum.
repos.d/rhel-dvd.repo with the following content:
[dvd]
name=Red Hat Enterprise Linux Installation DVD
baseurl=file:///media/RHEL-6.6\ Server.x86_64/
gpgcheck=0
enabled=0

3. Upgrade of RHEL
# yum update --enablerepo=dvd

Check if the upgrade was successful:


# cat /etc/redhat-release

with the result


Red Hat Enterprise Linux Server release 6.6 (Santiago)

or
Red Hat Enterprise Linux Server release 6.7 (Santiago)


4. Prevent further upgrades of RHEL to higher versions


If the file /etc/yum/pluginconf.d/versionlock.list did not exist in step one, please also install
the package yum-versionlock.
yum -y install yum-versionlock yum-security --enablerepo=dvd
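To confirm that the version locks are active, a quick check (a sketch; the versionlock subcommand is provided by the plugin installed above):
# yum versionlock list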

14.3.8 Mandatory Kernel Update

Please check the SAP Notes


• 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6
• 2247020 – SAP HANA DB: Recommended OS settings for RHEL 6.7
for necessary kernel updates.
Please check chapter 13.4.3: RHEL Kernel Update Methods on page 147 on how to perform this update.

14.3.9 Updating the ServeRAID driver if necessary

Now the ServeRAID driver provided by RHEL is installed.


At least the version 06.810.09.00 of the ServeRAID driver should be used.
You can check the version using:
# modinfo megaraid_sas | grep version

Please check chapter 13.3.3: Updating ServeRAID Driver on page 143 for more information.

14.3.10 Installing Compatibility Packages

14.3.10.1 RHEL 6.6: Update of nss-softokn packages An update of the nss-softokn packages is
mandatory. More information can be found in:
• SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or
SLES 11
• Why can I not install or start SAP HANA after a system upgrade?60
yum -y install [path to packages]/nss-softokn-freebl-3.14.3-19*.rpm

14.3.10.2 RHEL 6.7: Update of tmpwatch package, downgrade of libssh2 packages An
update of the tmpwatch package and a downgrade of the libssh2 packages are mandatory. More information
can be found in:
• SAP Note 2247020 – SAP HANA DB: Recommended OS settings for RHEL 6.7
• SAP HANA Multi host install fails with the message "LIBSSH2_ERROR_KEY_EXCHANGE_FAILURE,
unable to exchange encryption keys"61
yum -y install [path to packages]/tmpwatch-2.9.16-5.el6_7.x86_64
yum downgrade [path to packages]/libssh2*.rpm

60 https://access.redhat.com/solutions/1236813
61 https://access.redhat.com/solutions/1370033


14.3.11 Recompile Linux Kernel Modules

• If you are using GPFS 4.1.0-4 or higher, run:


mmbuildgpl

• If you are using a version before GPFS 4.1.0-4, run:


# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages

14.3.12 Adapting Configuration

Please review the performance settings in Appendix E.19: FAQ #19: Performance Settings on page 208
because they might have changed.

14.3.13 Start IBM GPFS

Start IBM GPFS and HANA by either rebooting the machine (recommended) or starting the daemons
manually:
For further information check chapter 13.2.3.6: (on the target node) Restart GPFS and mount the filesys-
tem on page 137.
1. Restart GPFS
# mmstartup <-a>
# mmmount all <-a>

Verify status of IBM GPFS and if the file system is mounted:


# mmgetstate <-a>
# mmlsmount all -L

Use option -a to perform the command on all cluster nodes.

14.3.14 Start SAP HANA

# service sapinit start

For further information check chapter 13.2.3.3: (on the target node) Start SAP HANA on page 136.
14.3.15 Check Installation

Download the latest Lenovo Support Script from SAP Note 1661146 – Lenovo/IBM Check Tool for SAP
HANA appliances and execute it with the -ce option. It will give you useful hints about errors and
optimizations.


15 System Check
This chapter describes different steps to check the appliance’s health status. The script described here
should be updated and executed at regular intervals by a system administrator. The other sections present
additional information and give deeper insight into the system.
Note
SAP Note 1661146 – Lenovo/IBM Check Tool for SAP HANA appliances provides details for
downloading and using the following scripts to catalog the hardware and software configura-
tions and create a set of information to assist service and support of the machine by SAP and
Lenovo.
We highly recommend that a SAP HANA system administrator regularly downloads and
updates these scripts to ensure to obtain the latest support information for the servers.

15.1 System Login

The latest version of the Lenovo Solution installation also adds a message of the day that shows the
current status of the GPFS or XFS filesystems, and memory usage. This will pop up once at each login for
every user. The message is created by a cron job that runs once an hour; this means that the information
is not real time and the system status may have changed in the meantime.
Last login: Tue Nov 29 14:06:28 2016 from 192.168.123.14
____ _ ____ _ _ _ _ _ _
/ ___| / \ | _ \ | | | | / \ | \ | | / \
\___ \ / _ \ | |_) | | |_| | / _ \ | \| | / _ \
___) / ___ \| __/ | _ |/ ___ \| |\ |/ ___ \
|____/_/ \_\_| |_| |_/_/ \_\_| \_/_/ \_\

Lenovo Systems Solution for SAP HANA appliance

See SAP Note 1650046 for maintenance and administration information:


https://service.sap.com/sap/support/notes/1650046

_Regularly_ check the system health!


________________________________________________________________________________

[INFO] Last hourly update on Tue Nov 29 14:01:02 CET 2016.


[NOTICE] Memory usage is 0%.
[ERROR] The global_allocation_limit for FLO is not set. (See FAQ #1.)
Listing 1: SSH login screen

15.2 Basic System Check

Included with the installation is a script that will inform you and the customer that all the hardware
requirements and basic operating system requirements have been met.
This script is found in the directory /opt/lenovo/saphana/bin and is called saphana-support-lenovo.sh.
Using the option -h, you can see the various ways to call the saphana-support-lenovo.sh script.
# saphana-support-lenovo.sh -h
Usage: saphana-support-lenovo [OPTIONS]
Lenovo Systems solution for SAP HANA appliance System Checking Tool


to check hardware system configuration for Lenovo and SAP Support teams.

Options:
-c Check system (no log file, default).
-s Print out the support information for SAP support.
(-s replaces the --support option.)
-C <"check1 check2 ..."> A space-separated list of checks that will be ←-
,→performed.
-L <Checkset> A textfile containing the various checks to perform.
-m Collects the support information for SAP support from all cluster nodes
(this option implies -s). Option -S or option -l is mandatory if -m is ←-
,→specified
-S SID of the HANA Cluster where the support information should be collected.
Use together with -m.
-l Textfile with a nodelist of the cluster nodes (separated with newline) ←-
,→where the
support data should be collected. Use together with -m.
-h Print this information

Check extensions (only valid in conjunction with -c)


-v Verbose. Do not hide messages during check.
Recommended after installation.
-e Do exhaustive testing with longer running tests.
May impact performance during check.
Implies -v.

If using the Advanced Settings Utility (ASU) from a Virtual Machine


-i host The host name of the Integrated Management Module (IMM)

Report bugs to <sapsolutions@lenovo.com>.


Listing 2: Support script usage

An output similar to the following should be reported when you use the option -c (check, which is the
default option). If for any reason you receive warnings or errors that you do not understand, please first
try this again with the option -v (verbose) and then, together with the customer, open an SAP OSS customer
message with the output from the -s (support) option attached.
# saphana-support-lenovo.sh -c
===================================================================
# LENOVO SUPPORT TOOL Version 1.12.121-16.4145.5f849c0 - 2016-11-25
# (C) Copyright IBM Corporation 2011-2014
# (C) Copyright Lenovo 2015, 2016
# Analysis taken on: 20161129-1418
# Command-line arguments are: -c
===================================================================

-------------------------------------------------------------------
Lenovo Systems solution for SAP HANA appliance Hardware Analysis
-------------------------------------------------------------------

Machine analysis for IBM x3850 X6 -[6241AC1]- [06CM436]


Lenovo Systems solution for SAP HANA - Model "AC34S3072" ... OK
-------------------------------------------------------------------


Appliance Solution analysis:


----------
Information from /etc/lenovo/appliance-version.
During installation this machine was detected as:
Lenovo System x3850 X6: Workload Optimized System for SAP HANA Model AC34S3072
Installed appliance version: 1.12.121-16.4133.e7231c6
Installed on: Thu Nov 24 17:05:31 CET 2016

Operating System:
Red Hat Enterprise Linux Server release 7.2 (Maipo)
Kernelversion: 3.10.0-327.el7.x86_64

Installation configuration:
----------
Parameter clustered is single
Parameter exthostname is x610.wdf.lenovo.corp
Parameter cluster_ha_nodes is 0
Parameter cluster_nr_nodes is 1
Parameter db_mode is single_container
Parameter hanainstnr is 10
Parameter hanasid is FLO
Parameter hanauid is 1100
Parameter hanagid is 111
Parameter shared_fs_mountpoint is /hana
Parameter shared_fs_type is gpfs
Parameter shared_fs_storage is standard
Parameter shared_fs_name is sapmntdata
Parameter gpfs_node1 is gpfsnode01 127.0.1.1
Parameter hana_node1 is hananode01 127.0.2.1
Parameter step is 9
-------------------------------------------------------------------

Hardware analysis:
----------
CPU Type: Xeon Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz ... OK
# of CPUs: 4, threads: 144 ... OK

Memory: 3072 GB / Free Memory: 2978 GB ... OK

ServeRAID: 2 adapters ... OK

IBM General Parallel File System (GPFS):


----------
GPFS is installed. ... OK
GPFS is configured. ... OK
GPFS [4.2.1-2] Cluster HANAcluster.gpfsnode01 is active without replication. ... OK
GPFS device sapmntdata is mounted on /hana of size 12283GB ... OK

SAP Host Agent Information


==========================
/usr/sap/hostctrl/exe/saphostctrl: 721, patch 621, changelist 1643008, linuxx86_64, ←-
,→opt (Jan 24 2016, 22:58:10)


SAP Host Agent known SAP instances


----------------------------------
Inst Info : FLO - 10 - x610 - 745, patch 35, changelist 1644855

SAP Instances
=============

SAP HANA Instance FLO/10


------------------------
SAP HANA 1.00.121.00 Build 1466466057-1530 Revision 121 is installed ... OK

SAP HANA FLO Landscape Overview


*******************************

| Host | Host | Host | Failover | Remove | Storage | Storage | Failover ←-


,→| Failover | NameServer | NameServer | IndexServer | IndexServer | Host ←-
,→ | Host |
| | Active | Status | Status | Status | Config | Actual | Config Group ←-
,→| Actual Group | Config Role | Actual Role | Config Role | Actual Role | ←-
,→Config Roles | Actual Roles |
| | | | | | Partition | Partition | ←-
,→| | | | | | ←-
,→ | |
| ---- | ------ | ------ | -------- | ------ | --------- | --------- | ------------ ←-
,→| ------------ | ----------- | ----------- | ----------- | ----------- | ←-
,→------------ | ------------ |
| x610 | yes | ok | | | 1 | 1 | default ←-
,→| default | master 1 | master | worker | master | ←-
,→worker | worker |

overall host status: ok

General Health checks:


----------
NOTE: The following checks are for known problems of the system.
See the FAQ section of the Lenovo - SAP HANA Operations Guide
SAP Note 1661146 found at https://service.sap.com/notes

Only issues will be shown. If there is no output, no check failed.


To show succeeded checks, add the parameter -v. Recommended on first run.
----------
[ERROR] The global_allocation_limit for FLO is not set. (See FAQ #1.)
-------------------------------------------------------------------
E N D O F L E N O V O D A T A A N A L Y S I S
-------------------------------------------------------------------
Removing support script dump files older than 7 days.
Runtime was 0h 1min 2s
Support script ended with error(s) and warning(s).
Logged error(s): 1


Logged warning(s): 0
Return code of this script is 2.
Listing 3: Support script output

At the end of the support script you find a summary of how many errors and/or warnings occurred. A
return code of 0 means everything was OK.
Note
It is highly recommended to work with the latest version of the system check script. You can
find it in SAP Note 1661146 – Lenovo/IBM Check Tool for SAP HANA appliances.

Note
-[1.8.80-12]: These appliances were shipped with the script /opt/ibm/saphana/bin/
saphana-support-ibm.sh. When installing the latest support script version you will get the
new script saphana-support-lenovo.sh. Do not remove the script saphana-support-ibm.sh.
If you have trouble running some tests, you can blacklist these tests:
echo checkname >> /etc/lenovo/supportscript_check_blacklist

You can find all checks as files in /opt/lenovo/saphana/lib/check/. The checkname is
check_<name_of_checkfile_without_the_extension_check>.

15.2.1 Cluster-wide gathering of support data

Cluster-wide gathering of support data is turned on with supportscript option -m (multi-run). There are
two switches for choosing the cluster nodes where the support data will be collected:
• option -S: Specifies the SID of the SAP HANA System where the support data will to be collected:
All nodes belonging to that SID are included.
• option -l: Definition of a textfile with a nodelist, nodes separated with newline
The cluster nodes where the support data will be collected are defined via the two switches -S or -l. Use
-S <SID> if you want to collect the data from all cluster nodes belonging to that SID or define your own
list with option -l <filename of nodelist>. The supportscript will be started serially on these nodes.
After finishing all runs the tar files with the results are collected and put together in a combined tar file
on the local node. The filename of this file is displayed at the end.
Make sure to have the latest supportscript installed on all cluster nodes before running the checks.
Examples:
• saphana-support-lenovo.sh -m -S <SID>
• saphana-support-lenovo.sh -m -l <filename>

15.2.2 Automatic exchange of support script within the cluster

Included with the installation there is a tool to update the support script automatically on all cluster
nodes. The script requires the support script RPM, which should be downloaded via SAP Note 1661146
– Lenovo/IBM Check Tool for SAP HANA appliances.


# saphana-update-supportscript.sh -h
Usage: saphana-update-supportscript [OPTIONS]
Tool for clusterwide updating Lenovo Systems solution for SAP HANA appliance
System Checking Tool

Options:
-S SID of the HANA Cluster where the supportscript should be updated.
-l textfile with nodelist of the cluster nodes (separated with newline)
where the supportscript should be updated.
-p Path to the supportscript rpm
-h Print this information

Report bugs to <sapsolutions@lenovo.com>.


Listing 4: Support script update usage

Example:
saphana-update-supportscript.sh -S <SID> -p /tmp/lenovo-saphana-support←-
,→-1.12.121-16.3872.4964c1f.noarch.rpm
Listing 5: Support script update Example

15.3 Check Installation Tool

Note
The saphana-check-installation.sh tool described in this chapter is shipped with appli-
ance version 1.11.112-15 and later. It is also part of the Lenovo support script update package
downloadable from SAP Note 1661146 – Lenovo/IBM Check Tool for SAP HANA appliances.
If you want to use this tool on an older installation, just install the update.
Similar to the support script described in the previous chapter, the saphana-check-installation.sh
tool will also run the same tests shipped with the mentioned support script, but it will only run these tests
and omit the informational output of the support script. Additional output control & output options are
provided.

15.3.1 Basic Usage

When run without any options, the output looks like


# saphana-check-installation.sh

General Health checks:

NOTE: The following checks are for known problems of the system.
See the FAQ section of the Lenovo - SAP HANA Operations Guide
SAP Note 1661146 found at https://service.sap.com/notes

Only issues will be shown. If there is no output, no check failed.


To show succeeded checks, add the parameter -v. Recommended on first run.

[WARNING] Storage size of 12283 GB seems not to be okay.


[ERROR] The global_allocation_limit for FLO is not set. (See FAQ #1.)


To suppress the static information text add the parameter -q to the command line:
# saphana-check-installation.sh -q
[WARNING] Storage size of 12283 GB seems not to be okay.
[ERROR] The global_allocation_limit for FLO is not set. (See FAQ #1.)

The opposite parameter -v also shows output from tests which did not issue a warning or error:
# saphana-check-installation.sh -v

General Health checks:

NOTE: The following checks are for known problems of the system.
See the FAQ section of the Lenovo - SAP HANA Operations Guide
SAP Note 1661146 found at https://service.sap.com/notes

Only issues will be shown. If there is no output, no check failed.


To show succeeded checks, add the parameter -v. Recommended on first run.

[SKIPPED] Not affected OS, skipping test check_208_days_bug


[INFO] Installation completed with automated installer.
[INFO] No SSD with broken Firmware found
...

In any case, the tool will set a return code based on the highest message criticality. The
message levels in order from worst to best are:
• 255: An internal error occurred, the tool aborted prematurely
• 2: At least one error was found
• 1: At least one warning was issued
• 0: No issue was found

15.3.2 Test Selection Options

The saphana-check-installation.sh tool currently offers three options to control which checks will
run. If no test selection parameter is given, all tests will be selected by default.
Generic options:
-i <list> Include. Run only checks named in comma separated list
-x <list> Exclude. Do not run checks named in comma separated list
-s <file> Check set. Reads a list of checks from a file with one entry per line

The -i option expects a comma-separated list of test names and only these tests will be run, e.g.
# saphana-check-installation.sh -v -i gpfs_quotas,gpfs_nsd_unavailable

General Health checks:

NOTE: The following checks are for known problems of the system.
See the FAQ section of the Lenovo - SAP HANA Operations Guide
SAP Note 1661146 found at https://service.sap.com/notes

Only issues will be shown. If there is no output, no check failed.


To show succeeded checks, add the parameter -v. Recommended on first run.


[INFO] Fileset hanadata on /dev/sapmntdata is below 90% of quota: 0% used


[INFO] Fileset hanalog on /dev/sapmntdata is below 90% of quota: 0% used
[INFO] Fileset hanashared on /dev/sapmntdata is below 90% of quota: 0% used
[OK] All NSD in use are up and running
...

By using the parameter -x, tests can be excluded from the checklist, e.g.
# saphana-check-installation.sh -x hdd_configuration

General Health checks:

NOTE: The following checks are for known problems of the system.
See the FAQ section of the Lenovo - SAP HANA Operations Guide
SAP Note 1661146 found at https://service.sap.com/notes

Only issues will be shown. If there is no output, no check failed.


To show succeeded checks, add the parameter -v. Recommended on first run.

[ERROR] The global_allocation_limit for FLO is not set. (See FAQ #1.)

and the storage size warning from the previous example is gone.
Finally, the parameter -s /path/to/check_set allows you to choose a set of checks defined in a plain text
file. Each line of this file must contain only a test name. For example the file /opt/lenovo/saphana/
lib/checkset/hourly.set is shipped with the support script and defines the tests which will be run
every hour to update the message of the day text shown on system login62 . You can create your own
check sets by creating a similar file.
Currently there is no commented listing of the available tests. To get the names of the provided tests, run
the command saphana-check-installation.sh -v -d, which will output the name of every test run. There is
no need to verify that a test is applicable to the current server as each test will verify this itself.

15.4 Additional Tools for System Checks

15.4.1 Lenovo Advanced Settings Utility

Note
X6 based servers and later technology come preinstalled with this utility.
In some cases it might be useful to check the UEFI settings of the HANA servers. Therefore, the
saphana-support-lenovo.sh script uses the Lenovo Advanced Settings Utility (ASU), if it is installed,
and prints out warnings, if there is a misconfiguration. This check can be enabled via the -e parameter.
Download the latest Linux 64-bit RPM from the Lenovo download page63 and install the RPM.
Before upgrading the ASU tool remove the old version. Find the installed version via rpm -qa | grep asu.
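A minimal sketch of that removal and installation; the names in angle brackets are placeholders for the package name reported by rpm and the RPM file you actually downloaded:
# rpm -qa | grep asu
# rpm -e <name of the old ASU package>
# rpm -ivh <downloaded ASU RPM>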
62 starting with appliance software version 1.11.112-15
63 https://support.lenovo.com/en/en/documents/lnvo-asu


15.4.2 ServeRAID StorCLI Utility for Storage Management

Note
X6 based servers come preinstalled with this utility.
The saphana-support-lenovo.sh script also analyzes the status of the ServeRAID controllers and their
internal batteries to check whether the controllers are in a working and performant state.
To activate this feature, the StorCLI (Command Line) Utility for Storage Management software must
be installed. Go to the Lenovo download page64, download the file locally and install the RPMs.
Before upgrading the StorCLI tool, remove the old version. Find the installed version via rpm -qa |
grep storcli.
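Once the utility is installed, a quick manual health overview can be obtained with standard StorCLI
commands, for example (controller numbers depend on your system):
# /opt/MegaRAID/storcli/storcli64 show
# /opt/MegaRAID/storcli/storcli64 /c0 show all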

Warning
[1.6.60-7]+ With the change to a RAID5 based storage configuration, installing the MegaCLI/StorCLI
Utility is even more important, as an HDD/SSD failure is not directly visible with standard
GPFS commands until a whole RAID array has failed.

15.4.3 SSD Wear Gauge CLI utility

Note
X6 based servers come preinstalled with this utility. In this case you can find an installed
version in /opt/lenovo/ssd_cli.
For models of the Lenovo Solution that come with SSDs it might be useful to check the state of the SSDs.
This includes all x3850 X6 and x3950 X6 servers, and the eX5 SSD, XS, and S models.
Go to the Lenovo download page65 and download the latest binary of the SSD Wear Gauge CLI utility
(lnvgy_utl_ssd_-<version>_linux_32-64.bin). Copy it to the machine to be checked.
When upgrading the tool remove existing binaries from /opt/ibm/ssd_cli/ and/or /opt/lenovo/ssd_cli/.
Copy the bin file into /opt/lenovo/ssd_cli/:
# mkdir -p /opt/lenovo/ssd_cli/
# cp lnvgy_utl_ssd_-*_linux_32-64.bin /opt/lenovo/ssd_cli/
# chmod u+x /opt/lenovo/ssd_cli/lnvgy_utl_ssd_-*_linux_32-64.bin

Execute the binary:


# /opt/lenovo/ssd_cli/lnvgy_utl_ssd_-*_linux_32-64.bin -u

Sample output:
1 PN:.......-....... SN:........ FW:......
Percentage of cell erase cycles remaining: 100%
Percentage of remaining spare cells: 100%
Life Remaining Gauge: 100%

64 http://support.lenovo.com/en/en/products/Servers/Lenovo-x86-servers/Lenovo-System-x3950-X6/6241/downloads/DS111345
65 http://support.lenovo.com/en/en/products/Servers/Lenovo-x86-servers/Lenovo-System-x3850-X6/6241/downloads/DS111630


15.4.4 Lenovo Dynamic System Analysis (DSA) Portable Edition

Note
X6 based servers come preinstalled with this utility starting with DVD version 1.11.112-15. In this
case you can find an installed version in /opt/lenovo/dsa.
Dynamic System Analysis (DSA) for Lenovo x86 servers collects and analyzes system information to aid
in diagnosing system problems.
Go to the Lenovo download page66 and download the latest binary of the DSA portable edition
(lnvgy_utl_dsa_-<version>_portable_linux_<os>_x86-64.bin). Copy it to the machine to be checked.
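A minimal sketch of running the downloaded portable edition; the exact options and the location of the
collected data may vary, so consult the documentation on the DSA download page:
# chmod +x lnvgy_utl_dsa_-*_portable_linux_*_x86-64.bin
# ./lnvgy_utl_dsa_-*_portable_linux_*_x86-64.bin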

66 https://support.lenovo.com/en/en/documents/lnvo-dsa


16 Backup and Restore of the Primary OS Partition


For information about this topic please refer to our new Special Topic Guide: Backup and Restore.


17 SAP HANA Backup and Recovery


For information about this topic please refer to our new Special Topic Guide: Backup and Restore.


18 Troubleshooting
For the Lenovo Systems Solution for SAP HANA Platform Edition, the installation of SLES for SAP,
the installation and configuration of IBM GPFS or XFS respectively, and SAP HANA have been
greatly simplified by a guided installation process. This process automatically installs and configures
the base OS components necessary for the SAP HANA appliance software. It is no longer supported
to install the OS manually for the Lenovo Solution.
You can find a list of SAP Notes in Appendix F.4: SAP Notes (SAP Service Marketplace ID required)
on page 214. Please check them and Appendix E: Frequently Asked Questions on page 195 first in case
of problems.

18.1 Adding SAP HANA Worker/Standby Nodes in a Cluster

When setting up a clustered configuration by hand, install SAP HANA worker and standby nodes as
described in the Lenovo SAP HANA Appliance Operations Guide 67 (Section 4.3 Cluster Operations →
Adding a cluster node).

18.2 GPFS mount points missing after Kernel Update

If you updated the Linux kernel, you will have to update the portability layers for GPFS before starting
SAP HANA. After rebooting into the new kernel, you will not see the GPFS mount points. Follow the
directions above in section 13.4: Linux Kernel Update on page 145 regarding updating the portability
layers.
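To quickly verify whether GPFS is running and the filesystems are mounted again after rebuilding the
portability layer, the standard GPFS commands can be used, for example:
# mmgetstate -a
# mmlsmount all -L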

18.3 Degrading disk I/O throughput

One possible reason for degrading disk I/O on the HDDs or SSDs can be a discharged or disconnected
battery on the RAID controller. In that case the cache policy is changed from "WriteBack" (the default) to
"WriteThrough", meaning that data is written directly to disk instead of to the cache. This has a significant
I/O performance impact.
To verify, please proceed as follows:
1. The StorCLI tool (see section 15.4.2: ServeRAID StorCLI Utility for Storage Management on page
181) is installed during HANA setup. The path is /opt/MegaRAID/storcli/. If you have been
using the MegaCli64 client before, you don’t have to learn new commands. The commands are the
same.
2. Determine current cache policy:
# /opt/MegaRAID/storcli/storcli64 -LdPdInfo -aAll | grep "Cache Policy:"

3. Depending on the model there is a varying number of output lines. Sample output:
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Default Cache Policy: WriteBack, ReadAhead, Cached, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Cached, No Write Cache if Bad BBU
67 SAP Note 1650046 (SAP Service Marketplace ID required)


If the output contains "WriteThrough" for the "Current Cache Policy" while the previous "Default
Cache Policy" defines "WriteBack", the cache policy has been switched from the "WriteBack" default
due to some issue.
You can then check each battery’s status. For example, with the sample output above you would
check the status of the first two adapters’ batteries (the third one is OK).
# /opt/MegaRAID/storcli/storcli64 /c0/bbu show all
# /opt/MegaRAID/storcli/storcli64 /c1/bbu show all

If the output contains "Get BBU Capacity Info Failed", the battery is most likely bad or disconnected
and needs to be replaced or reconnected to the adapter.
If the output indicates a state of charge that is significantly smaller than 100%, then the battery is
most likely bad and should be replaced.
If any of the above issues occurs, a hardware support call with Lenovo/IBM should be opened.

18.4 SAP HANA will not install after a system board exchange

When an IBM Certified Engineer exchanges a system board, he is required only to reset the Manufacturer
Type and Model (MTM) and serial number of the machine inside the EEPROM settings. The SAP HANA
hardware checker (before revision 27) looks at the description string instead of the MTM.
To work around this issue, a Lenovo services person can use the Lenovo Advanced Settings Utility (ASU)
tool (see section 15.4.1: Lenovo Advanced Settings Utility on page 180) to reset the system product data to
the correct values for the SAP installer to work. ASU is installed under /opt/lenovo/toolscenter/asu.
The tool can then be used to view or set the firmware settings of the IMM from the command line. For
example, to show and subsequently reset the System Product Identifier required by SAP HANA, you can
use the following commands:
# asu64 show SYSTEM_PROD_DATA.SysInfoProdIdentifier --host <IMM Hostname>

(--host can be omitted if the command is run on the actual system)


# asu64 set SYSTEM_PROD_DATA.SysInfoProdIdentifier "System x3850 X6"

Then dmidecode should return the correct system name after a system reboot.

18.5 Installer [1.8.80-12]: Installation of RHEL 6.5 fails

To install RHEL 6.5 using image [1.8.80-12], several adaptations to the installation process described in the
Implementation Guide for image [1.8.80-12] are necessary.
The installation process looks like the following:
1. Start the installation as described in ’Lenovo - SAP HANA Implementation Guide X6-1.8.80-12’.
Don’t forget to set saphancfg for MTM 6241. Proceed until chapter 6.4 ’Phase2 - RHEL’ Step 10.
2. There are problems with the installation, because dmidecode delivers additional output, which is
misinterpreted by the installer. To overcome this situation, you have to perform the following:
(a) check dmidecode:
# dmidecode -s bios-vendor
# SMBIOS implementations newer than version 2.7 are not
# fully supported by this version of dmidecode.


IBM
# which dmidecode
/usr/sbin/dmidecode

(b) Create a wrapper script in the /usr/local/sbin directory, which appears in PATH before
/usr/sbin, where the original dmidecode is located. Create a file /usr/local/sbin/dmidecode
with the following content:
#!/bin/bash
/usr/sbin/dmidecode $* | sed '/^#/d'

Hint: If you are not able to type the ’|’ character in the remote console, you can enter ctrl + shift + u
followed by 7c. After pressing return this is converted to ’|’.
(c) You have to log out and log in again to make the change take effect:
# dmidecode -s bios-vendor
IBM
# which dmidecode
/usr/local/sbin/dmidecode

3. Now we have to redo some parts of the already started installation. Perform
# bash /tmp/bootstrap.sh
# /opt/ibm/saphana/bin/saphana-udev-config.sh -sw

4. Reboot
5. Continue with chapter 6.4 ’Phase2 - RHEL’ Step 11 in ’Lenovo - SAP HANA Implementation Guide
X6-1.8.80-12’.
Don’t forget to apply SAP Note 1658845 – Recently certified SAP HANA hardware not recognized -
HanaHwCheck.py. Place HanaHwCheck.zip in /root before executing saphana-setup-saphana.sh.

18.6 Installer [1.10.102-14]: Installation Issues

If you are using installer version 1.10.102-14 and one of the following applies:


• You are configuring an XFS filesystem on a Lenovo System x3850 X6 or x3950 X6 and the RAID
setup cannot be completed because of a missing Feature on Demand key. OR
• You are installing a Lenovo System x3950 X6 server and the automated UEFI configuration fails
because of unknown settings on the second (IMM) node. OR
• You are installing a Lenovo System x3850 X6 or x3950 X6 and the mountpoint for the HANA
filesystem was configured to /hana although you entered another path in the configuration dialog.
Then apply the fixes provided as an attachment to SAP Note 2274681 – Installation Issues with Installer
1.10.102-14 to the installer before executing saphana-setup-saphana.sh as described.

18.7 SAP Note 1641148 HANA server hang caused by GPFS issue

https://service.sap.com/sap/support/notes/1641148

18.7.0.1 Symptom You are running a SAP HANA scale out landscape and see different time zone
settings for the sidadm user.


18.7.0.2 Reason and Prerequisites Your SAP HANA scale out landscape shows different time
zone settings for at least one server, e.g. the master node shows time zone UTC and all other nodes
show time zone CET. This may be caused by an inconsistency in the installation process and should be
corrected.

18.7.0.3 Solution To change the time zone settings of the sidadm user, go to the user’s home directory
below /usr/sap/ and adjust the following files:
.sapenv.csh: setenv TZ <time zone>
.sapenv.sh: export TZ=<time zone>

Make sure this is done for all HANA nodes. Additionally, for a scale out installation an NTP server should
be configured. You may either use your corporate NTP or ask your hardware partner to set up an NTP
server for you, e.g. on the management node of the appliance. If you see different time settings for the
sidadm and the root user, check /etc/adjtime. If you see very large values there, check your NTP
configuration and re-synchronize. Once the time settings are correct, log in as the sidadm user again and
restart the database.


Appendices
A GPFS Disk Descriptor Files
GPFS 3.5 introduced a new disk descriptor format called stanzas. The old disk descriptor format has been
deprecated since GPFS 3.5. The stanza format is also valid for GPFS 4.1 (introduced with release 1.8).
Create the file /var/mmfs/config/disk.list.data.gpfsnode01 by concatenating the following parts:
1. Always add
%nsd: device=/dev/sdb
nsd=data01node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system

2. When having one RAID array in the SAS expansion unit, add


%nsd: device=/dev/sdc
nsd=data02node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system

3. When having two RAID arrays in SAS expansion unit, add also
%nsd: device=/dev/sdd
nsd=data03node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system

4. Always add these lines at the end


%pool:
pool=system
blockSize=1M
usage=dataAndMetadata
layoutMap=cluster
allowWriteAffinity=yes
writeAffinityDepth=1
blockGroupFactor=1
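The stanza file assembled this way is typically passed to the GPFS NSD creation command when the
NSDs are created manually; a minimal sketch:
# mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode01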


B Topology Vectors (GPFS 3.5 failure groups)


This is currently valid only for the DR-enabled clusters, for standard HA-enabled clusters use the plain
single number failure groups as described in the instructions above.
With GPFS 3.5 TL2 (the base version for DR) a new failure group (FG) format called "topology vectors"
was introduced, which is used for the DR solution. A more detailed description of topology vectors
can be found in the GPFS 3.5 Advanced Administration Guide, chapter "GPFS File Placement Optimizer".
In short, the topology vector is a replacement for the old FGs, storing more information about the
infrastructure of the cluster. Topology vectors are used for NSDs, but as the same topology vector is used
for all disks of a server node it will be explained in the context of a server node.
In a standard DR cluster setup all nodes are grouped evenly into four FGs (five when using the Tiebreaker-
Node) with two FGs on every site.
A topology vector consists of three numbers separated by commas. The first of the three numbers is either
1 or 2 (for all the SAP HANA nodes) or 3 for the tiebreaker node. The second number is 0 (zero) for all
site A nodes and 1 for all site B nodes. The third number enumerates the nodes in each of the failure
groups starting from 1.
In a standard eight node DR-cluster (4 nodes per site) we would have these topology vectors:

Site                  Failure Group      Topology Vector   Node

Site A                Failure group 1    1,0,1             gpfsnode01 / hananode01
                      (1,0,x)            1,0,2             gpfsnode02 / hananode02
                      Failure group 2    2,0,1             gpfsnode03 / hananode03
                      (2,0,x)            2,0,2             gpfsnode04 / hananode04
Site B                Failure group 3    1,1,1             gpfsnode05 / hananode01
                      (1,1,x)            1,1,2             gpfsnode06 / hananode02
                      Failure group 4    2,1,1             gpfsnode07 / hananode03
                      (2,1,x)            2,1,2             gpfsnode08 / hananode04
Site C (tiebreaker)   Failure group 5    3,0,1             gpfsnode99
                      (3,0,x)

Table 75: Topology Vectors in an 8 node DR-cluster


C Quotas

C.1 Quota Calculation

Note
This section is only for information purposes. Please use the quota calculator in the next
section C.2: Quota Calculation Script on page 191.

Note
The calculation formulas are the same for GPFS and XFS, but XFS is only supported on
Single Nodes, so the number of active nodes is always 1 in this case.
The quota calculation is more complex than in previous appliance releases. A utility script is provided
to make the calculation easier.
In general the quota calculations are based on SAP’s recommendations for HANA 1.4 and later.
For HANA single nodes and HA-enabled clusters, quotas are set for HANA log files, HANA data volumes
and for the shared HANA data. In DR-enabled clusters a quota should be set only for SAP HANA’s log
files.
The formula for the quota calculation is
quota for logs = (# active nodes) x 1024 GB
quota for data = (# active nodes) x (RAM per node in GB) x 3 x (replication factor)
quota for shared = (available space) - (quota for logs) - (quota for data)

The number of active nodes needs explanation. For single nodes, this number is of course 1. For clusters
it is the count of all cluster nodes which are not dedicated standby nodes. A dedicated standby node
is a node which has no HANA instance running with a configured role of master/slave. Two examples:
• In an eight node cluster, only one HANA database is installed. The first six nodes are
installed as worker nodes, the last two are installed as standby nodes. This cluster clearly has
two dedicated standby nodes.
• Another eight node cluster has a HANA system ABC installed with the first seven nodes as workers
and the last node as a standby node. A second HANA system QA1 is installed with a worker node
on the last (eighth) node and a standby node on node seven. This cluster has no dedicated standby
node, as the eighth node is not "standby only": it is actually active for the QA1 system.
For DR the log quota is also calculated based on the number of active nodes; in this case, as only
one HANA cluster is allowed on the DR file system, it is solely the count of the worker nodes.
The replication factor should be 1 for single nodes, 2 for clusters and 3 for DR enabled clusters.
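As an illustrative example only (the node count and RAM size are assumptions, not a sizing
recommendation): for an HA cluster of four nodes with 1024 GB RAM each, of which three are active
workers and one is a dedicated standby, and a replication factor of 2, the formulas give:
quota for logs   = 3 x 1024 GB = 3072 GB
quota for data   = 3 x 1024 GB x 3 x 2 = 18432 GB
quota for shared = (available space) - 3072 GB - 18432 GB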
Manual calculation is not recommended. Please use the new saphana-quota-calculator.sh.

C.2 Quota Calculation Script

A script is available to ease the quota calculation. The standard installation uses this script to calculate
the quotas during installation, and the administrator can also call it to recalculate the quotas after a
topology change, e.g. the installation of additional HANA instances, a change of node roles, or shrinking
or growing the cluster.
Most values are read from the system or guessed. For a cluster the standard assumption is to have one
dedicated standby node. For a DR solution no reliable guess on the nodes can be made and the manual
override must be used.


The basic call is


# saphana-quota-calculator.sh

As a result it will give the calculated quotas and the commands to set them to the calculated result.
After reviewing these you can add the -a parameter to the call which will automatically set the quotas
as calculated.
In case you are running a cluster and the number of dedicated standbys is not one, use the parameter
-s <# standby> to set a specific number of standby hosts. 0 is also a valid value.
In the case of a DR enabled cluster, the guess for the active worker nodes will always be wrong. Please
also use the parameter -w <# workers> to set the number of nodes running HANA as active workers.
The number of workers and standbys should equal the number of nodes on a site.
Additional parameters are -r to get a more detailed report on the quota calculation and -c to verify
the currently set quotas (a deviation of 10% is allowed, which may be too inaccurate for larger clusters
with more than 8 nodes).
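A usage sketch combining the parameters described above (the node counts are examples only):
# saphana-quota-calculator.sh -r           # detailed report, quotas are not changed
# saphana-quota-calculator.sh -s 2         # cluster with two dedicated standby nodes
# saphana-quota-calculator.sh -s 2 -a      # same, and apply the calculated quotas
# saphana-quota-calculator.sh -w 3 -s 1    # DR cluster with three workers and one standby per site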


D Lenovo X6 Server MTM List & Model Overview


Starting with the support of the Intel Xeon IvyBridge EX family of processors, SAP has changed its naming
of the models. Previously, SAP named these "T-Shirt" sizes such as S, M, L, XL, etc. The new naming
convention is purely based on the amount of memory each predefined configuration should contain, for
example 128, 256, 512, etc. Each of these servers is orderable with the proper components to fulfill the
SAP pre-configured system sizes.
The following table shows the SAP HANA T-Shirt Sizes to Machine Type Model (MTM) code mapping.
The last x in the MTM is a placeholder for the region code the server was sold in, for example, a "U" for
the USA. While the Machine Type is 6241, the different Models are shown below.

Chassis  CPUs  Memory  Usage       Configuration  Possible Model

4U       2     128GB   Standalone  AC32S128S      6241-AC3, -H2x (1), -HZx (2), -HUx (3), -8Ax (4)
4U       2     256GB   Standalone  AC32S256S      6241-AC3, -H3x (1), -HYx (2), -EGY (2), -HTx (3), -EHY (3), -8Bx (4), -EKY (4)
4U       2     256GB   Scale-out   AC32S256C      6241-AC3
4U       2     384GB   Standalone  AC32S384S      6241-AC3
4U       2     512GB   Standalone  AC32S512S      6241-AC3, -H4x (1), -HXx (2), -HSx (3), -8Cx (4)
4U       2     512GB   Scale-out   AC32S512C      6241-AC3
4U       2     768GB   Standalone  AC32S768S      6241-AC3
4U       2     1024GB  Standalone  AC32S1024S     6241-AC3
4U       2     1536GB  Standalone  AC32S1536S     6241-AC3
4U       2     2048GB  Standalone  AC32S2048S     6241-AC3
4U       4     256GB   Standalone  AC34S256S      6241-AC3
4U       4     512GB   Standalone  AC34S512S      6241-AC3, -H5x (1), -HWx (2), -HRx (3), -8Dx (4)
4U       4     512GB   Scale-out   AC34S512C      6241-AC3
4U       4     768GB   Standalone  AC34S768S      6241-AC3
4U       4     1TB     Standalone  AC34S1024S     6241-AC3, -H6x (1), -HVx (2), -HQx (3), -8Fx (4)
4U       4     1TB     Scale-out   AC34S1024C     6241-AC3
4U       4     1.5TB   Standalone  AC34S1536S     6241-AC3
4U       4     1.5TB   Scale-out   AC32S1536C     6241-AC3
4U       4     2TB     Standalone  AC34S2048S     6241-AC3
4U       4     2TB     Scale-out   AC32S2048C     6241-AC3
4U       4     3TB     Standalone  AC34S3072S     6241-AC3
4U       4     4TB     Standalone  AC34S4096S     6241-AC3

Table 76: Lenovo MTM Mapping & Model Overview

(1) IvyBridge processors with DDR3 DIMMs
(2) Haswell processors with DDR3 DIMMs
(3) Haswell processors with DDR4 DIMMs
(4) Broadwell processors with DDR4 DIMMs


Chassis  CPUs  Memory  Usage       Configuration  Possible Model

8U       4     256GB   Standalone  AC44S256S      6241-AC4
8U       4     512GB   Standalone  AC44S512S      6241-AC4, -HBx (1), -HEx (2), -EIY (2), -HHx (3), -EJY (3), -8Gx (4), -ELY (4)
8U       4     512GB   Scale-out   AC44S512C      6241-AC4
8U       4     768GB   Standalone  AC44S768S      6241-AC4
8U       4     1TB     Standalone  AC44S1024S     6241-AC4, -HCx (1), -HFx (2), -HIx (3), -8Hx (4)
8U       4     1TB     Scale-out   AC44S1024C     6241-AC4
8U       4     1.5TB   Standalone  AC44S1536S     6241-AC4
8U       4     1.5TB   Scale-out   AC44S1024C     6241-AC4
8U       4     2TB     Standalone  AC44S2048S     6241-AC4
8U       4     2TB     Scale-out   AC44S2048C     6241-AC4
8U       4     3TB     Standalone  AC44S3072S     6241-AC4
8U       4     4TB     Standalone  AC44S4096S     6241-AC4
8U       8     512GB   Standalone  AC48S512S      6241-AC4
8U       8     512GB   Scale-out   AC48S512C      6241-AC4
8U       8     1TB     Standalone  AC481024S      6241-AC4
8U       8     1TB     Scale-out   AC481024C      6241-AC4
8U       8     1.5TB   Standalone  AC48S1536S     6241-AC4
8U       8     2TB     Standalone  AC48S2048S     6241-AC4, -HDx (1), -HGx (2), -HJx (3), -8Jx (4)
8U       8     2TB     Scale-out   AC48S2048C     6241-AC4
8U       8     3TB     Standalone  AC483072S      6241-AC4
8U       8     3TB     Scale-out   AC483072C      6241-AC4
8U       8     4TB     Standalone  AC48S4096S     6241-AC4
8U       8     4TB     Scale-out   AC48S4096C     6241-AC4
8U       8     6TB     Standalone  AC48S6144S     6241-AC4
8U       8     8TB     Standalone  AC48S8192S     6241-AC4

Table 77: Lenovo MTM Mapping & Model Overview

(1) IvyBridge processors with DDR3 DIMMs
(2) Haswell processors with DDR3 DIMMs
(3) Haswell processors with DDR4 DIMMs
(4) Broadwell processors with DDR4 DIMMs

The model numbers follow this schema:


1. AC3/AC4 describes the server chassis. AC3 servers are 4 rack unit sized servers for up to 4 CPU
books. AC4 servers are 8 rack unit sized servers for up to 8 CPU books.
2. 2S/4S/8S gives the number of installed CPU books and thus the number of populated CPU
sockets.
3. 128/256/... is the size of the installed RAM in GB.
4. S/C designates the intended usage, either S for Standalone/Single Node or C for Cluster/Scale-out
nodes.
These model numbers describe the current configuration of the server. A 6241-H2* is configured with 2
CPUs in a 4 socket chassis with 128GB RAM and will be recognized as an AC32S128S by the installation
and any installed scripts. When upgrading this machine with an additional 128GB of RAM, the installation
and the already installed scripts will show the model as AC32S256S, while the burned-in MTM will still show
6241-H2* or 6241-AC3.


E Frequently Asked Questions

Warning
These FAQ entries are only valid for certain appliance models and versions. Do not apply the
changes in this list until advised by either the support script or Lenovo support.
The support script saphana-support-lenovo.sh (or the deprecated version saphana-support-ibm.sh)
can detect various known problems in your appliance. In case such a problem is found, the support script
will give an FAQ entry number. Please follow only the instructions given in the particular entry. When
in doubt please contact Lenovo support via SAP’s OSS ticket system.
Information on how to run the support script can be found in the Lenovo SAP HANA Appliance Oper-
ations Guide 68 , section 2.3 Basic System Check. Please always use the latest support script, which may
detect new issues found after your appliance was installed. You can find the latest version attached to SAP
Note 1661146 – Lenovo/IBM Check Tool for SAP HANA appliances.

E.1 FAQ #1: SAP HANA Memory Limits

Running an In-Memory Database like SAP HANA makes free RAM a scarce resource, even on servers
with many terabytes of RAM. Besides the HANA database, the operating system, hardware and hardware
drivers, and 3rd party software (e.g. backup or monitoring solutions) consume a varying amount of
RAM during operation. It is the administrator’s duty to configure a reasonable memory limit for SAP
HANA, so that sufficient memory is available for all installed components. Lenovo cannot give memory
recommendations for any software that is not part of the standard installation.

E.1.1 Background

Memory management in the Linux operating system is a complex topic with many interdependencies and
tunables. Even simple questions like how much memory is free or available, or how much memory a
given piece of software occupies, have no simple answers. In general the overall available memory is lower than one
might expect. Of the physically installed memory, about 1.4% is not even visible to Linux: run the command
free -g and look at the total value (in GiB), e.g. 504GiB on a 512GiB server. This total value is also used
by HANA to calculate default memory limits, and SAP HANA Studio will also show this value. But even
these 504GiB are not fully usable, as another 1.4% is used by Linux for memory management ("mem_map"
is the keyword). So around 3% of the total physical memory is already used. Operating system parts
like the X11 Window System, Gnome and daemons like SSH, systemd, ntp and more each require a
small portion of memory, and the total consumption adds up. Each user session also requires memory
for the SSH connection and all programs run by the user. Hardware requires memory for drivers and
for I/O, but this usage may not be as visible as it is not associated with a process. The same applies to any
software I/O, as it requires buffers and caches. Two special consumers are GPFS (only in GPFS based
installations), which is currently limited to 4GB of RAM, and the Linux tunable vm.min_free_kbytes,
which makes the last 2GiB of memory unavailable for user space programs like HANA. For the OS and
standard software 2-4 GB of memory are needed.
If left unconfigured, each HANA instance will calculate a default limit. The current formula is 90% of
the first 64GiB of visible memory plus 97% of the remaining visible memory. For smaller servers and
virtual machines (< 512GB) this does not leave enough memory for the OS and standard software, causing
either swapping or Out Of Memory (OOM) situations, especially when further software is installed or I/O
operations like backups are running. Manually lowering the global allocation limit is recommended in
these scenarios. Also, if there is more than one HANA instance running at the same time, the global
allocation limit must be set on all instances, as each HANA instance does not take other HANA instances
into consideration; each instance will try to allocate the default amount of memory, which will lead
to extreme memory overcommitment.
68 SAP Note 1650046 (SAP Service Marketplace ID required)


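To put numbers on the default formula, using the 512 GiB server example from above (values are
approximate):
default limit = 0.90 x 64 GiB + 0.97 x (504 GiB - 64 GiB)
              = 57.6 GiB + 426.8 GiB
              = approx. 484 GiB
This leaves only roughly 20 GiB of the visible memory for the operating system, drivers, GPFS and any
additional software, which illustrates why a lower global allocation limit is recommended on smaller
servers.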

E.1.2 Recommendation

Recent versions of the Lenovo Solution can calculate a recommended value for the global allocation
limit. Run the command saphana-check-installation.sh -v -i memlimit. The output will contain a line
similar to "You can use up to 998123 MB RAM". The given value should be considered the highest
limit which can be set as the global allocation limit in HANA. When installing additional software, it is
up to the administrator to set a lower value to accommodate the memory requirements of the additional
software. When running multiple HANA instances at the same time, the sum of the individual global
allocation limits of the running HANA instances must be equal to or lower than the recommended value.
More information on the parameter global_allocation_limit can be found in the "HANA Administration
Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described
there.
In case of Out of Memory failures in Linux, setting a lower global allocation limit may be necessary. An
analysis of the tasks running at the time of the crash, like backup operations, SAP maintenance operations
or running HANA queries, may reveal additional memory requirements which must be factored into the
calculation.

E.1.3 Further Reading

This is a complex topic and for a better understanding we recommend reading these web documents:
• SAP HANA Administration Guide: Allocated Memory Pools and Allocation Limits
• SAP HANA Administration Guide: Parameters that Control Memory Consumption
• SAP Note 1557506 – Linux paging improvements
• SAP Note 1999997 – FAQ: SAP HANA Memory
• Linux Memory Management Wiki: Low on Memory
• Linux Memory Management Wiki: Where did my memory go?
• Linux Memory Management Wiki: Out of Memory
• Red Hat on min_free_kbytes (via archive.org)

E.2 FAQ #2: GPFS parameter readReplicaPolicy

Problem: Older cluster installations do not have the GPFS parameter "readReplicaPolicy" set to "local",
which may improve performance in certain cases. Newer cluster installations have this value set, and
single nodes are not affected by this parameter at all. It is recommended to configure this value.
Solution: Execute the following command on any cluster node at any time:
# mmchconfig readReplicaPolicy=local

This can be done during normal operation and the change becomes effective immediately for the whole
GPFS cluster and is persistent over reboots.
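To verify the setting afterwards, you can inspect the cluster configuration; no output means the parameter
has not been set explicitly:
# mmlsconfig | grep -i readReplicaPolicy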


E.3 FAQ #3: SAP HANA Memory Limit on XS sized Machines

Problem: For a general description of the SAP HANA memory limit see Appendix E.1: FAQ #1: SAP
HANA Memory Limits on page 195. XS sized servers have only 128GB RAM installed, of which even a
single SAP HANA system will use up to 93.5%, equaling 119GB (older revisions of HANA used 90%,
i.e. 115GB), if no lower memory limit is configured. This leaves too little memory for other processes,
which may trigger Out-Of-Memory situations causing crashes.
Solution: Please configure the global allocation limit for the installed SAP HANA system to a more
appropriate value. The recommended value is 112GB if the GPFS page pool size is set to 4GB (see FAQ
#12: GPFS pagepool should be set to 4GB) and 100GB or less if the GPFS page pool is set to 16GB. If
multiple systems are running at the same time, please calculate the total memory allocation for HANA
so the sum does not exceed the recommended value. Please use only the physically installed memory for
your calculation.
More information on the parameter global_allocation_limit can be found in the "HANA Administration
Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described
there.
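For orientation only: the global allocation limit corresponds to the parameter global_allocation_limit
in the memorymanager section of global.ini and is specified in MB. A sketch for the recommended
112GB limit (112 x 1024 = 114688 MB); follow the HANA Administration Guide for the correct way to
set it, e.g. via SAP HANA Studio:
[memorymanager]
global_allocation_limit = 114688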

E.4 FAQ #4: Overlapping GPFS NSDs

Problem: Under some rare conditions, single node SSD or XS/S gen 2 models may be installed with
overlapping NSDs. Overlapping means that the whole drive (e.g. /dev/sdb) as well as a partition on the
same device (e.g. /dev/sdb2) may be configured as NSDs in GPFS. As GPFS writes data to both
NSDs, each NSD will overwrite and corrupt data on the other NSD. At some point the whole-device
NSD will overwrite the partition table, the partition NSD is lost and GPFS will fail. This is
the most common situation where the problem will be noticed.
Consider any data stored in /sapmnt to be corrupted, even if the file system check finds no errors.
Solution: The only solution is to reinstall the appliance from scratch. To prevent installing with the
same error again, the single node installation must be completed in phase 2 of the guided installation.
Do not deselect "Single Node Installation".

E.5 FAQ #5: Missing RPMs

Problem: An upgrade of SAP HANA or another SAP software component fails because of missing
dependencies. As some of these package dependencies were added by SAP HANA after your system was
initially installed, you may install those missing packages and still receive full support of the Lenovo
Systems solution. If you no longer have the SLES for SAP DVD or RHEL DVD (depending on which
OS you are using) that was delivered with your system, you may obtain it again from the SUSE
Customer Center or Red Hat, respectively.
Solution: Ensure that the packages listed below are installed on your appliance.
• SUSE Linux Enterprise Server for SAP Applications
– libuuid
– gtk2 - Added for HANA Developer Studio
– java-1_6_0-ibm - Added for HANA Developer Studio
– libicu - Added since revision 48 (SPS04)
– mozilla-xulrunner192-* - Added for HANA Developer Studio
– ntp


– sudo
– syslog-ng
– tcsh
– libssh2-1 - Added since revision 53 (SPS05)
– expect - Added since revision 53 (SPS05)
– autoyast2-installation - Added since revision 53 (SPS05)
– yast2-ncurses - Added since revision 53 (SPS05)
• Red Hat Enterprise Linux: At the moment there are no known packages that have to be installed
additionally.
Missing packages can be installed from the SLES for SAP DVD shipped with your appliance using the
following instructions. It is possible to add the DVD that was included in your appliance installation as a
repository and install the necessary RPM packages from there. First check whether the SUSE Linux
Enterprise Server DVD is already added as a repository:
# zypper repos

# | Alias | Name | Enabled | Refresh


--+----------------+----------------+---------+--------
1 | SUSE-Linux-... | SUSE-Linux-... | Yes | No

If it doesn’t exist, please place the DVD in the drive (or add it via the Virtual Media Manager) and add
it as a repository. This example uses the SLES for SAP 11 SP1 media.
# zypper addrepo --type yast2 --gpgcheck --no-keep-packages\
--refresh --check dvd:///?devices=/dev/sr1 \
"SUSE-Linux-Enterprise-Server-11-SP1_11.1.1"

This is a changeable read-only media (CD/DVD), disabling autorefresh.


Adding repository 'SLES-for-SAP-Applications 11.1.1' [done]
Repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1'
successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: dvd:///?devices=/dev/sr1

Reading data from 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1'


media
Retrieving repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1'
metadata [done]
Building repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1'
cache [done]

The drawback of this solution is that you always have to insert the DVD into the DVD drive or mount it
via VMM or KVM. Another possibility is to copy the DVD to a local repository and add this repository
to zypper. First find out if the local repository is a DVD repository:
# zypper lr -u
# | Alias | Name ←-
,→ | Enabled | Refresh | URI


--+--------------------------------------------------+-----------------------------------------------
,→
1 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138 | SUSE-Linux-Enterprise-Server←-
,→-11-SP3 11.3.3-1.138 | Yes | No | cd:///?devices=/dev/sr0

Copy the DVD to a local directory:


# cp -r /media/SLES-11-SP3-DVD*/* /var/tmp/install/sles11/ISO/

Register the directory as a repository with zypper:


# zypper addrepo --type yast2 --gpgcheck --no-keep-packages -f file:///var/tmp/←-
,→install/sles11/ISO/ "SUSE-Linux-Enterprise-Server-11-SP3"
Adding repository 'SUSE-Linux-Enterprise-Server-11-SP3' [done]
Repository 'SUSE-Linux-Enterprise-Server-11-SP3' successfully added
Enabled: Yes
Autorefresh: Yes
GPG check: Yes
URI: file:/var/tmp/install/sles11/ISO/

For verification you can list the repositories again. You should see output similar to this:
# zypper lr -u
# | Alias | Name ←-
,→ | Enabled | Refresh | URI
--+--------------------------------------------------+-----------------------------------------------
,→
1 | SUSE-Linux-Enterprise-Server-11-SP3 | SUSE-Linux-Enterprise-Server←-
,→-11-SP3 | Yes | Yes | file:/var/tmp/install/sles11/ISO/
2 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138 | SUSE-Linux-Enterprise-Server←-
,→-11-SP3 11.3.3-1.138 | Yes | No | cd:///?devices=/dev/sr0

Then search to ensure that the package can be found. This example searches for libssh.
# zypper search libssh

Loading repository data...


Reading installed packages...

S | Name | Summary | Type


--+-----------+-------------------------------------+--------
| libssh2-1 | A library implementing the SSH2 ... | package

Then install the package:


# zypper install libssh2-1

Loading repository data...


Reading installed packages...
Resolving package dependencies...
:
:
1 new package to install.
Overall download size: 55.0 KiB. After the operation, additional 144.0
KiB will be used.
Continue? [y/n/?] (y):


Retrieving package libssh2-1-0.19.0+20080814-2.16.1.x86_64 (1/1), 55.0


KiB (144.0 KiB unpacked)
Retrieving: libssh2-1-0.19.0+20080814-2.16.1.x86_64.rpm [done]
Installing: libssh2-1-0.19.0+20080814-2.16.1 [done]

E.6 FAQ #6: CPU Governor set to ondemand

Problem: Linux uses a technology for power saving called "CPU governors" to control CPU throttling
and power consumption. By default Linux uses the governor "ondemand", which dynamically throttles
CPUs up and down depending on CPU load. SAP advises using the governor "performance", as the
ondemand governor will impact HANA performance due to too slow CPU upscaling.
Since appliance version 1.5.53-5 (or simply SLES for SAP 11 SP2 based appliances) we changed the CPU
governor to performance. In case of an upgrade you also need to change the governor setting. If you are
still running SLES for SAP 11 SP1 based appliances, you may also change this setting to trade power
saving for performance. This performance boost was not quantified by the development team.
Solution: On all nodes append the following lines to the file /etc/rc.d/boot.local:
bios_vendor=$(/usr/sbin/dmidecode -s bios-vendor)
# Phoenix Technologies LTD means we are running in a VM and governors are not ←-
,→available
if [ $? -eq 0 -a ! -z "${bios_vendor}" -a "${bios_vendor}" != "Phoenix Technologies ←-
,→LTD" ]; then
/sbin/modprobe acpi_cpufreq
for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
do
echo performance > $i
done
fi

The setting will change on the next reboot. You can also safely change the governor settings immediately
by executing the same lines at the shell. Copy & paste all the lines at once, or type them one by one.
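To check which governor is currently active on all CPUs, a quick sketch:
# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c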

E.7 FAQ #7: No disk space left bug (Bug IV33610)

Problem: Starting HANA fails due to insufficient disk space. The following error message will be found
in indexserver or nameserver trace:
Error during asynchronous file transfer, rc=28: No space left on device.

Using the command ’df’ will show that there is still disk space left. This problem is due to a bug in
GPFS versions between 3.4.0-12 and 3.4.0-20 which causes GPFS to step into a read-only mode. See
SAP Note 1846872 – "No space left on device" error reported from HANA.
Solution: Make sure to shut down all HANA nodes by issuing the shutdown command from the studio, or
log in with ssh as the sidadm user. Then run:
HDB info

to see if there are any HANA processes running. If there are, run


kill -9 proc_pid


to shut them down, one by one.


Download and apply GPFS version 3.4.0.23. Refer to the section 13.5: Updating & Upgrading GPFS on
page 149 for information about how to upgrade GPFS.
Note
It is recommended that you consider upgrading your GPFS version from 3.4 to 3.5, as support
for GPFS 3.4 has been discontinued by IBM.
SAP highly recommends that you run the uniqueChecker.py script after patching GPFS to make sure that
your database is consistent.

E.8 FAQ #8: Setting C-States

Problem: Poor performance of SAP HANA due to Intel processor settings.


Solution: As recommended in the SAP Notes 1824819 – SAP HANA DB: Recommended OS settings for
SLES 11 / SLES for SAP Applications 11 SP2 and 1954788 – SAP HANA DB: Recommended OS settings
for SLES 11 / SLES for SAP Applications 11 SP3, and additionally described in the IBM RETAIN Tip
H20700069 - Linux Ignores C-State Settings in Unified Extensible Firmware Interface (UEFI), the control
(’C’) states of the Intel processor should be turned off for the most reliable performance of SAP HANA.
By default C-States are enabled in the UEFI because we set the processor to Custom Mode. With
C-States turned on you might see performance degradation with SAP HANA. We recommend turning
off the processor C-States using the Linux kernel boot parameter:
processor.max_cstate=0

The Linux kernel used by SAP HANA includes a built-in driver (’intel_idle’) which will ignore any
C-State limits imposed by Basic Input/Output System (BIOS)/Unified Extensible Firmware Interface
(UEFI) when it is active.
This driver may cause issues by enabling C-States even though they are disabled in the BIOS or UEFI.
This can cause minor latency as the CPUs transition out of a C-State and into a running state. This is
not the preferred state for the SAP HANA appliance and must be changed.
To prevent the ’intel_idle’ driver from ignoring BIOS or UEFI settings for C-States, add the following
start parameter to the kernel’s boot loader configuration file:
intel_idle.max_cstate=0

Append both parameters to the end of the kernel command line of your boot loader (/boot/grub/menu.lst)
and reboot the server.
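For illustration only, a resulting kernel line in /boot/grub/menu.lst might end like this (the other
parameters are placeholders and will differ on your system):
kernel /boot/vmlinuz-<version> root=<root-device> ... transparent_hugepage=never intel_idle.max_cstate=0 processor.max_cstate=0 showopts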
Warning
For clustered configurations, this change needs to be done on each server of the cluster. Only
make this change when all servers can be rebooted at once, or when you have an active stand-
by node to take over the rebooting systems HANA services. Do not try to reboot more servers
than stand-by nodes are active
For further information please refer to the SUSE knowledgebase article.
69 http://www.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5091901


E.9 FAQ #9: ServeRAID M5120 RAID Adapter FW Issues

Problem: After the initial release of the new X6-based servers (x3850 X6, x3950 X6) a serious issue
in various firmware versions of the ServeRAID M5120 RAID adapter has been found which can trigger
continuous controller resets. This happens only under heavy load and each controller reset may cause
service interruption. Certain firmware versions do not exhibit this issue, but these versions show severely
degraded I/O performance. Only servers using the ServeRAID M5120 controller for attaching an external
SAS enclosure are affected.
Future appliance versions will have the workaround for the controller reset issue preinstalled, while the
performance issue can only be solved by an upgrade or downgrade to an unaffected firmware version.
Non-exhaustive list of known affected firmware versions:

Issue Affected versions


Controller resets 23.7.1-0010, 23.12.0-0011, 23.12.0-0016, 23.12.0-0019
Lowered Performance 23.16.0-0018, 23.16.0-0027

Table 78: ServeRAID M5120 Firmware Issues

Solution: The current recommendation is to use firmware version 23.22.0-0024 (or newer, if listed as
stable by Lenovo SAP HANA Team) and to change the following configuration value in the installed OS.
Both can be done after installation.

E.9.1 Changing Queue Depth

On the installed appliance, please edit /etc/init.d/ibm-saphana and change the lines
function start() {
QUEUESIZE=1024
for i in /sys/block/sd* ; do
if [ -d $i ]; then
echo $QUEUESIZE > $i/queue/nr_requests
fi
done

to this version (if not already set)


function start() {
QUEUESIZE=1024
QUEUEDEPTH=250
for i in /sys/block/sd* ; do
if [ -d $i ]; then
echo $QUEUESIZE > $i/queue/nr_requests
echo $QUEUEDEPTH > $i/device/queue_depth
fi
done

by inserting lines 3 & 7. The new settings will be set on the next reboot or by calling
# service ibm-saphana start

Please ignore any output.


E.9.2 Use recommended Firmware version

1. Check which FW Package Build is installed on all M5120 RAID controllers:


# /opt/MegaRAID/storcli/storcli64 -AdpAllInfo -aAll | grep 'M5120' -B 5 -A 3

Adapter #1

==============================================================================
Versions
================
Product Name : ServeRAID M5120
Serial No : xxxxxxxxxx
FW Package Build: 23.22.0-0024

Currently, version 23.22.0-0024 is recommended. Download the 23.22.0-0024 FW package for
ServeRAID 5100 SAS/SATA adapters via Lenovo support http://support.lenovo.com/us/en/.
2. Make the downloaded file executable and then run it:
chmod +x ibm_fw_sraidmr_5100-23.22.0-0024_linux_32-64.bin
./ibm_fw_sraidmr_5100-23.22.0-0024_linux_32-64.bin -s
3. Please reboot the server after updating all M5120 controllers.
4. After reboot: Check if the queue depth is set to 250 for all devices on M5120 RAID controller:
for dev in $(lsscsi |grep -i m5120 |grep -E -o '/dev/sd[a-z]+'| cut -d '/' -f3)←-
,→ ; do cat /sys/block/${dev}/device/queue_depth ; done

E.10 FAQ #10: GPFS Parameter enableLinuxReplicatedAIO

With GPFS version 3.5.0-13 the new GPFS parameter enableLinuxReplicatedAIO was introduced.
Please note the following:
• Single node installations: Single node installations are not affected by this parameter. It can
be set to "yes" or "no".
• Cluster installations:
– GPFS 3.5.0-13 - 3.5.0-15: The parameter must be set to "no". When upgrading to GPFS
3.5.0-16 or higher you have to manually set the value to "yes".
Warning
Instead of setting the parameter to "no" we highly recommend to upgrade GPFS
to 3.5.0-16 or higher.
– GPFS 3.5.0-16 or higher: The parameter must be set to "yes".
• DR cluster installations: The parameter must be set to "yes".
The support script (saphana-support-ibm.sh) checks if the parameter is set correctly. If it is not set
correctly, adjust the setting with the command that applies to your installation:
# mmchconfig enableLinuxReplicatedAIO=no
# mmchconfig enableLinuxReplicatedAIO=yes


E.11 FAQ #11: GPFS NSD on Devices with GPT Labels

Problem: On some very rare occasions GPFS NSDs may be created on devices with a GUID Partition
Table (GPT). When the NSD is created, parts of the primary GPT header are overwritten. Newer UEFI
firmware releases offer an option to repair damaged GPTs, and if it is activated the UEFI may try to recover
the primary GPT from the backup copy during boot-up. This will destroy the NSD header, and in the case
of single nodes this leads to the loss of all data in the GPFS filesystem.
For this issue to occur, the following prerequisites must all apply:
• A storage device used as a NSD in a GPFS filesystem must have a GPT before the NSD was
created. This can only happen if the drive or RAID array was used before and has not been wiped
or reassembled. As part of the HANA appliance, GPT labels on non-OS disks are only created as
part of the mixed eX5/X6 clusters. If a system was only used for the HANA appliance, this cannot
occur unless there was a misconfiguration.
• GPFS 3.4 or GPFS 3.5 was used when the NSD and the filesystem was created, either during
installation or manually after installation, regardless of the current running GPFS version. GPFS
4.1 uses protective partition tables to prevent this issue when creating new NSDs.
• An UEFI version with GPT recovery functionality is either installed or an upgrade to such a version
is planned. Further risk comes from the UEFI upgrade as these new UEFI versions will enable the
GPT recovery by default.
The probability for this combination is very low.
Solution: Deactivate the automatic GPT recovery setting in UEFI.
When the ASU tool is installed, run the command
# /opt/lenovo/toolscenter/asu/asu64 show | grep -i gpt

If the Lenovo Systems Solution for SAP HANA Platform Edition was installed with an ISO image below
version 1.9.96-13, the ASU tool will reside in directory: /opt/ibm/toolscenter/asu
The setting has various names, but any variable named GPT and Recovery should be set to "None". If it
is set to "Automatic" do not reboot the system. If there is no such setting, do not upgrade the UEFI
firmware until the GPTs have been cleared.
Use the installed ASU tool to change the GPT recovery parameter to "None" and reboot the system
afterwards.
Assuming that "asu64 show | grep -i gpt" returned "DiskGPTRecovery.DiskGPTRecovery=Automatic"
the command would be:
# /opt/lenovo/toolscenter/asu/asu64 set DiskGPTRecovery.DiskGPTRecovery None

As a second option you may download and install the ASU tool on another server and modify the UEFI settings
via remote IMM access. Please download the ASU tool via https://www-947.ibm.com/support/entry/
portal/docdisplay?lndocid=lnvo-asu and consult the ASU documentation for further details.
Or boot into UEFI and complete the following steps:
1. Reboot the server.
2. When the prompt <F1> Setup is displayed, press F1 .
3. From the setup utility main menu, select
System Settings → Recover and RAS → Disk GPT Recovery .

4. Change Disk GPT Recover to <None>.


5. Exit and save settings.


E.12 FAQ #12: GPFS pagepool should be set to 4GB

Problem: GPFS is configured to use 16GB RAM for its so called pagepool. Recent tests showed
that the size of this pagepool can be safely reduced to 4GB which will yield 12GB of memory for other
running processes. Therefore it is recommended to change this parameter on all appliance installations
and versions. Updated versions of the support script will warn if the pagepool size is not 4GB and will
refer to this FAQ entry.
Solution: Please change the pagepool size to 4GB. Execute
# mmchconfig pagepool=4G

to change the setting cluster-wide. This means this command needs to be run only once on Single Node
and clustered installation.
The pagepool is allocated during the startup of GPFS, so a GPFS restart is required to activate the new
setting. Please stop HANA and any processes that access GPFS filesystems before restarting GPFS. To
restart GPFS execute
# mmshutdown
# mmstartup

In clusters all nodes need to be restarted. You can do this one node at a time or restart all nodes at
once by adding the parameter -a to both commands. In the latter case please make sure no program is
accessing GPFS filesystems on any node.
To verify the configured pagepool size run
# mmlsconfig | grep pagepool

To verify the current active pagepool size run


# mmdiag --config

and search for the pagepool line. This value is shown in bytes.

E.13 FAQ #13: Limit Page Cache

Warning
Please change this setting only when instructed by Lenovo, SAP or Novell support.
This entry has been removed. If the Lenovo support script pointed you to this entry, please update the
support script.

E.14 FAQ #14: restripeOnDiskFailure and start-disks-on-startup

GPFS 3.5 and higher come with the new parameter restripeOnDiskFailure. The GPFS callback script
start-disks-on-startup automatically installed on the Lenovo Solution is superseded by this parameter:
IBM GPFS NSDs are started automatically on startup when restripeOnDiskFailure is activated.
On DR cluster installations, neither the callback script nor restripeOnDiskFailure should be activated.
Solution: To enable the new parameter on all nodes in the cluster execute:
# mmchconfig restripeOnDiskFailure=yes -N all

To remove the now unnecessary callback script start-disks-on-startup execute:


# mmdelcallback start-disks-on-startup

E.15 FAQ #15: Rapid repair on GPFS 4.1

"Rapid repair" is a new functionality introduced in IBM GPFS 4.1, which enables replication on block
level. As a result replication time is reduced considerably.
If you are running GPFS 4.1.0 to including GPFS 4.1.1-1:
• It is unsafe to have rapid repair enabled!
• Upgrade to GPFS 4.1.1-2 or higher as soon as possible.
• If an upgrade is not possible at the moment, disable rapid repair temporarily until you upgraded
to GPFS 4.1.1-2. See procedure below.
If you are running GPFS 4.1.1-2 or higher:
• It is safe to enable rapid repair.
• Rapid repair brings performance improvements. Enable it by following the procedure below.
Before enabling or disabling rapid repair, SAP HANA must be stopped and all GPFS filesystems un-
mounted. There must not be any filesystem access while changing this setting!
# mmdsh service sapinit stop # Stop HANA on all nodes
# mmdsh killall hdbrsutil # Stop this process on all nodes
# mmumount all -a # Unmount all GPFS filesystems on all nodes

If the mmumount command fails, there are still processes accessing the shared filesystem: stop them,
then try unmounting the filesystem again.
For enabling rapid repair please use this command (where fs is e.g. sapmntdata):
# mmchfs <fs> --rapid-repair

For disabling please use this command:


# mmchfs <fs> --norapid-repair

After this you can mount the GPFS filesystem and start HANA again:
# mmmount all -a

E.16 FAQ #16: Parameter changes for performance improvements

With release 1.10.102-14 some parameters were changed to improve the performance. These changes
should also be implemented on appliances that were set up with older installation media.
1. sysctl parameter vm.min_free_kbytes:
Add the line vm.min_free_kbytes = 2097152 to file /etc/sysctl.conf. Then reload the sysctl
settings via:
# sysctl -e -p

2. IBM GPFS log file size (only applicable on GPFS based installations):
Update GPFS to at least version 4.1.1-2, then run the following command to increase the log file
size to 512MB:


# mmchfs sapmntdata -L 512M

If your GPFS filesystem is called differently, replace sapmntdata by the correct name.
A restart of the GPFS daemon on every node in the GPFS cluster is mandatory to apply the
changes.
Note
Changing the log file size may fail with a message containing text similar to the
maximum possible log file size is 472907776. The maximum log file size is calculated
during filesystem creation and cannot be changed later.
If this happens, use the value given as the maximum, e.g. mmchfs sapmntdata
-L 472907776. To silence the support script check, run echo gpfs_logfile >>
/etc/lenovo/supportscript_check_blacklist afterwards.
3. IBM GPFS ignorePrefetchLUNCount parameter (only applicable on GPFS based installations):
Update GPFS to at least version 4.1.1-2, then run the command to enable the parameter:
# mmchconfig ignorePrefetchLUNCount=yes

E.17 FAQ #17: GPFS 4.1.1-3 behaviour change

Problem: This entry is only valid for DR-enabled clusters with a dedicated quorum node. The support
script will issue a warning on all these setups regardless of the installed GPFS version. Please blacklist
the particular check to silence the warning.
In GPFS version 4.1.1-3 the cluster manager appointment behaviour in split-brain situations changed. In
GPFS version 4.1.1-2 and earlier the cluster manager node must be located at the passive/secondary
site, while starting with GPFS version 4.1.1-3 the active/primary site must contain the cluster manager
node. Customers updating from pre-4.1.1-3 versions must relocate the cluster manager when upgrading
GPFS to 4.1.1-3 or later.
Solution: When upgrading to GPFS 4.1.1-3 appoint a quorum node on the primary site as the cluster
manager. This is a one time change and can be done at any time before, during or after the GPFS
upgrade and will not interrupt normal operation.
Verify the location of the cluster manager:
# mmlsmgr

and set the cluster manager to any node on the primary site that is designated as a quorum node. To get
a list of nodes, execute
# mmlscluster

To change the cluster manager node, run


# mmchmgr -c <node>
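A complete relocation sequence could look like the following sketch, where hananode01 is a placeholder
for a quorum node on the primary site; substitute the real node name taken from the mmlscluster output:
# mmlsmgr # Show the current cluster manager
# mmlscluster # List all nodes and their designations
# mmchmgr -c hananode01 # Appoint a primary-site quorum node as cluster manager
# mmlsmgr # Verify the change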

The instructions in this guide have been updated to reflect the behaviour change.
To silence the warning, execute
echo check_dr_gpfs_4_1_1_3 >> /etc/lenovo/supportscript_check_blacklist


E.18 FAQ #18: Setting the HANA Parameters

Problem: You have upgraded SAP HANA to version 10 or later, or your SAP HANA system of version 10
or later was installed with an older image and still uses the previously recommended values for the HANA
parameters. Make sure the HANA parameters are set to the recommended values for appropriate file I/O
performance. The recommended values for SAP HANA version 10 and later are:
• async_read_submit=on
• async_write_submit_active=on
• async_write_submit_blocks=all
For RHEL 7.2 the following additional parameters are needed:
• size_kernel_io_queue=2048
• max_parallel_io_requests=512
Solution: Log in to a HANA server as user <sid>adm and run the following commands:
hdbparam --paramget fileio.async_read_submit
hdbparam --paramget fileio.async_write_submit_active
hdbparam --paramget fileio.async_write_submit_blocks

RHEL 7.2:
hdbparam --paramget fileio.size_kernel_io_queue
hdbparam --paramget fileio.max_parallel_io_requests

If the values returned by these commands differ from the recommended values you can set the parameters
with the following commands:
hdbparam --paramset fileio.async_read_submit=on
hdbparam --paramset fileio.async_write_submit_active=on
hdbparam --paramset fileio.async_write_submit_blocks=all

RHEL 7.2:
hdbparam --paramset fileio.size_kernel_io_queue=2048
hdbparam --paramset fileio.max_parallel_io_requests=512

An appliance reboot or HANA restart is not required.
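To check all of the above parameters in one step, a small loop can be used (a convenience sketch, run as
user <sid>adm; omit the last two parameters on releases other than RHEL 7.2):
for p in async_read_submit async_write_submit_active async_write_submit_blocks \
         size_kernel_io_queue max_parallel_io_requests; do
    hdbparam --paramget fileio.$p
done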

E.19 FAQ #19: Performance Settings

Please review the following configuration settings if the support script indicates it:
1. Change Processor C-State Boot parameter
This disables the use of some processor C-states; C-states can reduce power consumption, but they
also lower performance. This boot parameter should not have any effect on Lenovo solutions, as
restricting the processor C-states is already done in other settings. However, SAP requires this
parameter to be set at boot.
(a) SLES 11 based systems with ELILO:
Change line 12 in /etc/elilo.conf from
append = "resume=/dev/sda3 splash=silent transparent_hugepage=never ←-
,→intel_idle.max_cstate=0 showopts "


to
append = "resume=/dev/sda3 splash=silent transparent_hugepage=never ←-
,→intel_idle.max_cstate=0 processor.max_cstate=0 showopts "

(b) SLES 12 based systems:
At the moment there are no known updates to the kernel command line for SLES 12 systems.
(c) RHEL 6 based systems with Grub:
Change line 17 in /boot/efi/efi/redhat/grub.conf, e.g. from
kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 ro root=UUID=3d420911-eef8-46de-b019-aff9d6e7d36a rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM intel_idle.max_cstate=0 transparent_hugepage=never crashkernel=auto rhgb quiet rhgb quiet

to
kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 ro root=UUID=3d420911-eef8-46de-b019-aff9d6e7d36a rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM intel_idle.max_cstate=0 processor.max_cstate=0 transparent_hugepage=never crashkernel=auto rhgb quiet rhgb quiet
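Independent of the boot loader, the parameters the running kernel was actually booted with can be
verified after the next reboot:
cat /proc/cmdline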

2. TCP Window Adjustment


These settings adjust the network receive and transmit buffers for all connections in the OS. These
settings are raised from their defaults in order to increase performance on scale-out systems. To
have these changes applied on boot, create the file /etc/init.d/after.local with the following
lines:
#!/bin/bash
sysctl -w net.ipv4.tcp_rmem="8388608 8388608 8388608"
sysctl -w net.ipv4.tcp_wmem="8388608 8388608 8388608"

Make the file executable:


chmod 755 /etc/init.d/after.local

Lines 17-18 in /etc/sysctl.conf should be changed from


net.ipv4.tcp_rmem=4096 262144 8388608
net.ipv4.tcp_wmem=4096 262144 8388608

to
net.ipv4.tcp_rmem=8388608 8388608 8388608
net.ipv4.tcp_wmem=8388608 8388608 8388608

To temporarily apply the changes immediately without a reboot, run the following commands:
sysctl -w net.ipv4.tcp_rmem="8388608 8388608 8388608"
sysctl -w net.ipv4.tcp_wmem="8388608 8388608 8388608"
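To confirm the values currently in effect:
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem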


3. Queue-Size
To apply this configuration on boot, you can edit the /etc/init.d/lenovo-saphana (formerly
/etc/init.d/ibm-saphana) file installed with the machine and insert the change into this script.
Change line 25 to:
QUEUESIZE=16384

To temporarily apply the setting immediately without a reboot, run the following command for each
disk entry (sda, sdb, etc.) in /sys/block/:
echo 16384 > /sys/block/sda/queue/nr_requests
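To apply the value to all disks at once, the same loop pattern as in the lenovo-saphana script can be
reused (a sketch; it assumes every /sys/block/sd* entry is a disk that should be changed):
for i in /sys/block/sd* ; do
    if [ -d $i ]; then
        echo 16384 > $i/queue/nr_requests
    fi
done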

4. Linux I/O Scheduler Adjustment


The Linux I/O scheduler should be changed from the default mode (CFQ, or Completely Fair
Queuing) to the noop mode. This algorithm change increases I/O performance on SAP HANA.
To apply this configuration on boot, you can edit the /etc/init.d/lenovo-saphana (formerly
/etc/init.d/ibm-saphana) file installed with the machine and insert the scheduler change into this
script. Insert at line 30:
echo noop > ${i}/queue/scheduler

Before the change, lines 26-31 look like:


for i in /sys/block/sd* ; do
if [ -d $i ]; then
echo $QUEUESIZE > $i/queue/nr_requests
echo $QUEUEDEPTH > $i/device/queuedepth
fi
done

Afterwards, lines 26-32 look like:


for i in /sys/block/sd* ; do
if [ -d $i ]; then
echo $QUEUESIZE > $i/queue/nr_requests
echo $QUEUEDEPTH > $i/device/queuedepth
echo noop > ${i}/queue/scheduler
fi
done

To temporarily apply the setting immediately without a reboot, run the following command for each
disk entry (sda, sdb, etc.) in /sys/block/:
echo noop > /sys/block/sda/queue/scheduler
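Again, a loop can apply the scheduler change to all disks at once; reading the scheduler file afterwards
shows the active scheduler in square brackets (a sketch under the same assumption as above):
for i in /sys/block/sd* ; do
    if [ -d $i ]; then
        echo noop > $i/queue/scheduler
    fi
done
cat /sys/block/sda/queue/scheduler # The active scheduler is shown in brackets, e.g. [noop]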

5. Add the line vm.min_free_kbytes = 2097152 to file /etc/sysctl.conf. Then reload the sysctl
settings via:
# sysctl -e -p


E.20 FAQ #20: Disks mounted by "ID"

Problem: Storage devices like hard disk drives and SSDs are usually addressed by their Linux device
name, e.g. /dev/sda; partitions on these devices are named like /dev/sda1. These device names are
rather volatile and are assigned at boot time in the order of detection, so it is not guaranteed that
the operating system disk is always /dev/sda. As a more stable alternative, the /dev/disk/by-id/
path was chosen, which is based upon the subsystem (e.g. scsi), the serial number of the storage device (e.g.
3600605b002f90b6019793f194ecc4685) and the partition number (e.g. part3). The full path in this example is
/dev/disk/by-id/scsi-3600605b002f90b6019793f194ecc4685-part2 and is a symbolic link pointing
to /dev/sda2, automatically created and updated by Linux. Recent experience has shown that this method
is not as stable as thought: certain events and actions, like the replacement of a failed RAID controller,
may change the generated serial number of RAID arrays and render the operating system
unbootable, as the OS partitions can no longer be identified during boot.
The safest way to address the disk partitions during boot is to use the UUID70 of the filesystems stored
in these partitions. This filesystem ID number is independent of the RAID controller and the configured
RAID array. In contrast to the serial-number-derived identification, the RAID controller can be replaced
and the RAID array can be reimported as a foreign configuration; this allows moving the drives, or even
the complete storage book, to a different server, and the drive may also be imaged to a new drive.
The most important reason to use the new UUID naming convention is the ability to replace a failed
RAID controller without having to reinstall the operating system, minimizing server downtime.
Servers installed with later versions of the appliance installation software already use the UUID naming
convention, while most existing systems still use the old naming scheme. Changing the configuration files to
the new naming is highly recommended, but not necessary for normal operation. It is only a safety
measure in case a RAID controller must be replaced.
Solution: The following configuration files must be edited, if they exist:
• /etc/fstab
• /boot/efi/efi/SuSE/elilo.conf
• /etc/elilo.conf
• /boot/efi/EFI/redhat/grub.cfg
• /etc/grub2-efi.cfg
• /boot/grub/menu.lst
Not all files will exist on every server; ignore any that do not exist. Edit each file, look for paths
like /dev/disk/by-id/scsi-3600605b002f90b6019793f194ecc4685-part2, and replace all occurrences
with the corresponding UUID entry formatted like UUID=e728a864-a12a-4546-88b2-9f3526969787, e.g.
replace
/dev/disk/by-id/scsi-3600605b002f90b6019793f194ecc4685-part2 / ext3 acl,user_xattr 1 1

in the /etc/fstab with


UUID=4c5877a0-66cd-41fa-8a62-f44f0efd9850 / ext3 acl,user_xattr 1 1

and in /boot/grub/menu.lst a line like


kernel /boot/vmlinuz-3.0.13-0.27-default root=/dev/disk/by-id/scsi-3600605b00302af8016e25bfd2f16a349-part1 resume=/dev/sda1 splash=silent intel_idle.max_cstate=0 instmode=cd saphanapermitted=1 crashkernel=256M-:128M showopts vga=0x314

must be changed to
kernel /boot/vmlinuz-3.0.13-0.27-default root=UUID=4c5877a0-66cd-41fa-8a62-f44f0efd9850 resume=/dev/sda1 splash=silent intel_idle.max_cstate=0 instmode=cd saphanapermitted=1 crashkernel=256M-:128M showopts vga=0x314

70 Universally Unique Identifier


The easiest way to obtain the required partition UUIDs is to use the tool blkid on the old
/dev/disk/by-id/ path:
# blkid -o value -s UUID /dev/disk/by-id/scsi-3600605b002f90b6019793f194ecc4685-part3
9c4fe522-18e7-4cff-b2ab-3c341bfc84bb

Please note that the values shown for the EFI partitions are not valid UUIDs, but they still work for
identifying the correct partitions.
Instead of manually changing the UUIDs, you can use this convenience script:
#!/bin/bash
# Replace /dev/disk/by-id/... partition paths with UUID= entries in the boot
# and mount configuration files listed below.
for old in /dev/disk/by-id/*part*
do
    # Determine the filesystem UUID of this partition
    new=$(blkid -o value -s UUID "$old")
    if [ -e "$old" -a -n "$new" ]; then
        for file in "/boot/efi/efi/SuSE/elilo.conf" \
                    "/etc/elilo.conf" \
                    "/boot/efi/EFI/redhat/grub.cfg" \
                    "/etc/grub2-efi.cfg" \
                    "/boot/grub/menu.lst" \
                    "/etc/fstab"
        do
            # Replace every occurrence of the by-id path, in existing files only
            if [ -e "$file" ]; then
                sed -i "s#$old#UUID=$new#g" "$file"
            fi
        done
    fi
done

Either upload this script to the server and run it, or copy all lines at once into a terminal session to
execute it immediately.
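Because these are boot-critical files, it is advisable to keep a copy of each existing file before editing it or
running the script (a simple sketch; the .pre-uuid suffix is an arbitrary choice):
for file in /etc/fstab /etc/elilo.conf /boot/efi/efi/SuSE/elilo.conf \
            /boot/efi/EFI/redhat/grub.cfg /etc/grub2-efi.cfg /boot/grub/menu.lst
do
    if [ -e "$file" ]; then
        cp -a "$file" "$file.pre-uuid"
    fi
done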


F References

F.1 Lenovo References

Lenovo Solution Documentation


• Lenovo Systems Solution for SAP HANA Quick Start Guide
• In-memory Computing with SAP HANA on Lenovo X6 Systems Planning / Implementation
• Lenovo Systems X6 Solution for SAP HANA
– MT 3837 Implementation Guide
– MT 6241 Implementation Guide
Download from the FTP server "ftp://lenovosap.solutions/". Please contact sapsolutions@lenovo.com
to obtain login credentials.
• Lenovo Systems X6 Solution for SAP HANA Installation Guide: SLES11
• Lenovo Systems X6 Solution for SAP HANA Installation Guide: SLES12
• Lenovo Systems X6 Solution for SAP HANA Installation Guide: RHEL6
• Lenovo Systems X6 Solution for SAP HANA Installation Guide: RHEL7
• Special Topic Guide for System x eX5/X6 Servers: Backup and Restore
• Special Topic Guide for System x eX5/X6 Servers: Mixed Cluster
• Special Topic Guide for System x X6 Servers: Virtualization
• Special Topic Guide for System x eX5/X6 Servers: Monitoring
Download the guides from the FTP server "ftp://lenovosap.solutions/". Please contact sapsolutions@lenovo.com
to obtain login credentials.
• SAP Note 1650046 – Lenovo Systems X6 Solution for SAP HANA Operations Guide
Lenovo System x Documentation
• X6 Portfolio Overview Positioning Information
• Lenovo RackSwitch G8052 Product Guide
• Lenovo RackSwitch G8124E Product Guide
• Lenovo RackSwitch G8264 Product Guide
• Lenovo RackSwitch G8272 Product Guide
• Lenovo RackSwitch G8296 Product Guide
• Lenovo Advanced Settings Utility (ASU)
• Lenovo Dynamic System Analysis (DSA)
• Lenovo Bootable Media Creator (BOMC)
• Lenovo SSD Wear Gauge CLI utility
• Lenovo UpdateXpress (UXSPI)


F.2 IBM References

IBM System Storage EXP2500 Express Specifications


IBM General Parallel File System/ Spectrum Scale Documentation
• Documentation
– IBM General Parallel File System Documentation
– IBM Spectrum Scale Documentation
• IBM Spectrum Scale FAQ (with supported OS levels)
• IBM GPFS/ Spectrum Scale Service on IBM Fix Central (IBM ID required) or Lenovo Support
– GPFS 3.5.0
– GPFS 4.1.0
– Spectrum Scale 4.1.1 and above
• GPFS Books
– IBM developerWorks Article: GPFS Quick Start Guide for Linux
• Support in IBM Support Portal (IBM ID required)
– GPFS
– Spectrum Scale

F.3 SAP General Help (SAP Service Marketplace ID required)

• SAP Service Marketplace


• SAP Help Portal
• SAP HANA Ramp-Up Knowledge Transfer Learning Maps
• SAP HANA Software Download at SAP Software Download Center → Software Downloads →
Installations & Upgrades / Support Packages & Patches → By Alphabetical Index (A-Z) → H (for
SAP HANA)
• SAP HANA Administration Guide

F.4 SAP Notes (SAP Service Marketplace ID required)

Generic SAP Notes about SAP HANA


• SAP Note 1730996 – Unrecommended external software and software versions
• SAP Note 1730929 – Using external tools in an SAP HANA appliance
• SAP Note 1803039 – Statistics server CHECK_HOSTS_CPU intern. error when restart
• SAP Note 1906381 – Network setup for external communication
SAP Notes about the Lenovo Systems Solution for SAP HANA
• SAP Note 1650046 – Lenovo SAP HANA Appliance Operations Guide
• SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances


• SAP Note 1880960 – Lenovo Systems Solution for SAP HANA Platform Edition FW/OS/Driver
Maintenance
• SAP Note 2281578 – UDEV rules update necessary after Mellanox driver update
• SAP Note 2274681 – Installation Issues with Installer 1.10.102-14
• SAP Note 2130171 – Automated installer does not recognize MT 6241 as valid
SAP Notes regarding SAP HANA
• SAP Note 1523337 – SAP HANA Database 1.00 - Central Note
• SAP Note 2380229 – SAP HANA Database 2.00 - Central Note
• SAP Note 2399995 – Hardware requirement for SAP HANA 2.0
• SAP Note 2298750 – SAP HANA Platform SPS 12 Release Note
• SAP Note 2323817 – SAP HANA SPS 12 Database Revision 121
• SAP Note 1681092 – Multiple SAP HANA databases on one SAP HANA system
• SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery
• SAP Note 1780950 – Connection problems due to host name resolution
• SAP Note 1829651 – Time zone settings in HANA scale out landscapes
• SAP Note 1743225 – Potential failure of connections with scale out nodes
• SAP Note 1888072 – SAP HANA DB: Indexserver crash in __strcmp_sse42
• SAP Note 1890444 – Slow HANA system due to CPU power save mode
• SAP Note 2191221 – hdbrsutil can block unmount of the filesystem
• SAP Note 2235581 – SAP HANA: Supported Operating Systems
• SAP Note 1658845 – Recently certified SAP HANA hardware not recognized - HanaHwCheck.py
SAP Notes regarding SUSE Linux Enterprise Server for SAP Applications
• SAP Note 784391 – SAP support terms and 3rd-party Linux kernel drivers
• SAP Note 1310037 – SUSE LINUX Enterprise Server 11: Installation notes
• SAP Note 1954788 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP
Applications 11 SP3
• SAP Note 2240716 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP
Applications 11 SP4
• SAP Note 2205917 – SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP
Applications 12
• SAP Note 618104 – Linux SAP System Information Tool
• SAP Note 1056161 – SUSE Priority Support for SAP applications
• SAP Note 2001528 – Linux: SAP HANA Database SPS 08, SPS 09 and SPS 10 on RHEL 6 or
SLES 11
• SAP Note 2228351 – Linux: SAP HANA Database SPS 11 revision 110 (or higher) on RHEL 6 or
SLES 11
SAP Notes regarding Red Hat Enterprise Linux
• SAP Note 2013638 – SAP HANA DB: Recommended OS settings for RHEL 6.5


• SAP Note 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6
• SAP Note 2247020 – SAP HANA DB: Recommended OS settings for RHEL 6.7
• SAP Note 2292690 – SAP HANA DB: Recommended OS settings for RHEL 7.2
• SAP Note 2001528 – Linux: SAP HANA Database SPS 08, SPS 09 and SPS 10 on RHEL 6 or
SLES 11
• SAP Note 2228351 – Linux: SAP HANA Database SPS 11 revision 110 (or higher) on RHEL 6 or
SLES 11
SAP Notes regarding IBM GPFS
• SAP Note 1084263 – Cluster File System: Use of GPFS on Linux
• SAP Note 1902281 – GPFS 3.5 incompatibility with Linux kernel 3.0.58 and higher
• SAP Note 2051052 – GPFS "No space left on device" when df shows free space
SAP Notes regarding Virtualization
• SAP Note 1122387 – Linux: SAP Support in virtualized environments
• SAP Note 2024433 – Multiple SAP HANA VMs on VMWare vSphere in production
• SAP Note 1995460 – Single SAP HANA VM on VMWare vSphere in production

F.5 Novell SUSE Linux Enterprise Server References

• SUSE Linux Enterprise Server for SAP Applications product page


• SUSE Linux Enterprise Server 11 SP3 Release Notes
• SUSE Linux Enterprise Server for SAP Applications 11 SP3 Media
• SUSE Linux Enterprise Server 11 SP4 Release Notes
• SUSE Linux Enterprise Server for SAP Applications 11 SP4 Media
• SUSE Linux Enterprise Server 12 Release Notes
• SUSE Linux Enterprise Server for SAP Applications 12 Media
• SUSE Linux Enterprise Server 12 SP1 Release Notes
• SUSE Linux Enterprise Server for SAP Applications 12 SP1 Media

F.6 Red Hat Enterprise Linux References (Red Hat account required)

• Red Hat Enterprise Linux 6 Why can I not install or start SAP HANA after a system upgrade?
• Red Hat Enterprise Linux 6 Red Hat Enterprise Linux for SAP HANA: system updates and sup-
portability
• Red Hat Enterprise Linux 6 SAP HANA Multi host install fails with the message
"LIBSSH2_ERROR_KEY_EXCHANGE_FAILURE, unable to exchange encryption keys"


F.7 VMware References

• VMware downloads
• VMware documentation
• Best Practices Guide


G Changelog
This section describes the changes that have been made within a release version since it was published.
