By Milind Pathak
January 2016
Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to
SolutionLab@hds.com. To assist the routing of this message, use the paper number in the subject and the title of this white
paper in the text.
Contents
SAP HANA High Availability
SAP HANA System Replication
SUSE Linux Enterprise High Availability Extension (SUSE HAE) for SAP HANA High Availability
Hitachi Compute Blade 2500
Hitachi Virtual Storage Platform Family
Solution Overview
Key Solution Elements
Hardware Elements
Software Elements
Solution Design
Hitachi Compute Blade 2500 Chassis Configuration
Network Design
Planning and Prerequisites
Setting up HANA System Replication
SUSE HAE Installation and Configuration
SAPHanaSR Installation and Configuration
Engineering Validation
Test Automated Failover
SAP HANA Scale-Up High Availability with HANA System
Replication and Automated Failover using SUSE High Availability
Extension
Reference Architecture Guide
This reference architecture guide describes how to achieve high availability for SAP HANA Haswell scale-up systems with
HANA System Replication and automated failover using SUSE High Availability Extension (SUSE HAE).
SAP HANA High Availability
Hardware Failures: Hitachi Data Systems SAP HANA solutions using Hitachi Compute Blade 2500 with 520X B2 server
blades offer redundant hardware components, such as redundant power supplies and fans, two hot-swappable
management modules, and multiple Ethernet and Fibre Channel HBA interfaces, to provide fault tolerance. Similarly,
Hitachi Virtual Storage Platform family storage arrays offer redundant hardware components such as dual controllers,
redundant front-end and back-end I/O modules, and power supply units. The storage design followed by Hitachi Data
Systems for SAP HANA solutions uses striping and parity to provide redundancy for automatic recovery from disk
failures. For more information about deploying SAP HANA using Hitachi Data Systems servers and storage, refer to the
reference architecture guides mentioned in the Hitachi Virtual Storage Platform Family section.
Software Failures: To provide fault recovery, the SAP HANA software includes a watchdog function that automatically
restarts configured services (index server, name server, and so on) if they fail. In addition to these features, SAP and its
partners offer the following high availability mechanisms for SAP HANA. These solutions are based on completely
redundant servers and/or storage.
Host Auto-Failover: One (or more) standby nodes are added to a SAP HANA system and configured to work in
standby mode. In case of failure, data and log volumes of a failed worker node are taken over by a standby node. The
standby node becomes a worker node and takes over user load. This solution does not need additional storage, only
servers.
Storage Replication: Data replication is achieved by means of storage mirroring independent from the database
software. Disks are mirrored without a control process from the SAP HANA system. SAP HANA hardware partners
offer this solution. This solution needs additional servers and storage.
SAP HANA System Replication: SAP HANA replicates all data to a secondary SAP HANA system constantly. Data
can be constantly pre-loaded in the memory of the secondary system to minimize the recovery time objective (RTO).
This solution needs additional servers and storage. The focus of this reference architecture guide is SAP HANA
System Replication.
Refer to the document SAP HANA – High Availability FAQ to read more about SAP HANA High Availability.
SAP HANA System Replication
SAP HANA System Replication is implemented between two different SAP HANA systems with the same number of
active nodes. After system replication is set up between the two SAP HANA systems, it replicates all of the data from
the primary HANA system to the secondary HANA system (initial copy). After this, any logged changes in the primary
system are also sent to the secondary system, but log entries are not replayed. Also, data snapshots are sent from the
primary to the secondary at regular intervals. This means that whenever the secondary has to take over, only log entries
received after the last data snapshot need to be replayed. With the data snapshot, the primary system also sends
information about the tables loaded in memory if the parameter 'preload_column_tables' is set to 'true'. If this parameter
is also set to 'true' on the secondary system, these tables are preloaded in the memory of the secondary database. This
reduces the RTO, making HANA System Replication a faster high availability solution in terms of recovery. The entire
process of data replication occurs at the software level and is fully controlled by the SAP HANA database kernel.
SAP HANA System Replication offers the following replication modes:
Synchronous on disk (mode=sync): Transaction is committed after log entries are written on primary and secondary
systems.
Synchronous in memory (mode=syncmem): Transaction is committed after the secondary system receives the
logs, but before they are written to disks.
Asynchronous (mode=async): Transaction is committed after log entries are sent without any response from the
secondary system.
Full Sync: Full synchronization is supported by SAP but cannot be configured with SUSE HAE. Full Sync mode stops
the surviving node if either node is down, so failover with SUSE HAE is not possible.
The procedure described in this reference architecture is applicable to all three replication modes. System replication
can be set up using SAP HANA Studio or the command line. This reference architecture uses SAP HANA Studio for the
setup and only explains the system replication process on HANA scale-up systems. Refer to the SAP HANA
Administration Guide to read more about system replication. Figure 1 shows an overview of HANA System Replication.
Figure 1
If the primary SAP HANA system fails, the system administrator must perform a manual takeover. Takeover can be
performed using SAP HANA Studio or the command line. Manual failover requires continuous monitoring and could
lead to higher recovery times. To automate the failover process, SUSE Linux Enterprise High Availability Extension
(SUSE HAE) can be used. The use of SUSE HAE for the takeover process helps customers achieve service level
agreements for SAP HANA downtime by enabling faster recovery without any manual intervention.
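A manual takeover of the kind described above can be sketched as follows (assuming the <sid>adm user conventions used later in this guide):

```shell
# Run on the secondary HANA node as user <sid>adm.
# Promotes the secondary system to primary; clients must then reconnect.
hdbnsutil -sr_takeover
```

SUSE HAE automates exactly this step, together with moving the virtual IP address to the surviving node.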
SUSE Linux Enterprise High Availability Extension (SUSE HAE) for SAP HANA High
Availability
SUSE Linux Enterprise High Availability Extension is an integrated suite of open source clustering technologies that
enable you to implement highly available physical and virtual Linux clusters. SUSE and SAP have developed a solution for
SAP HANA high availability using HSR (HANA System Replication) and SUSE HAE. Along with SUSE HAE, SUSE delivers
a package, SAPHanaSR, with two resource agents: SAPHanaTopology and SAPHana. When installed and configured,
SUSE HAE and SAPHanaSR provide the automated takeover mechanism for HANA system replication. The basic
components of SUSE HAE and SAPHanaSR are the following:
Corosync/OpenAIS layer: Corosync is implemented as a daemon process running on both cluster nodes and
exchanging heartbeat messages.
Cluster Information Base (CIB): CIB is a repository that contains information about cluster nodes. This information is
kept in the form of an XML file. There is one master file, which is replicated to other cluster nodes.
Cluster Resource Manager (CRM): CRM is implemented as part of the Pacemaker component in SUSE HAE and
runs as a daemon (crmd) on both HANA cluster nodes. Communication between the two nodes happens through
CRM. One CRM is selected as the Designated Coordinator and keeps the master copy of the CIB. In this reference
architecture guide, this is the server that runs the primary HANA node in system replication.
Local Resource Manager (LRM): LRM manages the resource agents on behalf of CRM. It can perform start/stop/
monitor operations on resource agents. Resource agents are application specific agents such as SAPHanaTopology
and SAPHana in this reference architecture.
Policy Engine (PE): When a cluster-wide change is required, the Policy Engine calculates the next state of the cluster
based on the current state and configuration.
SAPHanaTopology: This is a resource agent provided by the SAPHanaSR package for SAP HANA high availability. It
runs on both primary and secondary HANA nodes and gathers information about the status and configurations of SAP
HANA system replication.
SAPHana: This is a resource agent provided by the SAPHanaSR package for SAP HANA high availability. It performs
the actual check of the SAP HANA database instances and is configured as a master/slave resource. The master is
responsible for the SAP HANA database running as the primary, and the slave is responsible for the HANA
database operating in synchronous (secondary) status.
Note — The description of components above is a brief overview restricted to their role in this reference architecture
guide and does not replace the architecture design in SUSE documentation. Refer to SUSE Linux Enterprise High
Availability Extension SLEHA Guide for more details about the architecture and configuration of SUSE HAE.
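Once the cluster is running, the state of the components described above (DC election, CIB contents, resource status) can be inspected from either cluster node; a sketch, assuming the standard SUSE HAE command-line tools are installed:

```shell
# Show the cluster status once and exit (-1); -r lists inactive resources too
crm_mon -r -1

# Query the current Cluster Information Base (XML) held by the Designated Coordinator
cibadmin -Q
```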
Hitachi Compute Blade 2500
Flexible I/O architecture and logical partitioning allow configurations to match application needs exactly with Hitachi
Compute Blade 2500. Multiple applications easily and securely co-exist in the same chassis.
Add server management and system monitoring at no cost with Hitachi Compute Systems Manager. Seamlessly integrate
with Hitachi Command Suite in Hitachi storage environments.
One SAP HANA scale-up configuration uses one, two, or four 520X B2 server blades in the Hitachi Compute Blade 2500
chassis for the different sized solutions. This high availability reference architecture uses two SAP HANA scale-up nodes
that are the same size. Table 1 lists the supported configurations for the 520X B2 server blades used in the various HANA
scale-up solutions.
Table 1. 520X B2 Server Blade Configuration

Feature               Small (2-Socket)    Medium (4-Socket)    Large (8-Socket)
Number of blades      1                   2                    4
Network Ports         2 × 2-port 10GBASE-SR LAN PCIe adapter on two I/O board modules: IOBD 01B, IOBD 02B
Fibre Channel Ports   2 × Hitachi 16 Gb/sec 2-port Fibre Channel adapters on two I/O board modules: IOBD 01A, IOBD 02A
Other Interfaces      1 USB 3.0 port
Hitachi Virtual Storage Platform Family
VSP family systems are built on legendary Hitachi reliability, offering complete system redundancy, hot-swappable parts,
outstanding data protection and non-disruptive updates to keep storage operations up and running at optimal
performance. Additional data recovery and protection tools allow for application-aware recovery, simpler backup, restore,
failover and consistency across copies, reducing business risk, downtime and migration concerns. VSP family
complements virtualized server environments with its ability to consolidate multiple file and block workloads in a single
system. Additional integration offloads storage-intensive processing from the server hosts to increase virtual machine
density, improve performance and reduce workload contention. And it extends those benefits to legacy-attached storage
via external storage virtualization. Five models in the VSP family, based on Hitachi Storage Virtualization Operating
System (SVOS), provide a uniquely scalable, software-defined storage foundation. Powered with Hitachi global storage
virtualization, new software capabilities unlock IT agility and enable the lowest storage total cost of ownership.
Of the Hitachi Virtual Storage Platform family, VSP G200, VSP G400, and VSP G600 have been validated for SAP HANA
scale-up systems. VSP G800 and VSP G1000 are validated for SAP HANA scale-out systems and are not discussed in
this reference architecture.
For information about deploying SAP HANA scale-up system(s) using these servers and storage, refer to following
reference architecture guides:
TDI with VSP G400 and VSP G600: Tailored Datacenter Integration Implementation of Multiple SAP HANA Scale-Up
HSW Appliances on Hitachi Virtual Storage Platform G400 and G600. Multiple scale-up SAP HANA nodes can be
deployed; VSP G400 supports a maximum of six scale-up nodes and VSP G600 supports a maximum of eight
scale-up nodes.
Appliance with VSP G200: Hitachi Unified Compute Platform for the SAP HANA® Platform in a Scale-Up
Configuration Using Hitachi Compute Blade 2500 and Hitachi Virtual Storage Platform G200.
Solution Overview
This document provides an example configuration of SAP HANA high availability using SAP HANA System Replication
and automated takeover using SUSE HAE and SAPHanaSR resource agents. SAP HANA scale-up systems follow the
architecture defined by Hitachi Data Systems and use Hitachi 520X B2 server blades and a VSP family storage array. The
two SAP HANA scale-up systems are installed in the same CB 2500 chassis and are connected through Brocade VDX
6740 Ethernet switches.
Data replication between the two scale-up systems is performed through SAP HANA System Replication and automated
takeover is performed using SUSE HAE. Two medium size (4-socket) SAP HANA nodes are used to validate the solution
design in this reference architecture. Since HANA System Replication is a solution offered by SAP, and automated
takeover for HANA System Replication is a solution offered by SUSE, the procedure may be applied to all other Hitachi
Data Systems and SAP supported SAP HANA scale-up systems, but hardware and network configuration will change
accordingly. Figure 2 gives an overview of this solution. HANA System Replication is set up from node 1 to node 2. SUSE
HAE, with its resource agent SAPHana provided by the SAPHanaSR package, performs the actual checks on the HANA
database. This resource agent is configured as a master/slave resource, where the master is responsible for the primary
HANA system and the slave is responsible for the secondary system. Another resource agent, SAPHanaTopology, also
provided by the SAPHanaSR package, monitors the status and configuration of HANA System Replication. The resource
agents use the script landscapeHostConfiguration.py to monitor the status of the database. In case of failure of the
primary node, the secondary node is promoted to take over the role of the primary node. Also, the virtual IP is moved to
the secondary server.
Figure 2
Key Solution Elements
These are the key hardware and software elements used in this reference architecture.
Hardware Elements
Table 2 describes the hardware used in this reference architecture for two SAP HANA scale-up systems required for
HANA System Replication.
Table 2. Hardware Elements (Continued)

Hardware                                Quantity   Configuration   Role
Hitachi Virtual Storage Platform G400   1          Single frame    Block storage for SAP HANA
OR
Software Elements
Table 3 describes the software products used to deploy the two High Availability nodes.
Software                                 Version
SAPHanaSR-doc                            0.149-0.8.1
Hitachi Virtual Storage Platform G400    80-02-01-00
Brocade VDX switches                     NOS version 4.1.3b
Note — SAPHanaSR 0.149 is used for validation of this reference architecture, but it is strongly recommended that the
latest version be used.
Solution Design
This is the detailed solution design of this reference architecture. It includes the following sections:
Hitachi Compute Blade 2500 Chassis Configuration
Network Design
Planning and Prerequisites
Setting up HANA System Replication
SUSE HAE Installation and Configuration
SAPHanaSR Installation and Configuration
Hitachi Compute Blade 2500 Chassis Configuration
There are two management modules on the Hitachi Compute Blade 2500 chassis to connect to the management
network.
A maximum of 28 I/O board modules (IOBD) can be mounted on one Hitachi Compute Blade 2500 chassis, but the
solution only uses four I/O board modules.
Hitachi FIVE-FX 16 Gb/sec 2-port Fibre Channel PCIe adapters are installed on IOBD 01A and IOBD 02A.
10GBase-SR 2-port network PCIe adapters are installed on IOBD 01B and IOBD 02B.
Figure 3 shows the server blades for the two HANA nodes in Compute Blade 2500 chassis.
Figure 3
Network Design
SAP recommends using a dedicated network for system replication. To determine your network requirements for system
replication, refer to Network Recommendations for SAP HANA System Replication.
Hitachi 520X B2 server blades offer two dedicated 10 GbE ports used for system replication. There are two 10GBASE-SR
2-port LAN adapters installed on the PCIe slots of the I/O board module of blade 1 of the Hitachi Compute Blade 2500
chassis. This solution uses two 10 GbE ports on the 10GBASE-SR 2-port LAN adapters for connectivity with the 10 GbE
external switches.
The management module on the Hitachi Compute Blade 2500 chassis is connected to an external switch for management
connectivity. Make the following network connections for the client and replication networks, as shown in Figure 4.
Port 0 of the I/O board module on PCIe slot IOBD 01B to port 1 of Brocade VDX 6740-48B.
Port 0 of the I/O board module on PCIe slot IOBD 02B to port 1 of Brocade VDX 6740-48A.
Port 1 of the I/O board module on PCIe slot IOBD 01B to port 3 of Brocade VDX 6740-48B.
Port 1 of the I/O board module on PCIe slot IOBD 02B to port 3 of Brocade VDX 6740-48A.
Port 0 of the I/O board module on PCIe slot IOBD 05B to port 2 of Brocade VDX 6740-48B.
Port 0 of the I/O board module on PCIe slot IOBD 06B to port 2 of Brocade VDX 6740-48A.
Port 1 of the I/O board module on PCIe slot IOBD 05B to port 4 of Brocade VDX 6740-48B.
Port 1 of the I/O board module on PCIe slot IOBD 06B to port 4 of Brocade VDX 6740-48A.
Figure 4
This solution connects two Brocade VDX 6740 switches together using ISL. It enables both switches to act together as
one single logical switch, so that if one switch fails, there is still a path to the hosts. Create separate
VLANs for the ports used for the client network and the ports used for the replication network. At the operating system
level, an active-active network bond with options "mode=802.3ad miimon=100 xmit_hash_policy=layer3+4
updelay=5000 lacp_rate=fast" is used. The compute network setup uses the ports on the 10GBASE-SR 2-port
LAN adapters. Create bonds at the operating system level using two network ports for the client network as well as for
the replication network for each SAP HANA system, as listed in Table 5.
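On SLES, the bond options quoted above could be applied through an ifcfg file along these lines (a sketch; the interface names and the IP address are placeholders, not taken from this guide):

```shell
# /etc/sysconfig/network/ifcfg-bond0 -- client-network bond (illustrative)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.150.111/24'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 updelay=5000 lacp_rate=fast'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
```

A matching ifcfg-bond1 would carry the replication network.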
Planning and Prerequisites
Following are the prerequisites, as described by SAP and SUSE, for this solution:
Both primary and secondary SAP HANA nodes must have the same SAP system ID and instance number but a
different hostname.
Changes to the .ini file configuration parameters made on one HANA node are not replicated and should be manually
made on the other system.
The secondary HANA node can be used to run another HANA system (a non-production system such as QA or
development), but this scenario must be tested thoroughly before implementing it in production. This procedure is
not described in this reference architecture guide.
Technical users and groups such as <sidadm> are defined locally in the Linux system.
Name resolution of the cluster nodes and the virtual IP address must be done locally on all cluster nodes.
Automated registration of a failed primary node after takeover is validated in this solution.
Note —This list does not include some of the prerequisites that are already addressed by the described design. Check
the complete list of prerequisites described by SAP in the document SAP HANA Administration Guide and by SUSE in
the document SAP HANA System Replication on SLES for SAP Applications.
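To satisfy the local name resolution prerequisite, /etc/hosts on both cluster nodes could contain entries along these lines (the virtual IP and node names are from this guide; the client-network node addresses and the -rep names are illustrative assumptions):

```
192.168.150.111  saphanap      # primary HANA node, client network (assumed address)
192.168.150.112  saphanas      # secondary HANA node, client network (assumed address)
192.168.150.121  vsaphanaprd   # virtual IP managed by the cluster
192.168.100.111  saphanap-rep  # primary, replication network (hypothetical name)
192.168.100.112  saphanas-rep  # secondary, replication network (hypothetical name)
```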
Table 6 lists the information used to setup high availability for SAP HANA.
Setting up HANA System Replication
Configure Name Resolution for Replication Network
The section Network Design describes the physical network connections required to provide a dedicated network for
HANA System Replication. Additionally, HANA nodes must be configured to identify the replication network. This must be
done before system replication is configured. Refer to Network Configuration for SAP HANA System Replication to
configure name resolution. This is configured in the section system_replication_hostname_resolution in the
global.ini file on the Configuration tab of SAP HANA Studio. Configure the replication IP address of the secondary node
on the primary node and configure the replication IP address of the primary node on the secondary node. Figure 5 shows
how the IP addresses are configured in HANA Studio on the primary node.
Figure 5
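The resulting entries in global.ini look roughly as follows: on the primary, the secondary's replication address maps to its hostname, and vice versa on the secondary (a sketch using the replication addresses from this guide):

```ini
# global.ini on the primary node (saphanap)
[system_replication_hostname_resolution]
192.168.100.112 = saphanas

# global.ini on the secondary node (saphanas)
[system_replication_hostname_resolution]
192.168.100.111 = saphanap
```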
You should be able to connect to the database using HDBSQL, and the output should be 'ACTIVE'.
Note — SID HIT and instance number 10 are used in the previous procedure; adapt the values according to your system
information. Database user SYSTEM was used for testing purposes. If you use a different user, provide the username
and password in step 3 accordingly. Also, grant the system privilege "DATA_ADMIN" to that user. If more granular rights
must be given to the user, follow the SAP documentation to identify the correct privileges. Database user key 'slehaloc'
is a fixed name required by SAPHanaSR and cannot be changed unless SUSE provides a procedure to do so.
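The 'slehaloc' key lives in the HANA secure user store; creating and checking it can be sketched as follows (run as <sid>adm on both nodes; port 31015 assumes instance number 10 via the 3<nn>15 pattern, so verify your system's SQL port):

```shell
# Store the connection key; <password> is the SYSTEM user's password
hdbuserstore SET slehaloc localhost:31015 SYSTEM <password>

# Verify the key, then test connectivity through it
hdbuserstore LIST slehaloc
hdbsql -U slehaloc "select * from dummy"
```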
Setup HANA System Replication from Primary HANA Node to Secondary HANA Node
Perform the system replication setup using the HANA System Replication Guide. The configuration can be further tuned
by applying parameters described in System Replication Configuration Parameters and the HANA System Replication
Guide, based on individual customer requirements. Validate that system replication is running by executing the command
"hdbnsutil -sr_state" on the primary HANA server as user <sid>adm, as shown in Figure 6.
Figure 6
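Although this guide performs the setup in SAP HANA Studio, the command-line equivalent can be sketched as follows (run as <sid>adm; the site names SiteA/SiteB are illustrative, and the exact option names vary between HANA revisions):

```shell
# On the primary node: enable system replication
hdbnsutil -sr_enable --name=SiteA

# On the secondary node: stop HANA, register it against the primary, restart
HDB stop
hdbnsutil -sr_register --remoteHost=saphanap --remoteInstance=10 \
    --mode=sync --name=SiteB
HDB start

# On the primary node: confirm the replication state
hdbnsutil -sr_state
```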
Also, run the following command to validate that the replication IP addresses (192.168.100.111 and 192.168.100.112) are
used for replication.
This command also provides the details of parameters such as replication status and replication mode, as highlighted in
Figure 7.
Figure 7
SUSE HAE Installation and Configuration
This section describes how to install and configure SUSE HAE to automate the failover process in SAP HANA System
Replication. SUSE HAE is part of SUSE Linux Enterprise Server for SAP Applications.
Installation
Mount the installation media and install the SUSE HAE using the command "zypper in -t pattern ha_sles" as
shown in Figure 8.
Figure 8
This installs a number of rpm packages required for SUSE HAE. The installation must be performed on both primary and
secondary HANA servers.
Configuration
Create STONITH Device
STONITH (shoot the other node in the head) is the way to implement fencing in SUSE HAE. If a cluster member is not
behaving normally, it must be removed from the cluster. This is referred to as fencing. A cluster without a STONITH
mechanism is not supported by SUSE. There are multiple ways to implement STONITH, but in this reference architecture,
STONITH Block Devices (SBD) are used.
Create a small LUN (1 MB) on the storage array that is shared between the cluster members. Map this LUN to both
primary and secondary HANA servers through storage ports. Make note of the SCSI identifier of this LUN (the SCSI
identifier should be the same on both primary and secondary HANA servers). It is possible to add more than one SBD
device in a cluster for redundancy. If the two HANA nodes are installed on separate storage arrays, an alternate method
such as IPMI can be used for implementing STONITH. Refer to the SUSE Linux Enterprise High Availability Extension
SLEHA Guide for best practices for implementing STONITH. The validation of this reference architecture has been
performed using shared storage and SBD for STONITH implementation.
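Preparing the shared LUN as an SBD device can be sketched as follows (the by-id path is a placeholder for the SCSI identifier noted above):

```shell
# Initialize the SBD metadata on the shared 1 MB LUN (run once, from one node)
sbd -d /dev/disk/by-id/scsi-<identifier> create

# Verify the SBD header from both nodes
sbd -d /dev/disk/by-id/scsi-<identifier> dump
```

The device is then referenced as SBD_DEVICE in /etc/sysconfig/sbd on both nodes, which the sleha-init script can also set up when you supply the path.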
Configure SUSE HAE on Primary HANA Server
These steps are used for the basic configuration of SUSE HAE on the primary HANA server. Start the configuration by
running the command "sleha-init". The script asks for the following input:
Network Address to bind: Provide the subnet of the replication network 192.168.100.0
Multicast Address: Type the multicast address or leave the default value if using unicast
Multicast Port: Leave the default value or type the port that you want to use
Path to storage device: Type the SCSI identifier of the SBD device created in the step Create STONITH Device
Are you sure you want to use this device [y/N]: Type y
Configuration on the primary HANA server is shown in Figure 9.
Figure 9
Add Secondary HANA Server to the Cluster
To add the secondary HANA server to the cluster configured on the primary HANA server, run the command
"sleha-join" on the secondary HANA server as the root user. The script asks for the following input:
IP address or hostname of existing node: Enter the primary node replication IP address
The cluster configuration is copied to the secondary HANA server after this, and it is added to the cluster as shown in
Figure 10.
Figure 10
This completes the basic cluster configuration on the primary and secondary HANA servers.
After all of the previous steps are finished, log in to Hawk (HA Web Konsole) using the URL 'https://192.168.100.111:7630'
with the user ID 'hacluster' and password 'linux'. The default password can be changed later. You should see the cluster
members 'saphanap' and 'saphanas' online as shown in Figure 11.
Figure 11
Based on your requirements, it is possible to add a second ring for fault-tolerance (the Client network can be used), and
you can also change to unicast communication as described in Best Practices for SAP on SUSE Linux Enterprise. Unicast
communication was used to validate this reference architecture.
SAPHanaSR Installation and Configuration
Installation
Download the latest SAPHanaSR packages from the SUSE website. A user ID with sufficient authorization must be used
for the download. (If the server is registered to SUSE, the command "zypper in SAPHanaSR" can be used to get the
latest version available.) SAPHanaSR 0.149 is used in this reference architecture with the following packages:
SAPHanaSR-0.149-0.8.1.noarch.rpm
SAPHanaSR-doc-0.149-0.8.1.noarch.rpm
Log in as the 'root' user on the primary and secondary HANA servers and run the following command: "zypper install
/usr/sap/SAPHanaSR-0.149-0.8.1.noarch.rpm /usr/sap/SAPHanaSR-doc-0.149-0.8.1.noarch.rpm",
as shown in Figure 12.
Figure 12
Configuration
The SAPHanaSR package can be configured using the Hawk wizard. Follow the procedure described in SAP HANA
System Replication on SLES for SAP Applications for the configuration steps. The Hawk wizard requires the following
parameters:
SAP SID: SAP System Identifier. The SAP SID is always a 3 character alphanumeric string. HIT is used as the SAP
SID in this reference architecture.
SAP Instance Number: The instance number must be a two-digit number including a leading zero. Instance number
10 is used in this reference architecture.
Virtual IP Address: The Virtual IP Address will be configured on the host where the primary database is running.
Virtual IP Address 192.168.150.121 is used in this reference architecture.
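The resources that the Hawk wizard creates correspond roughly to the following crm configuration, a sketch based on the SAPHanaSR documentation with the SID, instance number, and virtual IP from this guide (timeouts, scores, and the AUTOMATED_REGISTER setting are illustrative and must be tuned):

```
primitive rsc_SAPHanaTopology_HIT_HDB10 ocf:suse:SAPHanaTopology \
    params SID="HIT" InstanceNumber="10" \
    op monitor interval="10" timeout="600"
clone cln_SAPHanaTopology_HIT_HDB10 rsc_SAPHanaTopology_HIT_HDB10 \
    meta clone-node-max="1" interleave="true"
primitive rsc_SAPHana_HIT_HDB10 ocf:suse:SAPHana \
    params SID="HIT" InstanceNumber="10" \
        PREFER_SITE_TAKEOVER="true" AUTOMATED_REGISTER="true" \
    op monitor interval="60" role="Master" timeout="700" \
    op monitor interval="61" role="Slave" timeout="700"
ms msl_SAPHana_HIT_HDB10 rsc_SAPHana_HIT_HDB10 \
    meta clone-max="2" clone-node-max="1" interleave="true"
primitive rsc_ip_HIT_HDB10 ocf:heartbeat:IPaddr2 \
    params ip="192.168.150.121"
colocation col_ip_with_master 2000: rsc_ip_HIT_HDB10:Started \
    msl_SAPHana_HIT_HDB10:Master
order ord_topology_first Optional: cln_SAPHanaTopology_HIT_HDB10 \
    msl_SAPHana_HIT_HDB10
```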
After the configuration is complete, log in to Hawk and check that the cluster members and all resources are online, as in
Figure 13.
Figure 13
Note — The default timeout parameters configured by the Hawk wizard are only a starting point. Test intensively and
tune these parameters so they work for your environment.
Engineering Validation
The failover tests listed in Table 7 were performed in the Hitachi Data Systems lab to validate this solution.
The section that follows describes the procedure to test automated failover using test case 1 in Table 7.
Test Automated Failover
Follow these steps to perform the automated failover procedure.
Create a host file entry in your management server where HANA Studio is installed as follows:
192.168.150.121 vsaphanaprd
Open HANA Studio and add a HANA system with Host Name 'vsaphanaprd' and Instance Number 10, as shown in
Figure 14.
Figure 14
In HANA Studio, verify that the primary node is running on the virtual hostname as shown in Figure 15.
Figure 15
Stop the HANA database on the primary server: log in as user <sid>adm and execute the command "HDB stop".
Within a few seconds, the cluster detects that the HANA database is down on the primary server, and automatic
failover to the secondary server starts. The slave on the secondary server is promoted to master, and the virtual IP
address is moved to the secondary server saphanas. The operation's progress is shown in Figure 16.
Figure 16
After failover is complete, the HANA database on virtual hostname 'vsaphanaprd' is now running on the secondary
server saphanas as shown in Figure 17.
Figure 17
After the primary HANA node 'saphanap' is up and running again, perform the steps described in the guide How to
Perform System Replication for SAP HANA to register it as a secondary node in system replication.
In the web console, the secondary node 'saphanas' is now the 'Master', with the primary node 'saphanap' as the 'Slave',
as shown in Figure 18.
Figure 18
HITACHI is a trademark or registered trademark of Hitachi, Ltd. All other trademarks, service marks, and company names are properties of their respective
owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by
Hitachi Data Systems Corporation.
AS-427-01 January 2016.