Issue 02
Date 2017-04-10
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Purpose
This document describes how to install a SAP HANA cluster on the Huawei FusionServer
RH5885H V3/RH8100 V3 (RH5885H V3/RH8100 V3 for short) running SLES 12 SP1 and on the Huawei
OceanStor 5500 V3 (5500 V3 for short). It covers the software and hardware planning and
configuration on the RH5885H V3/RH8100 V3 and 5500 V3, dedicated operating system
(OS) installation, and SAP HANA database installation and installation verification.
This document provides guidance on installing SAP HANA Database Software SP09_091.0
or later and SAP HANA Database Software SP10_100.0 or later.
Intended Audience
This document is intended for:
Technical support engineers
Maintenance engineers
Users
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Issue 01 (2016-11-18)
This issue is the first official release.
Issue 02 (2017-04-10)
Updated the LVM partitioning description.
Contents
1 Installation Planning
Table 1-5 File system mount planning (using eight 2 TB nodes as an example)
Table 1-7 File system mount planning (using eight 2 TB nodes as an example)
Partition | File System Type | Partition Planning | Partition Size | Source | Description
---
Configuration description:
Rules for configuring the number of disks in the shared volume: the value is calculated based on the
number of HANA nodes and the memory capacity of each HANA node. All disks in the shared
volume are installed in the first set of SAN+NAS storage.
8*900G SAS (RAID5,4D+1P);
Rules for configuring the number of disks in the data volume: The value is calculated based on the
memory capacity of HANA nodes.
5500 V3_1 (SAN+NAS):
DiskDomain_Data_1: 12*900G SAS (512GB/1TB/1.5TB/2TB RAM per node) (RAID5,8D+1P);
DiskDomain_Data_2: 12*900G SAS (512GB/1TB/1.5TB/2TB RAM per node) (RAID5,8D+1P);
DiskDomain_Data_3: 12*900G SAS (512GB/1TB/1.5TB/2TB RAM per node) (RAID5,8D+1P);
5500 V3_2...N (SAN):
DiskDomain_Data_4: 12*900G SAS (512GB/1TB/1.5TB/2TB RAM per node) (RAID5,8D+1P);
DiskDomain_Data_5: 12*900G SAS (512GB/1TB/1.5TB/2TB RAM per node) (RAID5,8D+1P);
DiskDomain_Data_6: 12*900G SAS (512GB/1TB/1.5TB/2TB RAM per node) (RAID5,8D+1P);
DiskDomain_Data_7: 12*900G SAS (512GB/1TB/1.5TB/2TB RAM per node) (RAID5,8D+1P);
Rules for configuring the number of disks in the log volume: The value is calculated based on the
memory capacity of HANA nodes.
5500 V3_1 (SAN+NAS):
8*600G SSD(512GB/1TB/1.5TB/2TB RAM per node) (RAID5,4D+1P);
5500 V3_2…N(SAN):
8*600G SSD(512GB/1TB/1.5TB/2TB RAM per node) (RAID5,4D+1P);
Ensure that the IP addresses of the logical ports and of the clients used to access the NAS file
system are on the same network segment.
… … … … … …
The NFS shared planning uses a HANA cluster configured with eight 2 TB nodes as an example.
… … … … … … …
LUN_Log_007 8 thick 512 GB CTE.B Default Intelligent prefetch
The LUN planning uses a HANA cluster configured with eight 2 TB nodes (including seven worker
nodes) as an example. Seven data LUNs and seven log LUNs are planned.
Table 1-19 LUN group planning (for the first 5500 V3)
Name ID LUN
LUN_Group_HANA 0 LUN_Data_001
LUN_Data_002
LUN_Data_003
LUN_Log_001
LUN_Log_002
LUN_Log_003
Table 1-20 LUN group planning (for the second 5500 V3)
Name ID LUN
LUN_Group_HANA 0 LUN_Data_004
…
LUN_Data_007
LUN_Log_004
…
LUN_Log_007
… … … … …
… … … … …
You can run the following commands to check the host initiator information in Linux.
Command for checking world wide name (WWN): cat /sys/class/fc_host/host*/port_name
Command for checking Fibre Channel (FC) port status: cat /sys/class/fc_host/host*/port_state
Command for checking FC port rate: cat /sys/class/fc_host/host*/speed
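The three checks above can be combined into one loop that prints the WWN, link state, and speed for every FC host port. This is a minimal sketch; the sysfs root is a parameter so the paths can be adjusted or the function exercised against a copy of the tree.

```shell
# Print WWN, port state, and speed for each FC host port found
# under the given sysfs root (defaults to /sys).
fc_host_info() {
    local root="${1:-/sys}"
    local host
    for host in "$root"/class/fc_host/host*; do
        [ -d "$host" ] || continue
        printf '%s: wwn=%s state=%s speed=%s\n' \
            "$(basename "$host")" \
            "$(cat "$host/port_name")" \
            "$(cat "$host/port_state")" \
            "$(cat "$host/speed")"
    done
}

fc_host_info /sys
```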
HANA 0 HANA01
HANA02
HANA08
… …
… …
… …
… …
… …
The host name of the cluster intranet can be the same as that of the uplink service.
… … … … … …
The binding modes of uplink service bond ports can be adjusted according to the configuration rules of
the customer's live network; however, those of the cluster intranet and NAS network are always
Active-Active, and you are advised not to change them.
The NIC binding rule is that a bond port must consist of two different physical NIC ports. Figure 1-2
shows the 10GE ports on the rear of the RH5885H V3.
2 5500 V3 Deployment
Figure 2-1 Process of configuring storage space for the first 5500 V3
Figure 2-2 shows the process of configuring storage space (data_04/05/06/07 and
log_04/05/06/07) for the second 5500 V3.
Figure 2-2 Process of configuring storage space for the second 5500 V3
Or log in to the 5500 V3 CLI, and run the following commands to create disk domains:
For the first 5500 V3:
create disk_domain name=HANA_shared disk_number=8 disk_type=SAS disk_domain_id=0
create disk_domain name=HANA_Data_1 disk_number=12 disk_type=SAS disk_domain_id=1
create disk_domain name=HANA_Data_2 disk_number=12 disk_type=SAS disk_domain_id=2
create disk_domain name=HANA_Data_3 disk_number=12 disk_type=SAS disk_domain_id=3
create disk_domain name=HANA_Log disk_number=8 disk_type=SSD disk_domain_id=4
For the required hard disk capacity, the following RAID5 attributes are recommended:
StoragePool_Shared: 4D+1P;
StoragePool_Data: 8D+1P;
StoragePool_Log: 4D+1P;
Or log in to the 5500 V3 CLI, and run the following commands to create storage pools:
Switch to an advanced user and obtain corresponding rights.
change user_mode current_mode user_mode=developer
Or log in to the 5500 V3 CLI, and run the following command to create a file system:
Step 5 Check the Ethernet ports whose running status is Link up.
Log in to the DeviceManager, choose Huawei Storage > Provisioning > Port > Ethernet
Ports, and check the Ethernet ports whose running status is Link up.
Prerequisites:
You have imported an NFS license and enabled the NFS service.
You have created an NFS file system.
The data required for the NFS sharing configuration is ready.
1 Log in to the DeviceManager, choose Huawei Storage > Provisioning > Share > NFS
(Linux/UNIX/MAC) > Create.
2 In the dialog box shown in Figure 2-13, enter related information according to Table
1-16.
In the Name or IP Address text box, enter the IP addresses of clients to be accessed. If you enter *, any
host can access the NFS shared path.
Or log in to the 5500 V3 CLI, and run the following commands to create LUNs:
Or log in to the 5500 V3 CLI, and run the following commands to create LUN groups:
For the first 5500 V3:
create lun_group name=HANA lun_id_list=0,1,2,3,4,5 lun_group_id=0
For the second 5500 V3:
create lun_group name=HANA lun_id_list=0,1,2,3,4,5,6,7 lun_group_id=0
Or log in to the 5500 V3 CLI, and run the following commands to create hosts:
create host name=HANA01 operating_system=Linux host_id=0 ip_address=192.168.2.11
create host name=HANA02 operating_system=Linux host_id=1 ip_address=192.168.2.12
create host name=HANA03 operating_system=Linux host_id=2 ip_address=192.168.2.13
…
If there are multiple host initiators to be modified, you can run the following commands using
the CLI:
admin:/>change initiator initiator_type=FC wwn=21000024ff89ac2c multipath_type=ALUA
admin:/>change initiator initiator_type=FC wwn=21000024ff89ac2d multipath_type=ALUA
admin:/>change initiator initiator_type=FC wwn=21000024ff89ac8a multipath_type=ALUA
admin:/>change initiator initiator_type=FC wwn=21000024ff89ac8b multipath_type=ALUA
admin:/>change initiator initiator_type=FC wwn=21000024ff89adbe multipath_type=ALUA
admin:/>change initiator initiator_type=FC wwn=21000024ff89adbf multipath_type=ALUA
Or log in to the 5500 V3 CLI, and run the following commands to add host initiators and
enable ALUA:
add host initiator host_id=0 initiator_type=FC wwn=10000090fa502b0a multipath_type=ALUA
add host initiator host_id=0 initiator_type=FC wwn=10000090fa502b0b multipath_type=ALUA
add host initiator host_id=1 initiator_type=FC wwn=21000024ff4a5170 multipath_type=ALUA
add host initiator host_id=1 initiator_type=FC wwn=21000024ff4a5171 multipath_type=ALUA
add host initiator host_id=2 initiator_type=FC wwn=21000024ff4bc554 multipath_type=ALUA
add host initiator host_id=2 initiator_type=FC wwn=21000024ff4bc555 multipath_type=ALUA
…
add host initiator host_id=7 initiator_type=FC wwn=21000024ff36b932 multipath_type=ALUA
add host initiator host_id=7 initiator_type=FC wwn=21000024ff36b933 multipath_type=ALUA
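Because the add host initiator commands follow a fixed pattern (two WWNs per host, host IDs counting up from 0), they can be generated from a WWN list instead of being typed by hand. A sketch that only prints the commands; the WWNs passed in here are examples, so substitute the values scanned from your own hosts:

```shell
# Emit one "add host initiator" command per WWN; every two
# consecutive WWNs belong to the same host_id (0, 0, 1, 1, ...).
gen_initiator_cmds() {
    local i=0 wwn
    for wwn in "$@"; do
        printf 'add host initiator host_id=%d initiator_type=FC wwn=%s multipath_type=ALUA\n' \
            $((i / 2)) "$wwn"
        i=$((i + 1))
    done
}

gen_initiator_cmds 10000090fa502b0a 10000090fa502b0b 21000024ff4a5170 21000024ff4a5171
```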
Or log in to the 5500 V3 CLI, and run the following commands to create host groups:
create host_group name=HANA host_id_list=0,1,2,3,4,5,6,7 host_group_id=0
Or log in to the 5500 V3 CLI, and run the following command to create a mapping view:
create mapping_view name=HANA host_group_id=0 lun_group_id=0
Step 15 Configure a device to automatically synchronize time with the NTP server.
In addition to the production storage devices, all hardware devices (such as backup storage devices) in a
HANA environment should be configured to synchronize time with the same NTP server, to avoid system
abnormalities caused by inconsistent time.
----End
Step 2 Enter the user name and password for logging in to iBMC (the default user name is root, and
the default password is Huawei12#$).
The iBMC page is displayed.
Step 3 Choose Remote Control > Remote Virtual Console (shared mode).
Step 4 The remote keyboard, video, and mouse (KVM) screen is displayed, as shown in Figure 3-3.
----End
Step 3 When the screen shown in Figure 3-5 is displayed, enter the BIOS password (case-sensitive)
in the dialog box.
NOTE
The BIOS default password is "Huawei12#$". If you have not configured any BIOS password, the
BIOS screen is displayed, as shown in Figure 3-6.
Step 4 Select IntelRCSetup on the menu, select Advanced Power Management Configuration
using the downward arrow, and press Enter, as shown in Figure 3-7.
Step 5 Turning off Power Policy: Select Custom for Power Policy, and press Enter, as shown in Figure 3-8.
Step 6 In the displayed list box, select Disable, and press Enter to turn off P State, as shown in
Figure 3-9.
Step 7 Turning off C-States: Press Esc to exit the CPU P State Control screen, select CPU C State
Control using the downward arrow, and press Enter, as shown in Figure 3-10.
Turning off CPU C3 State: Select CPU C3 report, and press Enter. Select Disable, and
press Enter to turn off CPU C3 State.
Turning off CPU C6 State: Select CPU C6 report, and press Enter. Select Disable, and
press Enter to turn off CPU C6 State.
Turning off Enhanced Halt State: Select Enhanced Halt State, and press Enter. Select
Disable, and press Enter to turn off Enhanced Halt State.
Step 8 Turning off T-States: Press Esc to exit the CPU C State Control screen, select CPU T State
Control using the downward arrow, and press Enter. Then press Esc to exit the CPU T State
Control screen, as shown in Figure 3-11.
Step 9 Press F10, select Yes, and press Enter to save the BIOS settings and restart the server, as
shown in Figure 3-12.
Figure 3-12 Saving the BIOS settings and restarting the server
Step 1 During the server startup, when "Press <Ctrl><R> for Run MegaRAID Configuration Utility"
is displayed, press Ctrl+R to open the RAID configuration screen, as shown in Figure 3-13.
If this RAID controller card has been configured before (for example, in maintenance or OS
reinstallation scenarios), the old RAID configuration must be cleared first.
Caution: All hard disk data and RAID information will be lost after you delete the RAID
configuration. Back up data before this operation.
Step 3 Press F2. On the displayed page, select Create Virtual Drive, as shown in Figure 3-15.
Step 4 Perform the RAID configuration according to your specific requirements, for
example, as shown in Figure 3-16.
RAID level: RAID-1;
Caution: When you select "Initialize" item will prompt the need for initialization page, and
prompts to initialize will delete all data, here select "OK", as shown in Figure 3-18
Step 6 After configuring the Advanced page, click OK on the Create New VD page, as shown in Figure
3-19. Then choose No on the enable SSD caching page, also shown in Figure 3-19.
Step 7 The main administration page is displayed again with the prompt "Initialization
complete on VD 0"; select OK, as shown in Figure 3-20.
After VD0 is created, the main administration page shows all the VD0
information, as shown in Figure 3-20.
Step 8 Set the boot device: Press Ctrl+N to open the Ctrl Mgmt page and set VD0 as the boot device, as
shown in Figure 3-21. Then select Apply and press Enter.
Step 9 Choose Warm Reset from the drop-down list, as shown in Figure 3-22. Restart the system.
----End
Step 1 Click .
----End
3.2 Installing an OS
Step 1 On the installation page, select Install Red Hat Enterprise Linux 7.2, and press Enter, as
shown in Figure 3-24.
Step 2 Use the default language configuration, and click Continue, as shown in Figure 3-25.
Step 3 On the INSTALLATION SUMMARY page, click DATE & TIME, as shown in Figure 3-26.
Step 4 Select a time zone, and click Done, as shown in Figure 3-27.
NOTE
You can set the time zone based on the actual condition.
Step 8 On the INSTALLATION DESTINATION page, click the sda disk, select I will configure
partitioning, and then click Done, as shown in Figure 3-31.
Step 9 Choose Standard Partition, click + to add a partition, then click Modify and choose sda for
partitioning, as shown in Figure 3-32.
Step 10 Enter /boot as the mount point, and click Add mount point, as shown in Figure 3-33.
Step 11 In the /boot mount point row, enter 1 GiB for Desired Capacity, choose Standard
Partition and the ext4 file system, then click Modify and choose sda for partitioning, as
shown in Figure 3-34.
Step 12 Click +, select swap, and click Add mount point, as shown in Figure 3-35.
Step 13 Enter 20 GiB for Desired Capacity, choose LVM and the swap file system, select Create
a new volume group under Volume Group, and choose sda to create vg_os, as shown in
Figure 3-36 and Figure 3-37.
Step 14 After saving vg_os, click Update Settings, as shown in Figure 3-38.
Step 15 Click + again, enter /usr/sap as the mount point, and click Add mount point. Enter 200 GiB
for Desired Capacity, choose LVM and the ext4 file system, then select vg_os under
Volume Group. Click Update Settings, as shown in Figure 3-39 and Figure 3-40.
Step 16 Click +, enter / as the mount point, and click Add mount point. Enter 336.85 GiB for
Desired Capacity, choose LVM and the ext4 file system, then select vg_os under
Volume Group. Click Update Settings, as shown in Figure 3-41 and Figure 3-42.
Step 18 On the SUMMARY OF CHANGES page, click Accept Changes, as shown in Figure 3-44.
Step 20 Enter a host name, such as HW00001, and click Done, as shown in Figure 3-46.
Step 21 On the INSTALLATION SUMMARY page, click Begin Installation, as shown in Figure 3-47.
Step 22 On the page shown in Figure 3-48, click USER SETTINGS and set a password for
user root, as shown in Figure 3-49.
The password must contain uppercase letters, lowercase letters, digits, and special characters, for
example, Huawei_123.
Step 23 The system reboots after the installation finishes. The page shown in Figure 3-50 is then displayed,
indicating that the OS installation is complete.
----End
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=ens18f0
UUID=a9770467-b59a-458a-b3c3-91fe63b5ad63
DEVICE=ens18f0
ONBOOT=yes
IPADDR=192.168.34.61
NETMASK=255.255.255.0
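When several nodes need near-identical ifcfg files, the static fields above can be generated from a small template. A sketch, assuming only the device name, IP address, and netmask differ per node; the output path is a parameter, and fields such as UUID that the installer generates are omitted here:

```shell
# Write a minimal static ifcfg file for the given device/IP pair.
# Only the per-node fields are parameters; the rest mirror the
# static configuration shown above.
write_ifcfg() {
    local dev="$1" ip="$2" mask="${3:-255.255.255.0}" out="$4"
    cat > "$out" <<EOF
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
NAME=$dev
DEVICE=$dev
ONBOOT=yes
IPADDR=$ip
NETMASK=$mask
EOF
}

write_ifcfg ens18f0 192.168.34.61 255.255.255.0 /tmp/ifcfg-ens18f0
```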
Step 4 Run the systemctl stop firewalld and systemctl disable firewalld commands to stop and
disable the firewall.
Enter the IP address and port, and then click Open, as shown in Figure 3-52.
Enter the user name and password, and press Enter, as shown in Figure 3-53.
Step 3 Press ESC and enter wq! to save the file and exit.
Step 4 Repeat Step 1 to Step 3 on every node to configure /etc/hosts (creating the mapping between
IP addresses and host names).
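For an eight-node cluster, the hosts entries can be generated in one pass and appended on each node instead of being typed eight times. A sketch following this document's cluster-intranet plan (nodes at 192.168.1.11 through 192.168.1.18, host names HW00001 through HW00008); adjust the base address and name pattern to your own plan:

```shell
# Print hosts-file entries for nodes HW00001..HW00008 on the
# 192.168.1.x cluster intranet.
gen_hosts_entries() {
    local i
    for i in $(seq 1 8); do
        printf '192.168.1.%d HW%05d\n' $((10 + i)) "$i"
    done
}

gen_hosts_entries    # append the output to /etc/hosts on each node
```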
----End
Step 2 Run mount /dev/sr0 /mnt/ to mount the CD-ROM drive to the /mnt/ directory.
In the command, sr0 indicates a CD-ROM path. Set the path based on actual situations:
[root@hw00001]# mount /dev/sr0 /mnt
mount: /dev/sr0 is write-protected, mounting read-only
Step 3 Run the cd /mnt command to enter the mnt directory, and copy prepare_hana_host.sh to
/home.
Step 4 Enter the home directory, add execute permission, and run the script ./prepare_hana_host.sh.
[root@hw00001 mnt]# cd /home
[root@hw00001 home]# chmod +x *
[root@hw00001 home]# ./prepare_hana_host.sh
Remarks: The script performs the kernel upgrade, installs all the required patches and
installation packages, and sets the OS parameters required for HA configuration.
You can then skip chapter 3.5.
----End
CAUTION
If you have run the script, skip operations in this chapter.
Step 2 Run mkdir -p /mnt/cdrom to create the cdrom directory in /mnt for mounting the ISO file.
[root@HW00001 ~]# mkdir -p /mnt/cdrom
[root@HW00001 ~]#
Step 3 Run mount /dev/sr0 /mnt/cdrom to mount the CD-ROM drive to the /mnt/cdrom directory.
In the command, sr0 indicates a CD-ROM path. Set the path based on actual situations.
[root@HW00001 ~]# mount /dev/sr0 /mnt/cdrom/
mount: block device /dev/sr0 is write-protected, mounting read-only
Step 4 Run vi /etc/yum.repos.d/rhel.repo to edit the rhel.repo file and add the HA component
directories to it.
Add the actual HA file paths to the file, as shown below.
[root@HW00001 yum.repos.d]# vi /etc/yum.repos.d/rhel.repo
[rhel-rpms]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
----End
Step 2 Turn off automatic NUMA balancing: run vi /etc/sysctl.d/sap_hana.conf to edit the file and
insert kernel.numa_balancing = 0 into it.
vi /etc/sysctl.d/sap_hana.conf
kernel.numa_balancing = 0
Step 4 To disable the usage of transparent hugepages at runtime, run
echo never > /sys/kernel/mm/transparent_hugepage/enabled.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
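Whether the change took effect can be verified by reading the same sysfs file back: the active value is the one shown in square brackets. A small check; the file path is a parameter so the function can be exercised against a copy of the file:

```shell
# Return success if the transparent_hugepage setting file shows
# [never] as the active value.
thp_is_never() {
    local f="${1:-/sys/kernel/mm/transparent_hugepage/enabled}"
    grep -q '\[never\]' "$f" 2>/dev/null
}

if thp_is_never; then
    echo "transparent hugepages are disabled"
fi
```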
Step 5 It is necessary to edit the OS bootloader configuration to make the setting persist after the next
system start: run vi /etc/default/grub and add transparent_hugepage=never
intel_idle.max_cstate=0 processor.max_cstate=0 to the line starting with GRUB_CMDLINE_LINUX.
[root@HW00011 home]# vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet transparent_hugepage=never intel_idle.max_cstate=0
processor.max_cstate=0"
GRUB_DISABLE_RECOVERY="true"
Step 6 Then, in the case of a non-UEFI configuration, activate the new configuration by running
grub2-mkconfig -o /boot/grub2/grub.cfg.
grub2-mkconfig -o /boot/grub2/grub.cfg
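On UEFI installations the generated configuration lives under /boot/efi instead. A sketch that picks the output path based on whether the firmware directory exists; the EFI path shown is the usual Red Hat location and is an assumption here, so verify it on your system:

```shell
# Choose the grub.cfg output path: UEFI systems expose
# /sys/firmware/efi, BIOS systems do not.
grub_cfg_path() {
    local efi_dir="${1:-/sys/firmware/efi}"
    if [ -d "$efi_dir" ]; then
        echo /boot/efi/EFI/redhat/grub.cfg
    else
        echo /boot/grub2/grub.cfg
    fi
}

# then: grub2-mkconfig -o "$(grub_cfg_path)"
```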
Step 8 Run vi /etc/systemd/logind.conf to edit the file, and insert the following line into it.
vi /etc/systemd/logind.conf
RemoveIPC=no
ln -s /usr/lib64/libldap-2.3.so.0 /usr/lib64/libldap.so.199
ln -s /usr/lib64/liblber-2.3.so.0 /usr/lib64/liblber.so.199
ln -s /usr/lib64/libssl.so.0.9.8e /usr/lib64/libssl.so.0.9.8
ln -s /usr/lib64/libssl.so.1.0.1e /usr/lib64/libssl.so.1.0.1
ln -s /usr/lib64/libcrypto.so.0.9.8e /usr/lib64/libcrypto.so.0.9.8
ln -s /usr/lib64/libcrypto.so.1.0.1e /usr/lib64/libcrypto.so.1.0.1
Step 12 As the <SID>adm user, run vi hdbenv.csh and remove the following line from the hdbenv.csh
file in the <SID>adm user's home directory:
Step 13 As the <SID>adm user, run vi hdbenv.sh and remove the following line from the hdbenv.sh
file in the <SID>adm user's home directory:
if [[ $RHEL -eq 6 ]]; then
LD_PRELOAD=/opt/rh/SAP/lib64/compat-sap-c++.so
export LD_PRELOAD
fi
----End
This section uses port bond2 (physical ports eth4 and eth5) as an example to describe how to configure
network IP addresses.
Step 1 Run nmcli con add type bond ifname bond0 mode balance-xor to create bond0.
Step 2 Run nmcli connection modify bond-bond0 ipv4.addresses 192.168.1.11/24 to set an IP
address for bond0.
Step 3 Run nmcli connection modify bond-bond0 ipv4.method manual.
Step 4 Run nmcli con add type bond-slave ifname ens1f0 master bond-bond0 to add a slave
Ethernet port. The port name ens1f0 varies according to the customer's environment.
Step 5 Run nmcli con add type bond-slave ifname ens2f1 master bond-bond0 to add a slave
Ethernet port. The port name ens2f1 varies according to the customer's environment.
Step 6 Run the nmcli connection show command to check information like the following:
default 10.10.1.254
According to the NFS network plan, the IP addresses of the NAS file shared networks can be
192.168.2.19 and 192.168.2.20. Therefore, HANA cluster nodes can be evenly mounted. For example,
nodes 1 to 4 are mounted to 192.168.2.19, and nodes 5 to 8 are mounted to 192.168.2.20.
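The even split described above can be computed from the node number. A sketch assuming this document's two shared-network addresses and eight nodes:

```shell
# Map a node number (1..8) to the NAS address it should mount:
# nodes 1-4 use 192.168.2.19, nodes 5-8 use 192.168.2.20.
nas_ip_for_node() {
    local n="$1"
    if [ "$n" -le 4 ]; then
        echo 192.168.2.19
    else
        echo 192.168.2.20
    fi
}

nas_ip_for_node 3   # 192.168.2.19
nas_ip_for_node 7   # 192.168.2.20
```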
Step 6 Make the NFS service mount automatically after the server restarts.
It takes some time after network services start for communication with the NAS
server to be established, and the NFS service cannot be successfully mounted
before then. To mount the NFS service automatically after network communication
is established and before the sapinit service starts, add the following
information to /etc/fstab.
[root@HW00002 network-scripts]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Aug 25 02:38:11 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ac27e19c-4437-4012-b280-50f7a928bbeb / ext4 defaults 1 1
UUID=8b80a44a-43bd-43af-83a5-93a2f36d1499 /boot ext4 defaults 1 2
UUID=ef1beb53-19aa-4023-ab1e-b41186b5cff9 /usr/sap ext4 defaults 1 2
UUID=c6d3f38f-0d43-4327-a1ac-b9ec51c0c55c swap swap defaults 0 0
192.168.2.19:/FileSystem_share /hana/shared/ nfs intr,nolock,nfsvers=3,timeo=10,rsize=1048576,wsize=1048576,_netdev 0 0
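A quick sanity check on the new entry is to confirm that every NFS line in fstab carries the _netdev option; without it, boot may attempt the mount before the network is up. A sketch that scans a fstab file, with the path parameterized so it can be tested against a copy:

```shell
# Return success only if every nfs entry in the given fstab file
# carries the _netdev mount option.
check_nfs_netdev() {
    local fstab="${1:-/etc/fstab}"
    awk '$3 == "nfs" && $4 !~ /_netdev/ { bad = 1 } END { exit bad }' "$fstab"
}

if check_nfs_netdev /etc/fstab 2>/dev/null; then
    echo "all nfs entries carry _netdev"
fi
```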
Step 4 On the CLI shown in Figure 3-57, press Esc, run :wq, and press Enter to save the
information.
----End
Step 3 Edit the multipath.conf file as follows: write the 5500 V3 disk array information to the file
without modifying other statements.
The World Wide Identifiers (WWIDs) of data_n and log_n are determined by the LUNs scanned in Step 7
in 2.2, combined with the LUN IDs generated in Step 9 in 2.4 Storage Space Configuration.
[root@hw00001]# vi /etc/multipath.conf
------------------------------------------------------------------
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy group_by_prio
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
prio alua
path_checker tur
path_selector "round-robin 0"
failback immediate
rr_min_io 1
}
}
Step 8 Update the WWIDs in the multipath.conf file based on the LUN WWIDs scanned in Step 7,
and add the sda WWID to the blacklist in multipath.conf.
3.9 Configuring grub Parameters
Step 9 Log in to the OS as user root.
Step 10 Run the vi /etc/default/grub command.
The information similar to that in Figure 3-55 is displayed.
Step 12 On the CLI shown in Figure 3-57, press Esc, run :wq, and press Enter to save the
information.
----End
The WWIDs are determined by combining the scanned LUNs with the LUN IDs generated in Step 9 in 2.4 Storage Space Configuration.
[root@hw00001]# vi /etc/multipath.conf
------------------------------------------------------------------
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy group_by_prio
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
prio alua
path_checker tur
path_selector "round-robin 0"
failback immediate
rr_min_io 1
}
}
multipaths {
multipath {
wwid 36fce33c100a68ff30003a50400000001
alias data_1
}
multipath {
wwid 36fce33c100a68ff30003b4b800000002
alias data_2
}
multipath {
wwid 36fce33c100a68ff30003c18900000003
alias data_3
}
multipath {
wwid 36fce33c100ad38740006401800000000
alias data_4
}
multipath {
wwid 36fce33c100ad387400064c5900000001
alias data_5
}
multipath {
wwid 36fce33c100ad38740006526100000002
alias data_6
}
multipath {
wwid 36fce33c100ad387400065c5600000003
alias data_7
}
multipath {
wwid 36fce33c100a68ff30003d9fb00000004
alias log_1
}
multipath {
wwid 36fce33c100a68ff30003daab00000005
alias log_2
}
multipath {
wwid 36fce33c100a68ff30003db6300000006
alias log_3
}
multipath {
wwid 36fce33c100ad38740006e76f00000004
alias log_4
}
multipath {
wwid 36fce33c100ad38740006e7c300000005
alias log_5
}
multipath {
wwid 36fce33c100ad38740006e86e00000006
alias log_6
}
multipath {
wwid 36fce33c100ad38740006e92000000007
alias log_7
}
}
blacklist {
wwid "360030130f090000001e3c3997c3c794a"
}
------------------------------------------------------------------
Step 14 Add the OS disk WWID to the multipath blacklist. Run ll /dev/disk/by-id/ | grep sda to get the
sda WWID, like this:
[root@hw00001]# ll /dev/disk/by-id/ |grep "sda"
lrwxrwxrwx 1 root root 9 Sep 18 15:23 scsi-36101b5442bcc70001f4ed57112e0c501 -> ../../sda
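The blacklist entry needs the bare WWID, that is, the by-id symlink name without its scsi- prefix. A sketch that extracts it; the by-id directory is a parameter so the function can be exercised against a copy, and sda is only an example device name:

```shell
# Print the WWID of the given disk by stripping the "scsi-" prefix
# from its /dev/disk/by-id symlink name.
disk_wwid() {
    local dev="$1" byid="${2:-/dev/disk/by-id}" link
    for link in "$byid"/scsi-*; do
        [ -L "$link" ] || continue
        if [ "$(basename "$(readlink "$link")")" = "$dev" ]; then
            basename "$link" | sed 's/^scsi-//'
            return 0
        fi
    done
    return 1
}

# e.g. on the node above: disk_wwid sda  ->  36101b5442bcc70001f4ed57112e0c501
```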
Step 15 Update the blacklist WWIDs in the multipath.conf file. Add the highlighted information
below to multipath.conf:
[root@hw00001]# vi /etc/multipath.conf
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy group_by_prio
prio alua
path_checker tur
path_selector "round-robin 0"
failback immediate
rr_min_io 1
}
}
blacklist {
wwid 36101b5442bcc70001f4ed57112e0c501
}
multipaths {
multipath {
wwid 36fce33c100ad387406524b5200000004
alias data_1
}
multipath {
wwid 36fce33c100ad387406524f2200000005
alias data_2
}
multipath {
wwid 36fce33c100ad387406525f4000000006
alias data_3
}
multipath {
wwid 36fce33c100a68ff30007e22f00000000
alias data_4
}
multipath {
wwid 36fce33c100a68ff30007e5a400000001
alias data_5
}
multipath {
wwid 36fce33c100a68ff30007e97a00000002
alias data_6
}
multipath {
wwid 36fce33c100a68ff3000804be00000003
alias data_7
}
multipath {
wwid 36fce33c100ad38740651c1ea00000001
alias log_1
}
multipath {
wwid 36fce33c100ad38740651e32a00000002
alias log_2
}
multipath {
wwid 36fce33c100ad38740651f7bf00000003
alias log_3
}
multipath {
wwid 36fce33c100a68ff3000807ee00000004
alias log_4
}
multipath {
wwid 36fce33c100a68ff300080b2500000005
alias log_5
}
multipath {
wwid 36fce33c100a68ff300080db400000006
alias log_6
}
multipath {
wwid 36fce33c100a68ff3000810a000000007
alias log_7
}
}
Step 17 Log in to the OS as user root on the other nodes, and repeat Step 1 to Step 11 to complete the
DM multipath service configuration.
Step 18 Create XFS file systems for the DM devices on one node.
Run the ll /dev/disk/by-id/ command to find the DM devices corresponding to the preceding
LUNs. For example, if the DM devices are dm-0, dm-1, dm-2, dm-3, ..., dm-13, run the
following commands:
mkfs.xfs -f -d agcount=60 /dev/dm-0
mkfs.xfs -f -d agcount=60 /dev/dm-1
mkfs.xfs -f -d agcount=60 /dev/dm-2
mkfs.xfs -f -d agcount=60 /dev/dm-3
mkfs.xfs -f -d agcount=60 /dev/dm-4
mkfs.xfs -f -d agcount=60 /dev/dm-5
mkfs.xfs -f -d agcount=60 /dev/dm-6
mkfs.xfs -f -d agcount=60 /dev/dm-7
mkfs.xfs -f -d agcount=60 /dev/dm-8
mkfs.xfs -f -d agcount=60 /dev/dm-9
mkfs.xfs -f -d agcount=60 /dev/dm-10
mkfs.xfs -f -d agcount=60 /dev/dm-11
mkfs.xfs -f -d agcount=60 /dev/dm-12
mkfs.xfs -f -d agcount=60 /dev/dm-13
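The fourteen identical commands above reduce to a single loop. The sketch below only prints each command (a dry run) so the device list can be reviewed first; remove the echo to actually format the devices. mkfs.xfs destroys all data on the target device, so confirm the dm device list before running.

```shell
# Dry run: print the mkfs.xfs command for each DM device dm-0..dm-13.
for i in $(seq 0 13); do
    echo mkfs.xfs -f -d agcount=60 "/dev/dm-$i"
done
```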
----End
4 Switch Configuration
The standard switches of the RH5885H V3+5500 V3 SAP HANA cluster are Huawei S6700
series switches. The entire cluster supports HANA internal communication and provides NAS
and upper-layer service ports using the 10GE Ethernet network. Two CE6800 switches need
to be stacked to form an HA network, as shown in Figure 4-1.
RH01_FC1_TO_B_H0 "10,2;10,6"
RH01_FC1_TO_B_H3 "10,3;10,6"
RH02_FC1_TO_A_H0 "10,0;10,7"
RH02_FC1_TO_A_H3 "10,1;10,7"
RH02_FC1_TO_B_H0 "10,2;10,7"
RH02_FC1_TO_B_H3 "10,3;10,7"
RH03_FC1_TO_A_H0 "10,0;10,8"
RH03_FC1_TO_A_H3 "10,1;10,8"
RH03_FC1_TO_B_H0 "10,2;10,8"
RH03_FC1_TO_B_H3 "10,3;10,8"
RH04_FC1_TO_A_H0 "10,0;10,9"
RH04_FC1_TO_A_H3 "10,1;10,9"
RH04_FC1_TO_B_H0 "10,2;10,9"
RH04_FC1_TO_B_H3 "10,3;10,9"
RH05_FC1_TO_A_H0 "10,0;10,10"
RH05_FC1_TO_A_H3 "10,1;10,10"
RH05_FC1_TO_B_H0 "10,2;10,10"
RH05_FC1_TO_B_H3 "10,3;10,10"
SNS2124_01 RH01_FC2_TO_A_H1 "20,0;20,6"
RH01_FC2_TO_A_H4 "20,1;20,6"
RH01_FC2_TO_B_H1 "20,2;20,6"
RH01_FC2_TO_B_H4 "20,3;20,6"
RH02_FC2_TO_A_H1 "20,0;20,7"
RH02_FC2_TO_A_H4 "20,1;20,7"
RH02_FC2_TO_B_H1 "20,2;20,7"
RH02_FC2_TO_B_H4 "20,3;20,7"
RH03_FC2_TO_A_H1 "20,0;20,8"
RH03_FC2_TO_A_H4 "20,1;20,8"
RH03_FC2_TO_B_H1 "20,2;20,8"
RH03_FC2_TO_B_H4 "20,3;20,8"
RH04_FC2_TO_A_H1 "20,0;20,9"
RH04_FC2_TO_A_H4 "20,1;20,9"
RH04_FC2_TO_B_H1 "20,2;20,9"
RH04_FC2_TO_B_H4 "20,3;20,9"
RH05_FC2_TO_A_H1 "20,0;20,10"
RH05_FC2_TO_A_H4 "20,1;20,10"
RH05_FC2_TO_B_H1 "20,2;20,10"
RH05_FC2_TO_B_H4 "20,3;20,10"
For details about the zone configuration commands of the SNS2124 FC switch, see SNS2124
Switch User Guide.
Before installing an SAP HANA database, configure the required basic services for the HANA
nodes to ensure that the SAP HANA database can be installed successfully. In the operating
environment of the RH5885H V3+5500 V3 SAP HANA, RH5885 servers are used as the NTP
and DNS servers. If an NTP server and DNS servers already exist on the live network, use the
live network environment directly; configuration file parameters on the server need to be
modified as required.
You are advised to configure an NTP server independent of the cluster for time synchronization
between each server and the storage.
Step 1 Determine the NTP server, for example, the RH2288 whose IP address is 192.168.34.8 and
host name is WIN-EJMB5JAMNVN.
Step 2 Configure the NTP server configuration file. For details, see A ntp.conf.
Note that the configuration parameters need to be modified based on the IP address of the live
network.
Step 3 Enable the NTP service on the server.
chkconfig ntpd on
Step 4 Add the host name of the NTP server to the hosts file in /etc of each node.
192.168.34.8 WIN-EJMB5JAMNVN WIN-EJMB5JAMNVN
----End
You are advised to configure an NTP server independent of the cluster for time synchronization
between each server and the storage. If there is no independent NTP server, you can use a node in
the cluster (the standby node is recommended) as the NTP server. If that node becomes faulty and
cannot synchronize with the other nodes, the running of the cluster database will be affected.
The IP address of the NTP server (standby node, using the internal network of the HANA
cluster database) is 192.168.1.18.
Perform the following steps on the standby node:
Step 1 Configure the NTP service to start upon OS startup.
[root@hw00008 ~]# chkconfig ntpd on
Step 2 Configure the configuration file on the NTP server to allow the NTP clients of specified IP
addresses to automatically synchronize time with the NTP server.
[root@hw00008 ~]# vi /etc/ntp.conf
----------------------------------------------------------------------------------
……
##
## Miscellaneous stuff
##
restrict 192.168.1.11 nomodify
restrict 192.168.1.12 nomodify
…
restrict 192.168.1.17 nomodify
……
----------------------------------------------------------------------------------
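The per-client restrict lines can likewise be generated for all worker nodes instead of being typed one by one. A sketch following this document's address plan (nodes 192.168.1.11 through 192.168.1.17); adjust the range to your own plan before appending the output to ntp.conf:

```shell
# Print one ntp.conf restrict line per cluster node IP.
gen_restrict_lines() {
    local i
    for i in $(seq 11 17); do
        printf 'restrict 192.168.1.%d nomodify\n' "$i"
    done
}

gen_restrict_lines    # append the output to /etc/ntp.conf on the NTP server
```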
ntpq -p
----End
Step 3 Run the vi /etc/ntp.conf command to open the NTP configuration file ntp.conf.
#vi /etc/ntp.conf
HW00001:/hana/shared # vi /etc/ntp.conf
# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust
##
## Miscellaneous stuff
##
driftfile /var/lib/ntp/drift/ntp.drift # path for drift file
logfile /var/log/ntp # alternate log file
# logconfig =syncstatus + sysevents
# logconfig =all
# statsdir /tmp/ # directory for statistics files
# filegen peerstats file peerstats type day enable
# filegen loopstats file loopstats type day enable
# filegen clockstats file clockstats type day enable
#
# Authentication stuff
#
keys /etc/ntp.keys # path for keys file
trustedkey 1 # define trusted keys
requestkey 1 # key (7) for accessing server variables
# controlkey 15 # key (6) for accessing server variables
restrict 127.0.0.1
restrict -6 ::1
server 192.168.20.9
Step 8 Run the systemctl restart ntpd.service command to restart the NTP service.
systemctl restart ntpd.service
Step 9 Run the ntpq -p command to check the NTP running status.
#ntpq -p
Example:
HW00001: # ntpq -p
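In the ntpq -p output, the peer that ntpd is currently synchronized to is flagged with a leading asterisk. The following sketch picks that peer out of the output; the sample text is illustrative, not captured from this cluster:

```shell
# Sketch: extract the currently-selected peer from "ntpq -p" output.
# The selected peer's line starts with '*'; strip that flag to get the IP.
sample='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 192.168.1.18    .INIT.          16 u    -   64    0    0.000    0.000   0.000
*192.168.34.8    LOCAL(0)         6 u   32   64  377    0.210    0.043   0.011'
printf '%s\n' "$sample" | awk '/^\*/ {print substr($1,2)}'   # prints: 192.168.34.8
```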
Step 4 In the /var/lib/named directory, create the site.zone and 192.168.255.zone files.
The HANA cluster can be connected to the DNS through an uplink service port or a service management
port (recommended).
In this document, the host names of cluster service management ports range from HW00001MG to
HW00008MG, and corresponding IP addresses range from 192.168.34.61 to 192.168.34.68.
----End
Step 2 Add the following information (IP addresses and host names of all HANA nodes and the DNS)
to the hosts file in /etc:
192.168.34.8 WIN-EJMB5JAMNVN WIN-EJMB5JAMNVN
----End
6 Performance Optimization
Step 2 To make the configuration take effect automatically upon system startup, add the following
commands to the boot.local file in /etc/init.d.
[root@hw00001]# cat /etc/init.d/boot.local
#! /bin/sh
#
# Copyright (c) 2002 SuSE Linux AG Nuernberg, Germany. All rights reserved.
#
# Author: Werner Fink <werner@suse.de>, 1996
# Burchard Steinbild, 1996
#
# /etc/init.d/boot.local
#
# script with local commands to be executed from init on system startup
#
# Here you should add things, that should happen directly after booting
# before we're going to the first run level.
#
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
Step 3 After the system is started, run the following commands to check whether THP is disabled.
[root@hw00001]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@hw00001]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
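The bracketed entry in the output marks the active THP mode. A small helper (a sketch, not part of any product tooling) can extract it so a post-boot script can assert that THP is really disabled; on a live node you would feed it the contents of /sys/kernel/mm/transparent_hugepage/enabled:

```shell
# Sketch: extract the active THP mode from the kernel's bracketed format,
# e.g. "always madvise [never]" -> "never". Pure string handling, so the
# helper is testable without access to /sys.
active_thp_mode() {
  printf '%s\n' "$1" | grep -o '\[[a-z]*\]' | tr -d '[]'
}
active_thp_mode "always madvise [never]"   # prints: never
```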
In the preinstalled environment, the RH5885H V3/RH8100 V3 hardware uses the default configuration. You
need to run only the xxxx/enabled commands.
----End
Step 2 On the CLI, run the following command to make the configuration take effect:
sysctl -p
----End
net.core.optmem_max = 16777216
net.core.wmem_default = 16777216
net.core.rmem_default = 16777216
net.core.netdev_max_backlog = 500000
net.core.rps_sock_flow_entries = 4194304
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_sack = 0
net.ipv4.tcp_tso_win_divisor = 32
net.ipv4.conf.all.promote_secondaries = 0
net.ipv4.conf.default.promote_secondaries = 0
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.bond0.rp_filter = 1
net.ipv4.conf.bond0.accept_source_route = 0
net.ipv4.conf.bond0.promote_secondaries = 0
Step 2 Run the following command to make the configuration take effect:
sysctl -p
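Before applying the settings, it can help to double-check exactly what sysctl -p will load. The helper below is an illustrative sketch (the fragment file is a temporary example, not the real /etc/sysctl.conf) that normalizes the key=value pairs:

```shell
# Sketch: normalize key=value pairs from a sysctl.conf-style fragment,
# skipping comment lines, so the values to be applied can be reviewed.
parse_sysctl() {
  grep -v '^[[:space:]]*#' "$1" |
    awk -F'=' 'NF==2 {gsub(/ /,"",$1); gsub(/ /,"",$2); print $1"="$2}'
}
frag=$(mktemp)
printf 'net.ipv4.tcp_timestamps = 0\n# comment\nnet.ipv4.tcp_sack = 0\n' > "$frag"
parse_sysctl "$frag"
```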
Step 3 For port bond0 in the cluster (for example, consisting of physical ports eth6 and eth9), add the
following commands to /etc/rc.d/boot.local to make the configuration take effect
automatically:
ethtool -C eth6 rx-usecs 25
ethtool -C eth9 rx-usecs 25
ethtool -G eth6 rx 4096 tx 4096
ethtool -G eth9 rx 4096 tx 4096
service irqbalance stop
----End
Step 2 Add the following commands to /etc/init.d/mountinit created in 3.9 to make the
configuration take effect upon startup:
for sd in `ls -lha /dev/disk/by-id/scsi-data_* | awk -F '/' '{print $NF}'`; do echo noop >
/sys/block/$sd/queue/scheduler; done
for sd in `ls -lha /dev/disk/by-id/scsi-log_* | awk -F '/' '{print $NF}'`; do echo noop >
/sys/block/$sd/queue/scheduler; done
----End
This document uses SAP HANA SP12 (1.00.122.05.1481577062) as an example to describe SAP HANA
database installation.
The installation mode of the latest SAP Host Agent software has changed. After decompressing the
package, run the RPM package installation command to install the SAP Host Agent.
For the latest HANA database versions, install the SAP Host Agent by setting the corresponding
database installation command parameters.
Storage HA configuration:
partition_1_data__wwid indicates the WWID (starting with 3) of Data_01,
partition_1_log__wwid indicates the WWID (starting with 3) of Log_01, and the other
parameters follow the same rule. partition_*_data__mountoptions and
partition_*_log__mountoptions are mount parameters.
###########################################################################
# Storage HA configuration
###########################################################################
# .short_desc
# name of python HA provider script
[storage]
ha_provider = hdb_ha.fcClient
# ha_provider_path = /hana/shared/
# these parameters name the WWIDs of the devices for each partition/usage_type combination
# if you have more nodes, add your LUNs here.
# for proper usage, replace the '...' with specified WWID in your system.
partition_*_*__prType = 5
partition_1_data__wwid = data_1
partition_1_log__wwid = log_1
partition_2_data__wwid = data_2
partition_2_log__wwid = log_2
partition_3_data__wwid = data_3
partition_3_log__wwid = log_3
…
partition_7_data__wwid = data_7
partition_7_log__wwid = log_7
…
#Set mount parameters.
partition_*_data__mountoptions = -o noatime,nodiratime
partition_*_log__mountoptions = -o noatime,nodiratime
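The partition_N entries follow a strict naming pattern, so the block for N partitions can be generated instead of typed by hand. The sketch below uses the same data_N/log_N placeholders as the listing above; they stand in for the real WWIDs (which start with 3):

```shell
# Sketch: emit the global.ini storage entries for N partitions.
# data_N / log_N are placeholders, to be replaced with real WWIDs.
emit_storage_cfg() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    echo "partition_${i}_data__wwid = data_${i}"
    echo "partition_${i}_log__wwid = log_${i}"
    i=$((i + 1))
  done
}
emit_storage_cfg 3
```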
Persistence configuration:
Change the SID in the following example to the actual database SID, for example, ANA.
###########################################################################
# Persistence configuration
###########################################################################
[persistence]
basepath_datavolumes=/hana/data/ANA/
basepath_logvolumes=/hana/log/ANA/
basepath_shared=yes
Communication:
Change the IP addresses and host names in the following example to the actual internal
network IP addresses and host names.
###########################################################################
# Communication
###########################################################################
[communication]
listeninterface = .global
[internal_hostname_resolution]
192.168.1.11 = NODE01
192.168.1.12 = NODE02
192.168.1.13 = NODE03
…
192.168.1.18 = NODE08
This section uses SAP HANA SP10_100.0 as an example to describe how to install a database on the
master node.
Run the cd command to switch to the directory where the SAP HANA database file is saved.
The following command is an example (the password and parameters vary according to the
actual situation):
HW00001: /hana/shared/51049354/DATA_UNITS/HDB_LCM_LINUX_X86_64/ # ./hdblcm
--action=install --sid=ANA --number=00 --sapmnt=/hana/shared/
--storage_cfg=/hana/shared/ --root_user=root --autostart=on --restrict_max_mem=off
--max_mem=0 --logpath=/hana/log/ANA --nostart=off --datapath=/hana/data/ANA
--shell=/bin/sh --hostname=NODE01 --remote_execution=ssh --install_hostagent=on
--db_mode=singledb --install_ssh_key=on
--addhosts=NODE02:role=worker:group=default:storage_partition=2,NODE03:role=worker:
group=default:storage_partition=3,NODE04:role=worker:group=default:storage_partitio
n=4,NODE05:role=worker:group=default:storage_partition=5,NODE06:role=worker:group=d
efault:storage_partition=6,NODE07:role=worker:group=default:storage_partition=7,NOD
E08:role=standby:group=default --password=Huawei123 --system_user_password=Huawei123
--root_password=Huawei123 --internal_network=192.168.1.0/24
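The --addhosts value concatenates one host:role=...:group=...[:storage_partition=N] entry per node, separated by commas. The following sketch shows how the example string above is composed (NODE02 to NODE08 are the example's host names):

```shell
# Sketch: compose the --addhosts value for six workers plus one standby,
# mirroring the example command above.
build_addhosts() {
  addhosts=""
  for i in 2 3 4 5 6 7; do
    addhosts="${addhosts}NODE0${i}:role=worker:group=default:storage_partition=${i},"
  done
  printf '%s' "${addhosts}NODE08:role=standby:group=default"
}
build_addhosts
```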
The installation procedure is similar to the following (parameter options are specified by the
customer):
SAP HANA Lifecycle Management - SAP HANA 1.00.122.05.1481577062
***************************************************************
-----------------------------------------------------------------------------------
1 | server | No additional components
2 | all | All components
3 | client | Install SAP HANA Database Client version 1.00.122.05.1481577062
4 | afl | Install SAP HANA AFL (Misc) version 1.00.122.05.1481577062
5 | lcapps | Install SAP HANA LCAPPS version 1.00.122.05.1481577062
6 | smartda | Install SAP HANA Smart Data Access version 1.00.4.004.0
7 | studio | Install SAP HANA Studio version 2.1.4.000000
8 | trd | Install SAP TRD AFL FOR HANA version 1.00.122.05.1481577062
For HANA database applications, select only the first and third options (server and client) by
default. You can select the second option (all) to install all components based on customer
requirements.
Select a system environment type, such as production, test, development, or custom, based
on customer requirements.
node02
Role: Database Worker (worker)
High-Availability Group: default
Storage Partition: 2
node03
Role: Database Worker (worker)
High-Availability Group: default
Storage Partition: 3
node08
Role: Database Standby (standby)
High-Availability Group: default
Storage Partition: N/A
node06
Role: Database Worker (worker)
High-Availability Group: default
Storage Partition: 6
node04
Role: Database Worker (worker)
High-Availability Group: default
Storage Partition: 4
node07
Role: Database Worker (worker)
High-Availability Group: default
Storage Partition: 7
Installing components...
Installing SAP HANA Database...
Preparing package 'Saphostagent Setup'...
Preparing package 'Python Support'...
Preparing package 'Python Runtime'...
Preparing package 'Product Manifest'...
Preparing package 'Binaries'...
Preparing package 'Installer'...
Preparing package 'Ini Files'...
Preparing package 'HWCCT'...
Preparing package 'Emergency Support Package'...
Preparing package 'EPM'...
Preparing package 'Documentation'...
Preparing package 'Delivery Units'...
Preparing package 'DAT Languages'...
Preparing package 'DAT Configfiles'...
Creating System...
Extracting software...
Installing package 'Saphostagent Setup'...
Installing package 'Python Support'...
Installing package 'Python Runtime'...
Installing package 'Product Manifest'...
Installing package 'Binaries'...
Installing package 'Installer'...
Installing package 'Ini Files'...
Installing package 'HWCCT'...
Installing package 'Emergency Support Package'...
When the message "Creating System..." is displayed, run the following commands on each
node:
ll /usr/sap
This section uses SAP HANA SP10_100.0 as an example to describe how to perform database
installation on nodes one by one.
Master Node
HW00001:/hana/shared/51049932_SP10_100/DATA_UNITS/HDB_LCM_LINUX_X86_64 # ./hdblcm
--action=install --sid=ANA --number=00 --sapmnt=/hana/shared/
--storage_cfg=/hana/shared/ --root_user=root --autostart=on --restrict_max_mem=off
--max_mem=0 --logpath=/hana/log/ANA --nostart=off --datapath=/hana/data/ANA
--shell=/bin/sh --hostname=NODE01 --remote_execution=ssh --install_hostagent=on
--db_mode=singledb --install_ssh_key=on --password=Huawei123
--system_user_password=Huawei123 --root_password=Huawei123
--internal_network=192.168.1.0/24
-----------------------------------------------------------------------------------
1 | server | No additional components
2 | all | All components
For HANA database applications, select only the first and third options (server and client) by
default. You can select the second option (all) to install all components based on customer
requirements.
Select a system environment type, such as production, test, development, or custom, based
on customer requirements.
Installing components...
Installing SAP HANA Database...
Preparing package 'Saphostagent Setup'...
Preparing package 'Python Support'...
Preparing package 'Python Runtime'...
Preparing package 'Product Manifest'...
Preparing package 'Binaries'...
Preparing package 'Installer'...
Preparing package 'Ini Files'...
Preparing package 'HWCCT'...
Preparing package 'Emergency Support Package'...
Preparing package 'EPM'...
Preparing package 'Documentation'...
Preparing package 'Delivery Units'...
Preparing package 'DAT Languages'...
Preparing package 'DAT Configfiles'...
Creating System...
Extracting software...
Installing package 'Saphostagent Setup'...
Installing package 'Python Support'...
Installing package 'Python Runtime'...
Installing package 'Product Manifest'...
Installing package 'Binaries'...
Installing package 'Installer'...
Installing package 'Ini Files'...
When the message "Creating System..." is displayed, run the following commands on each
node:
ll /usr/sap
Slave Node
HW00002:/hana/shared/ANA/global/hdb/install/bin # ./hdbaddhost --hostname=NODE02
--install_hostagent --role=worker --storage_partition=2 --group=default
Standby Node
HW00008:/hana/shared/ANA/global/hdb/install/bin # ./hdbaddhost --hostname=NODE08
--install_hostagent --role=standby --group=default
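On each slave node, the hdbaddhost call differs only in the host name and storage partition number. The following sketch just prints the per-node commands for the example cluster; each command must be run on its own node:

```shell
# Sketch: print the hdbaddhost command for every non-master node in the
# example cluster (workers 2-7 plus the standby). This only echoes the
# commands; it does not execute them.
print_addhost_cmds() {
  for i in 2 3 4 5 6 7; do
    echo "./hdbaddhost --hostname=NODE0${i} --install_hostagent --role=worker --storage_partition=${i} --group=default"
  done
  echo "./hdbaddhost --hostname=NODE08 --install_hostagent --role=standby --group=default"
}
print_addhost_cmds
```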
You can run the following commands to start or stop the database:
HW00001:/usr/sap/ANA/HDB00> HDB start
HW00001:/usr/sap/ANA/HDB00> HDB stop
Log in to any of the HANA nodes, and run the following commands as the <sid>adm user to
start the HANA database:
hw00001:~ # su - s00adm
s00adm@hw00001:/usr/sap/S00/HDB00> HDB start
OS command line
Run the following commands on any node to invoke the HANA cluster service verification
program:
[root@HW00001 ~]# su - anaadm
anaadm@HW00011:/usr/sap/ANA/HDB00> cdpy
anaadm@HW00011:/usr/sap/ANA/HDB00/exe/python_support> python
landscapeHostConfiguration.py
HANA Studio
Log in to the HANA Studio, and go to Overview or Landscape > Hosts to check the
HANA database status.
HDB Admin
Run the following commands on any node to invoke the HANA cluster service verification
program:
HW00001:/ # su - anaadm
HW00001: /usr/sap/ANA/HDB00> export DISPLAY=192.168.32.50:0.0
HW00001: /usr/sap/ANA/HDB00> HDB admin
1. 192.168.32.50 is the IP address of the PC used to remotely log in to the service
management port on a node.
2. Xmanager must be installed on the PC.
WARNING
Before uninstalling an SAP HANA database, ensure that you have obtained the customer's
approval and that the customer's (cluster) services have been stopped and backed up.
WARNING
Before uninstalling an SAP HANA database, ensure that key data of the database has been
backed up.
/hana/shared/ANA/global/hdb/install/bin
The installation path of an SAP HANA database is different from its uninstallation path. The
uninstallation path is under the installation instance directory.
This section uses SAP HANA SP10_100.0 as an example to describe how to uninstall an SAP HANA
database in one-click mode.
HW00001:/hana/shared/ANA/global/hdb/install/bin # ./hdbuninst
Confirm (y/n): y
Uninstalling SAP HANA Database...
Removing SAP HANA Database instance...
Uninstalling hosts...
Uninstalling host 'node06'...
Uninstallation of host 'node06' done.
Uninstalling host 'node04'...
Uninstallation of host 'node04' done.
Uninstalling host 'node07'...
Uninstallation of host 'node07' done.
Uninstalling host 'node05'...
Uninstallation of host 'node05' done.
Uninstalling host 'node03'...
Uninstallation of host 'node03' done.
Uninstalling host 'node02'...
Uninstallation of host 'node02' done.
Uninstalling host 'node08'...
----End
This section uses SAP HANA SP10_100.0 as an example to describe how to perform database
uninstallation on nodes one by one.
HDB00
version: 1.00.122.05.1481577062
Confirm (y/n): y
Uninstalling SAP HANA Database...
Removing SAP HANA Database instance...
Uninstallation of SAP HANA System is not yet finished.
To complete uninstallation, run 'hdbuninst --scope=instance' on host: hw00002
(worker), hw00003 (worker), hw00004 (worker),hw00005 (worker), hw00006 (worker), hw00007
(worker),hw00008 (standby)
Uninstallation done.
Log file written to '/var/tmp/hdb_ANA_uninstall_2015-08-05_09.07.06/hdbuninst.log' on
host 'hw00001'.
A ntp.conf
The following is the content of the ntp.conf configuration file.
The highlighted parameters need to be modified based on the customer's environment.
################################################################################
## /etc/ntp.conf
##
## Sample NTP configuration file.
## See package 'ntp-doc' for documentation, Mini-HOWTO and FAQ.
## Copyright (c) 1998 S.u.S.E. GmbH Fuerth, Germany.
##
## Author: Michael Andres, <ma@suse.de>
## Michael Skibbe, <mskibbe@suse.de>
##
################################################################################
##
## Radio and modem clocks by convention have addresses in the
## form 127.127.t.u, where t is the clock type and u is a unit
## number in the range 0-3.
##
## Most of these clocks require support in the form of a
## serial port or special bus peripheral. The particular
## device is normally specified by adding a soft link
## /dev/device-u to the particular hardware device involved,
## where u correspond to the unit number above.
##
## Generic DCF77 clock on serial port (Conrad DCF77)
## Address: 127.127.8.u
## Serial Port: /dev/refclock-u
##
## (create soft link /dev/refclock-0 to the particular ttyS?)
##
# server 127.127.8.0 mode 5 prefer
##
## Undisciplined Local Clock. This is a fake driver intended for backup
## and when no outside source of synchronized time is available.
##
server 127.127.1.0
broadcastdelay 0.008
driftfile /var/lib/ntp/drift/ntp.drift # path for drift file
logfile /var/log/ntp # alternate log file
# logconfig =syncstatus + sysevents
# logconfig =all
#
# Authentication stuff
#
keys /etc/ntp.keys # path for keys file
trustedkey 1 # define trusted keys
requestkey 1 # key (7) for accessing server variables
restrict 127.0.0.1
restrict 192.168.40.0 mask 255.255.255.0 nomodify
restrict 198.168.1.230
##
## Add external Servers using
## # rcntp addserver <yourserver>
##
##
## Miscellaneous stuff
##
B global.ini
The following is the content of the global.ini configuration file.
The highlighted parameters need to be modified based on the customer's environment.
# .short_desc
# Global landscape configuration
# .full_desc
# This configuration file describes global parameters for each service in the
# landscape.
# .file
###############################################################################
# Persistence configuration
###############################################################################
# .short_desc
# Configuration of persistence
# .full_desc
# This section contains various parameters which are related to configuration
# of data and log location as well as data and log backup.
[persistence]
# .short_desc
# Base path for data volumes
# .full_desc
# All data volumes will be stored under this path.
# .type path
# .change offline
# basepath_datavolumes=$(DIR_GLOBAL)/hdb/data
basepath_datavolumes=/hana/data/ANA/
# .short_desc
# Base path for log volumes
# .full_desc
# All log volumes will be stored under this path.
#
# \see \ref logger
# .type path
# .change offline
basepath_logvolumes=/hana/log/ANA/
basepath_shared=yes
# .short_desc
# Directory layout of volumes
# .full_desc
# Determines whether there should be an extra subpath between base path and
# volumes or not
# .type path
# .change offline
use_mountpoints = yes
# .short_desc
# Data backup path
# .full_desc
# Data backups will be stored in this directory.
# .type path
# .change offline
basepath_databackup=$(DIR_INSTANCE)/backup/data
# .short_desc
# Log backup path
# .full_desc
# Log backups will be stored in this directory.
# .type path
# .change offline
basepath_logbackup=$(DIR_INSTANCE)/backup/log
# .short_desc
# Enable automatic log backup (IN DEVELOPMENT)
# .full_desc
# Automatic log backup is permanently backing up closed log segments of the
# database. Generated backups will be stored in
# \ref param_persistence_basepath_logbackup.
#
# \see \ref param_persistence_log_backup_timeout_s, \ref logger
# .type bool
# .change offline
enable_auto_log_backup=yes
# .short_desc
# Checksum algorithm to use for writing out data pages and log
# .full_desc
# This parameter defines which checksum algorithm will be used to write
# newly-modified data pages to the disk. Data pages already on the disk will
# not get new checksum. Similarly, new log buffers will be written using this
# checksum algorithm.
#
# Following checksum algorithms are available:
# - CRC32 - CRC32 over whole page (default, faster than ADLER32 for CPUs
#   with CRC32 instruction)
# - ADLER32 - Adler checksum over whole page (faster than CRC32 on CPUs
#   lacking CRC32 instruction)
# - CRC32_SPARSE - CRC32 over first 64 bytes in each 512 byte block (to
#   speed up checksumming, unsafe)
# - NULL - checksum off (completely unsafe)
#
# It is strongly recommended to use CRC32 to checksum whole pages and log buffers.
# .change online
# .range CRC32,ADLER32,CRC32_SPARSE,NULL
# .dev
checksum_algorithm=CRC32
# .short_desc
# Savepoint interval
# .full_desc
# Sets savepoint interval. Setting to 0 will disable the savepoint for testing
# purposes (e.g., log I/O performance tests; DO NOT USE for productive settings).
#
# Savepoint interval controls how often the internal buffers are flushed to
# the disk and a restart record is written. Upon restart after a power failure
# or crash, the log since the last savepoint needs to be replayed. Thus, this
# parameter indirectly controls restart time.
#
# \see \ref pers_u_savepoint, \ref pers_u
# .type integer
# .unit second
# .range 0,10-7200
# .change online
savepoint_interval_s = 300
# .short_desc
# Maximal number of job execution threads used by garbage collection
# .full_desc
# Sets the number of maximal parallel executed garbage collection jobs.
# A value of 0 will cause the maximum number of threads be set to a default value
# equal to the actual number of logical CPUs (up to a maximum of 256).
#
# Decreasing number too much can lead to "database full" situations because
# historical data may grow faster than garbage collection is able to clean up.
#
# Garbage collection uses job executer threads for execution. Therefore number
# of threads used may depend as well on executer configuration.
# Only an upper limit of threads used can be specified using this parameter.
#
# .type integer
# .range 0-256
# .change offline
max_gc_parallelity = 0
# .short_desc
# Number of recovery queues to use
# .full_desc
# Sets the number of parallel recovery queues to speed up database log replay.
# Value 0 signifies to use number of CPUs (up to a maximum of 64).
#
# Increasing recovery queue count also increases memory demand for various
# control structures and possibly increases synchronization overhead in higher
# layers during recovery, resulting in higher CPU usage per log amount processed
# (which amortizes itself by using more CPUs in parallel). Since the log replay
# is normally I/O bound, default settings should be sufficient.
#
# \see \ref logger, \ref pers_u
# .type integer
# .range 0-64
# .change offline
recovery_queue_count = 0
# .short_desc
# Log mode
# .full_desc
# Sets logging mode. Following logging modes are supported:
# - <b>normal</b>: normal mode, log segments must be backed up (default for
#   HANA DB 1.0 SPS03+),
# - <b>overwrite</b>: overwrite mode, log segments are freed by the savepoint (e.g.,
# useful for test installations without backup/recovery),
# - <b>legacy</b>: legacy HANA 1.0 pre-SPS03 mode, segments will be kept until full
# backup is executed to allow recovery from full backup + log in the log
# area.
#
# You can optionally release free log files explicitly (e.g., after backup
# in log mode legacy or after savepoint in other log modes) using
# \ref sql_reclaim_log SQL command.
#
# \see \ref logger
# .type enum
# .range normal,overwrite,legacy
# .change offline
log_mode=normal
# .short_desc
# Log segment size in megabytes
# .full_desc
# Sets one log segment size in megabytes.
#
# A segment is backup/recovery
# and restart unit. Only whole segments are considered there, thus increasing
# the segment size may lead to longer restart times, since even after correct
# shutdown, a complete log segment must be read at restart (to be optimized).
#
# \note After changing this parameter online, it will only affect new segments.
# I.e., current segment will be finished as-is and any new or reused
# segments will be set to this new size. You can force closing current
# segment for instance by forcing log segment backup, e.g., using
# management_console command \ref pgm_console_log "log backup".
#
# \see \ref param_persistence_log_buffer_size_kb,
# \ref param_persistence_log_buffer_count,
# \ref param_persistence_log_preformat_segment_count, \ref logger
# .type integer
# .unit MB
# .range 8-4096
# .change online
log_segment_size_mb=1024
# .short_desc
# Size of one in-memory buffer in kilobytes
# .full_desc
# Sets size of one in-memory log buffer in kilobytes.
#
# Setting higher buffer size may increase throughput at the cost of COMMIT
# latency. During COMMIT of a transaction, at most this much data must be
# flushed to the I/O subsystem (provided all preceding buffers are already
# flushed).
#
# \see \ref param_persistence_log_segment_size_mb,
# \ref param_persistence_log_buffer_count, \ref logger
# .type integer
# .unit KB
# .range 128-16384
# .change online
log_buffer_size_kb=1024
# .short_desc
# Count of in-memory buffers per log partition
# .full_desc
# Sets count of log buffers per physical partition.
#
# Increasing this parameter will allow buffering an additional peak load at
# the cost of possibly increasing latency of parallel short transactions.
# If the I/O subsystem allows highly-parallel writes, you might consider
# increasing this parameter to allow better throughput for large
# transactions.
#
# \see \ref param_persistence_log_buffer_size_kb,
# \ref param_persistence_log_segment_size_mb, \ref logger
# .type integer
# .range 4-128
# .change offline
log_buffer_count=8
# .short_desc
# Log segment backup timeout in seconds
# .full_desc
# Sets log backup timeout in seconds (0 = disabled).
#
# Log backup timeout specifies, how much time may pass since a
# COMMIT until the log segment containing this COMMIT is put into log segment
# backup queue. In case this amount of time passes before the segment is
# full, the segment will be closed prematurely and put to the log segment
# backup queue. Thus, the administrator may indicate how much work can get
# lost in case of catastrophic failure (backup timeout + actual log segment
# backup time).
#
# \see \ref param_persistence_enable_auto_log_backup, \ref logger
# .type integer
# .unit second
# .change online
log_backup_timeout_s=900
# .short_desc
# Number of log segments to preformat in each partition at the initialization
# .full_desc
# Sets count of log segments to preformat at the startup of the database, when
# using directory-based log partitions.
#
# Normally, log segments are preformatted on-demand, so there is no need to
# change this parameter, except maybe for performance tests to make sure
# there is no logging slowdown due to preformatting of log segments. You have
# to wait with the test, until the segments are preformatted, to get the full
# speed.
#
# \see \ref param_persistence_log_segment_size_mb, \ref logger
# .type integer
# .change offline
log_preformat_segment_count=2
# .short_desc
# Number of log entries per log replay step
# .full_desc
# Sets the number of log entries that are processed in one log replay step
# during log recovery.
#
# This parameter sets how many log entries the master index server is ahead
# of the slave servers during log replay in case of log recovery.
#
# .type integer
# .range 64-2147483648
# .change offline
log_replay_step_size=1073741824
# .short_desc
# Handle page corruptions
# .full_desc
# This parameter decides how to deal with page corruptions. Following ways are supported:
# - <b>ignore</b>: Ignore error (if possible) NOT RECOMMENDED FOR PRODUCTIVE SYSTEMS
# - <b>exception</b>: Throw exception, upper layers decide how to handle this.
# - <b>crash</b>: Crash
#
# .type enum
# .range ignore,exception,crash
# .change online
handle_corrupt_pages=ignore
# .short_desc
# Retry corrupted pages
# .full_desc
# This parameter decides if the PageIO layer tries to reload corrupted pages.
#
# .type bool
# .change online
retry_corrupt_pages=true
# .short_desc
# Dump corrupted pages
# .full_desc
# This parameter decides if corrupted pages should be dumped to the
# instance's trace directory.
# If \ref retry_corrupt_pages is TRUE and a retry is successful, this page is also dumped.
#
# .type bool
# .change online
dump_corrupt_pages=true
# .short_desc
# Write runtime dump for corrupted pages
# .full_desc
# This parameter decides if a runtime dump (suffix "page") should be written
# when encountering a corrupted page.
#
# .type bool
# .change online
runtimedump_corrupt_pages=true
# .short_desc
# Initialize pages with pattern for read
# .full_desc
# This parameter decides if a page that is to be read from disk is initialized
# with a memory pattern. Setting this parameter to true comes with a certain
# performance penalty.
#
# .type bool
# .change online
initialize_pages_before_read=false
# .short_desc
# Data volume encryption
# .full_desc
# Defines if the data volume will be encrypted
# .type bool
# .change online
data_encryption=false
# .short_desc
# M_DISKS summation logic
# .full_desc
# Defines how the M_DISKS view handles the storage configuration of data and log:
# auto: guessing logic, which concludes disk IDs from /proc/mounts
# shared: same ID for all storages, because the data and log storage is
#   shared across all hosts
# nonshared: distinct ID for each storage, because each host has its own storage partition
# .type string
# .change online
m_disks_summation_logic = auto
###############################################################################
# Basis configuration
###############################################################################
# .short_desc
# Various parameters which are related to configuration of how threads behave.
[threads]
# .short_desc
# Default stack size for newly-generated threads.
# .type integer
# .unit KB
# .range 128-16384
# .change online
default_stack_size_kb = 1024
# .short_desc
# Default stack size for newly-generated worker threads.
# .type integer
# .unit KB
# .range 128-16384
# .change online
worker_stack_size_kb = 1024
# .short_desc Defines which and how execution time values are measured
# .full_desc
# Execution times like user, system, wait and io time can be measured to analyze
# performance bottlenecks. These values can be obtained by system call
# or in user space. System calls are much more expensive; user space does
# not consider thread preemption by the system scheduler.
# Possible values are:
# NONE = 0,
# SYS_USER_TIME = 1,
# SYS_KERNEL_TIME = 2,
# SYS_WAIT_TIME = 4,
# SYS_IO_TIME = 8,
#
# ALL_SYS_TIMES = 15,
#
# CONTEXT_USER_TIME = 16,
# CONTEXT_KERNEL_TIME = 32,
# CONTEXT_WAIT_TIME = 64,
# CONTEXT_IO_TIME = 128,
#
# ALL_CONTEXT_TIMES = 240,
#
# CONTEXT_AND_SYS_TIMES = 255,
#
# L2_CACHE_MISSES = 256,
#
# ALL_VALUES = 511
#
# .type integer
# .change online
#
instrumentation_config = 0
###############################################################################
# Memory management configuration
###############################################################################
# .short_desc
# Configuration of memory management
# .full_desc
# This section contains parameters which are related to configuration
# of memory management
[memorymanager]
# .short_desc
# Global allocation limit in megabytes
# .full_desc
# Sets global allocation limit in megabytes.
#
# Default value is 0 (a reasonable allocation limit according to the physical RAM
# is chosen - usually 90% of the physical memory)
#
# .type integer
# .unit MB
# .change offline
global_allocation_limit=0
# .short_desc
# Threshold to start memory garbage collection proactively
# .full_desc
# Starts memory garbage collection when async_free_threshold percent of
# the global allocation limit has been allocated.
#
# Default value is 100 (proactive memory garbage collection is disabled)
#
# .type integer
# .unit percent
# .change offline
async_free_threshold=100
# .short_desc
# Target of proactive memory garbage collection
# .full_desc
# Proactive garbage collection tries to reduce allocated memory below
# async_free_target percent of the global allocation limit.
#
# Default value is 95 (% of the global allocation limit).
#
# .type integer
# .unit percent
# .change offline
async_free_target=95
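# As an illustration (hypothetical numbers, not a recommendation): with a
# global allocation limit of 1000000 MB, async_free_threshold=90 and
# async_free_target=80, proactive garbage collection would start once
# allocated memory exceeds 900000 MB and would try to free memory until
# usage drops below 800000 MB. With the shipped defaults (100/95),
# proactive garbage collection never triggers.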
# .short_desc
# Parameter for statement memory limitation
# .full_desc
# The memory that can be allocated in connection with a statement is
# limited by this parameter. If the memory allocated exceeds this limit,
# the statement is aborted.
###############################################################################
# Memory objects configuration
###############################################################################
# .short_desc
# Configuration of memory object manager and memory objects
# .full_desc
# This section contains parameters which are related to configuration
# of memory object manager and memory objects
[memoryobjects]
# .short_desc
# Weight of disposition early_unload.
# .full_desc
# Sets weight of disposition early_unload for LRU strategy.
# The higher the weight, the more important the memory object is regarded
# to be. In case of memory shortage, the memory object container tends to
# unload memory objects with lower weights earlier than those with higher
# weights.
#
# Default value is 100
#
# .type integer
# .change online
disposition_weight_early_unload=100
###############################################################################
# Backup configuration
###############################################################################
# .short_desc
# Configuration of backup and recovery
# .full_desc
# This section contains various parameters which are related to configuration
# data and log backup and recovery.
[backup]
# .short_desc
# Buffer size for copying log backups
# .full_desc
# Defines the buffer size used to copy
# log segments into backups.
#
# Default value is 128MB.
#
# .unit MB
# .range 16-4096
# .change online
log_backup_buffer_size=128
# .short_desc
# Log backups done over backint.
# .full_desc
# Defines whether log backups are done
# using backint.
#
# Default value is false.
#
# .type bool
# .change online
log_backup_using_backint=false
# .short_desc
# Backint parameter file for log backups.
# .full_desc
# Defines the parameter file which is used
# for log backups using backint.
#
# Default value is not defined.
#
# .type string
# .change online
log_backup_parameter_file=$(DIR_INSTANCE)/backup/log
# .short_desc
# Buffer size for copying data backups
# .full_desc
# Defines the buffer size used to copy
# data pages into backups.
#
# Default value is 512MB.
#
# .unit MB
# .range 16-4096
# .change online
data_backup_buffer_size=512
# .short_desc
# Backint parameter file for data backups.
# .full_desc
# Defines the parameter file which is used
# for data backups using backint.
#
# Default value is not defined.
#
# .type string
# .change online
data_backup_parameter_file=$(DIR_INSTANCE)/backup/data
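# A hypothetical backint setup (sketch only; the paths and the contents of
# the .par parameter files depend on your third-party backup tool and its
# documentation) could look like:
#
#   log_backup_using_backint = true
#   log_backup_parameter_file = /hana/shared/backint/log.par
#   data_backup_parameter_file = /hana/shared/backint/data.par
#
# The .par files are supplied by the backup vendor, not by SAP HANA.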
# .short_desc
# Maximum age of the recovery file
# .full_desc
# .short_desc
# Max number of parallel backint channels per request
# .full_desc
# Defines the maximum number of backint channels used during recovery per
# backint request.
#
# Default value is 64 channels.
#
# .type integer
# .unit channels
# .change online
max_recovery_backint_channels = 64
###############################################################################
# Watchdog configuration
###############################################################################
[self_watchdog]
# set interval to 0 to disable self_watchdog
interval=10
initial_sleep=180
ping_timeout=180
retries_before_abort=5
###############################################################################
# Job executor configuration
###############################################################################
# .short_desc
# Configuration of job executor
# .full_desc
# This section contains parameters related to the job executor.
[execution]
# .short_desc
# Maximum number of parallel threads in job executor.
# .full_desc
# Sets the maximum number of parallel threads to execute jobs in the job executor
# system. This number is a hint for the job executor not to start more than
# the specified number of JobWorker threads; however, if it becomes necessary
# to start more threads, the job executor will do so.
#
# A value of 0 causes the maximum number of threads to be set to a default
# value derived from the actual number of logical CPUs, which currently is
# half of them.
#
# .type integer
# .range 0-number_of_logical_CPUs
# .change online
max_concurrency=0
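# Worked example (assuming the default sizing described above): on a host
# with 80 logical CPUs, max_concurrency=0 yields a default hint of 40
# JobWorker threads; setting max_concurrency=40 would state the same hint
# explicitly.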
###############################################################################
# Tracer configuration
###############################################################################
# .short_desc
# Configuration of tracer
# .full_desc
# This section contains parameters related to tracing various messages to
# database trace file and trace levels for individual components.
[trace]
formatter=connection
#
saptracelevel = 1
#
maxfilesize = 10000000
maxfiles = 10
#
maxalertfilesize = 50000000
#
flushinterval = 5
#
default = error
alert = error
#
basis=info
fileio=info
eventhandler=info
historymanager=info
logger=info
memory=info
persistencemanager=info
assign=info
tracecontext=info
###############################################################################
# .short_desc
# inifile configuration
[inifile]
# .short_desc
# delay between inifile modification and reconfig in distributed landscapes
# .full_desc
# To reduce problems with visibility of inifile updates due to NFS caching,
# some time should elapse between saving and the distributed reconfigure.
# .type integer
# .unit milliseconds
distributed_reconfig_delay=0
###############################################################################
# Storage HA configuration
###############################################################################
# .short_desc
# storage HA configuration
[storage]
# .short_desc
# name of python HA provider script
ha_provider = hdb_ha.fcClient
# ha_provider_path = /hana/shared/
# These parameters name the WWIDs of the devices for each
# partition/usage_type combination.
# If you have more nodes, add your LUNs here.
# For proper usage, replace the placeholder values with the WWIDs specified
# in your system.
partition_*_*__prType = 5
partition_1_data__wwid = data_1
partition_1_log__wwid = log_1
partition_2_data__wwid = data_2
partition_2_log__wwid = log_2
partition_3_data__wwid = data_3
partition_3_log__wwid = log_3
partition_4_data__wwid = data_4
partition_4_log__wwid = log_4
partition_5_data__wwid = data_5
partition_5_log__wwid = log_5
partition_6_data__wwid = data_6
partition_6_log__wwid = log_6
partition_7_data__wwid = data_7
partition_7_log__wwid = log_7
partition_*_data__mountoptions = -o noatime,nodiratime
partition_*_log__mountoptions = -o noatime,nodiratime
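# For example (hypothetical WWID, shown only to illustrate the format), a
# production entry typically uses the SCSI WWID of the LUN as reported by
# the multipath tooling on the host:
#   partition_1_data__wwid = 36e00084100ee7ec4019a552700000015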
###############################################################################
# EventHandler
###############################################################################
# .short_desc
# Configuration of EventHandler
# .full_desc
# This section contains parameters related to (automatic) handling of events
[event_handler]
# .short_desc
# AutoEventHandler period
# .full_desc
# This parameter controls the time interval between automatic retries of events
# A value of 0 means no automatic retries
# .type integer
# .unit second
auto_retry_interval=60
###############################################################################
# Resource tracking
###############################################################################
[resource_tracking]
# .short_desc
# Main switch for resource tracking.
# .full_desc
# This parameter controls the resource tracking and allows deactivation of
# all resource tracking without having to change the individual settings.
#
# Possible values:
#
# - off/0:
# This disables all resource tracking. This is the default for
# performance reasons.
#
# - on/1:
# If this parameter is set to "on", all resources for which tracking has
# been enabled will be measured and are available in the pertinent views
# and traces.
#
# .type bool
# .change online
enable_tracking=off
# .short_desc
# Mode of CPU time measurement.
# .full_desc
# This parameter controls if and how the CPU times are measured. A greater
# precision incurs a higher performance impact.
#
# Possible values:
#
# - off:
# In this case, no CPU times are determined for threads, statements or
# sessions. This is the default.
#
# - fast:
# In fast mode the collected CPU times provide ballpark figures. The
# measured values may be too high.
#
# - detailed:
# Detailed mode provides exact CPU times which can be used for finding
# CPU bottlenecks. The performance impact is much higher than in fast
# mode.
#
# .type string
# .change online
cpu_time_measurement_mode=off
# .short_desc
# Enables/disables memory tracking.
# .full_desc
# This parameter controls whether memory consumption for statements and
# sessions will be tracked. If memory consumption is tracked, memory used
# for computing query results as well as shared resources will be tracked.
#
# Possible values:
#
# - off/0:
# The tracking of the memory usage is disabled if this parameter is set
# to "off". This is the default.
#
# - on/1:
# If set to on, memory tracking is enabled. The performance may be
# impacted noticeably.
#
# .type bool
# .change online
memory_tracking=off
###############################################################################
# System Replication Configuration
###############################################################################
# .short_desc
# Configuration of System Replication
# .full_desc
# This section contains various parameters which are related to the
# configuration of system replication. System replication itself cannot be
# activated by public configuration parameters; this must be done using
# hdbnsutil commands starting with "sr_".
# The configuration parameters described here affect only the behaviour
# of a system with system replication configured.
[system_replication]
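# For reference (a sketch only; verify the exact syntax against the SAP HANA
# administration documentation for your revision), system replication is
# typically enabled and registered with hdbnsutil commands such as:
#   hdbnsutil -sr_enable --name=siteA                      (on the primary)
#   hdbnsutil -sr_register --remoteHost=<primary_host> \
#             --remoteInstance=<instance_no> --name=siteB  (on the secondary)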
# .short_desc
# Minimum time interval between two data shipping requests from the secondary.
# .full_desc
# If datashipping_logsize_threshold is reached first, the data shipping
# request will be sent as soon as the logsize threshold is reached, before
# the time interval has elapsed.
# This parameter is set on the secondary.
# .type integer
# .change online
datashipping_min_time_interval = 600
# .short_desc
# Minimum amount of log shipped between two data shipping requests from the secondary.
# .full_desc
# If the time defined by datashipping_min_time_interval has passed before
# this threshold is reached, the data shipping request will be sent when the
# time interval has elapsed, before the threshold is reached.
# This parameter is set on the secondary.
# .type integer
# .change online
datashipping_logsize_threshold = 5368709120
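# Arithmetic check: 5368709120 bytes = 5 * 1024^3 bytes, i.e. 5 GB. With the
# values above, the secondary requests data shipping after 600 seconds, or
# earlier once 5 GB of log has been shipped since the last request.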
# .short_desc
# Activate preloading of column tables.
# .full_desc
# When this parameter is set, preloading of column table main parts is activated.
# The parameter can be set on the primary as well as on the secondary.
# If set on the primary, the loaded table info is collected and stored in
# the snapshot that is shipped.
# If set on the secondary, this information is evaluated and the tables are
# actually preloaded there according to the information in the loaded table
# info.
# .type bool
# .change online
preload_column_tables = true
# .short_desc
# Log shipping timeout for the primary.
# .full_desc
# Number of seconds the primary waits for the shipping of a single log
# buffer. If the log shipping request is not handled within the configured
# time frame, it is assumed that an error situation occurred. In this case
# the log buffer is freed and the replication session is canceled.
# .type integer
# .change online
logshipping_timeout = 30
# .short_desc
# Reconnect timeout for the secondary
# .full_desc
# If a secondary is disconnected from the primary due to network problems,
# the secondary tries to reconnect periodically after the time interval
# specified in this parameter has passed.
# .type integer
# .change online
reconnect_time_interval = 30
###############################################################################
# Debug configuration
###############################################################################
# .short_desc
# Configuration of some debug settings
# .full_desc
# This section contains parameters which are related to configuration
# of various debug settings
[debug]
# .short_desc
# Sets the debug break mode to control debug break handling
# .full_desc
#
###############################################################################
# .short_desc
# crashdump configuration
[crashdump]
# .short_desc
# Timeout until each crash dump section needs to be finished
# .full_desc
# To prevent deadlocks and to avoid too long running crashdumps, the
# section_timeout parameter defines the time in seconds which will be spent
# at most for writing each crashdump section. Zero defines no timeout.
# .type integer
# .unit seconds
section_timeout=30
# .short_desc
# Timeout until a running crashdump is killed
# .full_desc
# To prevent deadlocks and to kill too long running crashdumps, the
# kill_timeout parameter defines the time in seconds which will be spent at
# most for the whole crashdump writing. After this timeout the process is
# killed. Zero defines no timeout.
# .type integer
# .unit seconds
kill_timeout=300
###############################################################################
# Communication
###############################################################################
# .short_desc
# Configuration of communication settings
# .full_desc
# This section contains parameters which are related to configuration
# of various communication settings
[communication]
# .short_desc
# the network interface the processes shall listen on
# .full_desc
#
# Possible values are:
# .short_desc
# specifies the resolution of hostnames to addresses
# .full_desc
# This section mimics the behaviour of /etc/hosts. IP addresses might be assigned to
# a list of hostname aliases. If an interface address is found in this list it is
# considered internal.
# The format should be: ipaddress = hostname[,alias]
# e.g. 192.168.100.1 = hanahost01, hanahost01.example.com
[internal_hostname_resolution]
192.168.1.11 = NODE01
192.168.1.12 = NODE02
192.168.1.13 = NODE03
192.168.1.14 = NODE04
192.168.1.15 = NODE05
192.168.1.16 = NODE06
192.168.1.17 = NODE07
192.168.1.18 = NODE08