
Dynamic Tiering

With MDC Database

Use case

In this documentation I'll explain how to install and configure SAP HANA MDC with Dynamic Tiering and deploy SAP HANA Data Warehousing Foundation 1.0 in order to support data management and distribution within my landscape, including Hadoop (Spark) and Sybase IQ.

For my setup I'll use my own lab on VMware vSphere 6.0, running SAP HANA revision 112.02, together with Sybase IQ and a Hadoop HDFS 2.7.2 stack.

I'll create the new environment using the VM template explained in my previous documentation.

Disclaimer: my deployment is for test purposes only; I keep security simple from a network perspective in order to realize this configuration, and I use open source software.

Order of execution

Install HANA in MDC mode
Connect the tenant database to IQ and Hadoop over SDA
Install Dynamic Tiering
Set up Dynamic Tiering for the tenant database
Install SAP HANA Data Warehousing Foundation
Create the external storages
Move tables to the external storages
Query tables from the external sources

Guides used:

SAP HANA Multitenant Database Containers
SAP HANA Dynamic Tiering: Administration Guide
SAP HANA Data Warehousing Foundation Installation Guide
SAP HANA Data Warehousing Foundation 1.0 Planning PAM

Notes used:

2225582 - SAP HANA Dynamic Tiering SPS 11 Release Note
2092669 - Release Note SAP HANA Data Warehousing Foundation
2290350 - Spark Controller Compatibility Matrix
2183717 - Data Type Support for Extended Tables
2290922 - Unsupported Features and Datatypes for a Spark Destination

Link used:

SAP HANA Help


High-level architecture overview

From a high-level architecture point of view I'll deploy 4 VMs, all registered in my internal DNS:
vmhana01 – master HANA node (multi-tenant)
vmhana07 – Dynamic Tiering worker node
vmiq01 – Sybase IQ 16.0
hadoop – Hortonworks Hadoop HDFS stack 2.7.2
Detailed overview

From a detailed point of view, my HANA MDC database will be set up with one tenant database (TN1) connected over SDA to Sybase IQ and to Hadoop via the Spark controller.
The TN1 database will have DWF 1.0 deployed on it and will be configured with the DT host as a dedicated service.
The Dynamic Tiering host shares the /hana/shared file system with the vmhana01 host so that the DT database can be installed on it.
Install HANA in MDC mode

In my previous documentation I already explained how to install and configure HANA in MDC mode using the command line and SQL statements.

This time I'll re-explain how to do it using the graphical tool (hdblcmgui) and set up the tenant database from the HANA cockpit.

With my media downloaded, I'm ready to start.


Note: I'll only capture the important screens.

Note: I install my system as a single-host system, because I'll add my DT host later in the process.


Note: Dynamic Tiering doesn't support high tenant isolation.
My system is now up and running

With the system ready, I'll now create my tenant database from the cockpit.
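For reference, the tenant can also be created with a single SQL statement from the SYSTEMDB; a minimal sketch (the password below is only a placeholder to replace with your own):

-- run on the SYSTEMDB; the password is a placeholder
CREATE DATABASE TN1 SYSTEM USER PASSWORD Manager1;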
My tenant is now up and running

Now, from a network perspective, if I want to access the cockpit of my tenant database, some changes need to be made at the system database layer.
From the configuration panel, filter on “xsengine.ini” and open the “public_urls” section.
Double-click on http_url or https_url to set the URL (alias) used to access the tenant database.
Once done, you can see that the URL to access the tenant TN1 database is set up.
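If you prefer SQL over the configuration panel, the same parameter can be set from the SYSTEMDB; a sketch, where the alias tn1.homelab.local and the port are assumptions to adapt to your DNS and instance number:

-- hypothetical alias; 8000 stands for 80<instance number>
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'DATABASE', 'TN1')
SET ('public_urls', 'http_url') = 'http://tn1.homelab.local:8000' WITH RECONFIGURE;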

Note: if you are working with a DNS, make sure the alias is registered; if you are not using a DNS, add the entry to the /etc/hosts file of the HANA host.
With my alias added, I can access the cockpit.
Connect tenant database to Sybase IQ and Hadoop over SDA

My tenant database is now running; I need to connect it to the remote sources that will store my aging data. Let's start with my IQ database: before creating the connection in SDA, install and configure the IQ ODBC library on the HANA server.
To create my connection I use the following statement:

create remote source IQHOMELAB adapter iqodbc configuration
'Driver=libdbodbc16_r.so;ServerName=IQLAB;CommLinks=tcpip(host=vmiq01:1113)'
with CREDENTIAL TYPE 'PASSWORD' USING 'user=iqhomelab;password=xxxxx';

My IQ connection is working, so I can add the second one, to Hadoop via the Spark controller.
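For the Hadoop side, the remote source points to the Spark controller instead of the IQ ODBC driver; a minimal sketch, assuming the default Spark controller port 7860 and a hypothetical host and user (the source name SPARK_LAB is reused later for the data aging parameters):

create remote source SPARK_LAB adapter "sparksql" configuration
'server=hadoop;port=7860;ssl_mode=disabled' -- host name and default port are assumptions
with CREDENTIAL TYPE 'PASSWORD' USING 'user=hanaes;password=xxxxx';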

Install Dynamic Tiering

Installing Dynamic Tiering is done in two parts: you first install the add-on component and then add the host which will execute the queries; both can be done in one step.

Note: before starting the installation, make sure the necessary folders and file systems are created and the /hana/shared file system is mounted on the Dynamic Tiering host.
The installation can be done from the graphical interface, the command line or the web interface; for this documentation I'll use the second option (command line), since last time I did it via the web interface.
Once the installation is completed, we can see that Dynamic Tiering is installed but not configured yet.
From a service perspective, DT appears as a “utility” in the SYSTEMDB hosts tab and is not visible to the tenant databases.

Set up Dynamic Tiering for the tenant database

Setting up DT for a tenant database consists of making the DT service (esserver) visible to the tenant database. Keep in mind that DT and the tenant database work in a 1:1 relationship.

The first step is to modify properties in the global.ini file to prepare resources on each tenant database to support SAP HANA Dynamic Tiering.

On the SYSTEM database, run the following SQL to enable the tenant database to use the DT functionality:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('customizable_functionalities', 'dynamic_tiering') = 'true';

Then check that the parameter is set to “true” in global.ini.
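One way to check it without opening the configuration panel is to query the monitoring view M_INIFILE_CONTENTS:

SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE SECTION = 'customizable_functionalities';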


The next step is to isolate the “log” and the “data” of the tenant database for DT. To do so, I first create, at the OS layer, two dedicated directories belonging to my tenant DB “TN1”, and then run the two following SQL statements to make them active:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'DATABASE', 'TN1')
SET ('persistence', 'basepath_datavolumes_es') = '/hana/data_es/TN1' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'DATABASE', 'TN1')
SET ('persistence', 'basepath_logvolumes_es') = '/hana/log_es/TN1' WITH RECONFIGURE;

Then check in global.ini:
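The same monitoring view can confirm the two new basepaths; a quick check:

SELECT LAYER_NAME, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE FILE_NAME = 'global.ini' AND SECTION = 'persistence'
  AND KEY LIKE '%\_es' ESCAPE '\';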


With the preparation completed, I can now provision the DT service to my tenant DB by running the following SQL command at the SYSTEMDB layer:

ALTER DATABASE TN1 ADD 'esserver'
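When several hosts are present, the statement also accepts an explicit location; a sketch, where the esserver port is an assumption to verify in your landscape:

-- the port is an assumption; adapt it to your instance
ALTER DATABASE TN1 ADD 'esserver' AT LOCATION 'vmhana07:30040'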

TN1 service before the DT provisioning

After the provisioning we can see that DT is now available to TN1.

Note: once the service (esserver) is assigned to the tenant database, it's no longer visible to the SYSTEMDB.
With the configuration ready, I need to deploy the Dynamic Tiering delivery units into TN1 in order to administer it. From the modeler perspective, select your tenant DB and import the HANA_TIERING.tgz and HDC_TIERING.tgz files from the server.
Once the DUs are imported into the tenant, I assign the necessary roles to my user.
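Repository roles can also be granted by SQL through the _SYS_REPO procedure; a sketch with a hypothetical role name and user, to adapt to the roles delivered by the imported DUs:

-- role name and user are placeholders, not the actual delivered role
CALL _SYS_REPO.GRANT_ACTIVATED_ROLE('sap.hana.dt.roles::Administrator', 'MYUSER');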

With that done, I can access the cockpit and finish the configuration.
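The extended storage creation itself can also be scripted instead of going through the cockpit; a minimal sketch against the tenant, assuming the DT host vmhana07 and an arbitrary size:

-- the size is an arbitrary example value
CREATE EXTENDED STORAGE AT 'vmhana07' SIZE 102400 MB;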
Once successfully created, we can check at the OS layer that the data is written to the correct place.

With Dynamic Tiering on the tenant database completed, I can start the deployment of DWF.
Install SAP HANA Data Warehousing Foundation

SAP DWF content is delivered in software components; each software component contains a functional delivery unit (independent delivery units) and a language delivery unit.
Functional delivery units – core services and the SAP HANA Data Warehousing Foundation applications
Language delivery units – documentation for the applications

Once the DWF zip file is downloaded, store it but do not decompress it; from the tenant database cockpit, load the zip file in order to install the new software.

Run the installation


With the component installed, some parameters need to be added to the xsengine.ini of the tenant database in order to configure SAP HANA Data Warehousing Foundation.

From the SYSTEMDB, expand the xsengine.ini and add the following parameters and values.

Data Distribution Optimizer and Data Lifecycle Manager use this mechanism to run SQL statements from the server-side JavaScript application when generating and executing redistribution plans.

To enable this functionality, I need to activate it from the XS Artifact Administration of my tenant database.
Locate the two components sap.hdm.ddo.sudo and sap.hdm.core.sudo and activate them.
Once activated, I can grant the necessary roles to my user so I can administer DWF from the cockpit.
Note: I gave my account all the admin roles, but in the real world that shouldn't happen ;-)
And now from the cockpit I can reach it at the following URLs:
http://<tenant>:<port>/sap/hdm/dlm/index.html
http://<tenant>:<port>/sap/hdm/ddo/index.html

Finally, generate the default schema for the generated objects and the roles needed for Data Lifecycle Manager with the following statement:

call "SAP_HDM_DLM"."sap.hdm.dlm.core.db::PREPARE_BEFORE_USING"();
Create external storage

With DWF installed, I'm now able to move tables to external destinations, but before doing so I need to create the destinations in DLM.
Note: when creating a storage destination, DLM provides a default schema for the generated objects; this schema can be overwritten.

Dynamic Tiering
IQ 16.0
Note: the parameters to use must match the SDA connection.
Spark
Note: for Spark, the schema of the source persistence object is used for the generated objects.
Before creating the destination, I have to tell my index server that I will use my Spark connection for aging data.

I run the following SQL statements from the studio:


ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('data_aging', 'spark_remote_source') = 'SPARK_LAB' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
SET ('data_aging', 'spark_remote_source') = 'SPARK_LAB' WITH RECONFIGURE;

Also, on the Spark Controller side, the hanaes-site.xml file needs to be edited in order to set the extended storage.
My 3 external storages are now created, but as we can see they are inactive; to activate them, hit “Activate”.

Once activated
Move tables to external storage

With my external storages added to DLM, in order to move tables into them I need a lifecycle profile for each of them, which allows me to specify whether I want to move a group of tables or only specific tables, and the way I want to move them (trigger-based or manual).
Note: when using SAP IQ as the storage destination type, you need to manually create the target tables in IQ (use the help menu to generate the DDL); see the sketch below.
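As an illustration only, the DDL for an IQ target table might look like the following; the schema, table and columns are hypothetical placeholders, so always use the DDL generated from the help menu:

-- hypothetical target table for the 'Crime' data set in IQ
CREATE TABLE "DLMTARGET"."Crime" (
    "ID"      INTEGER NOT NULL,
    "YEAR"    INTEGER,
    "OFFENSE" VARCHAR(100)
);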

From the destination attribute options you can specify the relocation direction of the table transfer and the packet size to be transferred.
Note: Spark doesn't support packaging.
Depending on the option chosen above, a clash strategy can be defined in order to handle unique key constraint violations.

Note: Spark doesn't support clash strategies. This means that unique key constraint violations are ignored and records with a unique key might be relocated multiple times, which can result in incorrect data in the storage.

Once the destination attributes are defined, you need to set up the relocation rule in order to identify the relevant records in the source persistence to be relocated to the target persistence.

When satisfied, save and activate your configuration, and optionally run a simulation to test it.
When the configuration is saved and activated for IQ and DT, the generated object (the generated procedure) is created.

For the purpose of this document, I'll trigger all my data movements manually.

When the trigger job runs, the record counts should match the rules defined in the relocation rule. For each external destination, the log can be checked.
Query tables from the external sources

In order to query the data from the external sources now that the tables have been moved, I first need to check the generated objects in the destination schema.
I can see the 2 tables moved: one in Dynamic Tiering (“Insurance”) and the other one as a virtual table for IQ (“Crime”).
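A quick sanity check is to count the relocated rows directly through the generated objects; a sketch where the destination schema name is a placeholder:

-- extended table managed by Dynamic Tiering
SELECT COUNT(*) FROM "DLMTARGET"."Insurance";
-- virtual table pointing to IQ over SDA
SELECT COUNT(*) FROM "DLMTARGET"."Crime";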

An additional table, “PRUNING”, shows the scenario and the criteria defined in the rule editor for the table.

For Spark, the schema of the source persistence object is used for the generated objects.

My configuration of DT on HANA MDC with DLM is now complete.
