
SAP HANA Best Practices

Version: 1.1

Authors:

Ranajay Mukherjee
Govind V Bajaj
Sayan Paul
Rajib K Pal
Amit K Das
Contents

1 Overview

2 HANA Best Practices

2.1 HANA Modeling tips

2.1.1 Manage tables with large data volume in HANA
2.1.2 Filter data to the lowest level by using parameter or variable
2.1.3 Date Format in HANA
2.1.4 Date Conversion Function
2.1.5 Use Appropriate Nodes for Graphical Calculation View
2.1.6 Calculated Attributes – Rows Vs Sets
2.1.7 Restricted Measures Vs Logical Partitioning
2.1.8 Calculation View – SQL vs CE Functions
2.1.9 Joining Script based calculation view with Graphical view
2.1.10 Constant value in Manage mapping inside union
2.1.11 Avoid database trigger on SLT configured schema
2.1.12 Leverage HANA Live views (if possible)
2.1.13 Parameter mapping

2.2 Join Types

3 SAP HANA Live

3.1 Overview

3.2 Copying SAP HANA Live Content

3.3 Basic Guidelines


1 Overview

This document collects best practices observed and followed across different
enterprise SAP HANA projects. Applying them helps ensure that the SAP HANA
system delivers the desired performance. It also covers best practices for
using SAP HANA Live in HANA modeling.

This document is intended for those individuals who design and develop objects that
reside within the SAP HANA landscape:
• SAP HANA Architects
• SAP HANA Solution Consultants
• SAP HANA Developers
• Technical Leads

2 HANA Best Practices


2.1 HANA Modeling tips
2.1.1 Manage tables with large data volume in HANA

Tables with a large number of rows (2 billion or more) should be partitioned,
and in a scale-out (multi-node) HANA installation these partitions should be
spread across all nodes to exploit HANA's massively parallel processing (MPP)
capabilities. This also speeds up data loads into HANA.

Tables created through data provisioning with System Landscape Transformation
(SLT) can be partitioned in SLT before the data load.
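
For tables created directly in HANA (rather than through SLT), the
partitioning can be declared in the DDL. A minimal sketch, assuming a
hypothetical schema and fact table:

    -- Hash-partition a large fact table so the partitions can be
    -- distributed across the nodes of a scale-out landscape
    CREATE COLUMN TABLE "DEMO"."SALES_FACT" (
        SALES_DOC  NVARCHAR(10),
        PLANT      NVARCHAR(4),
        AMOUNT     DECIMAL(15,2)
    ) PARTITION BY HASH (SALES_DOC) PARTITIONS 8;

    -- An existing table can be repartitioned as well
    ALTER TABLE "DEMO"."SALES_FACT"
        PARTITION BY HASH (SALES_DOC) PARTITIONS 16;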

Joins on large tables in HANA can hurt performance. If a join can be avoided
through a transformation during replication, the run-time performance of
queries on HANA improves. Similarly, multiple joins in attribute views can be
avoided by storing the data in a highly de-normalized form.


2.1.2 Filter data to the lowest level by using parameter or variable

Avoid transferring large data sets from one node to another. Apply filters
wherever possible so that the data volume is reduced at the lowest possible
level, as in the sketch below.
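
As an illustration, a calculation view can be consumed with the filter value
handed over as an input parameter, so the restriction is applied at the
lowest node instead of on the full result. The package, view, and parameter
names are hypothetical:

    -- Pass the filter value into the view via PLACEHOLDER syntax
    SELECT COMPANY_CODE, SUM(NET_AMOUNT) AS NET_AMOUNT
    FROM "_SYS_BIC"."demo.sales/CV_SALES"
         (PLACEHOLDER."$$IP_FISCAL_YEAR$$" => '2015')
    GROUP BY COMPANY_CODE;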

2.1.3 Date Format in HANA

Date fields from ECC are not replicated as DATE columns in HANA; they
typically arrive as character columns (ABAP DATS fields become NVARCHAR(8)).
This can cause problems during report creation because the underlying data
element is not a date. If the data provisioning tool is SLT, metadata mapping
can be configured so that all date columns across all tables appear as DATE
in HANA.
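
Whether a replicated date column arrived as a character type can be checked
in the catalog. A minimal sketch, assuming a hypothetical SLT target schema:

    -- ERDAT/AUDAT are standard ECC date (DATS) fields; without metadata
    -- mapping they typically show up as NVARCHAR(8) rather than DATE
    SELECT COLUMN_NAME, DATA_TYPE_NAME
    FROM TABLE_COLUMNS
    WHERE SCHEMA_NAME = 'SLT_ECC'
      AND TABLE_NAME  = 'VBAK'
      AND COLUMN_NAME IN ('ERDAT', 'AUDAT');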

2.1.4 Date Conversion Function

If a date conversion is necessary, apply the conversion function at the
lowest level, to the value(s) being passed in as parameters; otherwise the
conversion is performed on the entire data set. See the sketch below.
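
In practice this means converting the parameter once rather than converting
the column for every row. A sketch, assuming a hypothetical table whose
POSTING_DATE is stored as an ABAP-style NVARCHAR(8):

    -- Slow: the conversion runs on every row before the filter applies
    SELECT DOC_NO, AMOUNT
    FROM "SLT_ECC"."DOC_HEADER"
    WHERE TO_DATE(POSTING_DATE, 'YYYYMMDD') = :IP_DATE;

    -- Fast: the single parameter value is converted instead
    SELECT DOC_NO, AMOUNT
    FROM "SLT_ECC"."DOC_HEADER"
    WHERE POSTING_DATE = TO_VARCHAR(:IP_DATE, 'YYYYMMDD');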


2.1.5 Use Appropriate Nodes for Graphical Calculation View

1. Use ‘set based’ processing rather than ‘record based’ processing; avoid
   the ‘Calculate Before Aggregation’ option.
2. Use aggregation/projection nodes for grouping data.
3. Avoid joining an analytical view with a table in a graphical calculation
   view; it performs very badly because both the analytical and the join
   engine are involved.
4. Although an INNER join is efficient, use a LEFT OUTER join for tables
   (such as header and item) in a calculation view with cardinality 1:1 or
   N:1. This ensures that if the requested data is at header level and no
   columns are requested from the right table, measures at header level are
   not duplicated, because the join is simply not performed when no columns
   (fields) are requested from the right table.
5. Counters (distinct counts) perform better in calculation views that run
   in the calculation engine.
6. Null values are handled differently in calculation views whose execution
   engine is SQL. See SAP Note 1857202.
7. Avoid converting an attribute to a time value with the TIME function; the
   TIME function terminates variable push-down.
8. Use an input parameter instead of a variable when the user needs to pass
   a single value as a filter. The calculation engine ensures that the value
   of an input parameter is passed down to the database level if it is used
   in a filter expression in a projection node (see the sketch after this
   list).
9. Avoid creating calculated attributes in analytical views. Doing so
   creates a wrapper calculation view, and the analytical view is then
   executed in two engines (calculation and OLAP), which can hurt
   performance.
10. Avoid having two target nodes based on one source node (‘V shape’
    modelling) in calculation views running in the calculation engine;
    variable push-down is terminated once the V is encountered.
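
For item 8, the filter expression on a projection node would look as follows
in the HANA expression syntax; the input parameter name is hypothetical:

    -- Filter expression in a projection node; the calculation engine
    -- pushes the parameter value down to the table scan
    "CALMONTH" = '$$IP_CALMONTH$$'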


2.1.6 Calculated Attributes – Rows Vs Sets

When a calculated attribute is defined in a projection node placed above an
aggregation, the calculation is performed on the already aggregated data
(‘sets’). When it is defined inside the analytical view itself, the
calculation is performed for every row before aggregation. Prefer the
set-based variant whenever the semantics of the calculation allow it, as
illustrated below.
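
In SQL terms the two variants look as follows; because subtraction
distributes over SUM, both return the same margin, but the set-based form
evaluates the expression once per group instead of once per row (table and
column names are hypothetical):

    -- Record-based ('Calculate Before Aggregation'): the expression is
    -- evaluated for every single row before aggregation
    SELECT PLANT, SUM(REVENUE - COST) AS MARGIN
    FROM SALES GROUP BY PLANT;

    -- Set-based: aggregate first, then calculate once per result row
    SELECT PLANT, SUM(REVENUE) - SUM(COST) AS MARGIN
    FROM SALES GROUP BY PLANT;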


2.1.7 Restricted Measures Vs Logical Partitioning

Restricted measures work well when there are many attribute values on which a
base measure must be restricted, while logical partitioning works well when
the modeler knows the exact filter values in advance (for example, Plant A,
B, and C versus Plant A, B, and the rest). If the granularity of the data
presented by the analytical view is not very high, logical partitioning is
best from a performance perspective, because it uses ‘set based’ rather than
‘record based’ calculations. A SQL analogy is sketched below.
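
In SQL terms, a restricted measure behaves like a conditional aggregation
evaluated per record, while logical partitioning behaves like a union of
pre-filtered, set-based branches (table and plant values are hypothetical):

    -- Restricted measures: one pass, CASE evaluated for every record
    SELECT SUM(CASE WHEN PLANT = 'A' THEN AMOUNT END) AS AMOUNT_A,
           SUM(CASE WHEN PLANT = 'B' THEN AMOUNT END) AS AMOUNT_B
    FROM SALES;

    -- Logical partitioning: each branch is filtered set-based up front
    SELECT 'A' AS BUCKET, SUM(AMOUNT) AS AMOUNT FROM SALES WHERE PLANT = 'A'
    UNION ALL
    SELECT 'B', SUM(AMOUNT) FROM SALES WHERE PLANT = 'B';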

2.1.8 Calculation View – SQL vs CE Functions

Calculation Engine (CE) functions are optimized. For example, only the
columns requested by the client tool are retrieved from the database, whereas
with SQL all columns are retrieved regardless of which columns the client
tool requests. The preprocessor also finds it harder to break a SQL statement
into multiple statements for parallel processing than it does with CE
functions.


Use CE functions for complex calculation logic, as they are executed in the
database layer. They bypass the SQL optimizer entirely and operate directly
on the column store. The sketch below shows the same projection written both
ways.
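
Inside a scripted calculation view the same projection can be written either
way. A sketch, assuming a table variable :lt_sales that already carries PLANT
and AMOUNT columns:

    -- CE function: the calculation engine knows which columns are needed
    -- and can prune the rest
    lt_result = CE_PROJECTION(:lt_sales, ["PLANT", "AMOUNT"],
                              '"PLANT" = ''1000''');

    -- Plain SQL: handed to the SQL optimizer as an opaque statement
    lt_result = SELECT PLANT, AMOUNT FROM :lt_sales WHERE PLANT = '1000';

Mixing CE functions and plain SQL in one script should be avoided, as each
switch forces a hand-over between engines.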

2.1.9 Joining Script based calculation view with Graphical view

Avoid joining a script-based calculation view with a graphical view. Filters
and user-supplied parameters (F4 prompts) are not pushed down to the lowest
level when a script-based calculation view is embedded in a graphical
calculation view. (In the execution plans, the parameter is not pushed down
into the SQL-scripted calculation view, whereas it is pushed down in a purely
graphical calculation view.)


2.1.10 Constant value in Manage mapping inside union

Use constant selection in the union node. This ensures that when the WHERE
clause filters on the column associated with the constant, the other input
nodes of the union are not executed. Assign a constant value in the ‘Manage
Mappings’ section of the union to identify the unique record set contributed
by each projection or aggregation, as in the sketch below.
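
The pruning behaviour can be pictured in plain SQL: each union input carries
a constant, and a filter on that constant lets the engine skip the other
branches entirely (source names are hypothetical):

    -- Constant column identifying each union input
    SELECT DOC_NO, AMOUNT, 'ACTUAL' AS RECORD_TYPE FROM ACTUALS
    UNION ALL
    SELECT DOC_NO, AMOUNT, 'PLAN' AS RECORD_TYPE FROM PLAN_DATA;

    -- A consumer filtering on RECORD_TYPE = 'ACTUAL' allows the
    -- PLAN_DATA branch to be pruned and never executed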

2.1.11 Avoid database trigger on SLT configured schema

Avoid creating database triggers on tables that are replicated by SLT. The
trigger is dropped when an administrator or developer stops or pauses the
replication, and it is not automatically recreated when replication is
restarted.

2.1.12 Leverage HANA Live views (if possible)

Extend SAP HANA Live views wherever possible, as they are optimized for
performance.


2.1.13 Parameter mapping

Avoid using a single graphical view in multiple projections when parameter
mapping is involved. In HANA Studio (version 2.0 and later) it is not
possible to map parameters from multiple projections of the same view, and if
the parameters are not passed down to the lowest level, performance will not
be optimal.


2.2 Join Types

Use the right join type for the data requirement. The following summarizes
the key points for each join type.

INNER
How it works: Returns only the rows for which the join condition is
satisfied.
Be aware: This join is ALWAYS performed, regardless of whether any columns
from the right table are requested.

LEFT OUTER
How it works: All rows from the left table (which table is left and which is
right is visible in the join properties) are returned, whether or not there
is a matching record in the right table. It is customary to make the left
table the many side of the join, i.e. the fact table.
Be aware: The join can be omitted if no columns from the right table are
required.

RIGHT OUTER
How it works: All rows from the right table (which table is left and which
is right is visible in the join properties) are returned, whether or not
there is a matching record in the left table. It is customary to make the
left table the many side of the join, i.e. the fact table.
Be aware: This join is ALWAYS performed.

REFERENTIAL
How it works: Functions essentially as an inner join (when actually
triggered!) and retrieves only rows with a match on both sides. The
difference is that HANA assumes referential integrity was ensured when the
tables were populated, so it does NOT test whether this is true; it executes
the join only when data is needed from both sides.
Be aware: Works like an INNER join but is executed only when data is
required from both sides of the join.

TEXT
How it works: When a one-to-many relation between two tables exists only
because descriptions/names for a code are available in more than one
language, this join type lets you set a filter (based on the user language)
on the language column, so the join effectively becomes a normal
1 (code) : 1 (name/description).
Be aware: This join is only available for SAP ERP tables (which use the
SPRAS field as the language column) or tables of equivalent design,
including the language code values. It is ALWAYS performed and acts as an
INNER join.
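
The ‘omissible’ behaviour of the left outer join can be seen with a simple
query pattern; the analytic view name is hypothetical:

    -- Only left-side (fact) columns are requested: with a LEFT OUTER join
    -- of cardinality N:1, HANA can skip the join to the lookup table
    SELECT ORDER_NO, SUM(NET_VALUE) AS NET_VALUE
    FROM "_SYS_BIC"."demo.sales/AV_ORDERS"
    GROUP BY ORDER_NO;

    -- With an INNER join the lookup table is still read, because an inner
    -- join can change the result set even when none of its columns appear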


3 SAP HANA Live

3.1 Overview
SAP HANA Live provides a comprehensive set of virtual data models (VDMs).
These are structured representations of operational data, organized by SAP
Business Suite application, that help build operational reporting. The models
contain all the joins and transformations necessary to turn the data in the
HANA database into meaningful information. They are categorized as:

• Query Views (can be consumed directly)
• Reuse Views (can be altered and re-used)
• Private Views (these generally work on physical tables and hence should
  not be altered)

Figure 5.1 shows the typical SAP HANA Live architecture.

[Figure 5.1 – SAP HANA Live Architecture]

Query Views – These views are exposed and can therefore be consumed by
analytical tools (generic SQL/MDX tools, e.g. BusinessObjects, or OData-based
tools, e.g. SAPUI5). The names of these views end with the keyword 'Query'.
They are the topmost views in the hierarchy and are not designed for reuse in
other views. A consumption sketch follows.
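
A query view is consumed like any other column view; the package and view
names below follow the HANA Live naming pattern but are purely illustrative:

    SELECT "SalesOrder", "NetAmountInDisplayCurrency"
    FROM "_SYS_BIC"."sap.hba.ecc/SalesOrderHeaderQuery"
    WHERE "SAPClient" = '100';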

Reuse Views – These are the core of the VDM for SAP HANA Live. All relevant
business data is structured together so that all business rules are realised
in a consistent manner via this model. Views of this kind are meant to be
reused in other views, not consumed directly by analytical applications.

Private Views – These encapsulate certain SQL transformations. As they do not
carry any business semantics, they are intended to be used within other
views, like a subroutine or private method. They can be created on database
tables, private views, and reuse views.

All the pre-configured HANA Live calculation views are stored in the SAP
package hierarchy (Figure 5.2).

[Figure 5.2 – HANA Live Calculation Views]


3.2 Copying SAP HANA Live Content


It is advised to copy these reuse views into your own development package as
follows:

• Create the view (using 'Copy From') with a custom naming standard.

• Activate the newly created view.

3.3 Basic Guidelines


• It is recommended that any SAP HANA Live content required for use is
  copied into a custom version with a custom naming standard. This avoids
  issues when future upgrades or patches are applied.

• Unit testing of SAP HANA Live views can be carried out by using SAP standard
transactions in the source system.

• A prerequisite of the SAP HANA Live installation is that all applicable
  tables are replicated by SLT; otherwise the installation will fail.
