
1) Define and differentiate input parameters and variables.

A) Both input parameters and variables are run-time filters.

Input parameters:
We create input parameters at a projection node so that, while checking the data preview of the HANA model, the data is filtered at the projection level according to the input parameter value we supply. Unnecessary data is therefore not passed on to the next node, which improves the performance of the HANA model.

Variables: we create variables in the Semantics node. Even though we supply a variable value while checking the data preview of the HANA model, the unnecessary data is still passed from the projection up to the Semantics node; only there is the data filtered out according to the variable value we give.

One extra capability of variables is that we can create a range.

Range means, for example: if we want to see a particular period of time on the "Year" column, we create a variable. While checking the data preview of the HANA model, it asks which period of time we want to see, and we can then enter a range such as 2020 to 2022.
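As a hedged sketch, an input parameter can also be supplied when the view is queried directly in SQL, using HANA's PLACEHOLDER syntax; the package, view, and parameter names below are illustrative, not from the source:

```sql
-- Hypothetical calculation view CV_SALES with input parameter IP_YEAR;
-- the value is pushed down to the projection-level filter.
SELECT "Year", SUM("PRICE") AS "TOTAL_PRICE"
FROM "_SYS_BIC"."demo.pkg/CV_SALES"
     ('PLACEHOLDER' = ('$$IP_YEAR$$', '2022'))
GROUP BY "Year";
```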

2) Why do we apply the input parameter at the projection level and the variable at the top level?
A) Variables can be created only in the Semantics node; no other node offers them.
Input parameters can be created in any node, but we create them at the projection so that, while checking the data preview of the HANA model, the data is filtered at the projection level according to the input parameter value we give. Unnecessary data is therefore not passed on to the next node, which improves the performance of the HANA model.
3) Types of joins, and the difference between inner and referential joins
A) Calculation views:
Inner join
Left outer join
Right outer join
Text join
Full outer join
Analytic view:
Referential join
Inner join
Left outer join
Right outer join
Text join
Full outer join

Difference between referential join and inner join:

An inner join returns data only when it exists in both the left and right tables, and an inner join is always executed.
A referential join is also an inner join, but it assumes that referential integrity is maintained. This means that if no columns from the right table are requested in the query, the join is not executed at all and all rows from the left table are returned. It is therefore an omissible (prunable) join.
4) When do we go for a referential join?
Referential join is the default join type in SAP HANA modeling. A referential join is similar to an inner join; the only difference is that referential integrity must be ensured in the case of a referential join, otherwise it cannot be used safely.

Where is it formed? Between a fact table (transaction data, or an Analytic View) and a master data table (Attribute View). Every master data table has a primary key column which acts as a foreign key in the fact table.
Referential joins in SAP HANA are used whenever there is a primary key / foreign key association between two tables. Referential integrity means that for every value in the foreign key column, there is a matching value in the primary key column of the master data table.
From a performance point of view, referential joins are better than inner joins. Referential joins are recommended for star schemas as long as referential integrity is maintained.

Master data table:

Customer_ID | Customer_Name | City_Code | Region
CT1         | Rajesh Sharma | MUM       | East
CT2         | Sameer Khanna | IDR       | Central
CT3         | Neeti Rana    | HYD       | South

Fact table:

Order_No | Customer_ID | Product_Name | Total_Units | PRICE
1101987  | CT1         | iPad         | 300         | 40,000
1102568  | CT1         | MacBook      | 200         | 80,500
1103282  | CT2         | Fridge       | 500         | 95,000
1104229  | CT3         | LED TV       | 650         | 1,20,000
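As a hedged sketch of the join between the two tables above (the table names FACT_ORDERS and DIM_CUSTOMER are assumed for illustration):

```sql
-- Inner join on the primary key / foreign key association
SELECT f."Order_No", d."Customer_Name", d."Region",
       f."Product_Name", f."PRICE"
FROM FACT_ORDERS AS f
INNER JOIN DIM_CUSTOMER AS d
  ON f."Customer_ID" = d."Customer_ID";

-- With a referential join in a calculation view, a query that requests
-- only fact-table columns would prune (skip) this join entirely.
```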

5) What is a text join?

We use text joins to join a text table with a master data table. The text table must have a primary key column linked to the other table and a language key column which contains the user's language preference. Text joins are also used with SAP tables that have SPRAS (session language) columns.
A text join in SAP HANA provides the descriptions of text records in the user's specific language. If the user has selected German as the language, then all descriptions for the table and columns are displayed to the user in German.
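A hedged sketch of the idea behind a text join, approximated in plain SQL against the standard SAP country tables T005 (master data) and T005T (texts, with a SPRAS language column); the language filter is what a text join applies implicitly:

```sql
-- Return each country's description in the user's logon language only.
SELECT c."LAND1", t."LANDX"
FROM T005 AS c                                    -- master data: countries
LEFT OUTER JOIN T005T AS t
  ON  t."LAND1" = c."LAND1"
  AND t."SPRAS" = SESSION_CONTEXT('LOCALE');      -- language key filter
```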
6) What is cardinality?
Cardinality tells us how many records in one table match records in the other. You can set the cardinality when defining joins, and appropriate cardinality settings help the optimizer choose an optimal execution path. For a fact table joined to a dimension table on its primary key, the cardinality is usually N:1. The cardinality setting should respect the actual data relation; otherwise it may lead to wrong results or poor performance.
Typical settings are 1:1, N:1, 1:N, and N:M.
7) Difference between the aggregation and projection nodes
The projection node is used to select columns and apply filters inside a calculation view; it is the standard place to apply filters.
Aggregation, by contrast, is the default top node of a calculation view: all the unions, joins, and projections finally merge into the aggregation node. Only the columns selected in the aggregation are visible in the calculation view output, and aggregation functions such as SUM, AVG, MIN, and MAX are applied to the measure columns; duplicate rows are therefore removed after aggregation.
8) Difference between union and join?

A join combines data into new columns. A join between two tables shows data from the first table in one set of columns alongside the second table's columns in the same row. Joins are generally used to look up specific values and include them in the result.
A union combines data into new rows. A union between two tables shows data from the first table in one set of rows, and data from the second table in another set of rows. Unions are generally used to combine two datasets into a single result set for further operations.
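A hedged sketch of the contrast (the table names are illustrative):

```sql
-- JOIN: combines data into new columns (same row, side by side)
SELECT a."ID", a."Name", b."City"
FROM TABLE_A AS a
INNER JOIN TABLE_B AS b ON a."ID" = b."ID";

-- UNION: combines data into new rows (one result set, stacked)
SELECT "ID", "Name" FROM TABLE_A
UNION ALL
SELECT "ID", "Name" FROM TABLE_B;
```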

9) How can we remove duplicates in a calculation view?

A) By using an aggregation node we can remove the duplicate rows.
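In SQL terms, the same effect is a GROUP BY over all requested columns (or SELECT DISTINCT); a hedged sketch using an assumed table name:

```sql
-- Duplicate rows collapse once every selected column is grouped
SELECT "Customer_ID", "Region"
FROM DIM_CUSTOMER
GROUP BY "Customer_ID", "Region";   -- equivalent to SELECT DISTINCT here
```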
10) How do we debug the performance of a calculation view?
We can use the Plan Visualizer (PlanViz) tool to analyse and debug performance.
It is the most useful tool for analysing the performance of the overall HANA model and for understanding the key statistics of query execution, such as:
▪ Overall run time, and which dominant operations take the maximum run time
▪ Node-level statistics
▪ Filters applied on various tables
▪ Memory allocation
▪ Parallelization of operations
▪ Usage of various engines at different nodes
To run a SELECT query under Plan Visualizer: in the SQL editor, right-click the SQL query and choose Visualize Plan → Execute.
14) Difference between UNION ALL (union in HANA) and UNION
UNION ALL (the union node in HANA) is the default union and the fastest to execute. It combines two or more SELECT statements or queries and keeps duplicate rows.
UNION also combines two or more SELECT statements or queries, but it takes longer to execute because it removes duplicate rows.
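A hedged sketch of the difference (the table names are illustrative):

```sql
-- UNION ALL keeps duplicates (cheaper: no duplicate check)
SELECT "Customer_ID" FROM FACT_ORDERS
UNION ALL
SELECT "Customer_ID" FROM FACT_ORDERS_ARCHIVE;

-- UNION removes duplicates (implicit DISTINCT over the combined rows)
SELECT "Customer_ID" FROM FACT_ORDERS
UNION
SELECT "Customer_ID" FROM FACT_ORDERS_ARCHIVE;
```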
15) How can we improve the performance of a HANA view?
A) By using input parameters,
by applying filters as per the requirement,
by partitioning tables,
and by setting the appropriate join cardinality.

16) What are the different data categories, and what does the blank type denote?
CUBE – for creating HANA models which are used for reporting.
DIMENSION – supports the creation of the main HANA models which are used for reporting.
BLANK
This relates to the Execute In setting: when it is blank, the view is executed in the calculation engine; when it is set to SQL ENGINE, the view is executed in the SQL engine. Each engine has its own strengths. For example, the calculation engine is good at calculations such as currency conversion, whereas the SQL engine is good at optimizing join order. The Execute In field should be decided on a case-by-case basis to determine which engine to choose. Some native functions, such as DATE(), are supported only by the column engine and are not convertible; when the view contains such functions, you cannot set it to be executed in the SQL engine.
17) Define and differentiate table functions and stored procedures
Table functions are used to implement solutions where we need to return results represented as a table. Mostly we implement table functions to address data modelling requirements that cannot be achieved using graphical calculation views.
Key features of table functions:
▪ Replace script-based calculation views
▪ Can be used as data sources in graphical calculation views
▪ Read-only – we cannot implement any logic that updates data
▪ Accept multiple input parameters and return exactly one result table
Key Points:
▪ We can use the table functions as data source in calculation views
▪ We can call the table functions from stored procedures and other table functions

Stored procedures are reusable processing blocks of logic, which can be used to implement solutions for specific business requirements.
Commonly we implement stored procedures to build solutions for scenarios such as:
1) Persisting results in the HANA database – e.g. snapshots, or reusable results of HANA views, to avoid frequent execution of complex views.
2) Reusable solutions – data conversions, calculations, etc.
▪ Procedures can be defined as read-only or read/write procedures.
▪ Typical ways of calling procedures:
▪ Calling a stored procedure from another procedure or function
▪ Scheduling the procedure call from the XS job engine, which is built into HANA
▪ Scheduling the procedure call from external ETL tools such as BusinessObjects Data Services
Procedures can be created as:
1. Catalog procedures: these are not transportable, since they are created using the CREATE PROCEDURE statement and are not created under a package.
2. Repository procedures: these are created using the HANA development perspective (extension .hdbprocedure). This is the recommended approach for creating stored procedures, since they can be transported and version management is available.
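A hedged SQLScript sketch of both object types (names, columns, and logic are illustrative, not from the source):

```sql
-- Table function: read-only, returns exactly one result table
CREATE FUNCTION TF_ORDERS_BY_REGION (IN iv_region NVARCHAR(20))
RETURNS TABLE ("Order_No" NVARCHAR(10), "PRICE" DECIMAL(15,2))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  RETURN SELECT f."Order_No", f."PRICE"
         FROM FACT_ORDERS AS f
         INNER JOIN DIM_CUSTOMER AS d
           ON f."Customer_ID" = d."Customer_ID"
         WHERE d."Region" = :iv_region;
END;

-- Stored procedure: may be read/write, e.g. persisting a snapshot
CREATE PROCEDURE SP_SNAPSHOT_ORDERS AS
BEGIN
  INSERT INTO ORDERS_SNAPSHOT
  SELECT CURRENT_DATE AS "Snapshot_Date", * FROM FACT_ORDERS;
END;

CALL SP_SNAPSHOT_ORDERS;
```

Note the key contrast: the function is declared READS SQL DATA and returns one table, so it can feed a graphical calculation view, while the procedure is free to write data and is invoked with CALL.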

18) What is a star schema?

Star schema – a single object (the fact table) sits in the middle and is connected to the surrounding objects (dimension tables) like a star. Each dimension is represented as a single table.

19) Difference between a cube with star join and a cube without star join view
Discussed in class
20) When do we use SLT replication?
Discussed in class
21) Define all SLT replication techniques: load, replicate, stop, suspend, resume
Discussed in class
22) What kind of method can we use in SLT – trigger-based, or any other?
A) Trigger-based
23) Is batch processing possible through SLT?
A) Yes. While configuring the SLT connection, the admin has to select time or interval instead of real-time; it then becomes a batch process.
24) What T-codes can we use in SLT?
LTRC – for replication
LTRS – for transformation (e.g. filtering the data and columns on a table)
LTR – to create the SLT connection (by the admin)
25) How do we transport a HANA view? (Complete process)
Explain the HTA T-code or Delivery Unit, as discussed in class
26) What are analytic privileges, and how can we apply them?
Analytic privileges control access to SAP HANA data models and help you achieve row-level security on information views. To define an analytic privilege, specify the range of values that a user is permitted to access. When the user uses the view, a filter based on the analytic privilege is applied to retrieve only the records the user is permitted to access. You can define two types of analytic privileges: classical analytic privileges and SQL-based analytic privileges.
Classical analytic privilege: an XML-based, or classic, analytic privilege allows you to assign selective access to information views to users based on data combinations.
In the analytic privilege definition window, you can define a date range in the PRIVILEGE VALIDITY pane (see Figure). Within that range, users with the privilege are authorized to access the views; outside it, they are not. This defines the privilege from a time perspective. To define row-level security, first choose the attribute you want to restrict, then set a filter on the column under the ASSIGN RESTRICTIONS pane.
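As a hedged sketch, a SQL-based analytic privilege restricting users to a single region could look like this (the privilege name, view name, and region value are hypothetical):

```sql
-- SQL-based analytic privilege: row-level filter on the Region attribute
CREATE STRUCTURED PRIVILEGE AP_SOUTH_ONLY
  FOR SELECT ON "demo.pkg::CV_SALES"
  WHERE "Region" = 'South';

-- Once granted to a role or user, this filter is applied automatically
-- whenever that user queries the view.
```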

27) What is schema mapping?

All database objects must be defined under a specific schema; schemas act as containers to group related database objects. It is generally recommended to keep the same schema names across all HANA systems (Dev, QA & Prod). If the source-system schema names differ between HANA systems (Dev/QA/Prod), the Schema Mapping functionality in HANA Studio can be used to map the physical HANA DB schema to the authoring schema. This ensures that development artifacts work across the HANA system landscape.
We can implement schema mappings in the Modeler perspective → Quick Launch tools.

Some questions we already discussed in class, hence I am not writing their answers in this document.
