
A perfect handbook for SAP HANA architects

SAP HANA: the real game changer

Introduction
SAP HANA is a modern, in-memory database and platform that is deployable on-premise or in the cloud. The SAP HANA platform is a flexible, data-source-agnostic, in-memory data platform that allows customers to analyze large volumes of data in real time. It is also a development platform, providing an infrastructure and tools for building high-performance applications based on SAP HANA Extended Application Services (SAP HANA XS). It is the foundation of various SAP HANA editions, like the SAP HANA Platform Edition, providing core database technology, and the SAP HANA Enterprise Edition, bundling additional components for data provisioning. The SAP HANA Platform Edition integrates a number of SAP components, including the SAP HANA database, SAP HANA studio, and SAP HANA clients. SAP HANA is best suited for performing real-time analytics and for developing and deploying real-time applications. At the core of this real-time data platform is the SAP HANA database, which is fundamentally different from other database systems.
The high-level architecture of SAP HANA is shown below.

Figure 1: The SAP HANA platform


What is SAP HANA?
SAP HANA is an in-memory, column-oriented, relational database management system developed and marketed by SAP SE. Its primary function as a database server is to store and retrieve data as requested by applications. In addition, it performs advanced analytics (predictive analytics, spatial data processing, text analytics, text search, streaming analytics, graph data processing) and includes ETL capabilities and an application server.
SAP HANA is an in-memory database:
- It is a combination of hardware and software made to process massive real-time data using in-memory computing.
- It combines row-based and column-based database technology.
- Data resides in main memory (RAM) and no longer on a hard disk.
- It is best suited for performing real-time analytics, and developing and deploying real-time applications.
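To make the row-versus-column distinction concrete, here is a small, purely illustrative Python sketch (none of this is SAP code; the table and field names are invented). It shows why an analytic scan over a single attribute is cheaper in a columnar layout:

```python
# Illustrative sketch: the same table laid out row-wise and column-wise.
# A column store keeps each attribute contiguous, so a scan over one
# attribute touches only that one array.

rows = [  # row store: one record per entry
    {"id": 1, "region": "EMEA", "revenue": 120},
    {"id": 2, "region": "APJ",  "revenue": 300},
    {"id": 3, "region": "EMEA", "revenue": 180},
]

columns = {  # column store: one array per attribute
    "id":      [1, 2, 3],
    "region":  ["EMEA", "APJ", "EMEA"],
    "revenue": [120, 300, 180],
}

def total_revenue_row_store(table):
    # must touch every full record even though only one field is needed
    return sum(r["revenue"] for r in table)

def total_revenue_column_store(table):
    # reads a single contiguous array: cache-friendly, easy to parallelize
    return sum(table["revenue"])

print(total_revenue_row_store(rows))        # 600
print(total_revenue_column_store(columns))  # 600
```

Both layouts return the same answer; the difference is in how much data each scan has to touch, which is what makes columnar scans fast at scale.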
The necessity of SAP HANA
Ten years ago, SAP needed to overcome complexity and cost to find a way to pull ahead, and stay ahead, of the market.

Over twenty major techniques, new and modified, were needed. Interestingly, only one was about using memory rather than disks; there was much more to it than that. Exploiting modern microchip features that appeared from about 2005 onward was key: this could yield speed improvements of 100,000 times or more.
The dramatic speed increase eliminated the need for aggregates and indexes in applications; other research showed these caused 60-95% or more of application complexity, so removing them is a massive simplification. The speedup is so great that, even after producing aggregates dynamically on the fly, there is still plenty of performance to spare to be hundreds or thousands of times faster than traditional systems.
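The point about dynamic aggregates can be sketched in a few lines of Python (an illustration of the idea, not SAP's implementation): totals are recomputed from the base data on demand, so there is no aggregate table to maintain or keep consistent:

```python
# Sketch: with a fast scan, aggregates are recomputed on demand and are
# therefore always consistent with the base data. All names are invented.
from collections import defaultdict

line_items = [
    ("EMEA", 120), ("APJ", 300), ("EMEA", 180), ("AMER", 250),
]

def aggregate_on_the_fly(items):
    # no stored totals to keep in sync; just scan the base data
    totals = defaultdict(int)
    for region, amount in items:
        totals[region] += amount
    return dict(totals)

print(aggregate_on_the_fly(line_items))
# {'EMEA': 300, 'APJ': 300, 'AMER': 250}

# a new posting is a single append; no aggregate table to update
line_items.append(("APJ", 50))
print(aggregate_on_the_fly(line_items)["APJ"])  # 350
```

Contrast this with classic applications, where every write also had to update precomputed totals and indexes, which is where much of the complexity the text mentions came from.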
But this had to be based on column stores, which are notoriously difficult to update, so SAP invented a unique technique suitable for running thousands or millions of transactions per second.
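The technique generally described for SAP HANA is an insert-only "delta" store in front of the read-optimized "main" store, with a periodic merge. Below is a minimal Python sketch of that idea; the class and field names are invented for illustration:

```python
# Minimal sketch of a delta store in front of a read-optimized main
# column store: writes append to the small delta, reads combine both,
# and a background merge folds the delta into the main store.

class ColumnTable:
    def __init__(self):
        self.main = {"revenue": []}   # read-optimized, rarely rewritten
        self.delta = {"revenue": []}  # write-optimized, append-only

    def insert(self, revenue):
        # fast path: a transaction only appends to the delta
        self.delta["revenue"].append(revenue)

    def scan_revenue(self):
        # queries see main + delta, so results are always current
        return sum(self.main["revenue"]) + sum(self.delta["revenue"])

    def merge(self):
        # background step: fold the delta into main, then clear it
        self.main["revenue"].extend(self.delta["revenue"])
        self.delta["revenue"].clear()

t = ColumnTable()
for v in (100, 200, 50):
    t.insert(v)
print(t.scan_revenue())  # 350
t.merge()
t.insert(25)
print(t.scan_revenue())  # 375
```

The design choice is that the expensive reorganization of the compressed main store happens off the transaction's critical path, which is what lets a column store sustain high write rates.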
Thus Hybrid Transactional and Analytical Processing (HTAP) became possible, with both types of processing being done on the same single copy of the data, which allows for huge landscape simplification. Operational reporting can now return to the operational systems, with large system savings and the agility of reporting on real-time operational data.
The speedup also benefited text processing, spatial processing, planning, predictive analytics, and graph processing; all can be mixed together at will, in memory and in parallel. This allows for greater agility and productivity, not just through the speed of execution but by having all these techniques at your fingertips, ready to be used and combined.
The SAP development organization, working 24/7 following the sun in multiple centers around the world, expanded the research into today's fully featured HANA Platform.
This work produced an enterprise system that is fundamentally simpler, and that enables high productivity, much greater agility, higher performance, and lower TCO.
Capabilities
1. Functional capabilities
High-performance computing
Leverage the latest hardware and software innovations to accelerate performance for all applications.
Comprehensive data processing
Embed multiple data processing engines and predictive libraries to maximize value from Big Data and the Internet of Things (IoT).
OLAP and OLTP support
Allow processing for transactional and analytic workloads on the same system with online transaction processing (OLTP) and online analytical processing (OLAP).
Administration and security
Monitoring system health and providing network security are key tasks for administrators. They're both built into SAP HANA.
Integration services
All of your data sources can be integrated into SAP HANA to complement your SAP HANA applications or to perform in-depth analyses.
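The OLTP-and-OLAP point above can be illustrated with a stand-in: SQLite's in-memory mode lets transactional writes and an analytical aggregate run against one and the same copy of the data (the table and values below are invented for the example; HANA itself is a very different engine):

```python
# SQLite in-memory mode used as a stand-in to illustrate OLTP and OLAP
# against a single copy of the data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# OLTP-style work: individual inserts and an update
conn.executemany("INSERT INTO orders (region, amount) VALUES (?, ?)",
                 [("EMEA", 120.0), ("APJ", 300.0), ("EMEA", 180.0)])
conn.execute("UPDATE orders SET amount = 130.0 WHERE id = 1")
conn.commit()

# OLAP-style work on the same rows: no copy into a warehouse needed
cur = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region")
print(cur.fetchall())  # [('APJ', 300.0), ('EMEA', 310.0)]
```

The aggregate immediately reflects the transactional update, which is the essence of running both workloads on one system.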
2. Technical capabilities
SAP HANA Enterprise 1.0 is an in-memory computing appliance that combines SAP database software with pre-tuned server, storage, and networking hardware from one of several SAP hardware partners. It is designed to support real-time analytic and transactional processing.
What technical components make up HANA?
The heart of SAP HANA Enterprise 1.0 is the SAP In-Memory Database 1.0, a massively parallel processing data store that melds row-based, column-based, and object-based storage techniques. Other components of SAP HANA Enterprise 1.0 include:
SAP In-Memory Computing Studio,
SAP Host Agent 7.2,
SAPCAR 7.10,
Sybase Replication Server 15,
SAP HANA Load Controller 1.00, and
SAP Landscape Transformation 1 - SHC for ABA.
SAP HANA runs the SUSE Linux Enterprise Server 11 SP1 operating system. It is generally delivered as an on-premise appliance and is available now.
Concepts and support
SAP HANA is designed to replicate and ingest structured data from SAP and non-SAP relational databases, applications, and other systems quickly. One of three styles of data replication (trigger-based, ETL-based, or log-based) is used depending on the source system and desired use case. The replicated data is then stored in RAM rather than loaded onto disk, the traditional form of application data storage. Because the data is stored in memory, it can be accessed in near real time by analytic and transactional applications that sit on top of HANA.
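As a rough illustration of the trigger-based replication style, the sketch below uses SQLite as a stand-in source system: a trigger captures each insert into a change log, which a replicator then drains into the target. All names are invented; real SAP replication tooling is far more involved but rests on the same capture-and-apply principle:

```python
# Hedged sketch of trigger-based change capture using SQLite as a
# stand-in source system.
import sqlite3

src = sqlite3.connect(":memory:")
src.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE change_log (id INTEGER, name TEXT);
-- the trigger records every insert so a replicator can pick it up later
CREATE TRIGGER capture AFTER INSERT ON customers
BEGIN
    INSERT INTO change_log VALUES (NEW.id, NEW.name);
END;
""")
src.execute("INSERT INTO customers (name) VALUES ('Acme')")
src.execute("INSERT INTO customers (name) VALUES ('Globex')")

# the "replicator": drain the change log into the target (a plain dict here)
replica = {}
for row_id, name in src.execute("SELECT id, name FROM change_log"):
    replica[row_id] = name
src.execute("DELETE FROM change_log")

print(replica)  # {1: 'Acme', 2: 'Globex'}
```

Because capture happens inside the source database transaction, the log never misses a change; the trade-off is extra write overhead on the source system.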
What applications does HANA support?
SAP has delivered several next-generation, targeted analytic applications designed specifically to leverage the real-time functionality offered by HANA, including SAP Smart Meter Analytics and the SAP CO-PA Accelerator. It is developing others focused on analytics related to retail, financial, telecommunications, and other industries, and on horizontal use cases such as human capital management.
HANA is highly optimized to interface with the SAP BusinessObjects portfolio of reporting, dashboarding, and other analytic products. SAP plans to add HANA support for Business Warehouse in November, eliminating the need for Business Warehouse Accelerator in most customer environments. SAP is also migrating Business ByDesign, SAP's on-demand business suite for small and mid-sized businesses, onto HANA. Currently, HANA does not easily support non-SAP analytic or transactional applications without significant application re-architecting.
What does HANA cost and how large can it scale?
SAP has not publicly released specific pricing information for HANA, but early estimates indicate customers can initially have HANA up and running for under $300,000, including hardware, software, and services. Depending on scale, pricing can reach $2 million or more. HANA is not capable of storing petabyte levels of data. However, due to its advanced compression capabilities, HANA deployments can store tens of terabytes of data or more, which is considered a large data volume in most current SAP customer environments.
What is the HANA value proposition to customers?
Enterprises collect large volumes of structured data via legacy ERP, CRM, and other systems. Most struggle to make use of the data while spending large sums to store and protect it. One option to make use of this data is to extract, transform, and load subsets into a traditional enterprise data warehouse for analysis. This process is time-consuming and requires significant investment in related proprietary hardware. The result is often an expensive, bloated EDW that provides little more than backward-looking views of company data.
SAP HANA offers enterprises a new approach to harnessing the value of all that corporate data. As mentioned above, HANA runs on commodity hardware from any of several SAP partners, including IBM, Dell, and HP. Its data replication and integration capabilities vastly speed up the process of loading data into the database. And because it uses in-memory storage, applications on top of HANA can access data in near real time, meaning end users can gain insight while there is still time to take meaningful action. HANA can also perform predictive analytics to help organizations plan for future market developments.
How is it different from competing offerings from Oracle?
Oracle unveiled an in-memory analytic appliance of its own, called Exalytics, at Oracle OpenWorld in October 2011. Among the important differences compared to SAP HANA: Exalytics is designed to run on Sun-only hardware, it is a mash-up of various existing Oracle technologies, and there are few, if any, systems in production. As with all Oracle technologies, the risk of vendor lock-in is high, and the cost is significantly higher than for comparable HANA deployments.
What are compelling SAP HANA use cases?
Real-time analytics as supported by SAP HANA has numerous potential use cases, including:
Profitability reporting and forecasting,
Retail merchandising and supply-chain optimization,
Security and fraud detection,
Energy use monitoring and optimization, and
Telecommunications network monitoring and optimization.
What are HANA's limitations?

SAP HANA is not a platform for loading, processing, and analyzing huge volumes (petabytes or more) of unstructured data, commonly referred to as big data. Therefore, HANA is not suited for social networking and social media data analytics. For such use cases, enterprises are better off looking to open-source big-data approaches such as Apache Hadoop or LexisNexis HPCC Systems, or even MPP-based next-generation data warehousing appliances like EMC Greenplum or Teradata Aster Data.
While SAP has promised a slew of new HANA-optimized applications, currently only a handful are on the market. It is incumbent upon SAP to follow through on its commitment with practical applications that address real-world business problems. Also, SAP HANA is not pre-optimized to support non-SAP applications, which requires significant application re-engineering on the part of enterprise IT groups.
What does HANA mean for SAP's product direction?
Enterprises are increasingly demanding real-time analytic and transactional processing capabilities from business applications. HANA puts SAP in a good position to deliver such functionality for its customer base of traditional enterprises. But SAP must balance innovation in the form of HANA and related applications with continuing support for its legacy back-office ERP and other business applications that form the backbone of many an enterprise IT environment. Further, unstructured data processing and analytics are becoming more commonplace at traditional (read: non-Web 2.0) enterprises.
Open environment
Support for standard JDBC/ODBC and RESTful web services helps you build applications that can be easily integrated with your legacy systems.
Componentized data integration
Allow maximum flexibility and shrink TCO with componentized data integration.


Efficient systems management
Integrate development, administration, and monitoring tools to manage systems more efficiently.
SAP HANA certifications
The SAP HANA Hardware Directory lists all hardware that has been certified or is supported under the following scenarios:
Hardware that has been certified within the SAP HANA hardware certification program
Previously validated hardware based on Westmere technology, as reflected earlier in the Product Availability Matrix (PAM)
Supported Entry Level Systems: only Intel Xeon E5 v2/v3 based 2-socket single-node systems with a minimum of 8 cores per CPU, valid for particular SPS releases
The certification is valid for a particular group of appliances or storage family of the hardware manufacturer, wherein multiple models might be included. For further details, like released CPU types, please compare the scenario pages in SCN:
SAP HANA Hardware Certification - Appliance (HANA-HWC-AP SU) for SUSE Linux Enterprise Server (SLES) (see details & disclaimer)
SAP HANA Hardware Certification - Appliance (HANA-HWC-AP RH) for Red Hat Enterprise Linux (RHEL)
SAP HANA Hardware Certification - Enterprise Storage Scenario (HANA-HWC-ES) (see details & disclaimer)
Details in the listings are as provided by the Partner on the certification/publishing date. They are subject to change and may be changed by SAP at any time without notice. They are not intended to be binding upon SAP to any particular course of business, product strategy, and/or development. The certification is valid for the stipulated time period. Any errors in a listing do not result in the right to get support for a particular configuration. The Supported Entry Level Systems are valid for specific service packs. The hardware was tested by the hardware partner with the SAP Linux Lab. The systems are supported for SAP HANA. For SAP HANA compute nodes, memory chips have to be homogeneous, spread across all CPUs symmetrically, and provide maximum bandwidth. The hardware is required to have a valid SAP HANA hardware certification at the point of purchase by the customer. Once the validity date of the certification has passed, the hardware will continue to be supported by the Partner until the end of maintenance as indicated by the Partner.
Certified Appliances
The certification is valid for a particular group of appliances from the hardware manufacturer, wherein multiple models might be included. During the preparation phase of the certification, the Partner and SAP can jointly agree on the group of appliances. The group of appliances can be defined as a set of appliance models that share and fulfill all of the following criteria:
They are based on the same architecture
Within one group of certified appliances, different CPU models within the same microarchitecture can be used as released by the SAP HANA reference architecture:
Nehalem EX architecture: Intel X7560
Westmere EX architecture: Intel E7-#870 (# stands for 2, 4, or 8)
Ivy Bridge EX architecture: Intel E7-#880v2, E7-#890v2 (# stands for 2, 4, or 8)
Haswell EX architecture: Intel E7-8880v3, E7-8890v3, or E7-8880Lv3
Broadwell EX architecture: Intel E7-8880v4, E7-8890v4
Empty CPU slots may be filled up to the maximum number supported by the appliance

Memory chips have to be homogeneous, spread across all CPUs symmetrically, providing maximum bandwidth
They have the same disk layout
They use the same storage connector(s) and/or RAID controller as described in the documentation
Comparable components are used as with the tested setup
All the KPIs are met by the lowest member of the group of appliances
Consistent documentation exists for the group of appliances, applicable to all SAP HANA setup scenarios: single node, scale-up, and scale-out
Scale-up: BWoH/DM/SoH (formerly Single node) includes hardware approved for all single-server configuration scenarios for SAP BW powered by SAP HANA, SAP Business Suite powered by SAP HANA, and more
Scale-up: SoH (formerly SoH) includes additional single-server configurations specific to SAP Business Suite powered by SAP HANA not covered by Scale-up: BWoH/DM/SoH
Scale-out: BWoH/DM (formerly Scale out) includes the multi-server configuration scenario for SAP BW powered by SAP HANA
SAP Business One includes single-server configurations specific to SAP Business One
Appliance Certification Scenarios
There are several scenarios available for SAP HANA hardware appliance certification. Please see the details below for the test procedure that comes with each scenario version, applicable to the SAP HANA Hardware Certification Check Tool (HWCCT). Certifications that were released under scenario 1.0 remain valid until their expiration date and can be used with all SAP HANA revisions.
1. Scenario Version 1.0
Appliance for SUSE Linux Enterprise Server (SLES): HANA-HWC-AP SU 1.0
Appliance for Red Hat Enterprise Linux (RHEL): HANA-HWC-AP RH 1.0
For certification tests, HWCCT based on HANA SPS8 or SPS9 and related revisions is used.
2. Scenario Version 1.1
Appliance for SUSE Linux Enterprise Server (SLES): HANA-HWC-AP SU 1.1
Appliance for Red Hat Enterprise Linux (RHEL): HANA-HWC-AP RH 1.1
For certification tests, HWCCT based on HANA SPS10 and related revisions is used.
3. Legacy appliances released prior to 2014
HWCCT cannot be used for validating storage performance; see SAP Note 2187426.
HWCCT can only be used to validate operating system configuration settings and network performance.
Entry Level Systems
These systems are supported solutions approved for use with SAP HANA scenarios. SAP
HANA hardware certifications do not apply. Entry Level systems have the following
criteria:
Intel Xeon E5 v2/v3 based systems
2-socket systems
CPU with a minimum of 8 cores
Memory chips have to be homogeneous, spread across all CPUs symmetrically
providing maximum bandwidth.
Single node solutions
Valid for specific Support Package Stack (SPS) releases
Certified Enterprise Storage
The certification is valid for a particular enterprise storage family of the hardware manufacturer, wherein multiple models might be included. During the preparation phase of the certification, the Partner and SAP can jointly agree on the group of appliances to be certified. The storage family can be defined as a set of enterprise storage models that share and fulfill all of the following criteria:
They are based on the same architecture
They use the same storage connector(s)
Comparable components are used as with the tested setup
All the KPIs are met by the lowest member of the storage family
Consistent documentation exists for the storage family, applicable to all SAP HANA setup scenarios: single node, scale-up, and scale-out
Enterprise Storage Certification Scenarios

There are several scenarios available for SAP HANA enterprise storage certification. Please see the details below for the test procedure that comes with each scenario version, applicable to the SAP HANA Hardware Certification Check Tool (HWCCT).
1. Scenario Version HANA-HWC-ES 1.0
For certification tests, HWCCT based on HANA SPS8 or SPS9 and related revisions is used.
2. Scenario Version HANA-HWC-ES 1.1
For certification tests, HWCCT based on HANA SPS10 and related revisions is used.
A new KPI table is introduced.
The Enterprise Storage Certification Scenario HANA-HWC-ES 1.1 is the successor of HANA-HWC-ES 1.0, with an updated testing method and an adapted set of KPIs. Certifications that were released under scenario 1.0 remain valid until their expiration date and can be used with all SAP HANA revisions. From July 14th, 2015 onwards, certification scenario 1.1 is mandatory.
SAP HANA is an in-memory data platform that is deployable as an on-premise appliance or in the cloud. This in-memory database is highly suited for performing real-time analytics and for developing and deploying real-time applications. It is powered by the HANA database, a real-time database platform that is fundamentally different from the other database engines available today; the platform architecture is shown below.
Introduction to in-memory computing
In-memory computing technology combines hardware and software innovations. Hardware innovations include blade servers and CPUs with multicore architecture and memory capacities measured in terabytes for massive parallel scaling. Software innovations include an in-memory database with highly compressible row and column storage specifically designed by SAP to maximize in-memory computing technology. Parallel processing takes place in the database layer rather than in the application layer.
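The idea of parallel processing in the database layer can be sketched as partition-parallel aggregation: the scan is split across workers the way an in-memory engine spreads it across cores. This is a toy illustration, not HANA internals:

```python
# Sketch of partition-parallel aggregation: split one in-memory column
# into partitions, aggregate each in parallel, then combine.
from concurrent.futures import ThreadPoolExecutor

values = list(range(1_000_000))  # stand-in for one column held in RAM
n_parts = 4
size = len(values) // n_parts
partitions = [values[i * size:(i + 1) * size] for i in range(n_parts)]

with ThreadPoolExecutor(max_workers=n_parts) as pool:
    partial_sums = list(pool.map(sum, partitions))  # parallel scan

total = sum(partial_sums)  # cheap combine step
print(total == sum(values))  # True
```

Doing this inside the database layer, next to the data, avoids shipping raw rows to the application tier just to aggregate them there.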
In-memory computing is the storage of information in the main random access memory (RAM) of dedicated servers rather than in relational databases operating on comparatively slow disk drives. In-memory computing helps business customers, including retailers, banks, and utilities, to quickly detect patterns, analyze massive data volumes on the fly, and perform their operations quickly. The drop in memory prices is a major factor contributing to the increasing popularity of in-memory computing technology, and it has made in-memory computing economical for a wide variety of applications. Many technology companies make use of this technology. For example, the in-memory computing technology developed by SAP, the High-Performance Analytic Appliance (HANA), uses sophisticated data compression to store data in random access memory. HANA's performance is up to 10,000 times faster than that of standard disk-based systems, which allows companies to analyze data in a matter of seconds instead of hours.


Some of the advantages of in-memory computing include:
The ability to cache vast amounts of data constantly, ensuring extremely fast response times for searches.
The ability to store session data, allowing for the customization of live sessions and ensuring optimum website performance.
The ability to process events for improved complex event processing.
SAP In-Memory Computing technology enables real-time computing by bringing together online transaction processing (OLTP) applications and online analytical processing (OLAP) applications at a low total cost. Combining the advances in hardware technology with SAP In-Memory Computing empowers the entire business, from shop floor to boardroom, by giving real-time business processes instantaneous access to data. The alliance of these two technologies can eliminate today's information lag for the business.
With the revolution of in-memory computing already under way, the question isn't if this revolution will impact businesses but when and, more importantly, how. Similar to the advance of enterprise resource planning (ERP) software in the 1990s, in-memory computing won't be introduced because a company can afford the technology. It will be brought on board because a business cannot afford to allow its competitors to adopt the technology first.
While non-SAP customers can integrate in-memory technology into their existing software environments, SAP customers can transform their businesses into real-time enterprises with game-changing potential without disrupting their existing IT landscapes. The Business Transformation Services group of SAP Consulting is available to help you assess the impact this technology can have on your own IT environment and develop a strategy that leverages it specifically to differentiate your business. The first product to leverage SAP In-Memory Computing technology is SAP In-Memory Appliance (SAP HANA) software, which is currently in use at SAP and with SAP customers.

In order to establish an effective business intelligence (BI) strategy, IT professionals must ask themselves which half of the money they spend on BI investments is working. Within the marketing organization, you probably use focus groups to determine what works and what does not. By using SAP BusinessObjects business intelligence (BI) solutions, you can get help from powerful statistics tools to focus your efforts quickly. Statistics can help you determine, for example, which 20% of the data in your data warehouse is used by 80% of your users. When you know this, you can focus 80% of your investment on that fraction of the data. Our experience shows that BI statistics are a good starting point for optimizing BI spend. They can help you manage your BI graveyards (the reports and data that are simply never used) and maximize the usefulness of business intelligence that is in high demand. A great complementary tool for optimizing the BI investment is one or more BI focus groups that include end users. Although using system statistics to manage BI spend is available to all customers, we find that some companies don't perform even this basic step. As a result, they remain unaware of what BI is available to them through their software and don't leverage it. They certainly are in no position to take full advantage of the new in-memory computing technology. Even the best technology will not help if you lack an effective BI strategy.
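The 20%/80% statistic mentioned above can be computed mechanically once access counts are available. The sketch below (with invented report names and counts) finds the smallest set of reports covering 80% of all accesses:

```python
# Sketch of the BI usage-statistics idea: given access counts per report,
# find the smallest set of reports covering 80% of all accesses.

access_counts = {
    "sales_daily": 500, "inventory": 300, "hr_headcount": 120,
    "legacy_q1": 40, "legacy_q2": 25, "old_audit": 15,
}

total = sum(access_counts.values())  # 1000 accesses overall
covered, hot_reports = 0, []
# walk reports from most- to least-used until 80% of accesses are covered
for report, hits in sorted(access_counts.items(), key=lambda kv: -kv[1]):
    if covered >= 0.8 * total:
        break
    hot_reports.append(report)
    covered += hits

print(hot_reports)       # ['sales_daily', 'inventory']
print(covered / total)   # 0.8
```

Everything outside the hot set is a candidate for the "BI graveyard": reports that cost money to maintain but are rarely, if ever, used.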
SAP In-Memory Computing technology helps to:
- Enable mixed workloads of analytics, operations, and performance management in a single software landscape
- Support smarter business decisions by providing increased visibility of very large volumes of business information
- Enable users to react to business events more quickly through real-time analysis and reporting of operational data
- Provide greater flexibility by delivering innovative real-time analysis and reporting functions
- Support the deployment of innovative new business applications
- Help streamline the IT landscape and reduce total cost of ownership (TCO)
Business Examples That Leverage In-Memory Computing
To show how compelling this technology can be, we have included a handful of examples showing how in-memory computing works in real-life, real-time situations to make business more responsive, more effective, and more successful. In manufacturing enterprises, in-memory computing technology will connect the shop floor to the boardroom, and the shop floor associate will have instant access to the same data as the board member. The technology supports this by integrating on-premise, on-demand, and on-device architectures. Once the appropriate business processes are in place, empowered shop floor staff can take immediate action based on real-time data to make whatever adjustments on the shop floor are necessary. They will then see the results of their actions reflected immediately in the relevant KPIs. You could call this true 360-degree BI, as it eliminates the middleman as well as the need to create any reports other than whatever statutory or legal reports may be required. SAP BusinessObjects Event Insight software is key here. In what used to be called exception reporting, the software deals with huge amounts of real-time data to determine immediate and appropriate action for a real-time situation. Product managers will still look at inventory and point-of-sale data, but in the future they will also receive, for example, notifications when customers broadcast their dissatisfaction with a product to the masses over Twitter.
An excellent example comes from today's entertainment companies. Conventionally, bad movies have been able to enjoy a great opening weekend before crashing the second weekend, when negative word-of-mouth feedback has cooled off the initial enthusiasm. That week-long grace period is about to disappear for silver-screen flops. In the future, consumer feedback won't take a week, a day, or an hour. The very second showing of a movie could suffer from a noticeable falloff in attendance due to consumer criticism piped instantaneously through the new technologies. Since such rapid response is not possible with old-fashioned movie reels, changes in technology, both disruptive and accelerated, will surge through other industries besides IT.
Another example worth mentioning harks back to McDermott's middleman: it will no longer be good enough to have the weekend numbers ready for executives on Monday morning. Executives will run their own reports on revenue, tweet their reviews over the weekend, and by Monday morning have acted on their decisions.
Our final example is from the utilities industry: the most expensive energy that a utilities company provides is energy to meet unexpected demand during peak periods of consumption. In those cases, the provider may have to buy additional energy to support the power grid, which can get expensive. However, if the company could analyze trends in electrical power consumption based on real-time meter readings, it could offer its consumers, in real time, extra low rates for the week or month if they reduce their consumption during the following few hours. Consumers then have the option to save money by modifying their immediate consumption patterns, perhaps by switching off the power at their residence and going to a movie. By giving consumers an informed choice and an incentive, utilities companies have a chance to moderate peaks in energy consumption. This advantage will become much more dramatic when we switch to electric cars; predictably, those cars are going to be recharged the minute the owners return home from work, which could be within a very short period of time.
Total cost is expected to be 30% lower than traditional relational database technology
due to: Leaner hardware and less system capacity required, as mixed workloads of
analytics, operations, and performance management are handled within a single
system, which also reduces redundant data storage Reduced extract, transform, and
load (ETL) processes between systems and fewer prebuilt reports, reducing the
support effort required to run the software Replacing traditional databases in SAP
applications with in-memory computing technology resulted in report runtime
improvements of up to a factor of 1000 and compression rates of up to a factor of 10.
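Compression figures of this kind come largely from columnar, dictionary-encoded storage: a column with few distinct values can be stored as one small dictionary plus integer codes per row. The following is a rough generic illustration in Python (hypothetical data, not SAP code):

```python
# Sketch: why column storage compresses well. A column with few distinct
# values is dictionary-encoded: each distinct value is stored once, and
# every row keeps only a small integer code pointing into the dictionary.

def dictionary_encode(column):
    """Return (dictionary, codes) for a list of column values."""
    dictionary = sorted(set(column))
    index = {value: code for code, value in enumerate(dictionary)}
    codes = [index[value] for value in column]
    return dictionary, codes

# A row store would repeat the full country string in every row.
countries = ["Germany", "France", "Germany", "Germany", "France", "USA"] * 1000

dictionary, codes = dictionary_encode(countries)
print(len(dictionary))  # 3 distinct strings stored once
print(len(codes))       # 6000 small integer codes instead of 6000 strings
```

Real column stores add further tricks (run-length encoding, bit packing), but the dictionary step alone explains much of the compression advantage over row storage.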

Performance improvements are expected to be even higher in SAP applications natively developed for in-memory databases, with initial results showing a reduction of computing time from several hours to a few seconds. Currently, SAP NetWeaver Business Warehouse Accelerator (SAP NetWeaver BW Accelerator) software leverages in-memory computing technology. The accelerated version of SAP BusinessObjects Explorer software makes use of the technology for data provided by the SAP NetWeaver Business Warehouse (SAP NetWeaver BW) component and SAP BusinessObjects Data Services software. However, in-memory computing will
not eliminate the need for data warehousing. A real-time reporting function will
solve old challenges and create new opportunities, but new challenges will arise.
SAP HANA 1.0 software supports real-time database access to data from the SAP
applications that support OLTP. In a parallel environment, updates of data in real
time utilize database replication developed by Sybase. The first SAP application to
run on an in-memory database is SAP NetWeaver BW 7.30 using SAP HANA 1.5.
This advance has eliminated the need for separate hardware to run SAP NetWeaver
BW Accelerator. Both versions of the appliance software can be accessed with the
SAP BusinessObjects Business Intelligence platform. The platform provides a
shared semantic data layer for SAP BusinessObjects BI solutions and SAP
BusinessObjects enterprise performance management (EPM) solutions. It allows
optimized BI applications to take full advantage of the in-memory computing
technology. In manufacturing enterprises, in-memory computing technology will
connect the shop floor to the boardroom, and the shop floor associate will have
instant access to the same data as the board member. The technology supports this
by integrating on-premise, on-demand, and on-device architectures. Once the
appropriate business processes are in place, empowered shop-floor staff can take
immediate action based on real-time data to make whatever adjustments on the shop
floor are necessary.
Business intelligence is more than fancy dashboards; at least, most BI professionals agree
with that. At first, computer-generated data was simply referred to as reports. Over time
new terminology was introduced: executive information system (EIS) in the early 1990s,
enterprise information management (EIM), and strategic information management (SIM,
see Figure 1), to name a few. Business intelligence can be complex and, depending on its
definition, could be understood to cover functionality offered by SAP BusinessObjects
EPM solutions. This would include financial planning and consolidation applications,
strategy management and profitability software, and cost management simulation tools.
Add to the BI category statistical applications that perform predictive analysis, and SAP BusinessObjects Event Insight, which is based on information streams issuing from an event grid, and two things happen: it becomes almost impossible to agree on an industry-wide definition of business intelligence, and it becomes impossible to separate BI from other IT data processing. For that reason, we do not focus on traditional BI definitions in
this thought leadership paper. We focus our attention instead on analyzing the overall
impact of in-memory computing technology.
What makes SAP HANA successful?
Owing to its hybrid structure for processing transactional and analytical workloads fully in-memory, SAP HANA combines the best of both worlds. There is no longer a need to load data from a transactional database into a separate reporting database, or to build traditional tuning structures to enable that reporting; administrators can report against transactions live, as they happen. By consolidating two landscapes
(OLAP and OLTP) into a single database, SAP HANA provides companies with massively
lower TCO in addition to mind-blowing speed. But even more important is the new
application programming paradigm enabled for extreme applications. Since the SAP
HANA database resides entirely in-memory all the time, additional complex calculations,
functions and data-intensive operations can happen on the data directly in the database,
without requiring time-consuming and costly movements of data between the database and
applications. This incredible simplification and optimization of the data layer is the killer
feature of SAP HANA because it removes multiple layers of technology and significant
human effort to get incredible speed. It also has the benefit of reducing the overall TCO
of the entire solution. Some other database engines on the market today might claim to
provide one or another benefit that SAP HANA brings. However, none of them can deliver
on all of them. This is real-time computing, and customers can take advantage of this
today via SAP BW on SAP HANA, Accelerators on SAP HANA and native SAP HANA
applications
Impactful architecture
When enterprise architecture plays a role in setting up and maintaining BI projects, it helps reduce costs by considering the overall context of the project. It examines business needs, the IT landscape, performance requirements, data model complexity, and the tools and software required to meet reporting demands. In seeing that these requirements are met, it helps establish a unified BI platform that supports administration, thereby significantly trimming the TCO for a BI project.
Analytics based on in-memory computing also contribute to lower TCO of both new and existing landscapes. Cost benefits for IT include reduced hardware costs, higher performance and business agility, faster deployment, the opportunity to adopt incrementally, and compatibility with current and legacy landscapes. In-memory computing technology provides scaling and flexibility of hardware for higher performance. On-the-fly aggregations relieve IT staff from manual query tuning and data aggregation tasks. In cases where an enterprise data warehouse is not in place, SAP HANA provides instant access to real-time data via replication from SAP ERP software, with no need for complex ETL processes. By contrast, in traditional data warehouse environments, ever higher performance and functional requirements lead to the acquisition of additional hardware and software and to performance tuning tasks. In highly heterogeneous environments, multiple BI solution sets require additional independent lifecycle management, which adds to solution maintenance efforts.


Strategic Information Management (Figure 1)


Today's world is characterized by rapid change, disruptive technologies, and strong competition. Clearly, no business can afford to waste its limited resources on duplicate efforts; on freewheeling short-term, throwaway development; or on shadow IT efforts, the stopgap, maverick IT work performed outside the official corporate IT strategy. Nor will the biggest corporate spenders necessarily be tomorrow's leaders. That role will go to the companies with IT strategies that enable their enterprise architects to align the IT infrastructure with business strategy across business units. That is the reason why IT portfolio management is a top priority for companies today. However, to put that in place,

you must understand your business strategy, then establish an IT portfolio that reinforces
that strategy with every IT decision made. In doing so, your IT can help align company
priorities across lines of business and corporate support organizations, achieve business
objectives, and keep lines of business profitable. The objectives for IT portfolio managers include executing project work and maintaining budget accountability, aligning existing and planned projects with the guidelines of the IT portfolio, and prohibiting all IT and business initiatives not aligned with those guidelines. If a company intends to leverage in-memory computing technology, its IT portfolio management becomes even more important. Two examples illustrate this. Formerly, operational reporting functionality was transferred from OLTP applications to a data warehouse. With in-memory computing technology, this functionality is integrated back into the transaction system. The consequence is that transaction processing functionality will have to be aligned much more closely with the integrated BI functionality. SAP BusinessObjects Event Insight, which uses in-memory computing,
requires a tightly woven network of data processes and data exploration to identify
specific thresholds of performance measures. Reaching a threshold then triggers additional
steps in one or more business processes. In order to implement an end-to-end business
process with embedded BI, both business process effort and IT effort must be carefully
orchestrated. Enterprise architecture emerges where business requirements are formally
and rigorously sustained by IT. In advanced enterprise architectural processes, no
technology is implemented that has not been vetted and approved by the enterprise
architecture office with regard to strategic viability and medium- to long-term benefit for
the enterprise. However, those organizations wanting a head-start in in-memory
computing must deploy at least one in-memory computing project as soon as possible to
develop know-how, resources, and a feel for how the new technology will impact their
unique situation. Obviously, it will take some time before the first applications developed
specifically to exploit in-memory computing affect the enterprise application architecture.
Yet in-memory computing technology will have a major impact on BI and data warehouse
application architecture as well as on OLTP applications. This is something enterprise
architects should take stock of earlier rather than later. Real-time data access to BI information and the repurposing of data warehouses will be key to orchestrating their on-premise, on-device, and on-demand architecture successfully. And they will find that in-memory computing will open up new avenues and change the way they regard three-tier architecture for data processing.
A Clean, Spare Architecture
Adopting in-memory computing results in an uncluttered architecture based on a few, tightly aligned core systems enabled by service-oriented architecture (SOA) to provide harmonized, valid metadata and master data across business processes. Some of the most salient shifts and trends in future enterprise architectures will be:
- A shift to BI self-service applications, like data exploration, instead of rolling out static report solutions for structured and unstructured data
- Full integration of planning business processes with instant BI provisioning from the source applications, replacing data warehouses for operational data and near-real-time data transfers
- Substitution of traditional ETL architecture and expansive data cleansing and harmonization processes with real-time data validation during all manual and automated data input processes
- Central metadata and master-data repositories that define the data architecture, allowing data stewards to work effectively across all business units and all architecture platforms
- Instantaneous analysis of real-time trending algorithms with direct impact on live execution of business processes
- Offline long-term historic trending that can impact future execution of business processes
- Construction of an event insight grid architecture combining live business applications across on-premise, on-device, and on-demand architectures for proactive use of BI instead of analyzing historic events after the fact
What specific changes are introduced to existing landscapes depends on how functional requirements, such as high availability or disaster recovery, were implemented. The technical specifications of the hardware making up the landscape also play a role. Another factor is whether a company's data center is committed to a single vendor's technology or is prepared to incorporate technology from a range of vendors.
It is most likely that future deployments of SAP NetWeaver BW will not require separate hardware to run SAP NetWeaver BW Accelerator. However, to what extent existing hardware components can be reused for SAP NetWeaver BW Accelerator will depend on how that hardware exploits the in-memory computing technology of the application. Real-time in-memory computing technology will most probably cause a decline in the sheer number of Structured Query Language (SQL) satellite databases. The purpose of those databases as flexible, ad hoc, more business-oriented, less IT-static tools might still be required, but their offline status will be too much of a disadvantage and will delay data updates. Some might argue that satellite systems equipped with in-memory computing technology will take over from satellite SQL databases. For limited sandbox purposes, that is a possibility. However, because in-memory computing technology can process massive quantities of real-time data to provide instantaneous results, traditional satellite architectures will always be at least one step behind. They are also likely to inherit undesired transformations made during the ETL process.
SAP HANA system architecture
The SAP HANA database is developed in C++ and runs on SUSE Linux Enterprise Server. It consists of multiple servers: the Index Server, Name Server, Statistics Server, Preprocessor Server, and XS Engine. The most important component is the Index Server.

SAP server architecture 1


SAP HANA is well suited whenever companies have to go deep within their data sets to ask complex and interactive questions, and at the same time have to go broad, working with enormous data sets of different types and from different sources. Increasingly, there is a need for this data to be recent, preferably in real time. Add to that the need for high speed (very fast response times and true interactivity) and the need to do all this without any prefabrication (no data preparation, no pre-aggregates, no tuning), and you have a unique combination of requirements that only SAP HANA can address effectively. When this set of needs, or any subset thereof, has to be addressed, SAP HANA is in its element.
Index Server:
Index server is the main SAP HANA database component
It contains the actual data stores and the engines for processing the data.
The index server processes incoming SQL or MDX statements in the context of
authenticated sessions and transactions.
Persistence Layer:
The database persistence layer is responsible for durability and atomicity of transactions.
It ensures that the database can be restored to the most recent committed state after a
restart and that transactions are either completely executed or completely undone.
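The "completely executed or completely undone" guarantee is transactional atomicity. As a minimal illustration, here is the same guarantee demonstrated with Python's built-in sqlite3 module, used purely as a stand-in (SAP HANA's persistence layer is not accessed this way; the table and values are invented):

```python
# Sketch: atomicity - a transaction that fails midway is rolled back entirely,
# leaving the database at its last committed state.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'A'")
        raise RuntimeError("simulated crash before crediting B")
except RuntimeError:
    pass

# The debit was undone: we are back at the last committed state.
balance_a = conn.execute("SELECT balance FROM accounts WHERE name = 'A'").fetchone()[0]
print(balance_a)  # 100
```

A persistence layer provides the same property across process restarts by writing logs and savepoints to durable storage.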
Preprocessor Server:

The index server uses the preprocessor server for analyzing text data and extracting the
information on which the text search capabilities are based.
Name Server:
The name server owns the information about the topology of SAP HANA system. In a
distributed system, the name server knows where the components are running and which
data is located on which server.
Statistics Server:
The statistics server collects information about status, performance, and resource
consumption from the other servers in the system. The statistics server also provides a
history of measurement data for further analysis.
Session and Transaction Manager:
The Transaction manager coordinates database transactions, and keeps track of running
and closed transactions. When a transaction is committed or rolled back, the transaction
manager informs the involved storage engines about this event so they can execute
necessary actions.
XS Engine:
XS Engine is an optional component. Using XS Engine clients can connect to SAP HANA
database to fetch data via HTTP.
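The idea of serving database content to HTTP clients, as the XS Engine does, can be sketched generically. The following stand-in (plain Python, invented data, not the actual XS API) exposes a query-style result as JSON over HTTP:

```python
# Sketch: an HTTP endpoint returning "database" rows as JSON, standing in for
# the XS-engine pattern of fetching SAP HANA data via HTTP.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ROWS = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]  # fake table

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(ROWS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/rows") as resp:
    data = json.load(resp)
print(data)  # [{'id': 1, 'name': 'widget'}, {'id': 2, 'name': 'gadget'}]
server.shutdown()
```

In the real platform the application logic runs inside the database process, which is the point: no separate application server tier is needed for such services.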
Analytics and Applications
Real-time analytics: the categories of analytics in which HANA specializes
1. Operational Reporting (real-time insights from transaction systems such as
custom or SAP ERP). This covers Sales Reporting (improving fulfillment rates
and accelerating key sales processes), Financial Reporting (immediate insights
across revenue, customers, accounts payable, etc.), Shipping Reporting (better
enabling complete stock overview analysis), Purchasing Reporting (complete
real-time analysis of complete order history) and Master Data Reporting (real-time ability to impact productivity and accuracy).
2. Data Warehousing (SAP NetWeaver BW on HANA) BW customers can run
their entire BW application on the SAP HANA platform leading to
unprecedented BW performance (queries run 10-100 times faster; data loads 5-10 times faster; calculations run 5-10 times faster), a dramatically simplified IT
landscape (leads to greater operational efficiency and reduced waste), and a
business community able to make faster decisions. Moreover, not only is the
BW investment of these customers preserved but also super-charged. Customers
can migrate with ease to the SAP HANA database without impacting the BW
application layer at all.
3. Predictive and Text analysis on Big Data - To succeed, companies must go
beyond focusing on delivering the best product or service and uncover
customer/employee /vendor/partner trends and insights, anticipate behavior and
take proactive action. SAP HANA provides the ability to perform predictive and
text analysis on large volumes of data in real-time. It does this through the
power of its in-database predictive algorithms and its R integration capability.
With its text search/analysis capabilities SAP HANA also provides a robust way
to leverage unstructured data.

Real-time applications: the categories of applications in which HANA specializes


1. Core process accelerators - Accelerate business reporting by leveraging ERP
Accelerators, which are non-disruptive ways to take advantage of in-memory
technology. These solutions involve an SAP HANA database sitting next to a
customer's SAP ERP system. Transactional data is replicated in real time from
ECC into HANA for immediate reporting, and then results can even be fed back
into ECC. Solutions include CO-PA Accelerator, Finance and Controlling
Accelerator, Customer Segmentation Accelerator, Sales Pipeline Analysis, and
more.
2. Planning and optimization apps - SAP HANA excels at applications that require
complex scheduling with fast results, and SAP is delivering solutions that no
other vendor can match. These include Sales & Operational Planning,
BusinessObjects Planning & Consolidation, Cash Forecasting, ATP calculation,
Margin calculation, Manufacturing scheduling optimization (from start-up
Optessa), and more.
3. Sense & response apps - These applications offer real-time insights on Big
Data such as smart meter data, point-of-sale data, social media data, and more.
They involve complexities such as personalized insight and recommendations,
text search and mining, and predictive analytics. Only SAP HANA is well suited
for such applications, including Smart Meter Analytics, SAP Supplier InfoNet,
SAP precision retailing, and Geo-spatial Visualization apps (from start-up
Space-Time Insight). Typically these processes tend to be data-intensive and
many could not be deployed in the past owing to cost and performance
constraints.
SAP HANA in-memory computing
In-memory databases use a computer's main memory to store data, distinguishing them
from conventional database management systems, which store data on hard disk drives
and use the main memory for temporary working copies only. Among the many benefits
of in-memory databases are significantly faster access times that result in substantial
improvements in data storage and analysis.
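The main-memory versus disk distinction can be illustrated with Python's sqlite3, which supports both an in-memory (":memory:") and a file-backed database. This is only a sketch; absolute timings vary by machine, and OS file caching narrows the gap far more than it would for a large production workload:

```python
# Sketch: the same load-and-scan workload against a disk-backed database and a
# purely in-memory one. Both return the same answer; only the storage differs.
import sqlite3
import tempfile
import time

def load_and_scan(target):
    """Create a table, load 50,000 rows, and time a full-column aggregation."""
    conn = sqlite3.connect(target)
    conn.execute("CREATE TABLE t (v INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(50_000)))
    conn.commit()
    start = time.perf_counter()
    total = conn.execute("SELECT SUM(v) FROM t").fetchone()[0]
    return total, time.perf_counter() - start

with tempfile.NamedTemporaryFile(suffix=".db") as f:
    total_disk, t_disk = load_and_scan(f.name)       # file-backed
total_mem, t_mem = load_and_scan(":memory:")         # entirely in RAM

assert total_disk == total_mem  # identical result either way
print(f"disk scan: {t_disk:.4f}s, in-memory scan: {t_mem:.4f}s")
```

The point is the access path, not this particular benchmark: when the working set lives in RAM, every query avoids the disk entirely.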

What is the functionality?
In-memory computing is transforming business models everywhere and it is here to stay.
What started out as a means to accelerate data analysis is now a driver of business

processes. Its extremely fast response times ensure performance indicators are available
within seconds, allowing companies to adapt to any changes in their business operations
without delay. It also paves the way for never-before-seen applications that build on the
ability to analyze all available data with unmatched flexibility and speed, for example for
complex material requirements planning processes, the real-time analysis of customer
behavior, or available-to-promise checks to find out if an order can actually be delivered to
the customer in the specified delivery time. Companies can turn to REALTECH as an
independent partner with extensive hands-on experience in the delivery of in-memory
projects. We provide consulting services to help you implement, transition to, and run an
in-memory platform. We will work with you to develop a custom in-memory strategy that is
unique to your business and allows you to optimize your business processes and drive
value. Tap into our knowledge, gained from hundreds of customer projects, and our
expertise as an SAP partner for SAP HANA projects.
The benefits

Cost optimization: SAP HANA can deliver higher levels of cost reduction and a significant reduction in the database footprint without losing any information, which also means that you can dramatically reduce the complexity of your system landscape. This means lower purchase and maintenance costs and less time spent on system and lifecycle management.
High levels of flexibility: The lightning-fast processing speeds of in-memory computing accommodate more resources and give you the ability to connect different systems, enabling system administrators to run more complex analyses and reports and to deliver the results faster, based on the latest data. And with the much shorter response times, it now makes sense to use mobile devices to query and return data. The reduced data volume allows for faster and more flexible data handling across all processes.
Innovation and optimization of business models: In-memory computing releases comparatively more resources that are better spent fostering innovation in your business processes. Innovative data mining and the analysis of unstructured data open the door for profitable new business processes.
Development and Implementation
How can in-memory computing contribute to accelerating your IT processes and along
with them your business processes? What new opportunities does it create for your
company? We will help you develop and implement an in-memory strategy tailored to the
specific needs of your company.
Plan
- Proof of concept
- Review technical and functional requirements
- Optimize ABAP code
- Training: theoretical foundations and strategic aspects of SAP HANA
- Integrating the in-memory project with the overall IT strategy
Build
- Transition on-premise or to a private/public cloud
- Custom operational concepts
- Code pushdown
- Development and administration training
Run
- Creation of an operations manual
- Technical support
- On- and off-site managed operation services
- Continual service improvement
What is in-memory computing?
SAP In-Memory Computing technology enables real-time computing by bringing together online transaction processing applications and online analytical processing applications at a low total cost. Combining the advances in hardware technology with SAP In-Memory Computing empowers the entire business, from shop floor to boardroom, by giving real-time business processes instantaneous access to data. The alliance of these two technologies can eliminate today's information lag for your business.
With the revolution of in-memory computing already under way, the question isn't if this revolution will impact businesses but when and, more importantly, how. Similar to the advance of enterprise resource planning (ERP) software in the 1990s, in-memory computing won't be introduced because a company can afford the technology. It will be brought on board because a business cannot afford to allow its competitors to adopt the technology first.
In order to establish an effective business intelligence (BI) strategy, IT professionals must ask themselves which half of the money they spend on BI investments is working. Within the marketing organization, you probably use focus groups to determine what works and what does not. By using SAP BusinessObjects BI solutions, you can get help from powerful statistics tools to focus the organization's efforts quickly. Statistics can help companies determine, for example, which 20% of the data in the data warehouse is used by 80% of the users. When you know this, you can focus 80% of your investment on that fraction of the data.
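The 20/80 analysis described here is a simple Pareto computation over usage statistics. A sketch with made-up per-table access counts (the table names and numbers are hypothetical):

```python
# Sketch: given access counts per table, find the smallest set of tables
# that together cover a target share (e.g. 80%) of all query accesses.
def hot_tables(access_counts, coverage=0.80):
    """Return tables, most-used first, whose accesses reach `coverage` of the total."""
    total = sum(access_counts.values())
    chosen, covered = [], 0
    for table, count in sorted(access_counts.items(), key=lambda kv: -kv[1]):
        chosen.append(table)
        covered += count
        if covered / total >= coverage:
            break
    return chosen

counts = {"sales": 500, "customers": 250, "inventory": 150, "hr": 60, "audit": 40}
print(hot_tables(counts))  # ['sales', 'customers', 'inventory']
```

Knowing which tables carry most of the load tells you where to concentrate modeling, tuning, and investment effort.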
Listed below are a few items that in-memory computing can offer:
- Enable mixed workloads of analytics, operations, and performance management in a single software landscape
- Support smarter business decisions by providing increased visibility of very large volumes of business information
- Enable users to react to business events more quickly through real-time analysis and reporting of operational data
- Provide greater flexibility by delivering innovative real-time analysis and reporting functions
- Support the deployment of innovative new business applications
- Assist in streamlining the IT landscape and reducing total cost of ownership (TCO)
In manufacturing enterprises, in-memory computing technology will connect the shop floor to the boardroom, and the shop floor associate will have instant access to the same data as the board member. The technology supports this by integrating on-premise, on-demand, and on-device architectures. Once the appropriate business processes are in place, empowered shop floor staff can take immediate action based on real-time data to make whatever adjustments on the shop floor are necessary. They will then see the results of their actions reflected immediately in the relevant KPIs. You could call this true 360-degree BI, as it eliminates the middleman as well as the need to create any reports other than whatever statutory or legal reports may be required.
An apt case can be found in entertainment companies. Conventionally, bad movies have been able to enjoy a great opening weekend before crashing the second weekend, when negative word-of-mouth feedback has cooled off the initial enthusiasm. That week-long grace period is about to disappear for silver-screen flops. In the future, consumer feedback won't take a week, a day, or an hour. The very second showing of a movie could suffer from a noticeable falloff in attendance due to consumer criticism piped instantaneously through the new technologies; such rapid response is not possible with old-fashioned channels.
Another case relates to the utilities segment. The most expensive energy that a utilities
company provides is energy to meet unexpected demand during peak periods of
consumption. In those cases, the provider may have to buy additional energy to support
the power grid, which can get expensive. However, if the company could analyze trends in
electrical power consumption based on real-time meter readings, it could offer its
consumers, in real time, extra low rates for the week or month if they reduce their
consumption during the next few hours.
SAP HANA Architecture
The SAP HANA database, with its in-memory computing engine, is developed in C++ and
runs on SUSE Linux Enterprise Server. The database consists of multiple servers, and the
most important component is the Index Server. It is powered by the following components:
Index server
Name server
Statistics server
Preprocessor server
XS engine
The functionalities of these servers are listed below.
Index server: The index server is the main SAP HANA database component. It contains
the actual data stores and the engines for processing the data. The index server processes
incoming SQL or MDX statements in the context of authenticated sessions and
transactions.

Persistence layer: The database persistence layer is responsible for durability and
atomicity of transactions. It ensures that the database can be restored to the most recent
committed state after a restart and that transactions are either completely executed or
completely undone.

Preprocessor server: The index server uses the preprocessor server for analyzing text
data and extracting the information on which the text search capabilities are based.

Name server: The name server owns the information about the topology of the SAP
HANA system. In a distributed system, the name server knows where the components are
running and which data is located on which server.

Statistics server: The statistics server collects information about status, performance and
resource consumption from the other servers in the system. The statistics server also
provides a history of measurement data for further analysis.

Session and Transaction Manager: The transaction manager coordinates database
transactions and keeps track of running and closed transactions. When a transaction is
committed or rolled back, the transaction manager informs the involved storage engines
about this event so they can execute necessary actions.

XS Engine: The XS Engine is an optional component. Using the XS Engine, clients can
connect to the SAP HANA database to fetch data via HTTP.
SAP HANA system 1


The index server is the most important component and is often referred to as the heart of
the SAP HANA system.

The Index server



Index server 1
Connection and Session Management
This component is responsible for creating and managing sessions and connections for
the database clients. Once a session is established, clients can communicate with the
SAP HANA database using SQL statements. For each session, a set of parameters is
maintained, such as auto-commit and the current transaction isolation level. Users are
authenticated either by the SAP HANA database itself (login with user and password)
or authentication can be delegated to an external authentication provider such as an
LDAP directory.
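As a rough sketch of what "a set of parameters maintained per session" means, the
per-session state can be pictured as a small registry keyed by session ID. This is an
illustrative toy model only; the class and field names below are assumptions, not HANA
internals.

```python
from dataclasses import dataclass

@dataclass
class Session:
    # Per-session parameters kept by connection/session management
    # (illustrative fields; names are assumptions, not HANA internals).
    user: str
    auto_commit: bool = True
    isolation_level: str = "READ COMMITTED"

class SessionManager:
    def __init__(self):
        self._sessions = {}
        self._next_id = 0

    def open_session(self, user: str) -> int:
        # A new connection gets a fresh session with default parameters.
        self._next_id += 1
        self._sessions[self._next_id] = Session(user=user)
        return self._next_id

    def set_parameter(self, session_id: int, name: str, value) -> None:
        # Clients may change session parameters, e.g. disable auto-commit.
        setattr(self._sessions[session_id], name, value)

    def close_session(self, session_id: int) -> None:
        del self._sessions[session_id]

mgr = SessionManager()
sid = mgr.open_session("ANALYST")
mgr.set_parameter(sid, "auto_commit", False)
```

Each session's parameters are independent, so two clients can run with different
isolation levels against the same database.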
The Authorization Manager:
This component is invoked by other SAP HANA database components to check whether
the user has the required privileges to execute the requested operations. SAP HANA
allows granting of privileges to users or roles. A privilege grants the right to perform a
specified operation (such as create, update, select, execute, and so on) on a specified
object (for example a table, view, SQLScript function, and so on). The SAP HANA
database supports Analytic Privileges, which represent filters or hierarchy drilldown
limitations for analytic queries. Analytic privileges grant access to values with a certain
combination of dimension attributes. This is used to restrict access to a cube to certain
values of the dimensional attributes.
Request Processing and Execution Control: The client requests are analyzed and
executed by the set of components summarized as Request Processing and Execution
Control. The Request Parser analyzes the client request and dispatches it to the responsible
component. The Execution Layer acts as the controller that invokes the different engines
and routes intermediate results to the next execution step.
SQL Processor: Incoming SQL requests are received by the SQL Processor. Data
manipulation statements are executed by the SQL Processor itself. Other types of requests
are delegated to other components. Data definition statements are dispatched to the
Metadata Manager, transaction control statements are forwarded to the Transaction
Manager, planning commands are routed to the Planning Engine and procedure calls are
forwarded to the stored procedure processor.
SQLScript: The SAP HANA database has its own scripting language named SQLScript
that is designed to enable optimizations and parallelization. SQLScript is a collection of
extensions to SQL. SQLScript is based on side-effect-free functions that operate on tables
using SQL queries for set processing. The motivation for SQLScript is to offload data-intensive application logic into the database.
Multidimensional Expressions (MDX):
MDX is a language for querying and manipulating the multidimensional data stored
in OLAP cubes. Incoming MDX requests are processed by the MDX engine and also
forwarded to the Calc Engine.
Planning Engine:
Planning Engine allows financial planning applications to execute basic planning
operations in the database layer. One such basic operation is to create a new version of a
data set as a copy of an existing one while applying filters and transformations. For
example: planning data for a new year is created as a copy of the data from the previous

year. Another example for a planning operation is the disaggregation operation that
distributes target values from higher to lower aggregation levels based on a distribution
function.
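The disaggregation operation mentioned above can be sketched in a few lines: a target
value at a higher aggregation level is distributed to lower levels in proportion to a
distribution function, here taken to be last year's actuals. The numbers are invented for
illustration.

```python
# Sketch of planning disaggregation: distribute a target total to lower
# aggregation levels proportionally to reference values (the distribution
# function). Illustrative only, not the Planning Engine's implementation.
def disaggregate(target_total, reference_values):
    ref_sum = sum(reference_values.values())
    return {key: target_total * value / ref_sum
            for key, value in reference_values.items()}

# Distribute an annual target of 1200 to quarters using last year's shape.
last_year = {"Q1": 100.0, "Q2": 300.0, "Q3": 400.0, "Q4": 200.0}
plan = disaggregate(1200.0, last_year)
```

Because the split is proportional, the quarterly plan values always sum back to the
annual target.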
Calc engine:
SAP HANA database features such as SQLScript and planning operations are
implemented using a common infrastructure called the Calc Engine. SQLScript, MDX,
planning models and domain-specific models are converted into calculation models. The
Calc Engine creates a logical execution plan for each calculation model and will break up
a model, for example some SQLScript, into operations that can be processed in parallel.
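The idea of breaking a model into parallel operations can be illustrated with a toy
calculation: a sum over a table is split into partition-wise partial sums that run
concurrently and are then combined. This is only in the spirit of the Calc Engine, not its
actual mechanics.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy parallel execution: split the input into chunks, compute partial
# results in parallel, then combine them (illustrative only).
def partial_sum(partition):
    return sum(partition)

def parallel_sum(table, partitions=4):
    size = max(1, len(table) // partitions)
    chunks = [table[i:i + size] for i in range(0, len(table), size)]
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(partial_sum, chunks))

total = parallel_sum(list(range(1, 101)))  # 1 + 2 + ... + 100
```

The decomposition is worthwhile only when each operation is independent, which is why
side-effect-free building blocks (as in SQLScript) parallelize well.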
Transaction Manager:
In HANA database, each SQL statement is processed in the context of a transaction. New
sessions are implicitly assigned to a new transaction. The Transaction Manager
coordinates database transactions, controls transactional isolation and keeps track of
running and closed transactions. When a transaction is committed or rolled back, the
transaction manager informs the involved engines about this event so they can execute
necessary actions.
The transaction manager also cooperates with the persistence layer to achieve atomic and
durable transactions.
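SAP HANA's isolation relies on multi-version concurrency control. A minimal sketch of
the MVCC idea, under the simplifying assumption that every write commits immediately:
writers append new versions instead of overwriting, and a reader sees the latest version
committed at or before its snapshot. This is a toy model, not HANA code.

```python
# Toy MVCC: each key holds a chain of (commit_id, value) versions;
# readers see only versions committed at or before their snapshot.
class VersionedStore:
    def __init__(self):
        self._versions = {}   # key -> list of (commit_id, value)
        self._commit_id = 0

    def write(self, key, value):
        # Append a new version rather than overwriting the old one.
        self._commit_id += 1
        self._versions.setdefault(key, []).append((self._commit_id, value))
        return self._commit_id

    def snapshot(self):
        return self._commit_id

    def read(self, key, snapshot_id):
        visible = [v for cid, v in self._versions.get(key, [])
                   if cid <= snapshot_id]
        return visible[-1] if visible else None

store = VersionedStore()
store.write("balance", 100)
snap = store.snapshot()       # a reader takes a snapshot here
store.write("balance", 250)   # this later write is invisible to that snapshot
```

This is why an initial data load and concurrent new transactions (as in the SLT scenario
later in this chapter) do not interfere: each reader works against a stable snapshot.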
Metadata Manager:
Metadata can be accessed via the Metadata Manager component. In the SAP HANA
database, metadata comprises a variety of objects, such as definitions of relational tables,
columns, views, indexes and procedures. Metadata of all these types is stored in one
common database catalog for all stores. The database catalog is stored in tables in the Row
Store. SAP HANA database features such as transaction support and multi-version
concurrency control are also used for metadata management. In the center of the figure
you see the different data stores of the SAP HANA database. A store is a subsystem of the
SAP HANA database which includes in-memory storage, as well as the components that
manage that storage.
Persistence Layer:

The Persistence Layer is responsible for durability and atomicity of transactions. This
layer ensures that the database is restored to the most recent committed state after a restart
and that transactions are either completely executed or completely undone. To achieve this
goal in an efficient way, the Persistence Layer uses a combination of write-ahead logs,
shadow paging and save points. The Persistence Layer offers interfaces for writing and
reading persisted data. It also contains the Logger component that manages the transaction
log. Transaction log entries are written explicitly by using a log interface or implicitly
when using the virtual file abstraction.
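The recovery contract described above can be sketched as a toy replay: after a restart,
state is rebuilt from the last savepoint plus the committed entries in the write-ahead log,
and transactions that never committed are discarded. This is a deliberately simplified
model (no shadow paging, no undo records), not the persistence layer's actual design.

```python
# Toy crash recovery with a savepoint plus a write-ahead log.
# Log entries: ("write", txn, key, value) or ("commit", txn).
def recover(savepoint, log):
    state = dict(savepoint)   # start from the last savepoint image
    pending = {}              # per-transaction writes not yet committed
    for entry in log:
        if entry[0] == "write":
            _, txn, key, value = entry
            pending.setdefault(txn, {})[key] = value
        elif entry[0] == "commit":
            # Only committed work becomes visible (atomicity).
            state.update(pending.pop(entry[1], {}))
    return state  # transactions never committed are effectively rolled back

savepoint = {"a": 1}
log = [("write", "t1", "a", 2), ("commit", "t1"),
       ("write", "t2", "b", 9)]          # t2 never committed before the crash
state = recover(savepoint, log)
```

Savepoints keep the log short: recovery only has to replay the log written since the last
savepoint, not the database's entire history.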
Architecture overview
The SAP HANA server architecture is designed to provide a sturdy, highly available, and
reliable in-memory computing facility for enterprises, including large and complex system
landscapes. Adoption of the SAP HANA database has increased strongly over the years,
and it has become an industry standard for in-memory computing requirements across
business verticals.


The Row Store:
The Row Store is the SAP HANA database row-based in-memory relational data engine.
The Column Store:
The Column Store stores tables column-wise. It originates from the TREX (SAP
NetWeaver Search and Classification) product.
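The difference between the two stores can be made concrete with a toy layout comparison:
a row layout keeps each record together (good for retrieving a single record), while a
column layout keeps each attribute together (good for scans and aggregations over one
attribute). The table and values are invented for illustration.

```python
# Row store vs. column store, side by side on the same toy table.
rows = [
    {"id": 1, "region": "EMEA", "amount": 10},
    {"id": 2, "region": "APJ",  "amount": 20},
    {"id": 3, "region": "EMEA", "amount": 30},
]

# Column layout: one contiguous list per attribute.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# Aggregating one attribute touches a single column, not whole records.
total = sum(columns["amount"])

# Fetching one full record is the row layout's natural access pattern.
record = rows[1]
```

Columnar layouts also compress well, since values of the same attribute (here, repeated
region codes) sit next to each other.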

SAP HANA Studio
HANA Studio is an Eclipse-based, integrated development environment (IDE) that is used
to develop artifacts in a HANA server. It enables technical users to manage the SAP
HANA database, to create and manage user authorizations, to create new or modify
existing models of data etc.
Supported platforms
The SAP HANA studio runs on the Eclipse platform 3.6. We can use the SAP HANA
studio on the following platforms:
Microsoft Windows x32 and x64 versions of: Windows XP, Windows Vista,
Windows 7
SUSE Linux Enterprise Server SLES 11: x86 64-bit version


System requirements
Java JRE 1.6 or 1.7 must be installed to run the SAP HANA studio. The Java runtime must
be specified in the PATH variable. Make sure to choose the correct Java variant for
installation of SAP HANA studio:
For a 32-bit installation, choose a 32-bit Java variant.
For a 64-bit installation, choose a 64-bit Java variant.
SAP HANA clients
HANA Client is the piece of software which enables you to connect any other entity,
including Non-Native applications to a HANA server. This other entity can be, say, an
NW Application Server, an IIS server etc.
The HANA Client installation also provides JDBC, ODBC drivers. This enables
applications written in .Net, Java etc. to connect to a HANA server, and use the server as a
remote database. So, consider client as the primary connection enabler to HANA server.
HANA Client is installed separately from the HANA studio.
SAP HANA studio can be installed in the following installation paths:
Microsoft Windows 32-bit -> C:\Program Files\sap\hdbstudio
Microsoft Windows 64-bit -> C:\Program Files\sap\hdbstudio
Microsoft Windows 32-bit (x86) -> C:\Program Files (x86)\sap\hdbstudio
Linux x86, 64-bit -> /usr/sap/hdbstudio


And the studio is launched as follows:
Microsoft Windows:
1. Go to the Start menu
2. Start > All Programs > SAP HANA > SAP HANA Studio
Linux:
1. Open a shell and go to the installation directory, such as /usr/sap/hdbstudio
2. Execute the following command: ./hdbstudio
SAP HANA Studio
What is SAP HANA Studio? It is an Eclipse-based, open Integrated Development
Environment (IDE) that integrates different tools in a unified environment and benefits
from the big ecosystem of Eclipse tools: extensibility, multi-platform support, and broad
adoption. It is the integrated environment for administration and for end-to-end
application and content development for the SAP HANA Platform. The SAP HANA
Studio tools are the basic components for design-time SAP HANA repository interaction
and for access to run-time objects in the SAP HANA database catalog, with
domain-specific editors for HANA development artifacts composed in Eclipse
perspectives such as Administration, Development, and Modeler. The SAP HANA
Systems view is one of the basic elements within SAP HANA Studio. You can use the
SAP HANA Systems view to display the contents of the SAP HANA repository that is
hosting your development project artifacts. The catalog displays the database objects that
have been activated, for example, from design-time objects or from SQL DDL statements.
The objects are divided into schemas, which is a way to organize activated database
objects.

The SAP HANA Repositories view enables you to browse the contents of the repository
on a specific SAP HANA system; you can display the package hierarchy and use the
Checkout feature to download files to the workspace on your local file system. The SAP
HANA Repositories view is a list of repository workspaces that you have created for
development purposes on various SAP HANA systems. Generally, you create a
workspace, check out files from the repository, and then do most of your development
work in the workspace. The Project Explorer view shows you the development files
located in the repository workspace you create on your workstation. You use the Project
Explorer view to create and modify development files. Using context-sensitive menus,
you can also commit the development files to the SAP HANA repository and activate
them.

Data provisioning
It is essential to have a clear idea of the different data provisioning techniques available
for HANA, which use either the tools offered by SAP HANA itself or external tools
offered by different vendors. Broadly, these can be classified as:
SAP HANA in-built tools
External tools
And listed below are some of the options.

Built-in tools:

SAP HANA EIM (Enterprise Information Management): The Enterprise Information
Management-based data provisioning option uses smart data integration and smart data
quality to load data, in batch or real time, into HANA (on premise or in the cloud) from a
variety of sources using pre-built and custom adapters. Installation consists of installing a
Data Provisioning Agent to house the adapters and connect the source system with the
Data Provisioning Server, housed in the HANA system. The next step is to create
replication tasks, using Web IDE, to replicate data, or flow graphs, using Application
Function Modeler nodes, to transform and cleanse the data and load it into HANA. This
feature is available from the SAP HANA SPS9 version and requires additional licensing
cost.

Flat file upload: Using the SAP HANA in-built functionality, we can load the data from a
flat file (Excel, .csv) into SAP HANA using HANA Studio. The manuals available on the
website offer all the information.

Remote data sync: The Remote Data Sync service on HCP (HANA Cloud Platform) is
used to synchronize huge numbers of remote databases into a consolidated SAP HANA
database in the cloud. This service is based on SAP SQL Anywhere and its MobiLink
technology. The Remote Data Sync service can be used for scenarios with occasional
Internet connectivity, and it provides a sophisticated strategy for resolving data change
conflicts; by this, it ensures transactional integrity even over unstable networks. The
synchronization of data can be bi-directional, that is, data can be synchronized both from
a remote database into the cloud database and vice versa. Typical scenarios in which the
Remote Data Sync service can be used come from the Internet-of-Things (IoT) area.

Smart data streaming: The SAP HANA smart data streaming option processes
high-velocity, high-volume event streams in real time, allowing us to filter, aggregate, and
enrich raw data before committing it to the database. With SAP HANA smart data
streaming, you can accept data input from a variety of sources including data feeds,
business applications, sensors, IT monitoring infrastructure and so on, apply business
logic and analysis to the streaming data, and store your results directly in SAP HANA.
This option is available from the SAP HANA SPS9 revision.

Smart data access (SDA): This option is used to remotely access data from any source
without physically loading it into SAP HANA, and can be used to build modeling objects
on top of the data. This is achieved by creating a remote connection and then virtual
tables on top of the source tables. The restriction with virtual tables is that they can only
be used to build calculation views in SAP HANA. This option is available from the SAP
HANA SPS6 revision.
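The filter-aggregate-enrich pattern of the smart data streaming option can be sketched
with a plain generator pipeline. The stages, event fields, and lookup table below are
invented for illustration; this is the pattern only, not the product's API.

```python
# Generator pipeline mimicking the streaming pattern: filter raw events,
# enrich them with reference data, then aggregate before storing.
def filter_events(events, min_value):
    # Drop low-value noise before it reaches later stages.
    return (e for e in events if e["value"] >= min_value)

def enrich(events, site_names):
    # Attach a human-readable site name from a lookup table.
    for e in events:
        yield {**e, "site_name": site_names.get(e["site"], "unknown")}

def aggregate_by_site(events):
    # Roll events up per site; only this summary would be committed.
    totals = {}
    for e in events:
        totals[e["site_name"]] = totals.get(e["site_name"], 0) + e["value"]
    return totals

raw = [{"site": 1, "value": 5}, {"site": 1, "value": 50},
       {"site": 2, "value": 70}]
pipeline = enrich(filter_events(raw, 10), {1: "Plant A", 2: "Plant B"})
totals = aggregate_by_site(pipeline)
```

Because each stage is a generator, events stream through one at a time; nothing has to be
buffered before the final aggregation, which is the point of processing streams before
they reach the database.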
There are four categories of utilities that come with SAP HANA for assisting designers
and developers in provisioning, as listed below:
1. SAP Replication Server (SRS): SRS moves and synchronizes transactional data,
including DML and DDL, across the enterprise, providing low-impact, guaranteed data
delivery, real-time business intelligence, and zero operational downtime. SRS supports
log-based replication from and to heterogeneous databases, except for the homogeneous
SAP HANA to SAP HANA replication, which is trigger-based. We can use SAP
Replication Server to do an initial load as well as replication in real time to SAP HANA,
at both table and database level, from these primary databases:
SAP Adaptive Server Enterprise
Oracle
Microsoft SQL Server
DB2 UDB LUW
SAP Business Suite running on SAP ASE, Oracle, MS SQL Server or DB2 UDB LUW
SAP HANA database
Also, SRS includes the Replication Agent for SAP HANA (RAH), a light-weight server
that replicates data from a primary SAP HANA database to a replicate SAP HANA
database using trigger-based replication, primarily used for real-time data distribution
and real-time reporting.

2. Direct Extractor Connection (DXC): The SAP HANA Direct Extractor Connection
(DXC) is used to redirect data from an embedded SAP BW system (for SAP ECC
extractors) to a HANA table using an HTTP connection. We face significant complexity
while building modeling objects for SAP ECC extractors in SAP HANA. In many cases,
data from different areas in SAP Business Suite systems requires application logic to
appropriately represent the state of business documents. SAP Business Content
DataSource extractors have been available for many years as a basis for data modeling
and data acquisition for SAP Business Warehouse; now with DXC, these SAP Business
Content DataSource extractors are available to deliver data directly to SAP HANA. DXC
is a batch-driven data acquisition technique; it should be considered a form of extraction,
transformation and load, although its transformation capabilities are limited to user exits
for extraction.

3. SAP Data Services: SAP Data Services is an enterprise-level ETL (Extraction,
Transformation and Loading) tool which can be used to load data from any source to any
target in either real time or batch. SAP Data Services is a certified ETL tool from SAP to
perform batch loading into SAP HANA. Please go through the articles "Introduction and
Overview of SAP Data Services" and "Data Loading into SAP HANA using Data
Services" to know more about SAP Data Services.

4. SAP Landscape Transformation (SLT): The SAP Landscape Transformation tool uses
trigger-based technology to transfer the data from any source to SAP HANA in real time.
Most of the time this tool is used if the source is an SAP application like SAP ECC or
CRM. Please go through the article "Data Replication to SAP HANA using SLT system"
to get details about how to replicate data from a source to SAP HANA in real time using
SLT.
In-memory reporting and analysis of business data require data provisioning from a source
system to the SAP HANA database.

Task of loading business data from a source system


The methods for performing data replication are shown in the figure below. The main
components involved in all replication scenarios are:
SAP HANA, consisting of the SAP HANA database and SAP HANA studio, which is an
administration tool. User interfaces, such as SAP BusinessObjects Dashboards or Web
Intelligence, are not part of SAP HANA.
The source system

Trigger-based replication with SAP Landscape Transformation

Initial Data Load process 1


The initial load of business data is initiated using the SAP HANA studio. The initial load
message is sent from the SAP HANA system to the SLT system, which in turn passes the
initialization message to the ERP system. The SLT system also initiates the set-up of
replication log tables in the database of the ERP system for each table to be replicated,
and once this set-up is complete, the SLT system begins a multi-threaded replication of
data to the target system, which enables high-speed data transfer.
The initial load of data can be executed while the source system is active. The system load
that this process causes can be controlled by adjusting the number of worker threads
performing the initial replication. Simultaneously, the SLT system begins detecting any
data changes that occur while the initial load process is running. These changes are
already recorded in logging tables during the initial load phase and are propagated during
the replication phase to the target SAP HANA system after the initial load has been
completed. The multi-version concurrency control (MVCC) of the SAP HANA database
prevents issues that might be caused by the overlapping of the initial load process and new
database transactions. After the initial load process has completed, the SLT system
continues to monitor the transaction tables in the ERP system, and replicates data changes
in the source system to the SAP HANA system in near real time.
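The two-phase flow above (snapshot load, then replay of logged deltas) can be sketched
as a toy model: changes that arrive during the initial load land in a logging table and are
applied to the target afterwards. The table contents and function names are invented for
illustration; real SLT works with database triggers, not Python.

```python
# Toy SLT-style replication: initial snapshot load plus delta replay.
def initial_load(source):
    # Phase 1: take a consistent snapshot of the source table.
    return dict(source)

def capture_change(logging_table, key, value):
    # A trigger would append each source change to the logging table.
    logging_table.append((key, value))

def replay_deltas(target, logging_table):
    # Phase 2: apply logged changes in order, then clear the log.
    for key, value in logging_table:
        target[key] = value
    logging_table.clear()
    return target

source = {"doc1": "created", "doc2": "created"}
log = []
target = initial_load(source)           # load starts from a snapshot
capture_change(log, "doc1", "changed")  # changes arrive during the load
capture_change(log, "doc3", "created")
replay_deltas(target, log)              # delta phase brings the target current
```

Applying deltas strictly in log order is what lets the target converge to the source state
even though the snapshot and the changes overlap in time.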
Required Software Components
The replication method requires the following component: SAP Landscape
Transformation, which controls the entire replication process by triggering the initial load
and coordinating the delta replication.
Installation considerations
The SLT system can be installed in the ways shown below; you can select between these
options depending on your current system landscape and the software versions in your
landscape:
Installation on your ERP system
Installation on a standalone SAP system (recommended setup)
Rationale and overview
Projects may face significant complexity when modeling SAP Business Suite entities in
SAP HANA. In many cases, data from different areas in SAP Business Suite systems
requires application logic to appropriately represent the state of business documents. SAP
Business Content DataSource extractors have been available for many years as the
foundation for data modeling and data acquisition for SAP Business Warehouse, and with
DXC these SAP Business Content DataSource extractors are available to deliver data
directly to SAP HANA. DXC is a batch-oriented data acquisition method, and it should be
considered a form of extraction, transformation and load, although its transformation
capabilities are limited to user exits for extraction. A key point about DXC is that the
acquisition typically occurs periodically, for example every 15 minutes.
Leveraging the pre-existing, foundational data models of SAP Business Suite entities for
use in SAP HANA data mart scenarios significantly reduces the complexity of data
modeling tasks in SAP HANA and accelerates timelines for SAP HANA implementation
projects. DXC provides semantically rich data from SAP Business Suite to SAP HANA:
it ensures that data appropriately represents the state of business documents from ERP,
because the application logic that gives the data its contextual meaning is already built
into many extractors. It reduces TCO by re-using the existing, proven extract, transform,
and load mechanisms built into SAP Business Suite systems over a simple HTTP(S)
connection to SAP HANA; no additional server or application is needed in the system
landscape. Finally, it supports change data capture (delta handling): efficient data
acquisition brings only new or changed data into SAP HANA, and DXC provides a
mechanism to properly handle data from all delta processing types.
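The delta-handling idea above, bringing only new or changed data on each batch run, can be sketched minimally. This is an illustration under assumed data, not DXC's actual mechanism: a change marker (here an invented "changed_on" field) is compared against the position of the previous run.

```python
# Hypothetical sketch of batch-oriented delta extraction in the spirit of
# DXC: each run brings over only records created or changed since the
# previous run. The "changed_on" field and the data are illustrative.

records = [
    {"id": 1, "changed_on": 100, "value": "A"},
    {"id": 2, "changed_on": 205, "value": "B"},
    {"id": 3, "changed_on": 310, "value": "C"},
]

def extract_delta(last_run_marker):
    """Return only records created or changed after the given marker."""
    return [r for r in records if r["changed_on"] > last_run_marker]

first = extract_delta(0)      # first run: everything is new
second = extract_delta(205)   # later run: only the newest change
print(len(first), len(second))   # 3 1
```

A real extractor's delta queue is maintained by the source system; the point of the sketch is only that each run transfers a small delta rather than the full table.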


Default DXC Configuration for SAP Business Suite
DXC is available in different configurations based on the SAP Business Suite system: the
default configuration is available for SAP Business Suite systems based on SAP
NetWeaver 7.0 or higher, such as ECC 6.0; the alternative configuration is available for
SAP Business Suite systems based on releases lower than SAP NetWeaver 7.0, such as
SAP ERP 4.6. An SAP Business Suite system is based on SAP NetWeaver. As of SAP
NetWeaver 7.0, SAP Business Warehouse (BW) is part of SAP NetWeaver itself, which
means a BW system exists inside SAP Business Suite systems such as ERP (ECC 6.0 or
higher). This BW system is referred to as an embedded BW system. Typically, this
embedded BW system is not utilized, since most customers who run BW have it installed
on a separate server and rely on that one. With the default DXC configuration, SAP
HANA adopts the scheduling and monitoring features of this embedded BW system, but
does not utilize its other aspects such as data storage, data warehousing, or reporting/BI.
DXC extraction processing essentially bypasses the normal BW data flow and instead
sends the data to SAP HANA. The following illustration depicts the default configuration
of DXC.

DXC Configuration 1
An In-Memory DataStore Object (IMDSO) is generated in SAP HANA that directly
corresponds to the structure of the DataSource in use. This IMDSO consists of several
tables and an activation mechanism. The active data table of the IMDSO can be used as
the basis for building data models in SAP HANA (attribute views, analytic views, and
calculation views).
Data is transferred from the source SAP Business Suite system over an HTTP connection.
Generally, the extraction and load process is virtually the same as when extracting into
SAP Business Warehouse: you rely on InfoPackage scheduling, the data load monitor,
process chains, and so on, all of which are well known from operating SAP Business
Warehouse. The ETL replication architecture is shown below.

ETL Replication architecture 1


The figure above gives an overview of the ETL-based replication method. Here, data
replication is operated by Data Services. Its main components are the Data Services
Designer, where you model the data flow, and the Data Services Job Server for the
execution of the replication jobs. An additional repository is used to store the metadata
and the job definitions, which are listed below:

Data flow: As for any replication scenario, you have to define a series of parameters for
the two systems involved. Using Data Services, you set up data stores to define such
parameters; you use the Designer to set up the data stores.
Data store setup: When setting up a data store for the source system (SAP ERP), choose
SAP Applications as the type of data store and specify the address of the system as well
as the user name and password that allow Data Services to access the system. Additional
settings depend on the type of SAP ERP objects to be read. For the target of the
replication, the SAP HANA database, you set up a separate data store in the same way as
for the source system.
Data flow modeling: System designers and administrators must define a series of
parameters for the two systems involved in any replication scenario. These parameters
must be established before the data stores can be used, and should be set as part of
system configuration during the initial stages of the deployment.
Data flow for initial load and update: SAP Data Services performs the initial load of
business data from the source system into the SAP HANA database as well as the update
of the replicated data (delta handling).
Replication job schedule: Since Data Services allows administrators to schedule
replication jobs, this method is suitable where the source system must be protected from
additional load during main business hours; the replication workload can be shifted, for
example, to the night. As a result, the data available for reporting always represents the
state reached when the latest replication job was started. The Management Console,
which comes with Data Services, should be used to schedule replication jobs; it also
provides a range of tools and methods for scheduling, and it is recommended to use the
Management Console to monitor and manage the replication process.
SAP Replication Server (SRS) moves and synchronizes transactional data including DML
and DDL across the enterprise, providing low impact, guaranteed data delivery, real-time
business intelligence, and zero operational downtime. SRS supports log-based replication
from and to heterogeneous databases, except for the homogeneous SAP HANA to SAP
HANA replication, which is trigger-based. You can use SAP Replication Server to do an
initial load as well as replication in real time to SAP HANA, at both table and database
level, from these primary databases:
SAP Adaptive Server Enterprise
Oracle
Microsoft SQL Server
DB2 UDB LUW
SAP Business Suite running on SAP ASE, Oracle, Microsoft SQL Server, or DB2 UDB LUW
SAP HANA database
In addition, SRS includes the Replication Agent for SAP HANA (RAH), a light-weight
server that replicates data from a primary SAP HANA database to a replicate SAP
HANA database using trigger-based replication, primarily for real-time data distribution
and real-time reporting. For all of the above primary databases, initial load
materialization of data as well as continuous real-time transactional replication are
supported. The initial load materialization feature allows you to set up replication
without any downtime of the primary data server and offers high performance.
Designers and administrators can set up the replication environment for replication into
the SAP HANA database using the Replication Management Agent (RMA). SRS also
provides Data Assurance, which compares row data and schemas between two or more
databases, reports discrepancies, and rectifies them. Currently, row data can be compared
between any combination of SAP Adaptive Server Enterprise (SAP ASE), SAP HANA,
IBM DB2 Universal Database (UDB), Microsoft SQL Server, or Oracle databases in a
heterogeneous comparison environment.

SAP HANA DB System 1


The following components are necessary for implementing a primary-database-to-SAP-HANA
replication system:

A primary data server

A replicate SAP HANA database data server

A Replication Server (with Express Connect for SAP HANA database)

A Replication Agent for MSSQL, DB2, or Oracle, from the Replication Server Options
component (not required for an SAP ASE primary database)

Replication Agent for SAP HANA (RAH)
SAP HANA Architecture Components
The SAP HANA database is developed in C++ and runs on SUSE Linux Enterprise
Server. It consists of multiple servers, the most important of which is the index server.
The SAP HANA database consists of the Index Server, Name Server, Statistics Server,
Preprocessor Server, and XS Engine, which are described below.

Index Server: Contains the actual data and the engines for processing the data. It also
coordinates and uses all the other servers.
Name Server: Holds information about the SAP HANA database topology. This is used
in a distributed system with instances of the HANA database on different hosts. The
name server knows where the components are running and which data is located on
which server.
Statistics Server: Collects information about status, performance, and resource
consumption from all the other server components. From the SAP HANA studio we can
access the statistics server to get the status of various alert monitors.
Preprocessor Server: Used for analyzing text data and extracting the information on
which the text search capabilities are based.
XS Engine: An optional component. Using the XS Engine, clients can connect to the
SAP HANA database to fetch data via HTTP.


Listed below are the functional components:

Connections and Sessions Management: This component is responsible for creating and
managing sessions and connections for the database clients. Once a session is
established, clients can communicate with the SAP HANA database using SQL
statements. For each session, a set of parameters is maintained, such as auto-commit and
the current transaction isolation level. Users are authenticated either by the SAP HANA
database itself (login with user and password) or authentication can be delegated to an
external authentication provider such as an LDAP directory.
Processing and Execution Control: The request parser analyses the client request and
dispatches it to the responsible component. The execution layer acts as the controller
that invokes the different engines and routes intermediate results to the next execution
step. For example, transaction control statements are forwarded to the transaction
manager, data definition statements are dispatched to the metadata manager, and object
invocations are forwarded to the object store. Data manipulation statements are
forwarded to the optimizer, which creates an optimized execution plan that is
subsequently forwarded to the execution layer.
SQL Parser: Checks the syntax and semantics of the client SQL statements and
generates the logical execution plan. Standard SQL statements are processed directly by
the database engine. The SAP HANA database has its own scripting language named
SQLScript, which is designed to enable optimizations and parallelization. SQLScript is a
collection of extensions to SQL. It is based on side-effect-free functions that operate on
tables using SQL queries for set processing. The motivation for SQLScript is to offload
data-intensive application logic into the database.
MDX Extensions: MDX is a language for querying and manipulating the
multidimensional data stored in OLAP cubes. The SAP HANA database also contains a
component called the planning engine that allows financial planning applications to
execute basic planning operations in the database layer. One such basic operation is to
create a new version of a dataset as a copy of an existing one while applying filters and
transformations. For example, planning data for a new year is created as a copy of the
data from the previous year; this requires filtering by year and updating the time
dimension. Another example of a planning operation is the disaggregation operation,
which distributes target values from higher to lower aggregation levels based on a
distribution function. The SAP HANA database also has built-in support for
domain-specific models (such as for financial planning) and offers scripting capabilities
that allow application-specific calculations to run inside the database.
Calc Engine: SAP HANA database features such as SQLScript and planning operations
are implemented using a common infrastructure called the calculation engine. SQLScript,
MDX, planning models, and domain-specific models are converted into calculation
models. The calc engine creates a logical execution plan for calculation models and
breaks a model, for example some SQLScript, into operations that can be processed in
parallel. The engine also executes user-defined functions.
Transaction Manager: Each SQL statement is processed in the context of a transaction.
New sessions are implicitly assigned to a new transaction. The transaction manager
coordinates database transactions, controls transactional isolation, and keeps track of
running and closed transactions. When a transaction is committed or rolled back, the
transaction manager informs the involved engines about this event so they can execute
necessary actions. The transaction manager also cooperates with the persistence layer to
achieve atomic and durable transactions.
Metadata Manager: The SAP HANA database metadata comprises a variety of objects,
such as definitions of relational tables, columns, views, and indexes, definitions of
SQLScript functions, and object store metadata. Metadata of all these types is stored in
one common catalog for all SAP HANA database stores (in-memory row store,
in-memory column store, object store, disk-based). Metadata is stored in tables in the
row store. SAP HANA database features such as transaction support and multi-version
concurrency control are also used for metadata management. In distributed database
systems, central metadata is shared across servers. How metadata is actually stored and
shared is hidden from the components that use the metadata manager.
Authorization Manager: Invoked by other SAP HANA database components to check
whether the user has the required privileges to execute the requested operations. SAP
HANA allows granting of privileges to users or roles. A privilege grants the right to
perform a specified operation (such as create, update, select, or execute) on a specified
object (for example a table, view, or SQLScript function).
Database Optimizer: Gets the logical execution plan from the SQL parser or the calc
engine as input and generates the optimized physical execution plan based on the
database statistics. The database optimizer determines the best plan for accessing row or
column stores.
Database Executor: Executes the physical execution plan to access the row and column
stores and processes all the intermediate results.
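The disaggregation operation mentioned under the planning engine can be sketched minimally. This is an illustrative example, not the planning engine's implementation: a target total is distributed to lower aggregation levels proportionally to reference values (here, invented quarterly figures from a previous year serve as the distribution function).

```python
# Hypothetical sketch of the disaggregation planning operation: a target
# value at a higher aggregation level is distributed to lower levels
# according to a distribution function (here, proportional to last
# year's values). The data and weights are illustrative.

last_year = {"Q1": 100, "Q2": 300, "Q3": 400, "Q4": 200}  # reference data
target_total = 1200                                       # new annual target

def disaggregate(total, reference):
    """Distribute `total` across keys proportionally to `reference`."""
    base = sum(reference.values())
    return {k: total * v / base for k, v in reference.items()}

plan = disaggregate(target_total, last_year)
print(plan["Q2"])   # 360.0
```

Running such an operation inside the database layer avoids moving the full dataset out to the application, which is the motivation the text gives for the planning engine.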

Row and Column Store


SAP HANA works on its own storage model with both row and column storage, a
proprietary design from SAP intended for massive architectures. The row store is the
SAP HANA database's row-based in-memory relational data engine. It is optimized for
high performance of write operations and is interfaced from the calculation/execution
layer. Optimized write and read operations are possible due to storage separation, i.e.
transactional version memory and the persisted segment.

HANA storage design 1


Transactional version memory contains temporary versions, i.e. recent versions of
changed records. This is required for multi-version concurrency control (MVCC). Write
operations mainly go into transactional version memory; INSERT statements also write
to the persisted segment.
The persisted segment contains data that may be seen by any ongoing active
transaction, i.e. data that was committed before any active transaction started.
Version memory consolidation moves the recent versions of changed records from
transactional version memory to the persisted segment based on commit ID. It also
clears outdated record versions from transactional version memory; it can be considered
the garbage collector for MVCC.
Segments contain the actual data (the content of row-store tables) in pages. Row store
tables are linked lists of memory pages; pages are grouped into segments. The typical
page size is 16 KB.
The page manager is responsible for memory allocation and also keeps track of free and
used pages.
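The version memory consolidation described above can be sketched as follows. This is an illustration of the idea, not SAP HANA's internal code: record versions carry a commit ID, and consolidation moves versions committed before the oldest active transaction into the persisted segment, keeping only the newest committed version per key.

```python
# Hypothetical sketch of MVCC version-memory consolidation: versions
# whose commit ID precedes the oldest active transaction are moved to
# the persisted segment, and stale versions are discarded. The data
# structures are illustrative, not SAP HANA internals.

version_memory = [
    {"key": "A", "value": 1, "commit_id": 10},
    {"key": "A", "value": 2, "commit_id": 20},   # newer version of A
    {"key": "B", "value": 7, "commit_id": 15},
]
persisted_segment = {}

def consolidate(oldest_active_commit_id):
    """Move versions no active transaction still needs; drop stale ones."""
    remaining = []
    for v in version_memory:
        if v["commit_id"] < oldest_active_commit_id:
            # Keep only the newest committed version per key.
            prev = persisted_segment.get(v["key"])
            if prev is None or prev["commit_id"] < v["commit_id"]:
                persisted_segment[v["key"]] = v
        else:
            remaining.append(v)
    version_memory[:] = remaining

consolidate(oldest_active_commit_id=25)
print(persisted_segment["A"]["value"])   # 2
```

After consolidation, transactional version memory holds only versions that some running transaction might still need, which is what keeps its footprint bounded.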
The column store is the SAP HANA database's column-based in-memory relational data
engine. Parts of it originate from TREX (Text Retrieval and Extraction), i.e. SAP
NetWeaver Search and Classification. This technology offers full-featured relational
column-based and row-based storage options. The column store offers in-memory
computing, high performance of read and write operations, and efficient data
compression; it has been optimized for high-performance read operations and is
interfaced from the calculation/execution layer. Optimized read and write operations are
possible due to storage separation, i.e. main and delta storage.
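The efficient compression mentioned above can be illustrated with dictionary encoding, a standard column-store technique: each distinct column value is stored once in a dictionary, and the column itself becomes a vector of small integer IDs. The data below is invented for the example.

```python
# Illustrative sketch of dictionary encoding, the kind of compression a
# column store applies: distinct values go into a dictionary, and the
# column becomes a vector of integer IDs. The data is made up.

column = ["DE", "US", "DE", "FR", "US", "DE", "DE"]

def dictionary_encode(values):
    """Return (dictionary, id_vector) for a column of values."""
    dictionary = sorted(set(values))            # distinct values, sorted
    ids = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [ids[v] for v in values]

dictionary, vector = dictionary_encode(column)
print(dictionary)   # ['DE', 'FR', 'US']
print(vector)       # [0, 2, 0, 1, 2, 0, 0]
```

For columns with few distinct values, the ID vector is far smaller than the raw strings, and scans and aggregations can operate directly on the compact integer representation.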
The Persistence Layer is responsible for durability and atomicity of transactions. It
ensures that the database is restored to the most recent committed state after a restart and
that transactions are either completely executed or completely undone. To achieve this
goal in an efficient way the persistence layer uses a combination of write-ahead logs,
shadow paging, and savepoints. The persistence layer offers interfaces for writing and
reading data. It also contains SAP HANA's logger, which manages the transaction log.
Log entries can be written implicitly by the persistence layer when data is written via the
persistence interface, or explicitly by using a log interface.
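The restart behavior described above (restore to the most recent committed state) can be sketched with a write-ahead log and a savepoint. This is a simplified illustration under assumed structures, not SAP HANA's log format: recovery starts from the last savepoint and replays committed log entries written after it.

```python
# Hypothetical sketch of restart recovery with a write-ahead log and a
# savepoint: load the savepoint, then replay committed log entries that
# were written after it. Log format and state store are illustrative.

savepoint = {"data": {"x": 1, "y": 5}, "log_position": 2}
log = [
    (1, "SET", "x", 1),   # already covered by the savepoint
    (2, "SET", "y", 5),   # also covered
    (3, "SET", "x", 9),   # committed after the savepoint
    (4, "SET", "z", 3),   # committed after the savepoint
]

def recover(savepoint, log):
    """Rebuild state: start from the savepoint, replay newer log entries."""
    state = dict(savepoint["data"])
    for position, op, key, value in log:
        if position > savepoint["log_position"] and op == "SET":
            state[key] = value
    return state

state = recover(savepoint, log)
print(state)   # {'x': 9, 'y': 5, 'z': 3}
```

Savepoints bound the amount of log that must be replayed on restart, which is why the persistence layer combines them with the write-ahead log rather than relying on the log alone.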

SAP HANA Host Systems


SAP HANA offers multiple technical deployment options, listed below. The criteria for
choosing one over another are dictated by the requirements, availability factors, and cost.
The technical deployment options determine how SAP HANA systems, the hosts used
for them, and the applications running on SAP HANA are deployed.
Single Application on One SAP HANA System (SCOS)
A single application on one SAP HANA system is also known as Single Component on
One System (SCOS). In this configuration, a single application runs in a single schema,
in a single SAP HANA database, as part of an SAP HANA system. This is a simple,
straightforward scenario that is supported for all scenarios without restriction.
Multiple Applications on One SAP HANA System (MCOD)
Multiple applications on one SAP HANA system is also known as Multiple Components
on One Database (MCOD).

Single host system 1

Multiple Host system 1


The technical deployment type MCOD refers to the scenario where more than one

application, scenario, or component runs on one SAP HANA system. This deployment
type is available, with restrictions, for production SAP HANA systems.
Multiple SAP HANA Systems on One Host (MCOS)


SAP HANA System Types
The number of hosts in an SAP HANA system landscape determines the SAP HANA
system type. The host provides links to the installation directory, data directory, and log
directory, or to the storage itself. The storage needed for an installation does not have to
be on the host; in particular, shared data storage is required for distributed systems. An
SAP HANA system can be configured as one of the following types:
Single-host system - One SAP HANA instance on one host.

Distributed system (multiple-host system) - Multiple SAP HANA instances
distributed over multiple hosts, with one instance per host
Technical representations of single-host and multiple-host systems are shown below.

Single host system 1


Multiple Host Systems
If the system consists of multiple connected hosts, it is called a distributed system. The
following graphic shows the file system for a distributed installation:

Distributed host systems 1


Distributed systems have the following properties: scaling is feasible and can be
achieved either by increasing the RAM of a single server or by adding hosts to the
system to deal with larger workloads, allowing you to go beyond the limits of a single
physical server. Distributed systems can also be used for failover scenarios and to
implement high availability. Individual hosts in a distributed system have different roles
(master, worker, slave, and standby) depending on the task.
SAP HANA Application Development Platform
SAP HANA provides the basis for an application development platform on which myriad
different types of applications can be built and run. Within this category, there are two
types of applications: native SAP HANA applications, and applications with another
application server that connects to SAP HANA. Along with the development tools, these
components form an application development platform and runtime capable of building,
deploying, and operating SAP HANA-based software applications of all kinds. Such
applications normally have an HTML or mobile app user interface that connects to SAP
HANA using HTTP. The name for these capabilities is SAP HANA Extended
Application Services, or simply XS.
SAP HANA-based Applications with Another Type of Application Server (for example,
.NET or Java)
Various types of applications can be built on, and run on, SAP HANA utilizing the
architecture of other widely-known application servers and languages. Applications
written using .NET are integrated with SAP HANA using Open Database Connectivity
(ODBC), which is a standard, implementation-agnostic C-based API for accessing a
database. Applications written using Java integrate using Java Database Connectivity
(JDBC), which functions similarly to ODBC in principle. These interface types provide
methods for creating and maintaining connections, transactions, and other mechanisms for
create, read, update, and delete operations in SAP HANA; these methods map directly to
the underlying SQL semantics, hiding the actual communication details. Essentially, any
application that can utilize ODBC, ODBO, or JDBC can integrate with SAP HANA.
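The connect / execute / commit pattern that ODBC- and JDBC-style access follows can be illustrated with a short sketch. Since an SAP HANA driver cannot be assumed here, Python's built-in sqlite3 module stands in; it follows the same pattern of connections, cursors, and transaction control that the interfaces above describe. The table and data are invented for the example.

```python
# Illustrative sketch of the connect / execute / commit pattern shared by
# ODBC- and JDBC-style database access. Python's built-in sqlite3 module
# stands in for an SAP HANA driver so the sketch is runnable anywhere;
# against SAP HANA the connection details and schema would differ.
import sqlite3

conn = sqlite3.connect(":memory:")          # create/maintain a connection
cur = conn.cursor()

cur.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("EMEA", 100), ("APJ", 50), ("EMEA", 25)])
conn.commit()                               # explicit transaction control

cur.execute("SELECT region, SUM(amount) FROM sales GROUP BY region "
            "ORDER BY region")
rows = cur.fetchall()
print(rows)   # [('APJ', 50), ('EMEA', 125)]
conn.close()
```

The calls map directly to the underlying SQL semantics, which is exactly the property the text attributes to ODBC and JDBC: the interface hides the communication details while exposing connections, statements, and transactions.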

If the system consists of multiple connected hosts, it is called a distributed system. The
following graphic shows the file system for a distributed installation:

File system for a distributed installation 1


A distributed system might be necessary in the following cases:
Administrators can scale SAP HANA either by increasing the RAM of a single server or
by adding hosts to the system to deal with larger workloads, allowing architects to create
a scalable architecture that goes beyond the limits of a single physical server. Such
systems can also be used in failover scenarios and to implement high availability.
Individual hosts in a distributed system have different roles (master, worker, slave, and
standby) depending on the task.

SAP HANA Virtualization
The technical deployment type SAP HANA with virtualization refers to the scenario
where one or more SAP HANA database SIDs are deployed on one or more virtual
machines running on SAP HANA server hardware. A single-host virtualization
architecture is shown below.

Single Host architecture 1


SAP HANA Network
It is recommended that designers separate external and internal communication by
installing firewalls or creating routing tables, and by using separate network adapters
with a separate IP address for each of the different networks. SAP HANA supports the
isolation of internal communication from the outside; for security purposes, certified
SAP HANA hosts use separate network adapters with different subnets and firewalls. An
SAP HANA data center deployment can range from a database running on a single host
to a complex distributed system with multiple hosts located at a primary and one or
more secondary sites, supporting a distributed multi-terabyte database with full high
availability and disaster recovery. In terms of network connectivity, SAP HANA
supports traditional database client connections and, with SAP HANA Extended
Application Services (SAP HANA XS), Web-based clients. SAP HANA can be
integrated with transaction-oriented databases using replication services, as well as with
high-speed event sources. SAP HANA-based applications can be integrated with
external services such as e-mail, Web, and R-code execution. SAP HANA allows
network firewall and other security parameters to be customized to individual
environments. The setup of an SAP HANA system, and the corresponding data center
and network configurations, depends on your company's environment and
implementation considerations. Some of these considerations are:
Support for traditional database clients, Web-based clients, and administrative
connections

The number of hosts used for the SAP HANA system, ranging from a single-host
system to a complex distributed system with multiple hosts

Support for high availability through the use of standby hosts, and support for
disaster recovery through the use of multiple datacenters

Security and performance



SAP HANA has different types of network communication channels to support the
different SAP HANA scenarios and setups:

Channels used for external access to SAP HANA functionality by end-user
clients, administration clients, application servers, and for data provisioning via
SQL or HTTP

Channels used for SAP HANA internal communication within the database or,
in a distributed scenario, for communication between hosts
For the purpose of network separation, certified SAP HANA hosts use a separate
network adapter with a separate IP address for each of the different networks. SAP
HANA supports the isolation of internal communication from outside access, and
configurations can be created to separate external and internal communication and to
use SSL for additional security.

Network zones
Separate network zones, each with its own configuration, allow you to control and limit
network access to SAP HANA to only those channels required for your scenarios, while
ensuring the required communication between all components in the SAP HANA
network. These network zones can be described as follows:

Client zone: The network in this zone is used by SAP application servers, by clients such
as the SAP HANA studio or Web applications running against the SAP HANA XS
server, and by other data sources such as SAP NetWeaver Business Warehouse.
Internal zone: This zone covers the innermost network between hosts in a distributed
system as well as the SAP HANA system replication network.
Storage zone: This zone covers the network connections for backup storage and
enterprise storage. Storage requirements often involve separate, externally attached
storage subsystem devices that are capable of providing dynamic mount points for the
different hosts, according to the overall landscape.
Internal host name resolution: The SAP HANA services use IP addresses to
communicate with each other. Host names are mapped to these IP addresses through
internal host name resolution, a technique by which the use of specific and/or fast
networks can be enforced and communication restricted to a specific network.

High Availability
Designed to deliver high availability and high performance, SAP HANA is considered
one of the best enterprise-grade platforms for processing large volumes of data, and it is
also strong in the area of recovery.
SAP HANA is fully designed for high availability. It supports recovery measures
ranging from faults and software errors to disasters that decommission an entire data
center. High availability is the name given to a set of techniques, engineering practices,
and design principles that support the goal of business continuity.

High availability is achieved by eliminating single points of failure (fault tolerance) and
by providing the ability to rapidly resume business operations after an outage or system
failure, with minimal business losses. Fault recovery is the process of recovering and
resuming operations after an outage due to a fault. Disaster recovery is the process of
recovering operations after an outage due to a prolonged data center or site failure.
Preparing for disasters may require backing up data across longer distances and may
thus be more complex.
SAP HANA provides several levels of defense against failure-related outages, starting
with hardware and data center redundancy: hardware vendors provide systems with
redundancy at all levels. This includes redundant power supplies and fans,
enterprise-grade error-correcting memory, fully redundant network switches and routers,
and uninterruptible power supplies (UPS). Disk storage systems use batteries to
guarantee writing even in the presence of power failure, and use striping and mirroring
to provide redundancy for automatic recovery from disk failures. Generally speaking, all
these redundancy solutions are transparent to SAP HANA's operation, but they form part
of the defense against system outages due to single component failures.

Software: SAP HANA is based on SUSE Linux Enterprise 11 for SAP and includes
security pre-configurations (for example, minimal network services). Additionally, the
SAP HANA system software includes a watchdog function, which automatically restarts
configured services (index server, name server, and so on).
Persistence: SAP HANA persists transaction logs, savepoints, and snapshots to support
system restart and recovery from host failures, with minimal delay and without loss of
data.
Standby and Failover: Separate, dedicated standby hosts are used for failover in case of
failure of the primary, active hosts. This improves availability by significantly reducing
the recovery time from an outage.
Replication
As an in-memory database, SAP HANA is not only concerned with maintaining the
reliability of its data in the event of failures, but also with resuming operations with most
of that data loaded back into memory as quickly as possible.
SAP HANA supports the following recovery measures for failures:
Disaster recovery support:
Backups: Periodic saving of database copies in a safe place.
Storage replication: Continuous replication (mirroring) between primary storage and
backup storage over a network (may be synchronous).
System replication: Continuous update of secondary systems by the primary system,
including in-memory table loading.
Fault recovery support:
Service auto-restart: Automatic restart of stopped services on a host (watchdog).
Host auto-failover: Automatic failover from a crashed host to a standby host in the same
system.
System replication: Continuous update of secondary systems by the primary system,
including in-memory table loading.



System replication can serve both fault recovery and disaster recovery, achieving high
availability. The data pre-load option can be used for fault recovery to enable a quicker
takeover than with host auto-failover. A system replication solution can be built with
single-node systems, so a scale-out system, with its additional storage and associated
costs, is not needed.
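As an illustration, system replication between two single-node systems is typically set up with the hdbnsutil tool. The host names, instance number, and site names below are placeholders, and the exact option names vary between SAP HANA revisions, so treat this as a sketch rather than a definitive procedure:

```shell
# On the primary site (assumed site name "siteA"): perform an initial data
# backup first, then enable system replication.
hdbnsutil -sr_enable --name=siteA

# On the secondary site (assumed site name "siteB"): stop the instance, then
# register it against the primary host "hanahost1" (instance 00, synchronous mode).
hdbnsutil -sr_register --remoteHost=hanahost1 --remoteInstance=00 \
    --replicationMode=sync --name=siteB

# Check the replication state from either site.
hdbnsutil -sr_state
```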

Host name resolution

The SAP HANA services use IP addresses to communicate with each other. Host names
are mapped to these IP addresses through internal host name resolution, a technique by
which the use of specific and/or fast networks can be enforced and communication
restricted to a specific network. For single-host systems, no additional configuration is
required. The services listen on the loopback interface only (IP address 127.0.0.1). In the
global.ini file, the [communication] listen interface is set to .local.
In a distributed environment with multiple hosts, the network must be configured so that
inter-service communication is operational throughout the entire landscape. In this setup,
the host names (these could be virtual host names) of all hosts must be known to each
other and thus to the SAP HANA system. This can be achieved by manually adding all
hosts to the /etc/hosts file on the operating system of each host. A distributed system can
run with or without a separate network definition for inter-service communication.
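For example, the listen-interface setting described above lives in the [communication] section of global.ini. The values below sketch the two common cases; the subnet, host names, and exact parameter spellings are illustrative and may differ between revisions:

```ini
# Single-host system: services listen on the loopback interface only.
[communication]
listeninterface = .local

# Distributed system with a separate internal network (illustrative values):
# [communication]
# listeninterface = .internal
# [internal_hostname_resolution]
# 192.168.1.1 = hanahost1
# 192.168.1.2 = hanahost2
```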
Host names are used when creating SAP HANA name server and index server services,
and when starting, resuming, and stopping SAP HANA services. In addition, SAP HANA
system views with a HOST column show these host names.

Example of default host names for SAP HANA


Virtual Host names
Another approach is to specify alternative host names during installation. These are
referred to as virtual hostnames. Virtual host names must also be unique across multiple
SAP HANA systems if more than one data center or site is used.
Host names specified in this manner must be resolvable at installation time as well as
when SAP HANA is in operation. This is achieved, for example, by adding an <ip>
<hostname> line to the operating system file /etc/hosts, which contains the hostname-to-IP
address mappings for the TCP/IP subsystem. Here is an example of what this might look
like at operating system level for one host:
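A sketch of such an entry, with made-up IP addresses and host names, might be:

```text
# /etc/hosts – hostname-to-IP mappings (illustrative values only)
127.0.0.1      localhost
10.10.10.101   hanahost1.example.com   hanahost1
```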
Virtual host names are assigned as part of the installation process with the lifecycle
management (LCM) command-line tool hdblcm, using the hostname parameter.
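For illustration, the hdblcm call might look as follows. The virtual host name is a placeholder, and hdblcm prompts interactively for any parameters not supplied on the command line; available parameters differ between revisions:

```shell
# Install a system bound to the virtual host name "vhanahost1" instead of the
# physical host name (illustrative value).
./hdblcm --action=install --hostname=vhanahost1
```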
Host name resolution for system replication
The correct mapping of internal host names between primary and secondary systems is
required for system replication. With SAP HANA system replication, each SAP HANA
instance communicates on the service level with a corresponding peer in the secondary
system to persist the same data and logs as in the primary system. The replication of the
transactional load can be configured to work in synchronous or asynchronous mode,
depending mainly on the distance between the two sites. Communication between the
primary and the secondary system is based on internal host names. The host names of the
other site must always be resolvable, for example, through configuration in SAP HANA
or corresponding entries in the /etc/hosts file.
Host Name Resolution for Client Communications
Client applications communicate with SAP HANA servers from different platforms and
types of clients via a client library (such as SQLDBC, JDBC, ODBC, DBSL, ODBO or
ADO.NET) for SQL or MDX access. In distributed systems, the application has a
logical connection to the SAP HANA system: the client library may in fact use
multiple connections to different servers or switch to a different underlying connection.
The client library supports load balancing and minimizes communication overhead by:
Selecting connections based on load data
Routing statements based on information about the location of data
Communication with SAP HANA hosts from a Web browser or a mobile application
uses the HTTP protocol, which enables access to SAP HANA Extended
Application Services (SAP HANA XS).
Public Host Name Resolution
By design, an SQL client library always connects to the first available host specified in
the connect string. From this host, the client library then receives a list of all the hosts.
During operations, statements may be sent to any of these hosts; this works as long as
there is only one external network. If a host name or IP address cannot be resolved, the
client library falls back on the host names in the connect string. In single-host systems,
the user does not normally notice this. In rare cases, the connection attempt does not fail
immediately but waits for a TCP timeout, making the first statement run very slowly. In
distributed systems, performance is impaired because statements must first be sent to the
initial host and then forwarded on the server side to the right host.
Connect String with Multiple Hostnames
In a distributed SAP HANA system consisting of more than one host, a list of hosts
(host:port) is specified in the SQL client library connect string. All hosts that can take the
role of the active master, that is, the three configured master candidates, must be listed in
the connect string to allow an initial connection to any of them in the event of a host
auto-failover. A host auto-failover is an automatic switch from a crashed host to a standby
host in the same system. One or more standby hosts are added to an SAP HANA system
and configured to work in standby mode. As long as they are in standby mode, these hosts
do not contain any data and do not accept requests or queries. When an active (worker)
host fails, a standby host automatically takes its place. Inclusion of the standby hosts in
the connect string is mandatory if they are master candidates, and otherwise optional. The
client connection code (ODBC/JDBC) uses a round-robin approach to reconnection,
ensuring that clients can always access the SAP HANA database, even after failover. The
following diagram illustrates how host auto-failover works: an active host fails (in this
example, Host 2), and the standby host takes over its role by starting its database instance
using the persisted data and log files of the failed host.
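The round-robin reconnection behavior described above can be sketched in a few lines of Python. This is not the actual SQLDBC/JDBC client code, just an illustration of the idea: the client cycles through the host list from the connect string until one host accepts the connection. The host names and the `try_connect` callback are stand-ins for the real network connect call.

```python
from typing import Callable, List, Optional

def connect_round_robin(hosts: List[str],
                        try_connect: Callable[[str], bool],
                        attempts_per_host: int = 1) -> Optional[str]:
    """Cycle through the configured host:port list until one host accepts.

    `try_connect` stands in for the real network connect call; it returns
    True if the host accepted the connection.
    """
    for _ in range(attempts_per_host):
        for host in hosts:
            if try_connect(host):
                return host  # first reachable host wins
    return None  # every candidate failed

# Example: assume only the standby host is currently reachable; the client
# still obtains a connection by cycling past the failed candidates.
alive = {"standby:30015"}
hosts = ["host1:30015", "host2:30015", "standby:30015"]
print(connect_round_robin(hosts, lambda h: h in alive))  # prints: standby:30015
```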

Example of host auto-failover


One way to look up the master candidates in your distributed SAP HANA database is to
use the following SQL statement:
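The statement itself is not reproduced in the source. One plausible form queries the monitoring view M_LANDSCAPE_HOST_CONFIGURATION; treat the exact column names and role values as revision-dependent:

```sql
-- List hosts whose configured name server role makes them master candidates
-- (MASTER 1..3); role values may vary between SAP HANA revisions.
SELECT HOST, NAMESERVER_CONFIG_ROLE, NAMESERVER_ACTUAL_ROLE
FROM M_LANDSCAPE_HOST_CONFIGURATION
WHERE NAMESERVER_CONFIG_ROLE LIKE 'MASTER%';
```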
Connect strings for SAP HANA system replication
When system replication is used, we recommend that you do not specify physical host
names in the SQL client connect string. Otherwise, you would have to reconfigure all of
your applications after a takeover. Instead, use a virtual host name or virtual IP address,
and manage it using an external cluster manager. This virtual host name or IP address
must point to the active master host on the active primary site. System replication
takeover hooks can be implemented to provide notification about the takeover.
Data Integration
Data integration is the process of combining heterogeneous data sources into a single
queryable schema so as to provide a unified view of the data. Many organizations, as a
standard practice, maintain disparate, individual systems for their databases and related
information. Although this is manageable, it becomes cumbersome as data and
information grow: information processing becomes complex and time consuming and
requires more manpower. And while in most instances such separation of the data
provides better manageability and security, performing any cross-departmental analysis
on these datasets becomes impossible.
Consider the case of a sales and marketing department where large data volumes are
generated. It becomes very complicated and time consuming for analysts to analyze such
volumes and to design specific strategies, for example, to measure the effect of a certain
advertising campaign by the marketing department on sales of a product. Another
example is the HR department, which has varied storage requirements and must maintain
a varied set of databases. Where these are all disparate systems, it is next to impossible
with traditional systems to perform the needed tasks; in the HR case, where individual
sources are maintained, it might not be possible to analyze the correlation between yearly
incentives and employee productivity. The only practical solution to such issues is data
integration, and SAP HANA, with its compute architecture, makes them simpler to solve.
Listed below are examples where data integration is required. The list, however, is not
comprehensive:
Cross-functional analysis - as discussed in the example above
Finding correlations - statistical intelligence/scientific applications
Sharing information - legal or regulatory requirements, e.g. sharing customers' credit
information among banks
Maintaining a single point of truth - higher management spanning several
departments may need to see a single picture of the business
Merger of businesses - after a merger, two companies want to aggregate their
individual data assets
How is data integration achieved?
There are two major approaches to data integration, commonly known as the tight
coupling approach and the loose coupling approach.

Tight coupling: Tight coupling is the first choice for designers and is generally
implemented through data warehousing. In this case, data is pulled from disparate
sources into a single physical location through the process of ETL (Extraction,
Transformation and Loading). The single physical location provides a uniform interface
for querying the data. The ETL layer helps to map the data from the sources so as to
provide a semantically uniform data warehouse.

Loose coupling: In direct contrast to the tight coupling approach, a virtual mediated
schema provides an interface that takes the query input from the user, transforms the
query into a form the source databases can understand, and then sends the query directly
to the source databases to obtain the result. In this approach, the data does not actually
reside in the mediated schema; it remains only in the actual source databases. However,
the mediated schema contains several adapters or wrappers that can connect back to the
source systems in order to bring the data to the front end. This approach is often
implemented through middleware architecture (EAI).

The concept of coupling is best understood through an example. Suppose one database
stores weather data of a country for the past 50 years, while another database contains
crop production data of the country for each year. A user might want to ask: "In which
year was crop production lowest in spite of more-than-average rainfall?" The question
can be subdivided into three questions as follows:
1. What is the average rainfall across all the years?
2. In which years was the recorded rainfall higher than the average?
3. Of the years obtained from the above query, which year had the least crop production?
The diagram below depicts the schema.

Loose coupling method
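The three-step decomposition above can be played through with two small stand-in "source" tables. The schema, the figures, and the use of SQLite in place of two independent source databases are all invented purely for illustration of how a mediator splits one question across sources:

```python
import sqlite3

# Two separate in-memory databases stand in for the weather and crop sources.
weather = sqlite3.connect(":memory:")
weather.execute("CREATE TABLE rainfall (year INTEGER, mm REAL)")
weather.executemany("INSERT INTO rainfall VALUES (?, ?)",
                    [(2001, 800), (2002, 1200), (2003, 1100), (2004, 700)])

crops = sqlite3.connect(":memory:")
crops.execute("CREATE TABLE production (year INTEGER, tonnes REAL)")
crops.executemany("INSERT INTO production VALUES (?, ?)",
                  [(2001, 50), (2002, 35), (2003, 60), (2004, 45)])

# Step 1: average rainfall across all years (asked of the weather source).
avg_mm = weather.execute("SELECT AVG(mm) FROM rainfall").fetchone()[0]

# Step 2: years with above-average rainfall (also asked of the weather source).
wet_years = [y for (y,) in weather.execute(
    "SELECT year FROM rainfall WHERE mm > ?", (avg_mm,))]

# Step 3: of those years, the one with the lowest crop production.
# The "mediator" combines results from both sources here.
marks = ",".join("?" * len(wet_years))
year, tonnes = crops.execute(
    f"SELECT year, tonnes FROM production WHERE year IN ({marks}) "
    "ORDER BY tonnes LIMIT 1", wet_years).fetchone()
print(year, tonnes)  # prints: 2002 35.0
```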



The table below compares the data warehouse (tight coupling) and mediated schema
(loose coupling) approaches.

Tight coupling
Advantages: Independence (less dependency on source systems, since data is physically
copied over); faster query processing; complex query processing; advanced data
summarization and storage possible; high-volume data processing.
Disadvantages: Latency (since data needs to be loaded using ETL); costlier (data
localization, infrastructure, security).

Loose coupling
Advantages: Data freshness (low latency, almost real-time); higher agility (when a new
source system comes or an existing source system changes, only the corresponding
adapter is created or changed, largely not affecting the other parts of the system); less
costly (a lot of infrastructure cost can be saved, since data localization is not required).
Disadvantages: Semantic conflicts (the meaning of a measure such as net profit can be
different in different systems, so semantic awareness is often necessary in the mediated
schema); slower query response (due to network/bandwidth issues, non-localization of
data, workload on the source system, etc.); high dependency on the data sources.

SAP HANA on AWS
Amazon Web Services (AWS) provides SAP customers and partners with on-demand
access to servers, storage, and networking in the cloud to run their SAP systems. AWS is
completely self-service, and customers pay only for the resources they actually require.
Since all instances are offered on a subscription model, infrastructure planning is
simplified: whatever instances are required can be purchased and configured within
minutes, instead of the hours or, in some cases, days previously needed. SAP use cases
range from running a single SAP test system to hosting a complete SAP production
environment. Here are some of the most common uses.

Test, training, demo, POC, and project systems: With AWS, system administrators have
the freedom to create, deploy, and configure the required types of infrastructure within
minutes and begin deploying services. There are no up-front costs or up-front
commitments. You can subscribe to and deploy instances within minutes; AWS offers a
wide range of compute instances at low prices, and you can subscribe and unsubscribe at
any point in time.

Trial SAP solutions: Choose from a selection of SAP solutions and try them out on AWS
at little or no cost; quickly and easily deploy and explore the value and benefits of using
SAP software on the AWS cloud. Test Drives are developed by AWS partners and are
free of charge for educational and demonstration purposes. Each Test Drive includes up
to 5 hours of complimentary AWS server time for you to evaluate live enterprise solution
stacks. Feel free to experiment and explore Test Drives for SAP as well as other solutions.

Production SAP hosting: Host complete SAP environments on the AWS cloud.

Hybrid SAP hosting: Migrate existing SAP development and test landscapes to the AWS
cloud while keeping production on premises. Easily enable secure integration between
on-premises resources and AWS by using Amazon Virtual Private Cloud (Amazon VPC).

Disaster recovery: Use the AWS cloud as a DR site for on-premises SAP systems without
the expense of a second physical site and standby infrastructure.

Services for SAP on AWS


AWS operates, manages, and controls the components from the host operating system and
virtualization layer down to the physical infrastructure and facilities. The customer
manages the guest operating system and any applications and databases running on top.
AWS also supports customers by connecting them to the AWS Partner Network (APN),
a growing community of SAP partners who provide both SAP consulting and SAP
managed services on the AWS platform. SAP partners can help you reduce the time to
value of your SAP implementation or project on AWS and help you maximize the
benefits of running SAP solutions on the AWS platform.
How to get started?
Some SAP solutions are available as prebuilt system images on AWS. These images
enable you to rapidly deploy a preinstalled and preconfigured SAP system in a fraction of
the time it would normally take to deploy a system on traditional infrastructure. Prebuilt
SAP system images are available from the following sources:
AWS Test Drive / SAP Trials / SAP Developer Editions
AWS Marketplace
SAP Cloud Appliance Library

Do it yourself
The AWS Quick Start reference deployment guides are the best source of information to
begin with. A highly comprehensive set of documents is shared on the website, using
which even a single system engineer can rapidly deploy fully functional enterprise
software on the AWS cloud, following AWS best practices for security and availability.
With each Quick Start you can easily launch, configure, and run the AWS compute,
networking, storage, and other services needed to deploy that workload on AWS, often in
an hour or less, and even an administrator will be able to create and configure the
environments.
The process to install an SAP system on AWS is the same as the process to install an SAP
system on any other physical or virtual server. AWS provides standard operating system
images for Microsoft Windows Server, SUSE Linux Enterprise Server, and Red Hat
Enterprise Linux to be used as the starting point for building your new SAP system. The
table below explains the steps for building your own SAP environment on AWS.

Sign up: This is the first task that needs to be performed. All that is required is a credit
card: log in to the AWS website, browse the purchase options, select the instances, and
pay by credit card. The instances are then yours; within the next 24 hours they will be
activated, and developers and designers can start using them.

Connect to and access AWS using the AWS Management Console: Once subscribed, and
once the confirmation mail is received, developers or administrators can configure the
instances and use them for the purposes for which they were subscribed. With the
feature-rich AWS Management Console, which offers both GUI and CLI interfaces, it is
very simple to create, configure, and manage instances. AWS permits access to all
resources through a single interface; instances can even be configured remotely, and
functions like software development and testing can be performed remotely too.

Install and configure SAP systems: Administrators and developers can begin with a basic
instance, which is a Windows Server, SUSE Linux Enterprise Server, or Red Hat
Enterprise Linux system image, and install the SAP solution as they would on any
physical or virtual server.
SAP HANA on MS Azure
Microsoft Azure is one of the best platforms for hosting SAP HANA. Azure guarantees a
highly reliable platform on which to develop, design, and run applications and
workloads, including mission-critical ones, in a much simpler manner. It offers a highly
scalable, compliant, and enterprise-proven platform for all types of workloads, both
on-premise and in the cloud, and it is one of the most cost-effective platforms for hosting
all kinds of services, even enterprise ones, combining scalability, flexibility, ease of use,
and affordable cost. With the partnership between SAP and Microsoft, enterprises can
now run SAP applications across dev/test and production scenarios in Azure and be fully
supported: from SAP NetWeaver to SAP S/4HANA, Linux to Windows, SAP HANA to
SQL Server.
Organizations can achieve industry-leading performance when running SAP workloads
on Azure. With Azure Virtual Machines, you can scale your HANA applications to 0.5 TB
of RAM, and with purpose-built hardware that is specifically tuned for SAP HANA, you
can scale your SAP HANA workloads up to 32 TB in multi-node configurations. Azure
also offers the option of running the largest SAP HANA workloads (OLTP and OLAP)
of any hyperscale cloud provider, and all SAP HANA components are certified for Azure.
Customers also get the power to develop in-memory apps that analyze massive volumes
of data in real time with SAP HANA. Customers interested in using SAP HANA on
Azure can get started on GS5 virtual machines (for workloads under 0.5 TB). Obtaining
this service is simple: the customer needs to purchase a subscription license for SAP
HANA, and the initial configuration can be performed with the following simple steps:
1. Create a new account: Create a new account to log into the SAP Cloud Appliance
Library. Select Azure as the cloud provider and enter your subscription ID.
2. Download the certificate: Download the management certificate when prompted.
3. Add the SAP management certificate to Azure: Log into the Azure Portal using the
subscription you provided and upload the certificate to Management Certificates under
Settings.
4. Activate the free trial: Activate your free trial of SAP HANA Developer Edition.
5. Create an instance of SAP HANA: Enter the required information to create an instance
of SAP HANA Developer Edition.
6. Start working: Once completed, your new instance of SAP HANA Developer Edition
will be up and running on Azure.

SAP HANA user experience

SAP User Experience as a Service (UXaaS) is an integrated component of SAP HANA
Cloud Platform that empowers organizations to build and scale simple, personalized, and
responsive user experiences, minimizing costly design rework and accelerating
application development.
SAP Splash and BUILD enable non-designer project teams to design great software
experiences by providing an inspiring gallery of well-designed applications to build
upon. Tools such as learning content on UX methodologies and best practices, and an
integrated set of design tools for user research and prototyping, get teams up and
running. Some of the functionalities are listed below:

Design better experiences for users: SAP Splash and BUILD empower business experts
to build awesome applications. SAP teams assist technical teams and non-designers in
learning the basics of designing for their end users, and also provide consulting services
and the needed tools and knowledge base to get started.

Application gallery: By default, SAP HANA provides, free of cost, inspiring application
designs; clone any of the apps to jump-start your own app design.

Team designs: Offers a vast range of design collections; subscribers are also allowed
access to a wide range of informative and technically rich content, such as method cards
with guided steps and templates, and online learning on design thinking methodologies,
best practices, and guidelines.

Design your own applications and services: Enable non-technical stakeholders to rapidly
build interactive prototypes without coding. Ensure design consistency across apps using
SAP Fiori cloud service templates with built-in Fiori design guidelines. Facilitate
collaboration across all stakeholders throughout the innovation lifecycle. Collect
end-user feedback with user research and derive insights from user interaction analysis.

Design with the simple and integrated BUILD design tool: Easily build prototypes with
built-in UI elements and floor plans, and conduct effective user research with surveys,
annotations, and usage analytics.

SAP HANA and the cloud platform

SAP HANA offers one of the best in-memory cloud computing platforms, with a wide
range of services. Designers and architects can build, run, and host both critical and
non-critical, business and non-business applications and services on the SAP HANA
Cloud Platform, which makes it one of the best platforms for designers and application
developers. Powered by in-memory technology, SAP HANA Cloud Platform's
platform-as-a-service offers comprehensive capabilities to help business users and
developers create better, more agile applications in less time. Listed below are the
capabilities:

Empower the business: Extend existing applications and deliver beautiful, easy-to-use,
mobile-ready, personalized applications for each individual employee, role, department,
and customer segment.

Extend on-premise systems to the digital economy: Offers the facility and ability to
connect the enterprise to SAP's digital business network (SAP Ariba, SAP Concur, SAP
Fieldglass), and to connect your edge points, your assets, your everything to the Internet
of Things with SAP HANA Cloud Platform IoT and Mobile services.

Deliver open, agile, flexible applications: Individual users and teams can build modern
applications using open standards and the skills they already possess, like Java,
JavaScript, and HTML5, and adopt open technologies through the integration of Cloud
Foundry and OpenStack into SAP HANA Cloud Platform, so you'll always be flexible,
portable, and agile.

Deliver personalized applications: Offers the right platform, with tools and facilities, to
build personalized solutions for SAP S/4HANA, SAP SuccessFactors, SAP Business
Suite, SAP Ariba, SAP Hybris Cloud for Customer, and core SAP applications, which
can be customized and delivered on demand.

Listed below are the technical capabilities:

Integration services: With the integration services of SAP HANA Cloud Platform,
administrators and developers can collaborate securely in order to explore possibilities
for improving efficiency, as well as gain real-time insights from sensors, devices, and
social sentiment.

User experience: SAP User Experience as a Service (UXaaS) is an integrated component
of SAP HANA Cloud Platform that empowers organizations to build and scale simple,
personalized, and responsive user experiences, minimizing costly design rework and
accelerating application development.

For developers and operations: Development and operations services allow developers
to develop and manage applications, including complete lifecycle management. SAP
HANA Cloud Platform not only increases developer productivity by simplifying
development but also improves team productivity with the ability to code and collaborate
anywhere.

Collaboration: SAP HANA brings people together to securely access and share business
content, information, applications, and processes in order to deliver results and increase
team productivity. SAP HANA and SAP Jam together create an environment that makes
it simpler to weave critical capabilities and critical business data into the processes and
applications people use every day.

Security: SAP HANA Cloud Platform's closely integrated security services include
authentication, single sign-on, on-premises integration, and self-services such as
registration and password reset for employees, customers, partners, and consumers.

Business services: SAP Business Services fuel the fast development of business apps and
services for the cloud and power an open marketplace for new business apps, which
includes SAP, SAP Hybris, ISV, and other third-party apps. SAP HANA accelerates
performance, expedites processes, and transforms the business by eliminating the divide
between transactions and analytics, driving faster, more reliable transaction processing,
for less, with SAP Adaptive Server Enterprise (SAP ASE).

Mobile services: The SAP HANA Cloud Platform mobile portfolio delivers key
capabilities such as multiple authentication methods, secure access to on-premise and
cloud-based systems, offline synchronization, remote logging control and retrieval,
automatic app updates for hybrid apps, one-to-one and one-to-many push notifications,
and much more.

Internet of Things: SAP HANA Cloud Platform facilitates the organization's adoption of
SAP Internet of Things (IoT) services. With SAP IoT services, IT teams and designers
can onboard and manage connected remote devices, get real-time predictive analysis to
improve intelligence and decision-making at the edge of the network, and optimize
business processes.


SAP HANA security
SAP HANA is SAP's in-memory database technology that leverages hardware and software innovations to enable very fast processing of large amounts of data. SAP HANA can act as a standard SQL-based relational database. In this role it can serve as either the data provider for classical transactional applications (OLTP) and/or as the data source for analytical requests (OLAP). Database functionality is accessed through an SQL interface.
In addition, SAP HANA comes with a built-in application server, the SAP HANA Extended Application Services (SAP HANA XS). This server can be accessed through HTTP and can serve data via OData calls or rich HTML user interfaces. In order to leverage the full potential of SAP HANA, data-intensive operations are executed directly in the database. Therefore SAP HANA provides a development/modeling environment that allows you to create new data structures and programs, analytical views and queries, stored procedures, and applications. The development environment is integrated into the SAP HANA studio, which also serves as the client tool for database administration, or can be accessed via a browser interface. Design-time artifacts (like custom applications, roles, and application content) are stored and managed in the SAP HANA built-in repository. With SAP HANA, design-time objects can be transported from development to quality assurance (QA) and production systems, using either SAP HANA's export/import functions or standard SAP mechanisms such as CTS+.
Depending on the customer's requirements (for example, the size of the database), an SAP HANA system can be deployed on premise or in the cloud. In on-premise deployments, SAP HANA is either delivered to customers as a standardized and highly optimized appliance, or customers can run SAP HANA in their own tailored server and storage combinations. Choosing the first option means that customers receive a completely installed and preconfigured SAP HANA system on certified hardware from an SAP hardware partner, including the underlying pre-installed and pre-configured operating system. The second option enables installed-base customers to reduce hardware and operational costs, mitigate risk, and optimize time-to-value, in addition to gaining additional flexibility in hardware vendor selection. There is a wide range of cloud offerings available for SAP HANA, from infrastructure- and platform-as-a-service to enterprise-class managed application hosting.
Deployment steps
Upgrading an existing enterprise SAP ERP system to the HANA platform may be challenging in terms of cost as well as various risk factors. To mitigate this, a phase-wise migration plan can be best for many enterprises. But where to start? What options are available for the first step?
When evaluating SAP HANA for the enterprise, the following factors are to be considered: (1) How can HANA help your business? (2) What type of HANA solution do you need to achieve this? (3) How should you plan your deployment strategy?
Before deploying SAP HANA, an organization should take time to understand where SAP HANA can deliver maximum benefit based on the business goals.
Steps in the migration should primarily be as follows:
Understand the opportunities and their value in your business.
Map the above to an SAP HANA solution.
Deploy SAP HANA.
In the best possible scenario, your migration plan could start with an early-phase deployment of the SAP CO-PA Accelerator. It is a Profitability Analysis tool with In-Memory Computing from SAP. Its implementation is easy and quick, and it will help you not only technically but also strategically in the migration plan. It will enable the organization to dig deep and structure parameters such as the revenue which a product or service generates and the costs entailed to it.
SAP HANA system sizing
Sizing is a standard term in SAP, which means determining the hardware requirements of an SAP system, such as physical memory, CPU power, and I/O capacity.
Determining sizing requirements ensures that customers purchase hardware as per their business need, and also ensures lower cost and reduced TCO. The following sections show how sizing is carried out. The critical factor for an SAP HANA implementation is the right sizing of the server based on the business requirement, which means the correct calculation of the amount of memory and CPU processing power. SAP HANA sizing consists of memory sizing for static data, memory sizing of objects created during runtime (data load and query execution), disk sizing, and CPU sizing.
For the purpose of successful SAP HANA implementation, SAP has provided various
guidelines and methods to calculate the correct hardware size.
We can use any of the below methods:
1. SAP HANA sizing using the QuickSizer tool
2. SAP HANA sizing using the DB specific scripts
3. SAP HANA sizing using the ABAP report
Key Performance Indicators for SAP HANA Sizing
Sizing of the SAP HANA appliance is mainly based on the required main memory size. Memory sizing is determined by the amount of data that is to be stored in memory. In general, the sizing of other components within the server is derived from the main memory size. Apart from main memory sizing, the other two important parts of sizing are disk sizing and CPU sizing.
The three main KPIs used to size SAP HANA are:
Main memory (RAM) space
CPU processing performance
Disk size
HANA Main Memory Sizing:
SAP HANA main memory sizing is divided into the static and the dynamic RAM requirement.
Static requirement
Relates to the amount of main memory that is used for holding the table data. Static memory sizing of HANA is determined by the amount of data that is to be stored in memory.
Dynamic requirement
Main memory is required for objects that are created dynamically when new data is loaded or queries are executed. Since SAP recommends reserving as much memory for dynamic objects as for static ones, the total RAM is calculated as the static RAM multiplied by a factor of 2.
Calculate Uncompressed Data Volume to be loaded in HANA - Source Data Footprint
Determine the information that has to be transferred (either by replication or extraction) to the SAP HANA database. Note that typically customers will only select a subset of information from their ERP or CRM database, so this has to be done at the table level. The sizing methodology is based on the uncompressed source data size, so in case compression is used in the source database, this has to be taken into account as well. The information required for this step can be acquired with database tools. SAP Note 1514966 contains a script supporting this process for several database systems, for example, DB2 LUW and Oracle. The current size of all the tables (without DB indexes) storing the required information in the source database is the source data footprint.
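This step can be illustrated with a hypothetical helper that sums the on-disk sizes of the selected tables and corrects for source-side compression. The function name, the table sizes, and the 2:1 source compression ratio are made up for illustration; the script from SAP Note 1514966 is the supported way to obtain real figures:

```python
def source_data_footprint_gb(table_sizes_gb, source_compression_ratio=1.0):
    """Estimate the uncompressed source data footprint in GB.

    table_sizes_gb: on-disk sizes of the selected tables (without indexes)
                    in the source database.
    source_compression_ratio: e.g. 2.0 if the source database compresses
                    data 2:1, so the uncompressed footprint is twice the
                    on-disk size. Illustrative assumption only.
    """
    return sum(table_sizes_gb) * source_compression_ratio

# Hypothetical selection of ERP tables, on-disk sizes in GB, source compressed 2:1
print(source_data_footprint_gb([800, 450, 250], source_compression_ratio=2.0))
# -> 3000.0 GB uncompressed
```

The result feeds into the memory sizing calculation in the next sections.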

Compression factor in SAP HANA

For HANA in a side-by-side scenario, the expected compression was 1:5. Through new compression mechanisms the compression ratio increased to 1:7. There are, however, also significantly higher values: customers circulate examples where a factor of over 1:50 has been achieved.
Calculating the size of RAM
Static RAM size means the amount of RAM required to store the data in the SAP HANA database, assuming a compression factor of 7. Dynamic RAM is the additional main memory required for objects that are created dynamically when new data is loaded or queries are executed. SAP recommends keeping the dynamic RAM size the same as the static RAM size.
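The rule of thumb above can be sketched as a small calculation. The function name is made up; the compression factor of 7 and the factor of 2 for dynamic objects follow the text above and should be treated as illustrative defaults, not an official SAP sizing formula:

```python
def hana_ram_sizing(source_footprint_gb, compression_factor=7, dynamic_factor=2):
    """Estimate SAP HANA main memory from the uncompressed source footprint.

    compression_factor: expected HANA columnar compression (rule of thumb: 7).
    dynamic_factor:     SAP recommends reserving as much memory for dynamic
                        objects as for static data, i.e. static RAM x 2.
    Returns (static_ram_gb, total_ram_gb).
    """
    static_ram = source_footprint_gb / compression_factor
    total_ram = static_ram * dynamic_factor
    return static_ram, total_ram

# Example: 3.5 TB of uncompressed source data
static, total = hana_ram_sizing(3500)
print(f"Static RAM: {static:.0f} GB, total RAM: {total:.0f} GB")
# Static RAM: 500 GB, total RAM: 1000 GB
```

For a real project, the QuickSizer tool, the DB-specific scripts, or the ABAP report listed earlier give the authoritative numbers.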
Some basic fundamentals
General sizing (SAP Note 1514966): This describes the sizing of SAP HANA as it is used, e.g., for replication of ERP data coming from an ERP system. In particular, these rules must not be used for sizing BW on HANA and Business Suite on HANA systems. There is a sizing script available that supports sizing for a migration.
Non-active data concept for BW on HANA (SAP Note 1767880) and near-line storage solutions: Large BW systems contain large amounts of data that are no longer or rarely actively used but that should remain in the system (historical data, data kept for legal reasons, and so on). This data is called non-active data. An implementation for BW on HANA allows displacing non-active data in case of main memory bottlenecks, leveraging a least-recently-used concept. This concept improves main memory resource management, which has positive effects on hardware sizing for a large amount of non-active data. For more information, see also SAP Note 1736976. Besides this, near-line storage solutions can be used to store cold data, which can additionally help to reduce the memory amount.


Disk sizing
SAP HANA is an in-memory database. But it still requires disk storage space to
preserve database information if the system shuts down, either intentionally or due to a
power loss, for instance.
Disk sizing can be categorized into
Persistence Layer (also called Data Volume)
Disk Log (also called Log Volume)
Persistent layer data volume
Data changes in the database are periodically copied to disk to ensure a full image of the
business data on disk in Data Volume (Persistence Layer). The capacity for this storage is
calculated based on the total amount of RAM
Disk log volumes
The Log Volume saves log files to ensure that changes are durable and the database can be restored to the last committed state after a restart. The minimum size for the Log Volume is equal to the size of the SAP HANA server's main memory. It should be noted that certified hardware configurations already take these rules into account, so there is no need to perform this disk sizing.
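The two disk-sizing rules can be expressed as a short sketch. The function name and the 1.2x multiplier for the data volume are assumptions for illustration (the applicable SAP sizing guideline gives the authoritative factor); the log volume follows the minimum rule quoted above:

```python
def hana_disk_sizing(total_ram_gb, data_factor=1.2):
    """Illustrative disk sizing derived from the total main memory size.

    data_factor: multiplier for the data volume relative to RAM; 1.2 is an
                 assumption here, not an official SAP figure.
    The log volume uses the rule above: at minimum the size of the
    server's main memory.
    """
    data_volume_gb = total_ram_gb * data_factor
    log_volume_gb = total_ram_gb  # minimum, per the rule quoted above
    return data_volume_gb, log_volume_gb

print(hana_disk_sizing(1000))  # e.g. a 1 TB RAM system
```

As the text notes, certified appliance configurations already satisfy these rules, so this calculation matters mainly for tailored data center integration setups.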
CPU sizing
CPU sizing only has to be performed in addition to the memory sizing if a massive number of users working on a relatively small amount of data is expected. Choose the T-shirt configuration size that satisfies both the memory and CPU requirements. The CPU sizing is user-based: the SAP HANA system has to support 300 SAPS for each concurrently active user. The servers used for the IBM Systems Solution for SAP HANA support about 60-65 concurrently active users per CPU, depending on the server model.
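The user-based rule can be sketched as follows. The 300 SAPS per concurrent user and the roughly 60 users per CPU come from the text above; the function name is made up, and 60 is taken as the conservative end of the quoted 60-65 range:

```python
import math

def hana_cpu_sizing(concurrent_users, saps_per_user=300, users_per_cpu=60):
    """User-based CPU sizing sketch.

    saps_per_user: 300 SAPS per concurrently active user (quoted rule).
    users_per_cpu: ~60-65 users per CPU on the servers cited; 60 here.
    Returns (required_saps, required_cpus).
    """
    required_saps = concurrent_users * saps_per_user
    required_cpus = math.ceil(concurrent_users / users_per_cpu)
    return required_saps, required_cpus

# Example: 500 concurrently active users
print(hana_cpu_sizing(500))
# -> (150000, 9): 150,000 SAPS, at least 9 CPUs
```

The T-shirt size finally chosen must satisfy both this CPU estimate and the memory estimate from the previous sections.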
SAP HANA Migration
Before migration, the following aspects need to be considered and a final strategy put in place. The plan should guide the entire migration process, covering the planning of environments and the migration procedure of SAP systems to SAP HANA in an on-premise landscape. The plan should be detailed, should include all stakeholders and each and every requirement of the migration, and, importantly, it should not be a copy-paste job. Such a document serves as the starting point: it should begin with an overview of available migration path options, explore options and alternatives, and incorporate recommendations from different stakeholders. It is also essential that the organization hire professional consultants to arrive at a rugged and fool-proof migration strategy and plan.

Migration path
Option 1:
SAP HANA offers a couple of migration strategies. For instance, if the requirement is to transform the existing architecture, SAP has multiple choices to offer, viz., SAP Landscape Transformation, where you install a new system for the transformation, such as performing a step-wise or partial migration of an SAP system, or the consolidation of several systems into one system running on SAP HANA.
Option 2:
The classical migration of SAP systems to SAP HANA (that is, the heterogeneous system
copy using the classical migration tools software provisioning manager 1.0 and R3load) is
using reliable and established procedures for exchanging the database of an existing
system - it is constantly improved especially for the migration to SAP HANA.

Option 3:
SAP also offers a one-step procedure that combines system update and database migration for the migration to SAP HANA. This is provided with the database migration option (DMO) of the Software Update Manager (SUM). The general recommendation is to adopt DMO of SUM as the default for migrations to SAP HANA, as it has become the standard procedure: with it, the organization benefits from a simplified migration performed by one tool, with minimized overall project cost and only one downtime window.
As a reasonable alternative to this standard recommendation, in case the database migration option of SUM does not fit your requirements, consider using the classical migration procedure with software provisioning manager, which is also continuously improved especially for the migration to SAP HANA. Reasons might be that the database migration option of SUM does not support your source release, or that you prefer a separation of concerns over the big-bang approach offered by DMO of SUM. As a possible exception, there are further migration procedures for special use cases, such as the consolidation of SAP systems in the course of the migration project or the step-wise migration to SAP HANA, as outlined above.
Perform an individual assessment
For migrations, SAP already provides an exhaustive set of guidelines, possibilities, and alternatives that can be adapted to a particular environment. All that needs to be done is to choose the correct fit for that particular environment and its specific requirements. SAP has a dedicated team, called Professional Services, which will guide the organization towards an incident-free migration. Based on the standard recommendation from SAP, find the best option depending on your individual requirements and the boundary conditions you are facing. To support you in this process, SAP provides a decision matrix in the End-to-End Implementation Roadmap for SAP NetWeaver AS ABAP guide (SMP login required), which is intended to highlight important aspects of your decision on the right migration procedure (see the guide for the latest version of the matrix). Listed below are a few such considerations:
What is the release and Support Package level of your existing SAP system? Is an
update mandatory or desired as part of the migration procedure?
Is your existing SAP system already Unicode?
Do you plan any landscape changes - such as changing the SAPSID of your SAP
system or the hardware of your application server - as part of the migration or do you
rather prefer an in-place migration?
Do you plan the migration of your complete system or a partial migration?
Are your operating system and your database versions supported according to the
Product Availability Matrix (PAM) of the target release or are corresponding updates
required?
Do you expect a significant downtime due to a large database volume?
Data Center Integration

TDI stands for Tailored Datacenter Integration and describes a program that allows HANA customers to leverage existing hardware and infrastructure components for their HANA environment. Typically a HANA appliance comes with all necessary components pre-configured and is provided by certified HANA hardware partners. TDI targets the usage of certain hardware and infrastructure components already existing in a customer's landscape instead of the corresponding components delivered with a HANA appliance. SAP HANA tailored data center integration offers you more openness and freedom of choice to configure the layer for SAP HANA depending on your existing data center layout.
In addition to SAP HANA as a standardized and highly optimized appliance, you can use the tailored data center integration approach, which is more open and flexible, to run SAP HANA in your data center. This option enables a reduction in hardware and operational costs through the reuse of existing hardware components and operational processes.
One of the easiest ways to integrate the organization's HANA system into the data center is to work with your VCE SAP specialist to get a "datacenter in a box" that is configured and ready for deployment. The flexibility of SAP HANA TDI, combined with a converged infrastructure from VCE, delivers the best of both worlds. By incorporating all of TDI, the IT team gets the single point of experience of an appliance, without the limitations of an appliance.
VCE works with the in-house IT team and managers to consolidate in-house HANA and non-HANA application requirements into a single-point-of-contact converged infrastructure that includes compute, network, storage, backup, and replication options. Because VCE systems support both bare-metal and virtualized environments, nearly any SAP application requirement can be covered on this single platform.
In the event of the IT team deciding to design and build their own SAP HANA system, the following steps can be followed in order to achieve a reliable operational infrastructure for your SAP HANA system. These steps incorporate the fundamental principles and design aspects that were discussed previously and leverage hardware and software components provided by EMC and its partners.
And the following needs to be established
Platform & Appliance methodology (Installation & Update)
Persistence
Backup & Recovery (System Copy)
High Availability
Disaster Recovery
Monitoring & Administration
Security & Auditing
Roles and Responsibilities

With the appliance model SAP distributes all support requests regarding any component of
SAP HANA to the correct part of the support organization. With tailored data center
integration the customer is responsible for defining support agreements with the various
partners and organizing all aspects of support.
Service and Support
Customers should work with their hardware partners to ensure hardware support requirements are being fulfilled. A supportability tool called the SAP HANA HW Configuration Check Tool is provided by SAP, which allows you to check whether the hardware is optimally configured to meet the requirements of SAP HANA.
Installation
A number of requirements have to be fulfilled before proceeding with the installation of SAP HANA tailored data center integration:
The server must be certified and listed in the hardware directory.
All storage devices must have successfully passed the hardware certification for SAP HANA.
The exam SAP Certified Technology Specialist - SAP HANA Installation (E_HANAINS) needs to be successfully passed for a person to perform SAP HANA software installations. SAP HANA hardware partners and their employees do not need this certificate. Companies, or their employees, who are sub-contractors of hardware partners must be certified to perform SAP HANA software installations.
Change management
Updating and patching the Operating System
With tailored data center integration the customer is responsible for updating and
patching the Operating System.
Updating and patching the SAP HANA Software
With tailored data center integration the customer is responsible for installing,
updating, and patching the SAP HANA software.
Recommended steps to be performed
Before the design steps are undertaken, at a minimum the following questions need to be discussed:
What is the RAM requirement of your production HANA system?
Which systems in your landscape must meet TDI KPIs?
What non-production systems require the same performance and support from SAP?
What non-production systems require no SAP performance support, and what is the required performance percentage compared to production (50%, 25%, etc.)?
What is your RTO/RPO for local and remote scenarios?
What level of data protection is required?
What is your system refresh frequency?
What other systems have to be in sync for your system refresh?

SAP HANA Deployment
Deployment of SAP HANA is a complex task that needs to be undertaken by seasoned professionals and preceded by a very detailed planning strategy. It is always recommended that SAP Professional Services be engaged, who will carry out the planning and implementation in a highly professional manner.
Challenges in SAP HANA Adoption
One of the biggest challenges when planning an SAP HANA adoption strategy stems, ironically, from the flexibility of SAP HANA itself. Because SAP has transformed SAP HANA from a relatively straightforward in-memory data warehouse into a platform capable of running many enterprise applications, choosing which SAP product to start with can be mind-boggling. The strategy to adopt the HANA platform should take into consideration the deployment logistics: how the database will be installed, run, managed, and monitored, which hardware and platform are suitable, and which purchase decisions need to be made. Should a customer buy a standalone appliance, integrate with an existing data center environment, or tap the cloud? The answers, it turns out, start with the fundamentals: a good business case. Unfortunately, according to Gartner, establishing a compelling business case is challenging for companies considering SAP HANA adoption. The reason the business case for SAP HANA is such a major challenge is that many organizations have large investments in their existing ERP implementations.
Organizations can deploy the SAP HANA in-memory database on premise to power real-time insights across the business and control systems and data behind their own firewall. Choose a certified appliance provided by one of SAP's partners for the fastest implementation, or deploy over your existing IT landscape to maximize your current data center investments.
SAP HANA use case / application scenario
SAP HANA, an in-memory platform, enables managing, analyzing, and processing of big
data, allowing applications to run analytics directly on transactional data and its users to

react to their businesses in real time.


The DNA of all organizations lies in their structure. Understanding the implicit effect of an organization's structure on people and processes is not easy, however. Given this challenge, Startup Focus member Macro micro developed a technology that leverages big data to tell the story of organizations and their structures. In Q4 of 2014, the company reached the final phase of the Startup Focus Program to validate its HR Analytics Data Visualization Service.
The company makes revolutionary visual tools that describe scale, structure, hierarchies,
and employee composition of large organizations to enable workforces to run
operationally smarter and plan with more data driven results in mind.
Macro micro's solution OrgInsight was launched last fall on SAP HANA and will allow Macro micro prospects, customers, and partners to soon purchase an OrgInsight subscription on the SAP-hosted in-memory SAP HANA platform. OrgInsight and OrgInsight+ data is on-demand and delivered through the secure SAP HANA platform, and provides an integrated interface for workforce analysts to explore their organization through data-rich, easily interpreted visual interfaces. Currently, OrgInsight can be found in the SAP App Center for existing SAP HANA customers, or as a subscription directly from Macro micro.
With SAP HANA, we will be able to cross reference user data access permissions to HR
data quickly enough to provide dynamic access for any individual in an organization
(specifically line managers), increasing the number of individuals that can benefit from
visualized HR data, and contributing further to SAP HRIS solutions.
By visualizing data across an organizational hierarchy, patterns and concentrations
become apparent and provide a unique interface for shifting / stacking / filtering through
and comparing multiple HR data sets. Insights into HR data can be made that are not
possible with existing applications / tools / dashboards.
Combining the greatly increased data access speed of HANA with OrgInsight data visualization increases the number of individuals that can benefit from visualized workforce data, and contributes to further adoption of SAP HRIS solutions and HANA licenses.


The SAP HANA powered technology will enable much more in-depth HR solutions and management, including: retention management, spans & layers, spotting rising leaders, M&A integration, succession planning, data auditing & cleansing, self-discovery, institutional balance, and job grade optimization.


For companies, these functionalities cover a breadth of organizational needs:
Retention Management: Shows in visual terms those who are prone to flight risk
and highlights the highest performers in the organization who have low compensation
ratios. Then, matches to this query are displayed and can be viewed layer-by-layer or
in divisions. It will be easier to spot higher concentrations of flight risk employees.
Spans & Layers: This will show concentrations and patterns of high spans of
control. No matter if your optimal spans and layers scenario is 88, 1010, or 46,
OrgInsight highlights those who fall outside the metric.
Spot Rising Leaders: It will be much more efficient to find future leaders within the

organization or on a division-by-division basis. This solution will filter through


various factors such as performance and leadership data to showcase the highest
performing individuals. When combined with job grade data, function, and division,
it will help identify the correct people for a promotion.
M&A Integration: This solution will use the organizational data to merge different workforces. If there is a new acquisition, this tool will help balance and optimize compensation and job grades, and provide an overview of the compensation ratios between existing employees and onboarded ones.
Succession Planning: For those who are reaching retirement age and those who have
announced their departure, this tool will help identify the waves of retirement in key
positions. Rather than relying on various aggregate reports, it will help the company
intelligently prepare succession plans and be ready for an organizational change
whenever it may occur.
Data Auditing & Cleansing: Workforce analytics becomes comprehensive and
accurate with this tool as it shows inadvertent breaks in hierarchy, misappropriations,
or data anomalies.
Self-Discovery: This will allow teams to freely explore the organization of the
company and it is presented in a visual, interactive manner.
Institutional Balance: This is helpful for inquiries about diversity, gender, divisions,
job grades, and compensation ratios.
Job Grade Optimization: This feature will also promote inquiries about
reorganizations, mergers & acquisitions, or the structure of different divisions.



