
Features in BI 7.0 or NetWeaver 2004s

Metadata Search (Developer Functionality):

It is possible to search BI metadata (such as InfoCubes, InfoObjects, queries, and Web
templates) using the TREX search engine. This search is integrated into the Metadata
Repository, the Data Warehousing Workbench and, to some degree, the object editors.
With the simple search, one or all object types are searched by technical name and by
text. The text search is case-insensitive: an object is also found when the case in its
text differs from that in the search term. With the advanced search, you can also
search in attributes. These attributes are specific to each object type. In addition, the
search can be restricted for all object types by the person who last changed an object
and by the time of the change. For example, you can search all queries that were
changed in the last month and that contain both the term "overview" in the text and
the characteristic customer in the definition. Further functions include searching in
the delivered (A) version, fuzzy search, and the option of linking search terms with
"AND" and "OR".
Because the advanced search described above offers more extensive options for
searching metadata, the function "Generation of Documents for Metadata" in the
administration of document management (transaction RSODADMIN) has been removed.
You have to schedule (delta) indexing of metadata as a regular job (transaction
RSODADMIN).
Effects on Customizing:
· Installation of the TREX search engine
· Creation of an RFC destination for the TREX search engine
· Entry of the RFC destination in table RSODADMIN_INT
· Determination of the relevant object types
· Initial indexing of metadata
Remote Activation of DataSources (Developer Functionality):

1. When activating Business Content in BI, you can activate DataSources remotely
from the BI system. This activation is subject to an authorization check. You need role
SAP_RO_BCTRA. Authorization object S_RO_BCTRA is checked. The authorization is
valid for all DataSources of a source system. When the objects are collected, the
system checks the authorizations remotely, and issues a warning if you lack
authorization to activate the DataSources.

2. In BI, if you trigger the transfer of the Business Content in the active version, the
results of the authorization check are based on the cache. If you lack the necessary
authorization for activation, the system issues a warning for the DataSources and an
error for the corresponding source-system-dependent objects
(transformations, transfer rules, transfer structure, InfoPackage, process chain,
process variant). In this case, you can use Customizing for the extractors to manually
transfer the required DataSources in the source system from the Business Content,
replicate them in the BI system, and then transfer the corresponding source-system-
dependent objects from the Business Content. If you have the necessary
authorizations for activation, the DataSources in the source system are transferred to
the active version and replicated in the BI system. The source-system-dependent
objects are activated in the BI system.
3. The source systems and BI systems must have at least BI Service API SAP NetWeaver
2004s; otherwise remote activation is not supported. In this case, you have to
activate the DataSources in the source system manually and then replicate them to
the BI system.
Copy Process Chains (Developer Functionality):

You find this function in the Process Chain menu and use it to copy the process chain
you have selected, along with its references to process variants, and save it under a
new name and description.

InfoObjects in Hierarchies (Data Modeling):

1. Up to Release SAP NetWeaver 2004s, it was not possible to use InfoObjects with a
length longer than 32 characters in hierarchies. These types of InfoObjects could not
be used as a hierarchy basic characteristic and it was not possible to copy
characteristic values for such InfoObjects as foreign characteristic nodes into existing
hierarchies. From SAP NetWeaver 2004s, characteristics of any length can be used for
hierarchies.
2. To load hierarchies, the PSA transfer method has to be selected (which is always
recommended for loading data anyway). With the IDoc transfer method, it is still only
possible to load hierarchies whose characteristic values are no longer than 32
characters.

Parallelized Deletion of Requests in DataStore Objects (Data Management):

Active requests in a DataStore object can now be deleted in parallel. Up to now, the
requests were deleted serially within one LUW. Deletion is now processed by data
package and in parallel.

Object-Specific Setting of the Runtime Parameters of DataStore Objects (Data
Management):

Now you can set the runtime parameters of DataStore objects by object and then
transport them into connected systems. The following parameters can be maintained:
· Package size for activation
· Package size for SID determination
· Maximum wait time before a process is considered lost
· Type of processing: Serial, Parallel (batch), Parallel (dialog)
· Number of processes to be used
· Server/server group to be used

Enhanced Monitor for Request Processing in DataStore Objects (Data Management):

1. For the request operations executed on DataStore objects (activation, rollback and
so on), there is now a separate, detailed monitor. In previous releases, request-
changing operations were displayed in the extraction monitor; when the same
operations were executed multiple times, it was very difficult to assign the messages
to the respective operations.

2. To simplify error analysis and to make it easier to optimize the configuration of the
runtime parameters, as of SAP NetWeaver 2004s all messages relevant for DataStore
objects are displayed in their own monitor.

Write-Optimized DataStore Object (Data Management):

1. Up to now it was necessary to activate the data loaded into a DataStore object to
make it visible to reporting or to be able to update it to further InfoProviders. As of
SAP NetWeaver 2004s, a new type of DataStore object is introduced: the write-
optimized DataStore object.

2. The objective of the new object type is to save data as efficiently as possible in
order to be able to further process it as quickly as possible without additional effort for
generating SIDs, aggregation and data-record based delta. Data that is loaded into
write-optimized DataStore objects is available immediately for further processing.
The activation step that has been necessary up to now is no longer required.

3. The loaded data is not aggregated. If two data records with the same logical key
are extracted from the source, both records are saved in the DataStore object. During
loading, for reasons of efficiency, no SID values are determined for the loaded
characteristics. The data is still available for reporting. However, in comparison to
standard DataStore objects, you can expect to lose performance because the
necessary SID values have to be determined during query runtime.

Deleting from the Change Log (Data Management):

The Deletion of Requests from the Change Log process type supports the deletion of
change log files. You select DataStore objects to determine the selection of requests.
The system supports multiple selections. You select objects in a dialog box for this
purpose. The process type supports the deletion of requests from any number of
change logs.

Using InfoCubes in InfoSets (Data Modeling):

1. You can now include InfoCubes in an InfoSet and use them in a join. InfoCubes are
handled logically in InfoSets like DataStore objects. This is also true for time
dependencies. In an InfoCube, data that is valid for different dates can be read.

2. For performance reasons you cannot define an InfoCube as the right operand of a
left outer join. SAP does not generally support more than two InfoCubes in an InfoSet.
Pseudo Time Dependency of DataStore Objects and InfoCubes in InfoSets (Data
Modeling):

In BI, only master data can be defined as a time-dependent data source. Two
additional fields/attributes are added to the characteristic. DataStore objects and
InfoCubes that are being used as InfoProviders in the InfoSet cannot be defined as
time dependent. As of SAP NetWeaver 2004s, you can specify a date or use a time
characteristic with DataStore objects and InfoCubes to describe the validity of a
record. These InfoProviders are then interpreted as time-dependent data sources.

Left Outer: Include Filter Value in On-Condition (Data Modeling):

The global properties in InfoSet maintenance have been enhanced with the setting Left
Outer: Include Filter Value in On-Condition. This indicator controls how a
condition on a field of a left-outer table is converted in the SQL statement. This
affects the query results: If the indicator is set, the condition/restriction is included
in the on-condition in the SQL statement. In this case the condition is evaluated
before the join. If the indicator is not set, the condition/restriction is included in the
where-condition. In this case the condition is only evaluated after the join. The
indicator is not set by default.
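
The effect can be illustrated with a simplified, hand-written sketch in ABAP SQL (shown in newer-release syntax for readability; the InfoSet runtime generates the actual statement, and the table and field names used here are hypothetical):

* Hypothetical tables: ZCUST_MD (customer master data, left table) and
* ZSALES_ACT (sales data, added via left outer join). The restriction
* CALYEAR = '2006' is a condition on a field of the left-outer table.

* Indicator set: the restriction is part of the ON-condition and is
* evaluated before the join. Customers without 2006 sales still appear
* in the result, with initial values in the sales fields.
SELECT c~customer, s~amount
  FROM zcust_md AS c
  LEFT OUTER JOIN zsales_act AS s
    ON  s~customer = c~customer
    AND s~calyear  = '2006'
  INTO TABLE @DATA(lt_filter_in_on).

* Indicator not set: the restriction is part of the WHERE-condition and
* is evaluated only after the join. Rows padded with initial values by
* the outer join are filtered out again, so customers without 2006 sales
* drop out of the result.
SELECT c~customer, s~amount
  FROM zcust_md AS c
  LEFT OUTER JOIN zsales_act AS s
    ON s~customer = c~customer
  WHERE s~calyear = '2006'
  INTO TABLE @DATA(lt_filter_in_where).
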
Key Date Derivation from Time Characteristics (Data Modeling):

Key dates can be derived from the time characteristics 0CALWEEK, 0CALMONTH,
0CALQUARTER, 0CALYEAR, 0FISCPER and 0FISCYEAR. Previously, it was possible to
specify the first day, the last day or a fixed offset within the period for key date
derivation. As of SAP NetWeaver 2004s, you can also use a key date derivation type to
define the key date.
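
For example (a hand-written sketch, not the generated derivation itself), deriving the key date "last day of 0CALMONTH" from the value '200407' amounts to taking the first day of the month, moving into the following month, and subtracting one day:

* Sketch: derive the last day of a 0CALMONTH value, e.g. '200407' -> 20040731.
DATA: lv_calmonth TYPE n LENGTH 6 VALUE '200407',
      lv_keydate  TYPE d.

CONCATENATE lv_calmonth '01' INTO lv_keydate. " first day of the month
lv_keydate = lv_keydate + 31.                 " always lands in the following month
lv_keydate+6(2) = '01'.                       " first day of the following month
lv_keydate = lv_keydate - 1.                  " last day of the original month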

Repartitioning of InfoCubes and DataStore Objects (Data Management):

With SAP NetWeaver 2004s, repartitioning is supported for InfoCubes and DataStore
objects on the database that have already been filled with data. With partitioning, the
runtime of read and modifying accesses to InfoCubes and DataStore objects can be
reduced.
Using repartitioning, non-partitioned InfoCubes and DataStore objects can be
partitioned or the partitioning schema for already partitioned InfoCubes and
DataStore objects can be adapted.

Remodeling InfoProviders (Data Modeling):

As of SAP NetWeaver 2004s, you can change the structure of InfoCubes into which you
have already loaded data, without losing the data. You have the following remodeling
options:
· For characteristics: inserting characteristics; replacing characteristics with a
constant, an attribute of an InfoObject within the same dimension, the value of another
InfoObject within the same dimension, or a customer exit (for user-specific coding);
deleting characteristics.
· For key figures: inserting key figures filled with a constant or via a customer exit
(for user-specific coding); replacing key figures via a customer exit (for user-specific
coding); deleting key figures.
SAP NetWeaver 2004s does not support the remodeling of InfoObjects or DataStore
objects. This is planned for future releases. Before you start remodeling, make sure
that:
(A) You have stopped any process chains that run periodically and affect the
corresponding InfoProvider. Do not restart these process chains until remodeling is
finished.
(B) There is enough available tablespace on the database.

After remodeling, check which BI objects connected to the InfoProvider
(transformation rules, MultiProviders, queries and so on) have been deactivated. You
have to reactivate these objects manually.

Parallel Processing for Aggregates (Performance):

1. The change run, rollup, condensing and checking of multiple aggregates can be
executed in parallel. Parallelization takes place per aggregate. The parallel
processes are always executed in the background, even when the main process is
executed in dialog.

2. This can considerably decrease execution time for these processes. You can
determine the degree of parallelization and determine the server on which the
processes are to run and with which priority.

3. If no setting is made, a maximum of three processes are executed in parallel. This
setting can be adjusted individually for each process (change run, rollup, condensing
of aggregates and checks). When process chains are used, the setting can be
overridden for every one of the processes listed above. Parallelization of the change
run according to SAP Note 534630 is obsolete and is no longer supported.

Multiple Change Runs (Performance):

1. You can start multiple change runs simultaneously. The prerequisite for this is that
the lists of the master data and hierarchies to be activated are different and that the
changes affect different InfoCubes. After a change run, all affected aggregates are
condensed automatically.

2. If a change run terminates, the same change run must be started again. You have
to start the change run with the same parameterization (same list of characteristics
and hierarchies). SAP Note 583202 is obsolete.

Partitioning Optional for Aggregates (Performance):

1. Up to now, the aggregate fact tables were partitioned if the associated InfoCube
was partitioned and the partitioning characteristic was in the aggregate. Now it is
possible to suppress partitioning for individual aggregates. If aggregates do not
contain much data, very small partitions can result. This affects read performance.
Aggregates with very little data should not be partitioned.

2. Aggregates that are not to be partitioned have to be activated and filled again
after the associated property has been set.

MOLAP Store (Deleted) (Performance):

Previously you were able to create aggregates either on the basis of a ROLAP store or
on the basis of a MOLAP store. The MOLAP store was a platform-specific means of
optimizing query performance. It used Microsoft Analysis Services and, for this reason,
it was only available for the Microsoft SQL Server database platform. Because HPA
indexes, available with SAP NetWeaver 2004s, are a platform-independent alternative
to ROLAP aggregates with high performance and low administrative costs, the MOLAP
store is no longer supported.

Data Transformation (Data Management):

1. A transformation has a graphical user interface and, in combination with the data
transfer process (DTP), replaces the transfer rules and update rules.
Transformations are generally used to transform an input format into an output
format. A transformation consists of rules. A rule defines how the data content of a
target field is determined. Various rule types are available to the user, such as
direct transfer, currency translation, unit of measure conversion, routine, and reading
from master data.

2. Block transformations can be realized using data-package-based rule types such as
the start routine. If the output format has key fields, the defined aggregation
behavior is taken into account when the transformation writes to the output format.
Using a transformation, every (data) source can be converted into the format of the
target with a single transformation (one-step procedure). An InfoSource is only
required for complex transformations (multistep procedures) that cannot be performed
in a one-step procedure.
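
A rule of type routine is implemented in ABAP. Below is a minimal, hand-written sketch of what a start routine body might look like; the enclosing routine class and the method signature are generated by the transformation framework, and the structure of SOURCE_PACKAGE and the field names used here are hypothetical:

* Remove records that should not reach the target at all.
DELETE source_package WHERE doc_type = 'TEST'.

* Default a field for the remaining records before the field rules run.
FIELD-SYMBOLS <ls_source> LIKE LINE OF source_package.
LOOP AT source_package ASSIGNING <ls_source>.
  IF <ls_source>-currency IS INITIAL.
    <ls_source>-currency = 'EUR'.
  ENDIF.
ENDLOOP.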

3. The following functional limitations currently apply:
· You cannot use hierarchies as the source or target of a transformation.
· You cannot use master data as the source of a transformation.
· You cannot use a template to create a transformation.
· No documentation has been created in the Metadata Repository yet for
transformations.
· In the transformation there is no check for referential integrity, the InfoObject
transfer routines are not taken into account, and routines cannot be created using the
return table.

Quantity Conversion:

As of SAP NetWeaver 2004s you can create quantity conversion types using transaction
RSUOM. The business transaction rules of the conversion are established in the
quantity conversion type. The conversion type is a combination of different
parameters (conversion factors, source and target units of measure) that determine
how the conversion is performed. In terms of functionality, quantity conversion is
structured similarly to currency translation. Quantity conversion allows you to convert
key figures that have different units of measure in the source system into a uniform
unit of measure in the BI system when you update them into InfoCubes.
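
As a rough illustration of what such a conversion does, the generic unit conversion function module UNIT_CONVERSION_SIMPLE can convert between units maintained in the system (this is an assumption-laden sketch; the BI quantity conversion type adds its own rules, such as which conversion factors and source/target units to apply, on top of this basic mechanism):

DATA: lv_input  TYPE p DECIMALS 3 VALUE '500.000',
      lv_output TYPE p DECIMALS 3.

CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
  EXPORTING
    input    = lv_input
    unit_in  = 'G'        " source unit of measure (gram)
    unit_out = 'KG'       " target unit of measure (kilogram)
  IMPORTING
    output   = lv_output  " 0.500
  EXCEPTIONS
    OTHERS   = 1.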

Data Transfer Process:

You use the data transfer process (DTP) to transfer data within BI from a persistent
object to another object in accordance with certain transformations and filters. In
this respect, it replaces the InfoPackage, which only loads data to the entry layer of
BI (PSA), and the data mart interface. The data transfer process makes the transfer
processes in the data warehousing layer more transparent. Optimized parallel
processing improves the performance of the transfer process (the data transfer
process determines the processing mode). You can use the data transfer process to
separate delta processes for different targets and you can use filter options between
the persistent objects on various levels. For example, you can use filters between a
DataStore object and an InfoCube. Data transfer processes are used for standard data
transfer, for real-time data acquisition, and for accessing data directly. The data
transfer process is available as a process type in process chain maintenance and is to
be used in process chains.

ETL Error Handling:

The data transfer process supports you in handling data records with errors. The data
transfer process also supports error handling for DataStore objects. As was previously
the case with InfoPackages, you can determine how the system responds if errors
occur. At runtime, the incorrect data records are sorted and can be written to an
error stack (request-based database table). After the error has been resolved, you can
further update data to the target from the error stack. It is easier to restart failed
load processes if the data is written to a temporary store after each processing step.
This allows you to determine the processing step in which the error occurred. You can
display the data records in the error stack from the monitor for the data transfer
process request or in the temporary storage for the processing step (if filled). In data
transfer process maintenance, you determine the processing steps that you want to
store temporarily.

InfoPackages:

InfoPackages only load data into the entry layer of BI, the Persistent Staging Area
(PSA). Further distribution of the data within BI is handled by data transfer
processes. This results in the following changes:
· New tab page: Extraction -- The Extraction tab page includes the settings for
adapter and data format that were made for the DataSource. For data transfers from
files, the External Data tab page is obsolete; these settings are now made in
DataSource maintenance.
· Tab page: Processing -- Information on how the data is updated is obsolete because
further processing of the data is always controlled by data transfer processes.
· Tab page: Updating -- On the Updating tab page, you can set the update mode to
the PSA depending on the settings in the DataSource. In the data transfer process, you
now determine how the update from the PSA to other targets is performed. Here you
have the option to separate delta transfer for various targets.

For real-time data acquisition with the Service API, you create special InfoPackages
in which you determine how the requests are handled by the daemon (for example,
after which time interval a request for real-time data acquisition should be closed and
a new one opened). For real-time data acquisition with Web services (push), you also
create special InfoPackages to set certain parameters for real-time data acquisition,
such as sizes and time limits for requests.

PSA:

The persistent staging area (PSA), the entry layer for data in BI, has been changed in
SAP NetWeaver 2004s. Previously, the PSA table was part of the transfer structure.
You managed the PSA table in the Administrator Workbench in its own object tree.
Now you manage the PSA table for the entry layer from the DataSource. The PSA table
for the entry layer is generated when you activate the DataSource. In an object tree
in the Data Warehousing Workbench, you choose the context menu option Manage to
display a DataSource in PSA table management. You can display or delete data here.
Alternatively, you can access PSA maintenance from the load process monitor.
Therefore, the PSA tree is obsolete.

Real-Time Data Acquisition:

Real-time data acquisition supports tactical decision making. You use real-time data
acquisition if you want to transfer data to BI at frequent intervals (every hour or
minute) and access this data in reporting frequently or regularly (several times a day,
at least). In terms of data acquisition, it supports operational reporting by allowing
you to send data to the delta queue or PSA table in real time. You use a daemon to
transfer the data at frequent, regular intervals into DataStore objects that have been
released for reporting in the ODS layer. The data is stored persistently in BI. You can
use real-time
data acquisition for DataSources in SAP source systems that have been released for
real time, and for data that is transferred into BI using the Web service (push). A
daemon controls the transfer of data into the PSA table and its further posting into
the DataStore object. In BI, InfoPackages are created for real-time data acquisition.
These are scheduled using an assigned daemon and are executed at regular intervals.
With certain data transfer processes for real-time data acquisition, the daemon takes
on the further posting of data to DataStore objects from the PSA. As soon as data is
successfully posted to the DataStore object, it is available for reporting. Refresh the
query display in order to display the up-to-date data. In the query, a time stamp
shows the age of the data. The monitor for real-time data acquisition displays the
available daemons and their status. Under the relevant DataSource, the system
displays the InfoPackages and data transfer processes with requests that are assigned
to each daemon. You can use the monitor to execute various functions for the
daemon, DataSource, InfoPackage, data transfer process, and requests.

Archiving Request Administration Data:

You can now archive request log and administration data. This allows you to improve
the performance of the load monitor and the monitor for load processes. It also
allows you to free up tablespace on the database. The archiving concept for request
administration data is based on the SAP NetWeaver data archiving concept. The
archiving object BWREQARCH contains information about which database tables are
used for archiving, and which programs you can run (write program, delete program,
reload program). You execute these programs in transaction SARA (archive
administration for an archiving object). In addition, in the Administration functional
area of the Data Warehousing Workbench, in the archive management for requests,
you can manage archive runs for requests. You can execute various functions for the
archive runs here.

After an upgrade, use BI background management or transaction SE38 to execute the
reports RSSTATMAN_CHECK_CONVERT_DTA and RSSTATMAN_CHECK_CONVERT_PSA
for all objects (InfoProviders and PSA tables).
Execute these reports at least once so that the available request information for the
existing objects is written to the new table for quick access, and is prepared for
archiving. Check that the reports have successfully converted your BI objects. Only
perform archiving runs for request administration data after you have executed the
reports.
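
For example, after the upgrade the two conversion reports can also be started from a small wrapper program (a sketch; run them in the background on large systems, and check the report documentation for any selection parameters in your release):

* Convert the request information for InfoProviders and PSA tables once.
SUBMIT rsstatman_check_convert_dta AND RETURN.
SUBMIT rsstatman_check_convert_psa AND RETURN.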

Flexible process path based on multi-value decisions:

The workflow and decision process types support the event Process ends with complex
status. When you use these process types, you can control the further course of the
process chain on the basis of multi-value decisions. The process does not have to end
simply successfully or with errors; for example, the day of the week can be used to
decide that the process was successful and to determine how the process chain is
processed further.
With the workflow option, the user can make this decision. With the decision process
type, the final status of the process, and therefore the decision, is determined on the
basis of conditions. These conditions are stored as formulas.

Evaluating the output of system commands:

You use this function to decide whether the system command process is successful or
has errors, based on whether the output of the command contains a character string
that you have defined. This allows you to check, for example, whether a particular file
exists in a directory before you load data from it. If the file is not in the directory, the
load process can be repeated at predetermined intervals.

Repairing and repeating process chains:

You use this function to repair processes that were terminated. You execute the same
instance again, or repeat it (execute a new instance of the process), if this is
supported by the process type. You call this function in log view in the context menu
of the process that has errors. You can restart a terminated process in the log view of
process chain maintenance when this is possible for the process type.

If the process cannot be repaired or repeated after termination, the corresponding
entry is missing from the context menu in the log view of process chain maintenance.
In this case, you are able to start the subsequent processes. A corresponding entry
can be found in the context menu for these subsequent processes.

Executing process chains synchronously:

You use this function to schedule and execute the process in the dialog, instead of in
the background. The processes in the chain are processed serially using a dialog
process. With synchronous execution, you can debug process chains or simulate a
process chain run.

Error handling in process chains:

You use this function in the attribute maintenance of a process chain to classify all
the incorrect processes of the chain as successful, with regard to the overall status of
the run, if you have scheduled a successor process Upon Errors or Always. This
function is relevant if you are using metachains. It allows you to continue processing
metachains despite errors in the subchains, if the successor of the subchain is
scheduled Upon Success.

Determining the user that executes the process chain:

You use this function in the attribute maintenance of a process chain to determine
which user executes the process chain. In the default setting, this is the BI
background user.

Display mode in process chain maintenance:

When you access process chain maintenance, the process chain display appears. The
process chain is not locked and does not call the transport connection. In the process
chain display, you can schedule without locking the process chain.

Checking the number of background processes available for a process chain:

During the check, the system calculates the number of parallel processes according to
the structure of the tree. It compares the result with the number of background
processes on the selected server (or the total number of all available servers if no
server is specified in the attributes of the process chain). If the number of parallel
processes is greater than the number of available background processes, the system
highlights every level of the process chain where the number of processes is too high,
and produces a warning.

Open Hub / Data Transfer Process Integration:

As of SAP NetWeaver 2004s SPS 6, the open hub destination has its own maintenance
interface and can be connected to the data transfer process as an independent
object. As a result, all data transfer process services for the open hub destination can
be used. You can now select an open hub destination as a target in a data transfer
process. In this way, the data is transformed as with all other BI objects. In addition
to the InfoCube, InfoObject and DataStore object, you can also use the DataSource
and InfoSource as a template for the field definitions of the open hub destination.
The open hub destination now has its own tree in the Data Warehousing Workbench
under Modeling. This tree is structured by InfoAreas.
The open hub service with the InfoSpoke that was provided until now can still be
used. We recommend, however, that new objects are defined with the new
technology.
