SAP BI 7.0 Data Transfer Process (DTP)

Working with SAP BI 7.0 Data Transfer Process (DTP)

Extraction Modes:

Data can be loaded from the source into the target using either Full or Delta extraction mode.
Delta:
No initialization is required when the extraction mode 'Delta' is selected. When the DTP is executed with this option for the first time, it brings all requests from the source into the target and also marks the target as initialized.

If you select extraction mode Delta, you can define further parameters:

a. Only get delta once: Select this option when only the most recent data is required in the data target. It is typically combined with the 'Delete Overlapping Requests' process type in a process chain: from the second load onwards, the overlapping request is deleted from the data target so that only the last loaded request remains.

b. Get all new data request by request: If this option is not selected, the DTP loads all new requests from the source into a single target request. Select it when the source contains many new requests or a large data volume; the DTP then loads request by request from the source and keeps the same request boundaries in the target.

In 3.x, the InfoPackage offered the option 'Initialization without data transfer'. In 7.x this is achieved by choosing 'No data transfer; delta status in source: fetched'.


Full: This behaves the same as an InfoPackage with the option "Full". It loads all data/requests from the source into the target.

Processing Mode:

These modes define the steps that are carried out during DTP execution (e.g. extraction, transformation, transfer). The processing mode also depends on the type of source.

The various types of processing modes are shown below:

1. Serial extraction, immediate parallel processing (asynchronous processing)

This option is mostly used for background processing, for example when the DTP runs in a process chain. The data packages are processed in parallel.

2. Serial in dialog process (for debugging) (synchronous processing)

This option is used if we want to execute the DTP in dialog process and this is primarily used for debugging.

3. No data transfer; delta status in source: fetched

This option behaves exactly as explained above for 'No data transfer, delta status in source: Fetched'.

Temporary Data Storage Options in DTP:

In the DTP you can choose to store the data temporarily at individual steps of the load process, for example before extraction or before the transformations. This helps in analyzing the data for failed requests.


Error Handling using DTP:
Options in error handling:

Deactivated:
With this option the error stack is not used at all, so no failed records are written to it. If the data load fails, all the data has to be reloaded.

No update, no reporting:

If an erroneous/incorrect record is found and this option is set in the DTP, the load stops at that point and no data is written to the error stack. The request is also not available for reporting. Correcting the error means reloading the entire data set.

Valid Records Update, No reporting (Request Red):

Using this option all correct data is loaded to the cubes and incorrect data to the error stack. The data will not be available for reporting until the erroneous records are updated and QM status is manually set to green. The erroneous records can be updated using the error DTP.

Valid Records Update, Reporting Possible (Request Green):

Using this option all correct data is loaded to the cubes and incorrect data to the error stack. The data will be available for reporting and process chains continue with the next steps. The erroneous records can be updated using the error DTP.

How to Handle Error Records in Error Stack:
Error stack:

A request-based table (PSA table) into which erroneous data records from a data transfer process are written. The error stack is based on the DataSource, that is, records from the source are written to the error stack.

At runtime, erroneous data records are written to an error stack if the error handling for the data transfer process is activated. You use the error stack to update the data to the target destination once the error is resolved.

The example below shows how error records containing invalid characteristic values are handled using an error DTP:



Modify the error record in the error stack by clicking the edit button.






This error DTP load creates a new request in the target and loads the modified records into it. Here, the three modified records can be seen loaded into the target.


Importance of Semantic Groups


The key fields defined in the semantic group act as key fields of the data package while reading data from the source system and the error stack.

If all records with the same key from the source system need to be put into the same data package, select the corresponding fields as semantic keys in the DTP. The key fields of the semantic group are only available for selection if the error handling option 'Valid Records Update, No Reporting (Request Red)' or 'Valid Records Update, Reporting Possible (Request Green)' is chosen.
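As an illustration (an invented scenario, not from the original document): if the semantic key is defined on the sales document number, all records of one document end up in the same data package, and if one record of that document is written to the error stack, the subsequent records with the same document number are also held back there, so that the sequence of records for that key stays consistent when the error DTP reloads them.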


DTP Settings to Increase the Loading Performance


1. Number of Parallel Processes:

We can define the number of processes to be used in the DTP.

Here 3 is defined, so 3 data packages are processed in parallel.

2. Don't Load a Large Data Volume in a Single DTP Request:

To avoid loading a large data volume into a single DTP request, select 'Get all new data request by request' on the Extraction tab.
3. Full Load to Target:

For a full load from a DSO into a data target, or for the first load from a DSO to a target, data is always loaded from the Active table, as it contains fewer records than the Change Log table.

4. Load from Info Cube to Other Target:
When reading data from an InfoCube into an open hub destination, it is best to use extraction from aggregates. If this option is selected and the cube contains aggregates, the aggregate tables are read first instead of the E and F fact tables.
Handle Duplicate Records
When loading to a DSO, duplicate records can be eliminated by selecting the option "Unique Data Records". When loading master data, duplicates can be handled by selecting the "Handle Duplicate Record Keys" option in the DTP.

If this option is selected, the master data record is overwritten if it is time-independent, and multiple entries are created if the master data is time-dependent.


Fiscal Week and Fiscal Quarter



Step by Step Guide to Fiscal Week and Fiscal Quarter

This article addresses the requirement of Fiscal Week and Fiscal quarter in BW/BI Reports.

Business Requirement

 
 
In quite a few sales requirements, users ask for calendar week analysis and reports based on the calendar week. Such requirements can be satisfied using the standard time characteristics 0CALWEEK and 0CALQUARTER.

But in certain scenarios, users will ask for Fiscal Week and to achieve the same we need to write logic based on the Date, Fiscal Year and Fiscal Year variant.

Difference

If, for a given client, the Fiscal Year (FY) is not the same as that maintained in the standard SAP system, then the fiscal week and the calendar week will be different.

For example, if the SAP standard FY variant K4 is used, the FY runs from January to December. But if for a client the fiscal year is, say, July to June, with a custom FY variant "XY", then in BW we can neither directly use 0CALWEEK for weekly reporting requirements nor directly apply any time conversion; instead we need to derive the fiscal week based on the date, the fiscal year and the FY variant.
Step by Step guide

1. Create a custom InfoObject “ZFISCWEEK” of Data Type “NUMC” length 6.

2. Add "ZFISCWEEK" to the given DSO / cube.

3. Create the transformation rule with type "Routine", with the date (in our example FKDAT, the billing date) and the FY variant (PERIV) as source field assignments.

 
4. The logic would be as follows:


a. First get the year from the date, or use the standard function module DATE_TO_PERIOD_CONVERT.

b. Find the first and last date of the given year. Use the standard SAP function module FIRST_AND_LAST_DAY_IN_YEAR_GET for this.

c. Find the number of days elapsed in the given fiscal year, i.e. the number of days between the first date and the given record date. Use the standard SAP function module /SDF/CMO_DATETIME_DIFFERENCE for this.

d. Divide the number of days by 7 to get the week number. If the fractional part is above .5 (e.g. 52.6), the assignment automatically rounds up to 53; if it is below .5 (e.g. 52.3), it rounds down to 52 and 1 has to be added explicitly. The dummy variable NUMBER (the remainder of the division) is therefore used to decide when to add 1 explicitly; a short worked example follows after step 5.

5. To get the fiscal quarter, derive the week as in the procedure above and then determine the quarter from ranges of 13 weeks (weeks 01-13 = quarter 1, 14-26 = quarter 2, 27-39 = quarter 3, 40-53 = quarter 4).
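Worked example of the rounding logic (illustrative values, assuming ABAP's commercial rounding when the division result is assigned to the NUMC field WEEKS): if DAYS = 66, then 66 / 7 = 9.43, which is rounded down to 9, and 66 MOD 7 = 3 (between 1 and 3), so 1 is added and the week becomes 10. If DAYS = 68, then 68 / 7 = 9.71, which is already rounded up to 10, and 68 MOD 7 = 5, so nothing is added. In both cases week 10 falls into quarter 1.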

Fiscal Week Code:

DATA: YEAR(4)   TYPE N,
      DATE1     LIKE SY-DATUM,
      DATE2     LIKE SY-DATUM,
      DAYS      TYPE P,
      WEEKS(2)  TYPE N,
      NUMBER(2) TYPE N,
      FISCV(2)  TYPE C.

CLEAR: YEAR, DATE1, DATE2, DAYS, WEEKS, NUMBER, FISCV.

DATE2 = SOURCE_FIELDS-FKDAT.
FISCV = SOURCE_FIELDS-PERIV.

* Get the year from the date
CALL FUNCTION 'DATE_TO_PERIOD_CONVERT'
  EXPORTING
    I_DATE         = DATE2
*   I_MONMIT       = 00
    I_PERIV        = FISCV
  IMPORTING
*   E_BUPER        =
    E_GJAHR        = YEAR
  EXCEPTIONS
    INPUT_FALSE    = 1
    T009_NOTFOUND  = 2
    T009B_NOTFOUND = 3
    OTHERS         = 4.
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.

* Get the first and last date of the year
CALL FUNCTION 'FIRST_AND_LAST_DAY_IN_YEAR_GET'
  EXPORTING
    I_GJAHR        = YEAR
    I_PERIV        = FISCV
  IMPORTING
    E_FIRST_DAY    = DATE1
*   E_LAST_DAY     = DATE2
  EXCEPTIONS
    INPUT_FALSE    = 1
    T009_NOTFOUND  = 2
    T009B_NOTFOUND = 3
    OTHERS         = 4.
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.

* Get the difference between the first date and the input date (given record date)
CALL FUNCTION '/SDF/CMO_DATETIME_DIFFERENCE'
  EXPORTING
    DATE1            = DATE1
*   TIME1            =
    DATE2            = DATE2
*   TIME2            =
  IMPORTING
    DATEDIFF         = DAYS
*   TIMEDIFF         =
*   EARLIEST         =
  EXCEPTIONS
    INVALID_DATETIME = 1
    OTHERS           = 2.
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.

* Round off the division to get the total number of weeks
DAYS   = DAYS + 1.
WEEKS  = DAYS / 7.
NUMBER = DAYS MOD 7.

IF NUMBER > 0 AND NUMBER < 4.
  WEEKS = WEEKS + 1.
ENDIF.

* Get the fiscal week for the input date by concatenating year and week
CONCATENATE YEAR WEEKS INTO RESULT.
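For example (illustrative values), YEAR = '2010' and WEEKS = '09' would give RESULT = '201009', which matches the NUMC length-6 format of the ZFISCWEEK InfoObject created in step 1.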

Fiscal Quarter Code:
DATA: YEAR(4)   TYPE N,
      DATE1     LIKE SY-DATUM,
      DATE2     LIKE SY-DATUM,
      DAYS      TYPE P,
      WEEKS(2)  TYPE N,
      NUMBER(2) TYPE N,
      FISCV(2)  TYPE C.

CLEAR: YEAR, DATE1, DATE2, DAYS, WEEKS, NUMBER, FISCV.

DATE2 = SOURCE_FIELDS-FKDAT.
FISCV = SOURCE_FIELDS-PERIV.

* Get the year from the date
CALL FUNCTION 'DATE_TO_PERIOD_CONVERT'
  EXPORTING
    I_DATE         = DATE2
*   I_MONMIT       = 00
    I_PERIV        = FISCV
  IMPORTING
*   E_BUPER        =
    E_GJAHR        = YEAR
  EXCEPTIONS
    INPUT_FALSE    = 1
    T009_NOTFOUND  = 2
    T009B_NOTFOUND = 3
    OTHERS         = 4.
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.

* Get the first and last date of the year
CALL FUNCTION 'FIRST_AND_LAST_DAY_IN_YEAR_GET'
  EXPORTING
    I_GJAHR        = YEAR
    I_PERIV        = FISCV
  IMPORTING
    E_FIRST_DAY    = DATE1
*   E_LAST_DAY     = DATE2
  EXCEPTIONS
    INPUT_FALSE    = 1
    T009_NOTFOUND  = 2
    T009B_NOTFOUND = 3
    OTHERS         = 4.
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.

* Get the difference between the first date and the input date (given record date)
CALL FUNCTION '/SDF/CMO_DATETIME_DIFFERENCE'
  EXPORTING
    DATE1            = DATE1
*   TIME1            =
    DATE2            = DATE2
*   TIME2            =
  IMPORTING
    DATEDIFF         = DAYS
*   TIMEDIFF         =
*   EARLIEST         =
  EXCEPTIONS
    INVALID_DATETIME = 1
    OTHERS           = 2.
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.

* Round off the division to get the total number of weeks
DAYS   = DAYS + 1.
WEEKS  = DAYS / 7.
NUMBER = DAYS MOD 7.

IF NUMBER > 0 AND NUMBER < 4.
  WEEKS = WEEKS + 1.
ENDIF.

* Get the quarter for the input date based on the week determined above
IF WEEKS BETWEEN '01' AND '13'.
  CONCATENATE YEAR '1' INTO RESULT.
ELSEIF WEEKS BETWEEN '14' AND '26'.
  CONCATENATE YEAR '2' INTO RESULT.
ELSEIF WEEKS BETWEEN '27' AND '39'.
  CONCATENATE YEAR '3' INTO RESULT.
ELSEIF WEEKS BETWEEN '40' AND '53'.
  CONCATENATE YEAR '4' INTO RESULT.
ENDIF.

Appendix


Other useful function modules for dates:

RSARCH_DATE_SHIFT: to get a date/week shift.


 
Enter the date (I_DATE).

Enter the shift unit, i.e. "Shift by Day", "Shift by Week" etc. (I_SHIFT).

Enter the number of units to shift, i.e. 1, 2, 3 etc. (I_SHIFT_UNIT).

Enter the option, i.e. LT (Less Than), GT (Greater Than) etc. (I_OPTION).
FIMA_DATE_SHIFT_WITH_WEEKDAY: to get the next occurrence of a weekday, in the same or in following weeks.

 
 
Enter the date (I_DATE).


Enter the weekday in I_WEEKDAY. If the value is '1', it will return the Monday of the next week; any other number returns the corresponding day.

Enter, if required, the number of weeks to shift in I_NUMBER_OF_WEEKDAYS.

If you want to stay within the current month only, set I_FLG_STAY_IN_MONTH = 'X'; otherwise leave it blank.
To understand this FM, let us take the following examples:


Example 1:
I_DATE = 09/06/2010

I_WEEKDAY = 1

I_NUMBER_OF_WEEKDAYS = 0

I_FLG_STAY_IN_MONTH = ' '



If I_WEEKDAY = 1 and I_NUMBER_OF_WEEKDAYS = n, where n is any whole number, it will always return the nth week's Monday, irrespective of the date.
Example 2:

I_DATE = 09/06/2010

I_WEEKDAY = 2

I_NUMBER_OF_WEEKDAYS = 0

I_FLG_STAY_IN_MONTH = ' '

 
 
If I_WEEKDAY > 1 and I_NUMBER_OF_WEEKDAYS = n, where n is any whole number, it returns the date in the nth week based on the I_WEEKDAY value.

If the entered date is a Monday, I_NUMBER_OF_WEEKDAYS = 0 and I_WEEKDAY = 2, it returns the Tuesday of the same week, i.e. 09/07/2010 in our example.

If the entered date is a Monday, I_NUMBER_OF_WEEKDAYS = 3 and I_WEEKDAY = 2, it returns the Tuesday after two weeks, i.e. 09/20/2010 in our example.
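A minimal sketch of how Example 2 might be called from ABAP is shown below; the importing parameter name E_DATE is an assumption (it is not listed above), so verify the exact signature of FIMA_DATE_SHIFT_WITH_WEEKDAY in SE37 before using it.

DATA: LV_DATE_IN  TYPE SY-DATUM VALUE '20100906',   " Monday, 09/06/2010
      LV_DATE_OUT TYPE SY-DATUM.

CALL FUNCTION 'FIMA_DATE_SHIFT_WITH_WEEKDAY'
  EXPORTING
    I_DATE               = LV_DATE_IN
    I_WEEKDAY            = 2        " Tuesday
    I_NUMBER_OF_WEEKDAYS = 0
    I_FLG_STAY_IN_MONTH  = ' '
  IMPORTING
    E_DATE               = LV_DATE_OUT.   " expected: 09/07/2010 (parameter name assumed)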

More combinations like these can be made as per the requirements.

WEEK_GET_FIRST_DAY: to get the first date of a calendar week.





Data Transfer Process (DTP) and Error handling process:




Introduction about Data Transfer Process


1. You use the data transfer process (DTP) to transfer data within BI from a persistent object to another object in accordance with certain transformations and filters. In this respect, it replaces the data mart interface and the InfoPackage. As of SAP NetWeaver 2004s, the InfoPackage only loads data to the entry layer of BI (the PSA).


2. The data transfer process makes the transfer processes in the data warehousing layer more transparent. Optimized parallel processing improves the performance of the transfer process. You can use the data transfer process to separate delta processes for different targets, and you can use filter options between the persistent objects on various levels. For example, you can use filters between a DataStore object and an InfoCube.


3. Data transfer processes are used for standard data transfer, for real-time data acquisition and for accessing data directly.


Interesting Benefits of New Data Transfer Process



1. Loading data from one layer to others except InfoSources.

2. Separation of the delta mechanism for different data targets.

3. Enhanced filtering in the data flow.

4. Improved transparency of staging processes across data warehouse layers.

5. Improved performance: optimized parallelization.

6. Enhanced error handling in the form of the error stack.

7. Enables real-time data acquisition.


Most important advantages of the Data Transfer Process


1. Delta logic can be handled separately for separate data targets.

2. Delta logic is part of the DTP.

3. Example of the separation of delta logic: one source PSA and two targets, one DSO keeping daily data and the other keeping weekly data.

Five processes for handling errors in DTP

Process # 1 - Enhanced Filtering, Debugging and Error Handling Options

Process # 2 - Handling Data Records with Errors


1. Using the error handling settings on the Update tab page in the data transfer process, when data is transferred from a DTP source to a DTP target, you can specify how the system is to react if errors occur in the data records.

2. These settings were previously made in the InfoPackage. When using data transfer processes, InfoPackages write to the PSA only. Error handling settings are therefore no longer made in the InfoPackage, but in the data transfer process.

Process # 3 - Error Handling Features

1. Possibility to choose in the scheduler to:
   a. abort the process when errors occur,
   b. process the correct records but not allow reporting on them, or
   c. process the correct records and allow reporting on them.

2. The number of wrong records that leads to a red request can be defined.

3. Invalid records can be written to an error stack.

4. Keys should be defined for the error stack to enable error handling for DataStore objects.

5. Temporary data storage can be switched on/off for each substep of the loading process.

6. Invalid records can be updated into the data targets after their correction.

Process # 4 - Error Stack


1. Stores erroneous records

2. Keeps the right sequence of records, for consistent DataStore handling.

3. Key of error stack defines which data should be detained from the update after the erroneous data record.

4. After Correction, Error-DTP updates data from error stack to data target.

Note: Once the request in the source object is deleted, the related data records in the error stack are automatically deleted.

5. Key of the error stack:

                 a. Key of the error stack = semantic group.

                 b. A subset of the key of the target object:

i. maximum 16 fields,

ii. defines which data should be detained from the update after an erroneous data record (for DataStore objects),

iii. the bigger the key, the fewer records will be written to the error stack.

Process # 5 - Temporary Data Storage


1. In order to analyze the data at various stages, you can activate temporary storage in the DTP.

2. This allows you to determine the reason for an error.


Changes between BI 7.0 and BW 3.5




Below are the major changes in the BI 7.0 (2004s) version when compared with earlier versions.

1. In InfoSets you can now include InfoCubes as well.

2. The remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.

3. The BI accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10 - 100. The BI accelerator is a separate box and would cost more. Vendors for these would be HP or IBM.

4. The monitoring has been improved with a new portal-based cockpit, which means you would need an EP resource in your project for implementing the portal.

5. Search functionality has improved: you can search any object, unlike in 3.x.

6. Transformations are in and routines are passé. Yes, you can always revert to the old transactions too.

7. ODS has been renamed DataStore.

8. Inclusion of the write-optimized DataStore, which does not have any change log and whose requests do not need any activation.

9. Unification of transfer and update rules.

10. Introduction of the "end routine" and "expert routine".

11. Push of XML data into the BI system (into the PSA) without Service API or delta queue.

12. Introduction of the BI accelerator that significantly improves performance.

13. Load through the PSA has become a must. It looks like we no longer have the option to bypass the PSA during data load.



Metadata Search (Developer Functionality):

1. It is possible to search BI metadata (such as InfoCubes, InfoObjects, queries, Web templates) using the TREX search engine. This search is integrated into the Metadata Repository, the Data Warehousing Workbench and to some degree into the object editors. With the simple search, a search for one or all object types is performed in technical names and in text.

2. During the text search, lower and uppercase are ignored and the object will also be found when the case in the text is different from that in the search term. With the advanced search, you can also search in attributes. These attributes are specific to every object type. Beyond that, it can be restricted for all object types according to the person who last changed it and according to the time of the change.

3. For example, you can search in all queries that were changed in the last month and that include both the term "overview" in the text and the characteristic customer in the definition. Further functions include searching in the delivered (A) version, fuzzy search and the option of linking search terms with "AND" and "OR".

4. Because the advanced search described above offers more extensive options for searching in metadata, the function "Generation of Documents for Metadata" in the administration of document management (transaction RSODADMIN) was deleted. You have to schedule (delta) indexing of metadata as a regular job (transaction RSODADMIN).


Effects on Customizing:

o Installation of the TREX search engine

o Creation of an RFC destination for the TREX search engine

o Entering the RFC destination into table RSODADMIN_INT

o Determining relevant object types

o Initial indexing of metadata



Remote Activation of DataSources (Developer Functionality):

1. When activating Business Content in BI, you can activate DataSources remotely from the BI system. This activation is subject to an authorization check. You need role SAP_RO_BCTRA. Authorization object S_RO_BCTRA is checked. The authorization is valid for all DataSources of a source system. When the objects are collected, the system checks the authorizations remotely, and issues a warning if you lack authorization to activate the DataSources.

2. In BI, if you trigger the transfer of the Business Content in the active version, the results of the authorization check are based on the cache. If you lack the necessary authorization for activation, the system issues a warning for the DataSources. BW issues an error for the corresponding source-system-dependent objects (transformations, transfer rules, transfer structure, InfoPackage, process chain, process variant). In this case, you can use Customizing for the extractors to manually transfer the required DataSources in the source system from the Business Content, replicate them in the BI system, and then transfer the corresponding source-system-dependent objects from the Business Content. If you have the necessary authorizations for activation, the DataSources in the source system are transferred to the active version and replicated in the BI system. The source-system-dependent objects are activated in the BI system.

3. Source systems and/or BI systems have to have BI Service API SAP NetWeaver 2004s at least; otherwise remote activation is not supported. In this case, you have to activate the DataSources in the source system manually and then replicate them to the BI system.

Copy Process Chains (Developer Functionality):

You find this function in the Process Chain menu and use it to copy the process chain you have selected, along with its references to process variants, and save it under a new name and description.

InfoObjects in Hierarchies (Data Modeling):

1. Up to Release SAP NetWeaver 2004s, it was not possible to use InfoObjects with a length longer than 32 characters in hierarchies. These types of InfoObjects could not be used as a hierarchy basic characteristic and it was not possible to copy characteristic values for such InfoObjects as foreign characteristic nodes into existing hierarchies. From SAP NetWeaver 2004s, characteristics of any length can be used for hierarchies.

2. To load hierarchies, the PSA transfer method has to be selected (which is always recommended for loading data anyway). With the IDOC transfer method, it continues to be the case that only hierarchies can be loaded that contain characteristic values with a length of less than or equal to 32 characters.

Parallelized Deletion of Requests in DataStore Objects (Data Management) :

Now you can delete active requests in a DataStore object in parallel. Up to now, the requests were deleted serially within an LUW. This can now be processed by package and in parallel.

Object-Specific Setting of the Runtime Parameters of DataStore Objects (Data Management):

Now you can set the runtime parameters of DataStore objects by object and then transport them into connected systems. The following parameters can be maintained:

- Package size for activation

- Package size for SID determination

- Maximum wait time before a process is designated lost

- Type of processing: Serial, Parallel (batch), Parallel (dialog)

- Number of processes to be used

- Server/server group to be used

Enhanced Monitor for Request Processing in DataStore Objects (Data Management):

1. For the request operations executed on DataStore objects (activation, rollback and so on), there is now a separate, detailed monitor. In previous releases, request-changing operations were displayed in the extraction monitor; when the same operations were executed multiple times, it was very difficult to assign the messages to the respective operations.

2. In order to guarantee a more simple error analysis and optimization potential during configuration of runtime parameters, as of release SAP NetWeaver 2004s, all messages relevant for DataStore objects are displayed in their own monitor.

Write-Optimized DataStore Object (Data Management):

1. Up to now it was necessary to activate the data loaded into a DataStore object to make it visible to reporting or to be able to update it to further InfoProviders. As of SAP NetWeaver 2004s, a new type of DataStore object is introduced: the write-optimized DataStore object.

2. The objective of the new object type is to save data as efficiently as possible in order to be able to further process it as quickly as possible without additional effort for generating SIDs, aggregation and data-record-based delta. Data that is loaded into write-optimized DataStore objects is available immediately for further processing. The activation step that has been necessary up to now is no longer required.

3. The loaded data is not aggregated. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. During loading, for reasons of efficiency, no SID values can be determined for the loaded characteristics. The data is still available for reporting. However, in comparison to standard DataStore objects, you can expect to lose performance because the necessary SID values have to be determined during query runtime.

Deleting from the Change Log (Data Management):

The Deletion of Requests from the Change Log process type supports the deletion of change log files. You select DataStore objects to determine the selection of requests. The system supports multiple selections. You select objects in a dialog box for this purpose. The process type supports the deletion of requests from any number of change logs.

Using InfoCubes in Infosets (Data Modeling):

1. You can now include InfoCubes in an InfoSet and use them in a join. InfoCubes are handled logically in Infosets like DataStore objects. This is also true for time dependencies. In an InfoCube, data that is valid for different dates can be read.

2. For performance reasons you cannot define an InfoCube as the right operand of a left outer join. SAP does not generally support more than two InfoCubes in an InfoSet.

Pseudo Time Dependency of DataStore Objects and InfoCubes in Infosets (Data Modeling):

In BI only master data can be defined as a time-dependent data source. Two additional fields/attributes are added to the characteristic. DataStore objects and InfoCubes that are being used as InfoProviders in the InfoSet cannot be defined as time dependent. As of SAP NetWeaver 2004s, you can specify a date or use a time characteristic with DataStore objects and InfoCubes to describe the validity of a record. These InfoProviders are then interpreted as time-dependent data sources.

Left Outer: Include Filter Value in On-Condition (Data Modeling) :

1. The global properties in InfoSet maintenance have been enhanced by one setting Left Outer: Include Filter Value in On-Condition. This indicator is used to control how a condition on a field of a left-outer table is converted in the SQL statement. This affects the query results:

• If the indicator is set, the condition/restriction is included in the on-condition in the SQL statement. In this case the condition is evaluated before the join.

• If the indicator is not set, the condition/restriction is included in the where-condition. In this case the condition is only evaluated after the join.

• The indicator is not set by default.

Key Date Derivation from Time Characteristics (Data Modeling):

Key dates can be derived from the time characteristics 0CALWEEK, 0CALMONTH, 0CALQUARTER, 0CALYEAR, 0FISCPER, 0FISCYEAR: It was previously possible to specify the first, last or a fixed offset for key date derivation. As of SAP NetWeaver 2004s, you can also use a key date derivation type to define the key date.

Repartitioning of InfoCubes and DataStore Objects (Data Management):

With SAP NetWeaver 2004s, the repartitioning of InfoCubes and DataStore objects on the database that are already filled is supported. With partitioning, the runtime for reading and modifying access to InfoCubes and DataStore objects can be decreased. Using repartitioning, non-partitioned InfoCubes and DataStore objects can be partitioned or the partitioning schema for already partitioned InfoCubes and DataStore objects can be adapted.

Remodeling InfoProviders (Data Modeling):

1. As of SAP NetWeaver 2004s, you can change the structure of InfoCubes into which you have already loaded data, without losing the data. You have the following remodeling options:

2. For characteristics:

• Inserting or replacing characteristics with: Constants, Attribute of an InfoObject within the same dimension, Value of another InfoObject within the same dimension, Customer exit (for user-specific coding).

• Delete

3. For key figures:

• Inserting: Constants, Customer exit (for user-specific coding).

• Replacing key figures with: Customer exit (for user-specific coding).

• Delete

4. SAP NetWeaver 2004s does not support the remodeling of InfoObjects or DataStore objects. This is planned for future releases. Before you start remodeling, make sure:

(A) You have stopped any process chains that run periodically and affect the corresponding InfoProvider. Do not restart these process chains until remodeling is finished.

(B) There is enough available tablespace on the database.

5. After remodeling, check which BI objects that are connected to the InfoProvider (transformation rules, MultiProviders, queries and so on) have been deactivated. You have to reactivate these objects manually.

Parallel Processing for Aggregates (Performance):

1. The change run, rollup, condensing and checking of multiple aggregates can be executed in parallel. Parallelization takes place using the aggregates. The parallel processes are always executed in the background, even when the main process is executed in dialog.

2. This can considerably decrease execution time for these processes. You can determine the degree of parallelization and determine the server on which the processes are to run and with which priority.

3. If no setting is made, a maximum of three processes are executed in parallel. This setting can be adjusted for a single process (change run, rollup, condensing of aggregates and checks). Together with process chains, the affected setting can be overridden for every one of the processes listed above. Parallelization of the change run according to SAP Note 534630 is obsolete and is no longer being supported.


Multiple Change Runs (Performance):

1. You can start multiple change runs simultaneously. The prerequisite for this is that the lists of the master data and hierarchies to be activated are different and that the changes affect different InfoCubes. After a change run, all affected aggregates are condensed automatically.

2. If a change run terminates, the same change run must be started again. You have to start the change run with the same parameterization (same list of characteristics and hierarchies). SAP Note 583202 is obsolete.

Partitioning Optional for Aggregates (Performance):

1. Up to now, the aggregate fact tables were partitioned if the associated InfoCube was partitioned and the partitioning characteristic was in the aggregate. Now it is possible to suppress partitioning for individual aggregates. If aggregates do not contain much data, very small partitions can result. This affects read performance. Aggregates with very little data should not be partitioned.

2. Aggregates that are not to be partitioned have to be activated and filled again after the associated property has been set.

MOLAP Store (Deleted) (Performance):

Previously you were able to create aggregates either on the basis of a ROLAP store or on the basis of a MOLAP store. The MOLAP store was a platform-specific means of optimizing query performance. It used Microsoft Analysis Services and, for this reason, it was only available for a Microsoft SQL server database platform. Because HPA indexes, available with SAP NetWeaver 2004s, are a platform-independent alternative to ROLAP aggregates with high performance and low administrative costs, the MOLAP store is no longer being supported.

Data Transformation (Data Management):

1. A transformation has a graphical user interface and replaces the transfer rules and update rules with the functionality of the data transfer process (DTP). Transformations are generally used to transform an input format into an output format. A transformation consists of rules. A rule defines how the data content of a target field is determined. Various types of rules are available to the user, such as direct transfer, currency translation, unit of measure conversion, routine, and read from master data.

2. Block transformations can be realized using different data package-based rule types such as start routine, for example. If the output format has key fields, the defined aggregation behavior is taken into account when the transformation is performed in the output format. Using a transformation, every (data) source can be converted into the format of the target by using an individual transformation (one-step procedure). An InfoSource is only required for complex transformations (multistep procedures) that cannot be performed in a one-step procedure.

3. The following functional limitations currently apply:

- You cannot use hierarchies as the source or target of a transformation.

- You cannot use master data as the source of a transformation.

- You cannot use a template to create a transformation.

- No documentation has been created in the metadata repository yet for transformations.

- In the transformation there is no check for referential integrity, the InfoObject transfer routines are not considered, and routines cannot be created using the return table.

Quantity Conversion:

As of SAP NetWeaver 2004s you can create quantity conversion types using transaction RSUOM. The business transaction rules of the conversion are established in the quantity conversion type. The conversion type is a combination of different parameters (conversion factors, source and target units of measure) that determine how the conversion is performed. In terms of functionality, quantity conversion is structured similarly to currency translation. Quantity conversion allows you to convert key figures with units that have different units of measure in the source system into a uniform unit of measure in the BI system when you update them into InfoCubes.

Data Transfer Process:

You use the data transfer process (DTP) to transfer data within BI from a persistent object to another object in accordance with certain transformations and filters. In this respect, it replaces the InfoPackage, which only loads data to the entry layer of BI (PSA), and the data mart interface. The data transfer process makes the transfer processes in the data warehousing layer more transparent. Optimized parallel processing improves the performance of the transfer process (the data transfer process determines the processing mode). You can use the data transfer process to separate delta processes for different targets and you can use filter options between the persistent objects on various levels. For example, you can use filters between a DataStore object and an InfoCube. Data transfer processes are used for standard data transfer, for real-time data acquisition, and for accessing data directly. The data transfer process is available as a process type in process chain maintenance and is to be used in process chains.

ETL Error Handling:

The data transfer process supports you in handling data records with errors. The data transfer process also supports error handling for DataStore objects. As was previously the case with InfoPackages, you can determine how the system responds if errors occur. At runtime, the incorrect data records are sorted and can be written to an error stack (request-based database table). After the error has been resolved, you can further update data to the target from the error stack. It is easier to restart failed load processes if the data is written to a temporary store after each processing step. This allows you to determine the processing step in which the error occurred. You can display the data records in the error stack from the monitor for the data transfer process request or in the temporary storage for the processing step (if filled). In data transfer process maintenance, you determine the processing steps that you want to store temporarily.

InfoPackages:

InfoPackages only load the data into the input layer of BI, the Persistent Staging Area (PSA). Further distribution of the data within BI is done by the data transfer processes. The following changes have occurred due to this:

- New tab page: Extraction -- The Extraction tab page includes the settings for adaptor and data format that were made for the DataSource. If data transfer from files occurred, the External Data tab page is obsolete; the settings are made in DataSource maintenance.

- Tab page: Processing -- Information on how the data is updated is obsolete because further processing of the data is always controlled by data transfer processes.

- Tab page: Updating -- On the Updating tab page, you can set the update mode to the PSA depending on the settings in the DataSource. In the data transfer process, you now determine how the update from the PSA to other targets is performed. Here you have the option to separate delta transfer for various targets.

For real-time acquisition with the Service API, you create special InfoPackages in which you determine how the requests are handled by the daemon (for example, after which time interval a request for real-time data acquisition should be closed and a new one opened). For real-time data acquisition with Web services (push), you also create special InfoPackages to set certain parameters for real-time data acquisition such as sizes and time limits for requests.

PSA:

The persistent staging area (PSA), the entry layer for data in BI, has been changed in SAP NetWeaver 2004s. Previously, the PSA table was part of the transfer structure. You managed the PSA table in the Administrator Workbench in its own object tree. Now you manage the PSA table for the entry layer from the DataSource. The PSA table for the entry layer is generated when you activate the DataSource. In an object tree in the Data Warehousing Workbench, you choose the context menu option Manage to display a DataSource in PSA table management. You can display or delete data here. Alternatively, you can access PSA maintenance from the load process monitor. Therefore, the PSA tree is obsolete.

Real-Time Data Acquisition:

Real-time data acquisition supports tactical decision making. You use real-time data acquisition if you want to transfer data to BI at frequent intervals (every hour or minute) and access this data in reporting frequently or regularly (several times a day, at least). In terms of data acquisition, it supports operational reporting by allowing you to send data to the delta queue or PSA table in real time. You use a daemon to transfer DataStore objects that have been released for reporting to the ODS layer at frequent regular intervals. The data is stored persistently in BI. You can use real-time data acquisition for DataSources in SAP source systems that have been released for real time, and for data that is transferred into BI using the Web service (push). A daemon controls the transfer of data into the PSA table and its further posting into the DataStore object. In BI, InfoPackages are created for real-time data acquisition. These are scheduled using an assigned daemon and are executed at regular intervals. With certain data transfer processes for real-time data acquisition, the daemon takes on the further posting of data to DataStore objects from the PSA. As soon as data is successfully posted to the DataStore object, it is available for reporting. Refresh the query display in order to display the up-to-date data. In the query, a time stamp shows the age of the data. The monitor for real-time data acquisition displays the available daemons and their status. Under the relevant DataSource, the system displays the InfoPackages and data transfer processes with requests that are assigned to each daemon. You can use the monitor to execute various functions for the daemon, DataSource, InfoPackage, data transfer process, and requests.

Archiving Request Administration Data:

You can now archive log and administration data requests. This allows you to improve the performance of the load monitor and the monitor for load processes. It also allows you to free up tablespace on the database. The archiving concept for request administration data is based on the SAP NetWeaver data archiving concept. The archiving object BWREQARCH contains information about which database tables are used for archiving, and which programs you can run (write program, delete program, reload program). You execute these programs in transaction SARA (archive administration for an archiving object). In addition, in the Administration functional area of the Data Warehousing Workbench, in the archive management for requests, you can manage archive runs for requests. You can execute various functions for the archive runs here.

After an upgrade, use BI background management or transaction SE38 to execute report RSSTATMAN_CHECK_CONVERT_DTA and report RSSTATMAN_CHECK_CONVERT_PSA for all objects (InfoProviders and PSA tables). Execute these reports at least once so that the available request information for the existing objects is written to the new table for quick access, and is prepared for archiving. Check that the reports have successfully converted your BI objects. Only perform archiving runs for request administration data after you have executed the reports.

Flexible process path based on multi-value decisions:

The workflow and decision process types support the event Process ends with complex status. When you use this process type, you can control the process chain process on the basis of multi-value decisions. The process does not have to end simply successfully or with errors; for example, the week day can be used to decide that the process was successful and determine how the process chain is processed further. With the workflow option, the user can make this decision. With the decision process type, the final status of the process, and therefore the decision, is determined on the basis of conditions. These conditions are stored as formulas.

Evaluating the output of system commands:

You use this function to decide whether the system command process is successful or has errors. You can do this if the output of the command includes a character string that you defined. This allows you to check, for example, whether a particular file exists in a directory before you load data to it. If the file is not in the directory, the load process can be repeated at pre-determined intervals.

Repairing and repeating process chains:

You use this function to repair processes that were terminated. You execute the same instance again, or repeat it (execute a new instance of the process), if this is supported by the process type. You call this function in log view in the context menu of the process that has errors. You can restart a terminated process in the log view of process chain maintenance when this is possible for the process type.

If the process cannot be repaired or repeated after termination, the corresponding entry is missing from the context menu in the log view of process chain maintenance. In this case, you are able to start the subsequent processes. A corresponding entry can be found in the context menu for these subsequent processes.

Executing process chains synchronously:

You use this function to schedule and execute the process in the dialog, instead of in the background. The processes in the chain are processed serially using a dialog process. With synchronous execution, you can debug process chains or simulate a process chain run.

Error handling in process chains:

You use this function in the attribute maintenance of a process chain to classify all the incorrect processes of the chain as successful, with regard to the overall status of the run, if you have scheduled a successor process Upon Errors or Always. This function is relevant if you are using metachains. It allows you to continue processing metachains despite errors in the subchains, if the successor of the subchain is scheduled Upon Success.

Determining the user that executes the process chain:

You use this function in the attribute maintenance of a process chain to determine which user executes the process chain. In the default setting, this is the BI background user.

Display mode in process chain maintenance:

When you access process chain maintenance, the process chain display appears. The process chain is not locked and does not call the transport connection. In the process chain display, you can schedule without locking the process chain.

Checking the number of background processes available for a process chain:

During the check, the system calculates the number of parallel processes according to the structure of the tree. It compares the result with the number of background processes on the selected server (or the total number of all available servers if no server is specified in the attributes of the process chain). If the number of parallel processes is greater than the number of available background processes, the system highlights every level of the process chain where the number of processes is too high, and produces a warning.

Open Hub / Data Transfer Process Integration:

As of SAP NetWeaver 2004s SPS 6, the open hub destination has its own maintenance interface and can be connected to the data transfer process as an independent object. As a result, all data transfer process services for the open hub destination can be used. You can now select an open hub destination as a target in a data transfer process. In this way, the data is transformed as with all other BI objects. In addition to the InfoCube, InfoObject and DataStore object, you can also use the DataSource and InfoSource as a template for the field definitions of the open hub destination. The open hub destination now has its own tree in the Data Warehousing Workbench under Modeling. This tree is structured by InfoAreas.

The open hub service with the InfoSpoke that was provided until now can still be used. We recommend, however, that new objects are defined with the new technology.



0RECORDMODE and Delta Type Concepts:


What is Delta Management?


Delta management implies the ability to extract only new or changed data records to the BI system in a separate data request. Whenever we activate a DataSource that can serve delta records in the R/3 system, the system automatically generates the extraction structure for the DataSource. This extraction structure sends the delta records to BI based on the update mode we choose in LBWE.

Let's take an example where we have chosen the update mode "Queued Delta". The delta records are first collected in the delta queue (RSA7) before they are posted to BI.

The delta queue is an S-API (Service API) function. This is the central interface technology used to extract data from SAP source systems to a BI system. Consequently, the delta queue is only used in SAP or BI source systems.

The delta queue is a data store for the new or changed data records of a DataSource (that have occurred since the last data request). The new or changed data records are either written to the delta queue automatically using an update process in the source system, or by means of the DataSource extractor when a data request is received from the BI system. You can check this in the screen below.

Go to RSA7 in either R/3 or BI and you will find the screen below.

How to Identify Delta capable Data Source?


You can check whether a DataSource can provide a delta by going through SBIW or RSA6.

Go to RSA6, locate your DataSource by drilling down into the SAP R/3 DataSource folder, and double-click it. You will get the screen below.


If "Delta Update" is checked, the DataSource is delta-capable.

Once the DataSource is delta-capable, we have to check the type of its delta process. This can be found in the table ROOSOURCE (in the source system), in the table RSOLTPSOURCE (in BI, for 3.x DataSources) or in the table RSDS (in BI, for DataSources).

The properties of the delta process are defined in the table RODELTAM (in BI or in the source system).
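As a small illustrative sketch (the DataSource name below is a placeholder, and the field names are assumed to follow the standard definitions of ROOSOURCE and RODELTAM), the delta process and its properties could also be read programmatically:

DATA: LV_DELTA TYPE ROOSOURCE-DELTA,
      LS_PROPS TYPE RODELTAM.

* Delta process of the DataSource (active version) in the source system
SELECT SINGLE DELTA FROM ROOSOURCE
  INTO LV_DELTA
  WHERE OLTPSOURCE = '2LIS_11_VAITM'   " placeholder DataSource name
    AND OBJVERS    = 'A'.

* Properties of that delta process, e.g. the delta type
SELECT SINGLE * FROM RODELTAM
  INTO LS_PROPS
  WHERE DELTA = LV_DELTA.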

Delta Types:


The delta type describes how new and changed records enter the delta queue. It is a property of the delta process and differs from one delta process to another.

To check the delta type of a particular delta process, go to SE16, enter the table RODELTAM and execute; you will get the screen below.

From the above screen, the different delta types are as follows:

' ': The delta type is not defined.

'A': The DataSource determines the delta with ALE update pointers. This method is used mainly in connection with DataSources for attributes and texts from SAP source systems.

'D': The SAP application writes the delta data records directly to the delta queue (push) for the DataSource. Each data record is either a) stored in the delta queue individually on saving/updating the corresponding transaction in the application (for example, FI-AR/AP or direct delta in the LO Cockpit), or b) written in groups of delta data records (after updating the transaction) to the delta queue by means of application-specific jobs.

'E': The DataSource determines the delta through the extractor on request. This means that the extractor must be capable of providing the delta records for the DataSource on request (pull).

'F': The delta data records are loaded by flat file. This delta type is only used for DataSources of flat file source systems.

ROCANCEL and 0RECORDMODE:
The field ROCANCEL, which is automatically part of the DataSource, stores the record mode on the R/3 side, based on the delta process type of the DataSource.

This DataSource field is assigned to the InfoObject 0RECORDMODE in the BI system.

Mapping between Delta Indicators:
To check how these fields are mapped, double-click the transfer structure of the DataSource to get the screen below.
In BI you can use 0RECORDMODE or 0STORNO to map the ROCANCEL field.

From the above screen we can see that it is a direct mapping.

ROCANCEL Values:

We can analyze the possible values by looking at the data in the delta queue.
Go to RSA7, select any delta queue and click the "Display data records" button. You will then get the screen below.

If you want to display more records, change 1000 to 99999 and execute. You will get the data shown in the screen below.

In the above screen the first field is ROCANCEL. From this screen we can see the different values:

' ' - After image

'X' - Before image (not visible in this screenshot)

'R' - Reverse image (after image with reversed signs)

Apart from these three, some more values can appear in the ROCANCEL field in specific situations. For SD transaction data you will find the following values in the ROCANCEL field:

'U' - becomes an after image with minus key figures in BI.

'V' - becomes a remove (deletion record) with plus key figures in BI.

'W' - becomes a before image with plus key figures in BI.

These three values are only required for the internal conversion of key figures. This conversion occurs during extraction to BI, so you will not find these values in BI; they are visible only in RSA7 on the R/3 system.

0RECORDMODE Values:


Go to the change log table of a DSO; on the content screen, press F4 on the selection for the record mode field and you will get the list below.

' ': The record provides an after image. The status of the record is transferred after it has been changed, or after data has been added.

'X': The record provides a before image. The status of the record is transferred before it has been changed or deleted. All attributes for the record that can be aggregated (key figures) must be transferred with a reversed plus/minus sign. These records are ignored in a non-additive (overwriting) update of a DataStore object. The before image complements the after image.

'A': The record provides an additive image. Differences are provided for all numeric values (key figures). The record can be updated to an InfoCube without restrictions, but requires an additive update when updating a DataStore object.

'D': The record must be deleted. Only the key is transferred. This record (and therefore the DataSource too) can only be updated to a DataStore object.

'R': The record provides a reverse image. The content of this record is equivalent to a before image. The only difference occurs when updating a DataStore object: An existing record with the same key is deleted.

'N': The record provides a new image. The content of this record is equivalent to an after image without a before image. A new image should be transferred instead of an after image when a record is created. The new image complements the reverse image.
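As a short illustration (values invented for this example): suppose a record with key 4711 is loaded into a standard DSO with quantity 10 and is later changed to 15. On activation, the change log receives a before image ('X') with quantity -10 and an after image (' ') with quantity +15, so an InfoCube supplied from this change log ends up with the correct net value of 15; an additive image ('A') would instead carry only the difference +5.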

Possible Scenarios to update Delta Records into Data Targets


Here we will have a look at the most commonly used delta process types and how a particular record looks for each of them.

The following are the most commonly used delta process types:

ABR - provides after, before and reverse images

AIE/AIM - provides after images only

ADD - provides additive images

Now let's take an example of a simple sales order, as below.
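The original example is shown as a screenshot; as a hedged illustration with invented values, assume a sales order item whose quantity is changed from 10 to 15. A DataSource with the ABR delta process sends a before image (quantity -10) and an after image (quantity +15), and would send a reverse image (-10) if the item were cancelled. With AIE/AIM only the after image (quantity 15) is sent. With ADD a single additive record carrying the difference (+5) is sent.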

Based on the properties of the DataSource, we have to design the data flow in our system. The cases below illustrate how to choose data targets based on the DataSource delta process.

Case1: If the DataSource sends both the before image and the after image, this combination can be loaded to any InfoCube or DataStore object. If the overwrite data setting was made for DataStore objects, only the after image (the last image) arrives in the activation queue table of the DataStore object. If settings are made in the DataStore object so that data is added, both the before and the after image are necessary to load the data correctly to the target.

Case2: If the data that fills the BI system is an additive image, the data can be written to an InfoCube or a DataStore object. With a DataStore object, the update type for key figures must be set to add and not overwrite, however.

Case3: If the DataSource only sends the after image, this must first be updated to a DataStore object that is in overwrite mode.

Case4: Reverse images can be processed by all targets.

Case5: Delete images can only be processed by a DataStore object. InfoCubes cannot process deletions.
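Summarizing the five cases above as a rough overview:

- After + before image (Case 1): any InfoCube or DataStore object (overwrite or addition).

- Additive image (Case 2): InfoCube, or DataStore object with the key figures set to addition.

- After image only (Case 3): first a DataStore object in overwrite mode; an InfoCube can then be supplied from that DSO.

- Reverse image (Case 4): all targets.

- Delete image (Case 5): DataStore objects only; InfoCubes cannot process deletions.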
