
Exercise: Loading data from ECC to HANA using BODS (Data Services)


Hi, today we will try to load data from one of the ECC tables into HANA using BO Data Services.

Steps:

Step 1: Create two datastores in BODS, one for HANA and one for ECC.

Step 2: Search for your table.

Step 3: Create a project, a job and a dataflow.

Step 4: In the dataflow, map the table to a Query transform and a template table. The template table corresponds to the target table in HANA.

Step 5: Validate and execute the job.

Step 6: Check the result in the HANA system.

Step 1: Create the datastore for the HANA system. Create your target connection on the sixth tab at the bottom (Datastores); this will be your mapping for the HANA system, where the data will be written. Call it vikg_hana1 and give the necessary details:

Datastore name: vikg_hana1

Datastore type: Database

Database type: SAP HANA

Database version: HANA 1.x

Database server name: saphan

Port: 30015

Username: Hana1, and enter the password.

Check the 'Enable automatic data transfer' checkbox.

Step 2: Create a datastore for the ECC source system. You can find these details in the SAP Logon pad:


Create a datastore with below details:

Datastore name: vikg_ecc

Datastore type: SAP application

Database server name: ECCEHP6

User ID (in my case 'Hana1') and password for the ECC system.

Click the Advanced button and give the system ID as 10.


Step 3: Search for your table. Double-click the vikg_ecc datastore just created.

Click the Repository Metadata radio button and search for VBAP in the external metadata.

When the search result shows VBAP, right-click it and import it.

Do the same for VBAK or any other table.

As you can see, we now have the sales order header and item tables, VBAK and VBAP, in the display.

Right-click the table > Properties > View Data tab, and check whether it displays data.

If data is displayed, your connection is OK.

This is the data we will transfer to HANA.

Step 4: Create a project, job and dataflow. Create a new project in the project area (from the New button on the application bar): ecc_vikg01.

Right-click the project and create a batch job ecc_job2.

Double-click the job ecc_job2 and drag and drop a dataflow from the button bar on the right side.

Call it df02 and double-click it.


Step 5: In the dataflow, map the source table to a Query transform and a template table. The template table corresponds to the target table in HANA.

Double-click df02, then:

1. Drag and drop the VBAP table icon you just imported from the ECC datastore 'vikg_ecc'.

2. Then drop a Query transform and a template table from the menu on the right side.

3. On the template table, give the table name ECC_BODS_VBAP and the HANA datastore 'vikg_hana1' we created earlier, and give the owner as your HANA schema name: vikasg. The table will be created in this schema.

4. Link the table to the Query transform, and the Query transform to the HANA template table.

5. Double-click the Query icon. Select the required fields from the table and bring them to the output schema on the right side. The Query-to-HANA field mapping happens automatically; if not, you need to do it manually.

6. Your screen will look like this. If you click the magnifier button on the table VBAP, it previews the table data at the bottom.


As you can see, the Query transform contains the fields, with VBELN and POSNR as primary key fields.


If you click the template table, it shows the fields you selected in the Query transform.

Step 6: Validate and execute the job. Right-click the batch job ecc_job2 > Execute, and select your dataflow df02 in the pop-up.

After a minute or two, the job will finish.

The job log reports that the job has completed successfully.

The second tab says 17,032 records were transferred.


Step 7: Check the result in the HANA system.

As you can see, the table 'ECC_BODS_VBAP' has been created in the schema 'vikasg'.

This is the same table name and schema we gave in the template table settings in BODS. The schema already existed in HANA beforehand, but the table is created at run time when the job completes successfully.
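If you want to double-check from the SQL console as well, here is a minimal sketch (schema and table names as used in this example; adjust to your system):

```sql
-- Hedged sketch: confirm the generated table exists and count the loaded rows
SELECT SCHEMA_NAME, TABLE_NAME
  FROM TABLES
 WHERE SCHEMA_NAME = 'VIKASG'
   AND TABLE_NAME  = 'ECC_BODS_VBAP';

SELECT COUNT(*) AS LOADED_ROWS
  FROM "VIKASG"."ECC_BODS_VBAP";
```

The row count should match the number of records reported in the BODS job log.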


Check the fields and their data types: VBELN, POSNR and NETWR had the data types varchar, numeric and decimal in the Query transform, and they appear as NVARCHAR and DECIMAL in the HANA system.
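To compare the generated data types yourself, you can query the column metadata from the SQL console; a minimal sketch with the names used above:

```sql
-- Hedged sketch: inspect the column data types that BODS generated in HANA
SELECT COLUMN_NAME, DATA_TYPE_NAME, LENGTH, SCALE
  FROM TABLE_COLUMNS
 WHERE SCHEMA_NAME = 'VIKASG'
   AND TABLE_NAME  = 'ECC_BODS_VBAP'
 ORDER BY POSITION;
```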

Now we will preview the data:

On the Runtime Information tab, you can see when the table was created and updated and how many records were transferred.

Similarly, you can upload the sales header (VBAK) data by creating a new dataflow (df01) and a new batch job (ecc_job) in the same project.

In the HANA system you can see the ECC_DS_HANA_VBAK table created below the ECC_BODS_VBAP table.

Note: when you run a particular job, you have to select the dependent dataflow.


Hope you find it useful.

Best regards,

Vikas

Mapping an SAP ECC IDOC to an SAP Business ByDesign A2X Service with HANA Cloud Integration


Business ByDesign comes with basic “out of the box” integration scenarios to SAP ECC. This includes, for example, master data integration for material and customer account data via IDOC XML, as described under the following link:

http://help.sap.com/saphelp_byd1502/en/PUBLISHING/IntegrationScenarios.html#MasterDataIntegrationwithSAPERP

While this enables very quick adoption of running integration scenarios, the lack of scenario extensibility can be a drawback when larger data sets have to be integrated.

This blog describes step by step, how to realize an alternative approach for master data replication based on an ECC IDOC and a ByD A2X Service. The IDOC is sent from SAP ECC as an XML / SOAP message to Hana Cloud Integration (HCI). HCI maps the IDOC XML to the ByD web service XML format and invokes the A2X service interface.

The pure mapping of data structures is very similar to e.g. the standard iFlows for master data replication between SAP ECC and SAP C4C. Specific care, however, needs to be taken with the web service response handling, due to the synchronous nature of A2X services. Adding additional field mappings to the project is then quite simple.

The material master replication based on the MATMAS05 IDOC type serves as an example for the implementation of this communication pattern.

The Eclipse project files that are required to run the example scenario are attached to this blog. The files need to be unzipped first, and then renamed from .txt to .zip. The resulting zip files can be imported as HCI Integration Projects into Eclipse.

ByD Communication Arrangement and WSDL File for A2X Service

To obtain a web service endpoint that can be invoked from an external system (in this case from HCI), a Communication Arrangement needs to be created in the ByD system (WoC view: Application and User Management > Communication Arrangements). This in turn requires a Communication Scenario to be created, which contains the Manage Material In service, and a Communication System instance, which represents the calling system.

The WSDL file of the service endpoint can then be downloaded from the Communication Arrangement screen:


ALE Scenario Configuration and WSDL File for IDOC XML Message

Standard ALE Configuration is required to enable sending of IDOCs for material master records from ECC. This includes the following entities:

Logical System that represents the receiving ByD system

Distribution Model

Change Pointer Activation

Partner Profile

ALE Port

HTTP Connection

The ALE Port has to be of type XML HTTP, see transaction WE21:


The HTTP connection has to be of type “G” – “HTTP Connection to External Server”; see transaction SM59:


Here, the host name of the HCI worker node, and the URL path as configured in the HCI iFlow (see below), have to be maintained. The URL path has to have the prefix /cxf/. SSL needs to be active for the connection.

The WSDL file for the IDOC basic type (MATMAS05) or a related extension type can be created with ABAP report SRT_IDOC_WSDL_NS directly in the ECC system. The report program is provided with SAP Note 1728487.


Integration Flow Project in Eclipse

The Eclipse plugin for development of HANA Cloud Integration content needs to be installed, as documented for example under the following link:

https://tools.hana.ondemand.com/#hci

Then an Eclipse HCI project of type “Integration Flow” can be created via the project wizard:


The project will finally contain at least the following entities:

iFlow definition

Message Mapping definition

WSDL files that describe the two XML signatures to be mapped

The following picture proposes an iFlow design to support the communication pattern that is discussed here:


This example iFlow contains the following elements:

  • 1. Sender System

Named SAP_ECC in the example.

Specifies the credentials to be used for service invocation on HCI (basic authentication vs. client-certificate-based authentication).

  • 2. Sender SOAP Channel

Specifies the protocol (plain SOAP) and the endpoint URL path for service invocation on HCI. This has to match the URL path maintained in the SM59 connection on ECC (see above).


The plain SOAP protocol is chosen instead of the IDOC SOAP protocol to be able to influence the response that is sent back to ECC based on the response received from ByD.

  • 3. Content Modifier A

Reads the IDOC number from the IDOC XML payload through an XPath expression and stores it as a Property in the context of the iFlow execution.

  • 4. Message Mapping

Defines the mapping of the IDOC XML structure to the A2X service XML structure. The two corresponding WSDL files that have been extracted as described above, have to be uploaded to the message mapping for this purpose:


The mapping definition itself can mostly be done by dragging source elements to target elements in the graphical mapping editor.


The mapping definition that is part of the project file attached to this blog contains all element mappings provided by the standard material master integration scenario, as described under the following link:

http://help.sap.com/saphelp_byd1502/en/ktp/Software-Components/01200615320100003517/BC_FP30/ERP_Integration/Subsidiary_Mapping_Info/WI_MD_Mapping_Info.html

In addition, the gross weight is mapped to the quantity characteristic of the ByD material, which is a frequently requested feature.

  • 5. Request-Reply Service Call

Is used to handle the response of the synchronous web service call and transfer it to subsequent processing steps of the iFlow.

  • 6. Receiver SOAP Channel

Specifies the communication protocol that is used for the call to ByD (plain SOAP), the end point URL to be called, and the credentials to be used for authentication at the ByD system.


In this case, basic authentication with user and password is used. The credentials object is deployed to the tenant (see “Deployed Artifacts”).

  • 7. Receiver System

Named SAP_ByD in the example.

Represents the target ByD system but doesn’t contain any additional relevant information.

  • 8. Exclusive Decision Gateway


Defines the route to be chosen for further processing, depending on the web service response content.

The “Success” route is chosen by default.

The “Error” route is chosen if the ByD response contains a log item with severity code 3 (= error), which is the case if an application error has occurred. This information is extracted from the response payload through an XPath expression. Technical errors (e.g. due to wrong message payload syntax) are returned as SOAP faults with HTTP code 500. Such errors are directly handed back to the sender (i.e. here the ECC system) by the iFlow processing logic.

9. Content Modifier B

Assembles the IDOC XML response message in the success case. The IDOC ID (DOCNUM) in the sender system is read from the property, which has been set by Content Modifier A for this purpose.


10. Content Modifier C

Assembles an XML response message for the error case. The message contains the IDOC number in the ECC sender system and error information as received in the web service response issued by the ByD tenant.


The IDOC number is read from the iFlow properties and written into a header variable of the content modifier entity, since the subsequent storage operation can only access header variables to fill the Entry ID, but no property variables.

The error text is again extracted from the web service response payload through an XPath expression.


11. Data Store Operation (Write)

Persists the error response message in the tenant’s data store. It is saved with the IDOC number as Entry ID.


12. Final “Channel”

Indicates that a response is sent back to ECC, but contains no additionally relevant information.

Value Mapping Project in Eclipse

Value mappings need to be defined in a separate Eclipse HCI Integration Project of type “Value Mapping”.


The value mappings are then provided in the automatically generated file “value_mapping.xml”.


For the example project that is developed here, a value mapping is required only to map the ECC Material Group ID (MARA-MATKL) to a ByD Product Category ID. The respective mapping XML entry looks as follows:


…and can be accessed from the message mapping definition as follows:


Trigger Material Master Replication

Create / change material master in ECC (transaction MM01/MM02):


…including basic data text:


Trigger Replication (transaction BD10):


Check that IDOC has been transferred successfully (transaction WE05 or BD87).

Verify the replication result in the ByD tenant:


Check Error Handling

An application error on the ByD application side can be caused, e.g., by changing the value mapping from ECC Material Group Code to ByD Product Category so that it returns a target value that doesn’t exist in the ByD system.

Based on the iFlow design that is discussed here, the IDOC processing will go into an error status on the ECC sender side in such a case. To obtain the error description, there are two options:

  • 1. In ECC, transaction SRTUTIL, display the failed service call in the error log. The response message payload contains the error information as assembled by the iFlow in the “Error” route:

  • 2. Access the tenant data store in Eclipse, and download the message that has been written into the data store:


6 Comments


Excellent kick-start into ByD integration scenarios using HCI. Already recommended your article 3 times

Nice blog.

Hi Stefan, I have a question: does this work with S/4HANA?


Thanks for sharing. Nice blog!

Murali


Hi Stefan,

How do I handle the ByD response? I have used a Router after the Request-Reply step. I am trying to capture the error in the Router's error branch by using the XML condition Expression = /n0:MaterialBundleMaintainConfirmation_sync_V1/Log/MaximumLogItemSeverityCode=3

Error:

Message processing failed.

org.apache.camel.builder.xml.InvalidXPathExpression: Invalid xpath: //n0:MaterialBundleMaintainConfirmation_sync_V1/Log/MaximumLogItemSeverityCode = 3. Reason: javax.xml.xpath.XPathExpressionException: net.sf.saxon.trans.XPathException: Prefix n0 has not been declared, cause: net.sf.saxon.trans.XPathException: Prefix n0 has not been declared

Hi

Thanks for the great article.

I am facing one issue here. I am using the standard B2B integration scenario of ByD with SAP ERP (S/4HANA), and I am getting the following error:

SOAP:1.023 SRT: Processing error in Internet Communication Framework: (“ICF Error when receiving the response: ICM_HTTP_SSL_ERROR”)

Clearly it looks like an SSL certificate error. I have uploaded the certificate from the Communication Arrangement to SAP ERP and also uploaded the certificate from SAP ERP to the Communication Arrangement, but it looks like something is still wrong with the certificates. Can you briefly advise which certificates are mandatory for this integration and how to fix this error? Thank you very much.

Loading data from flat file into SAP HANA DB: the simplest way with SPS04

Hi Everyone,

In this document, I would like to explain how the data can be uploaded into HANA DB through the “Data From Local File” option which is a new feature of HANA Studio Revision 28 (SPS04).

Pain in the past:

So far, uploading data into HANA from a flat file has not been a straightforward approach.

The user has to create a control (.ctl) file, place the flat file along with the control file on the server, and execute SQL scripts to import the data.

Or some users would have used SAP Information Composer to achieve this.
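For reference, a minimal sketch of that older server-side approach (all paths, schema and table names are made up for illustration, and the exact control-file keywords can differ between revisions):

```sql
-- Hedged sketch of the pre-SPS04 control-file approach; names/paths are illustrative.
-- Contents of /data/sales.ctl on the HANA server might look like:
--   IMPORT DATA
--   INTO TABLE "MYSCHEMA"."SALES_UPLOAD"
--   FROM '/data/sales.csv'
--   RECORD DELIMITED BY '\n'
--   FIELDS DELIMITED BY ','
--   OPTIONALLY ENCLOSED BY '"'
--   ERROR LOG '/data/sales.err'
-- The flat file and the control file are placed on the server, then from the SQL console:
IMPORT FROM CONTROL FILE '/data/sales.ctl';
```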

Instant Solution:

Now everything is made simple through the “Data From Local File” import option. Through this option one can upload data from xls, xlsx and csv files.

How to:

Now we will see how this can be done.

  • 1. Go to Quick Launch -> Content section -> Import -> SAP HANA Content -> Data From Local File, as shown below

  • 2. Click Next

  • 3. Select the target system where you want to import the data and click Next

  • 4. Specify the required details in the following dialog:


Source File:

Browse for the source file (the one you want to upload).

File Details:

Select the field delimiter (comma, semicolon or colon) and select from which worksheet of your file you want to import the data. If the file has a single sheet, this option is disabled.

Import all data: With this option you can either import all the data from the flat file or specify a line range (for example, import the data from line number 5 to 50).

Header row exists: If the flat file has a header row, mention the header row number here. If there is no header row, simply uncheck this checkbox.

Target Table:

New: If you want to create a new table in the HANA DB with the data from the flat file, select this option. Specify the schema name under which you want to create the table and the name of the table.

Existing: If you want to append the flat file data to an existing table, go for this option.

  • 5. Click Next


Here the source and target fields are mapped by default. If the user wants to change this, they can delete the join and remap to other columns.

If the user wants to change the order (up or down) of the fields, insert new fields or delete existing fields, this can be done with the icons in the top right corner.


The user can set the store type to either Row Store or Column Store through the drop-down option. If the store type is set to Column Store, the primary key must be set by selecting the respective Key Column checkbox.

The user can enter a description for each target field, if desired. A preview of the file data is shown in the lower portion of the screen.
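For orientation, the table the wizard generates with the 'New' option and a key column corresponds roughly to an ordinary column-store table with a primary key; a minimal SQL sketch (schema, table and column names are purely illustrative):

```sql
-- Hedged sketch: roughly what the wizard's "New" target table amounts to.
-- Schema, table and column names are illustrative only.
CREATE COLUMN TABLE "MYSCHEMA"."SALES_UPLOAD" (
    "ORDER_ID"   NVARCHAR(10),
    "ORDER_DATE" DATE,
    "NET_VALUE"  DECIMAL(15,2),
    PRIMARY KEY ("ORDER_ID")
);
```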


6. Click Next

This screen shows how the data will be stored in the target table in HANA: just a preview of the target table.

  • 7. Click Finish

  • 8. Check the Job Log Entry

  • 9. Now go to the Navigator pane -> Catalog folder -> the schema you selected as target -> Tables, and look for the table you mentioned

10. Do the Data Preview of this table
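The same check can also be done from the SQL console; a minimal sketch (replace the schema and table with the names you chose in the wizard):

```sql
-- Hedged sketch: preview the first rows of the freshly loaded table
SELECT TOP 100 * FROM "MYSCHEMA"."SALES_UPLOAD";
```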


This is the simplest and most straightforward way to load data from a local flat file into SAP HANA.

I hope this feature will be very helpful for end users to instantly push data from their local system into the HANA DB without much effort.

Thanks for reading it. Please share your comments.

Rgds,

Murali


Muralikrishnan E May 11, 2012 at 08:04 AM


Former Member replied May 11, 2012 at 08:27 AM: Murali, you have done a great job. HANA is very straightforward as you said.

Former Member replied May 11, 2012 at 15:52 PM: Good blog Murali, I like that!!! I always prefer simple ways for the best practice...

Former Member replied May 11, 2012 at 17:54 PM: Good write up... how easy it is now...

Rama Shankar replied May 15, 2012 at 18:20 PM: Murali, good write-up. Thanks for putting the steps together for SPS04. I am excited and will wait for SPS04. I do have a few questions on date handling:

1 - Is there any specific reason why you chose NVARCHAR for the date, other than to quickly show the load feature? What if we have to do date arithmetic in the front end?

2 - What is your recommendation for modeling date and time fields for large flat file loads?

Please share your experiences with SPS3 and how it might change in SPS4. Thanks, Rama


Muralikrishnan E replied May 16, 2012 at 08:45 AM: Hi Rama, thanks for your comments. 1. Just to show one example, I put it as NVARCHAR; there is no other reason behind that.

Actually, during file upload the system decides the data type based on a few initial records, but the user is allowed to change it as needed during upload.

2. You can change the data type accordingly (e.g. to DATE).

SPS4 is similar to SPS3 with very few enhancements, like flat file upload, Calc View variables, hierarchy creation UI, creation of hierarchies in calculation views, etc.

Actually there is a released document available for SPS04. If I find it, I will share it with you. Rgds, Murali


Rama Shankar replied May 17, 2012 at 02:14 AM Thanks Murali!


Former Member replied May 24, 2012 at 12:06 PM Murali, well done. Thanks

Some questions: Insertion speed? Max file size? Number of rows?

Is it faster than other SAP tools or third-party ones? Timing? Best regards, GFRA


Muralikrishnan E replied May 24, 2012 at 12:30 PM: Hi, thanks for your comments. The limitations would be:

The data type for the table is determined based on an initial set of records from the input file, not considering the data types of all the records.

The data load is considerably faster, but not as quick as tools like SLT or Data Services.

When you make sure that the main memory of the server is bigger than 5 times the table size, then this export might work properly.

Rgds, Murali


Former Member replied May 24, 2012 at 12:37 PM: Thanks. Rgds, Gustav


Former Member replied June 15, 2012 at 09:13 AM: Hi Murali,

I have exported a table from another schema to the desktop and tried to load it using "Import data from local file". Everything went fine, but during activation it throws errors:

batch from Record 4347 to 6519 Failed: For input string: "20090327145007.7" java.lang.NumberFormatException: For input string: "20090327145007.7" at java.lang.NumberFormatException.forInputString(Unknown Source) at java.lang.Integer.parseInt(Unknown Source) at java.lang.Integer.parseInt(Unknown Source) at com.sap.ndb.studio.bi.filedataupload.deploy.populate.PopulateSQLTable.populateTable(PopulateSQLTable.java:63) at com.sap.ndb.studio.bi.filedataupload.deploy.job.FileUploaderJob.uploadFlatFile(FileUploaderJob.java:186) at com.sap.ndb.studio.bi.filedataupload.deploy.job.FileUploaderJob.run(FileUploaderJob.java:59) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)

Can you please help me resolve this issue? The table I am importing is VBAK from the ECC schema, and I am using the cloudshare environment.

Regards,
Sriram


Former Member replied June 15, 2012 at 09:17 AM: Hi, have a look at the error, 20090327145007.7. It looks like you have a date and an amount field grouped into one. Date: 20090327, Amount: 145007.7. Check your csv file...


Former Member replied June 15, 2012 at 09:34 AM: Hi Pravin, I am getting one more error here, as below:

Batch from Record 1 to 3000 Failed: [301]: unique constraint violated (input position 349) com.sap.db.jdbc.exceptions.BatchUpdateExceptionSapDB: [301]: unique constraint violated (input position 349)

Please help me out with this issue. Regards, Sriram


Muralikrishnan E replied June 15, 2012 at 09:49 AM: Hi Sriram, maybe you are loading duplicate values for the field that is marked as the primary key. Check it out. Rgds, Murali
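If part of the file did load, a quick way to see which key values occur more than once is a grouped count; a minimal sketch (schema and table names are illustrative, VBELN being the sales document key in this example):

```sql
-- Hedged sketch: find key values that appear more than once in the loaded data
SELECT VBELN, COUNT(*) AS CNT
  FROM "MYSCHEMA"."VBAK_UPLOAD"
 GROUP BY VBELN
HAVING COUNT(*) > 1;
```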


Former Member replied June 05, 2012 at 10:55 AM: Good blog Murali,

The steps seem simple enough; however, I've already been giving this a try on the SAP sandbox, but once I select 'Next' at step 4 (i.e. the 'Define Import Properties' step) I keep getting an "Insufficient privileges - User Pxxxxxx does not have INSERT and CREATE ANY privilege(s)" message. I believe this is an authorization issue, but the In-Memory Developer Center team think it is not. Has anyone encountered a similar issue? If so, any ideas on how to resolve it will be appreciated. Regards, Eni


Muralikrishnan E replied June 05, 2012 at 15:12 PM: Hi Eni,

You will get this error if the user doesn't have the INSERT privilege for the schema into which they are trying to write the data.

Rgds, Murali
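For reference, a minimal sketch of the grants an administrator might issue in such a case (schema and user names are illustrative):

```sql
-- Hedged sketch: grant the privileges the import wizard needs on the target schema.
-- Run as the schema owner or a user with sufficient admin rights.
GRANT INSERT ON SCHEMA "MYSCHEMA" TO PXXXXXX;
GRANT CREATE ANY ON SCHEMA "MYSCHEMA" TO PXXXXXX;
```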


Jody LeBlanc replied June 14, 2012 at 00:57 AM Hi Eni, Please refer to this thread: http://scn.sap.com/message/13302132#13302132 Jody


Former Member replied June 10, 2012 at 20:43 PM: SAP HANA Developer Studio is 1.0.27 now, but the HANA server is 1.0.31.


Former Member replied June 11, 2012 at 10:23 AM: Thanks Murali, very useful one.


Former Member replied June 13, 2012 at 17:45 PM Thanks Murali, very nice and very informative


Manish Umarwadia replied June 14, 2012 at 04:53 AM Hi,

The control file approach may be necessary when your input file exceeds a certain number of rows. Just through some trial and error, I think it is ~75000 rows. It errors out beyond that.

Is there a mechanism to automate this approach or the control file approach for recurring flat file loads? This is assuming no data services in the picture.
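For recurring loads without Data Services, one option that can be scripted from outside (e.g. via hdbsql and an OS scheduler) is the server-side IMPORT statement; a minimal sketch with illustrative paths and names (the file must be accessible to the HANA server):

```sql
-- Hedged sketch: a server-side CSV import that an external scheduler could run repeatedly
IMPORT FROM CSV FILE '/data/sales.csv'
  INTO "MYSCHEMA"."SALES_UPLOAD"
  WITH RECORD DELIMITED BY '\n'
       FIELD DELIMITED BY ','
       ERROR LOG '/data/sales.err';
```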


Former Member replied June 14, 2012 at 12:40 PM: Manish, there aren't any scheduling features within HANA as yet. A discussion on this subject: https://www.experiencesaphana.com/thread/1452

Have had the same problem loading big files - either it breaks or runs for very long.

Another problem is the lack of flexibility with assigning source to target fields. For instance, if you wish not to update the last column in the table from the file, you cannot do it without changing the name of the last column in the table (and then mapping by the name rule), whereas with the control file approach you could hardcode the columns you wish to update and omit the last column in the IMPORT statement. Expecting to see the flexibility aspect and more data transformation abilities added in the upcoming releases.


Former Member replied September 10, 2012 at 12:16 PM: Hi all, I am trying to load a .csv file using the approach mentioned above, but I do not see the 'table name' option mentioned in step 4. I only see the drop-down for 'schema', and the 'Next' and 'Finish' options are disabled. I only see the 'New' option; the 'Existing' option is not there. Not sure if I am missing something. Please assist.


Former Member replied September 10, 2012 at 12:56 PM: Hi Saurabh,

If you are working in the cloudshare environment, select the "existing" schema by selecting the Existing tab; there you can find the table name that you have created. If you haven't created any table structure, then you can't find the name.

Regards,
Sriram Gandham


Former Member replied September 10, 2012 at 13:04 PM: Thanks Sriram. I only see the 'New' option; the option to choose an existing schema is not there, although the table exists in the system. Thanks.


Former Member replied September 10, 2012 at 13:27 PM: Hi Saurabh,

As you are in the cloudshare environment, you can follow the procedure below to upgrade your system to the latest version; this might help.

Please revert your cloudshare desktop to the latest snapshot, which includes an upgrade to Revision 31 of the HANA studio. Select Actions -> Revert at the top right of the screen when you log in to the HANA cloudshare. This will delete anything you’ve saved to your cloudshare desktop, so back up anything you want to keep.

As the problem is within the studio, try this alternative; it might work. Regards, Sriram Gandham


Former Member replied September 10, 2012 at 14:25 PM: Thanks Sridhar. The problem is still the same. Could it have something to do with authorisations? Thanks.


Muralikrishnan E replied September 10, 2012 at 14:43 PM: Hi Saurabh,

If you are creating a new table and trying to load the csv data, go for the first option: select the schema name and mention the name of the table. Of course, you should have write permission on the selected schema to create a table under it.

If you already have some data in an existing table and want to add more data to it, go for the second option, where you select the table and load the data.

Rgds, Murali


Former Member replied September 10, 2012 at 15:00 PM

Thanks Murali. I can create tables as well as populate them with SQL Statements but am not able to do a file upload from the cloud desktop. Some options in the screens do not seem to be appearing, as mentioned in one of my earlier messages. Thanks.


Muralikrishnan E replied September 10, 2012 at 15:19 PM: Hi Saurabh,

This sounds weird, and I have never used this cloudshare environment before. But ideally, if you are able to create and load data through SQL, this should also work.

Let's hear from others. Rgds, Murali


Former Member replied September 11, 2012 at 12:33 PM

Hi Murali, I checked with a colleague's id and he is also facing the same issue. Not sure if we are missing some steps. Thanks.


Rama Shankar replied September 11, 2012 at 15:58 PM Saurabh:

The cloudshare may not be in SP4. That is why you may be experiencing issues. Check the version of the HANA studio and DB version in cloudshare.

In the AWS environment, you can upgrade to the latest patch, but in cloudshare you get what they have for all users by default.

Hope this helps. Regards, Rama


Bill Ramos replied October 22, 2012 at 01:14 AM: Is there any way to upload a pipe (|) delimited file with this tool? It looks like I need to first import into Excel and then import using an XLS file. This doesn't scale well for big files. Any tips on how to add |-delimited files to the list of supported options would be a big help.


Former Employee replied November 27, 2012 at 12:29 PM: Hello Mr. Murali,

The above post is very informative. Well done!

  • I have some queries; it would be helpful if you can give the solution.

  • I was working as an SAP CRM analyst and have now moved to SAP BI.

  • I started learning BI recently.

Can you just give me a brief idea of the connection between BI-BW and HANA? If possible, please provide a glance at the steps involved in HANA reporting. Best Regards, aLBi


Muralikrishnan E replied December 04, 2012 at 14:17 PM: Hi, for BI reporting on HANA, refer to http://www.youtube.com/watch?v=no4bph5VXek. Rgds, Murali


Former Member replied January 15, 2013 at 00:20 AM: Hi,

  • I could import a date field after I formatted it to the "yyyy-mm-dd" format and converted it to text before loading the flat Excel file.

  • I am getting an error on the file load when there is a 'time' field, even after formatting it in the "HH24:MI:SS" default format and converting it to a text field. I thought a process similar to the date one would work for time. The error on import is a "date, time or timestamp" data conversion error.

Has anyone faced a similar situation?
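For reference, the format masks involved can be tried out directly in the SQL console; a minimal sketch (the literal values are illustrative):

```sql
-- Hedged sketch: check which date/time literals and format masks HANA parses
SELECT TO_DATE('2013-01-15', 'YYYY-MM-DD') AS D,
       TO_TIME('14:30:00',  'HH24:MI:SS')  AS T
  FROM DUMMY;
```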


Former Member replied March 15, 2013 at 19:14 PM: Hello folks,

I spun up an SAP HANA One instance on AWS. I tried to follow the steps mentioned above to load data from a csv file on my desktop into this HANA instance. I got to the step where I need to select the csv file on my desktop and create a new table in an existing schema on HANA. Clicking on 'Next' basically does NOT do anything; it does NOT take me to the next screen with the mappings.

Has anybody seen this? Any ideas on how to resolve it? Thanks, Deepak


Former Member replied March 26, 2013 at 10:31 AM: Hi Murali,

Nice blog, very straightforward and useful for beginners like me. I am also trying to upload files using the instructions given in the blog. It worked for various tables, but I am stuck at one table which has a date column. I am getting the error below:

java.lang.IllegalArgumentException at java.sql.Date.valueOf(Date.java:138) at com.sap.ndb.studio.bi.filedataupload.deploy.populate.PopulateSQLTable.populateTable(PopulateSQLTable.java:85) at com.sap.ndb.studio.bi.filedataupload.ui.job.FileUploaderJob.uploadFlatFile(FileUploaderJob.java:198) at com.sap.ndb.studio.bi.filedataupload.ui.job.FileUploaderJob.run(FileUploaderJob.java:61) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:53)

I have tried passing various formats in the csv file, but it is not accepting them. I have tried the format shown in your screenshots, but that also did not work. Can you please let me know how to find out the exact date format to be passed in the csv?

Thanks and Regards, Reena


Muralikrishnan E replied March 26, 2013 at 12:57 PM: Hi Reena, replied to your mail. Rgds, Murali


Former Member replied April 08, 2013 at 14:38 PM: Hi Murali,

Thanks a lot for your post. Just two questions on the import wizard of SAP HANA:

- What happens if some records have errors: will the load be fully rejected, or only the records that do not go through? Do you have a document on this?

- Also, which would be faster: the import wizard of SAP HANA or loading with Data Services? In case you have any benchmark, I am interested.

Many thanks,
Elsa


Muralikrishnan E replied April 08, 2013 at 15:06 PM: Hi Elsa,

If some records have errors, it will throw an error and the entire set will be ignored. It is atomic (either all or none).

Using Data Services involves additional installation and licensing, so it is better to do it with HANA Studio, but it seems that there is a limitation on the number of records when loading data in HANA Studio (http://scn.sap.com/community/developer-center/hana/blog/2012/12/07/how-to-load-a-flat-file-using-aws-hana-when-hana-studio-just-wont-cut-it).

Rgds, Murali