BODS is an ETL tool; using it, we can extract, transform, and load data from source to target.
Local repository
Central repository
3. Define job, workflow, dataflow?
Job: a job is the smallest unit of work that we can execute and schedule.
Workflow: a workflow is a container that groups dataflows and defines the order in which they execute.
Dataflow: a dataflow is the place where the actual ETL happens. Inside the dataflow we can apply logic using different transforms.
4. What is a datastore?
A datastore is the connection between BODS and a source or target system (a database or an application); through it we can import metadata and read or write data.
Local variable: a local variable is not constant throughout the job and exists only inside the job in which it is declared.
Substitution parameter: a substitution parameter is constant throughout the environment. For example, if we use the same file path in every job in our repository, we can create that file path as a substitution parameter.
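The idea of a substitution parameter, defined once per environment and shared by every job, can be sketched in Python. The dictionary name `SUBST_PARAMS`, the `$$FilePath` key, and the path below are invented for illustration and are not BODS syntax:

```python
# A substitution parameter is defined once per environment
# and read by every job in the repository.
SUBST_PARAMS = {"$$FilePath": "/data/inbound"}  # hypothetical shared path

def job_a():
    # Both jobs build their file names from the one shared path.
    return SUBST_PARAMS["$$FilePath"] + "/customers.csv"

def job_b():
    return SUBST_PARAMS["$$FilePath"] + "/materials.csv"
```

Changing the one entry in `SUBST_PARAMS` repoints every job at once, which is exactly why a shared file path belongs in a substitution parameter rather than in each job.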
It takes multiple sources; we can merge the data from multiple tables and generate a single output.
For all inputs, the column structure (names, number of columns, datatype, length) must be the same.
The generated output has the same structure and datatypes as the inputs.
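This merge behaviour (identical input structures, rows stacked into one output, like a SQL UNION ALL) can be sketched in Python; the table contents and column names below are invented:

```python
def merge(*inputs):
    # All inputs must have the identical column structure (names, order),
    # mirroring the requirement above; rows are then concatenated as-is.
    columns = inputs[0]["columns"]
    for src in inputs:
        if src["columns"] != columns:
            raise ValueError("merge inputs must have identical columns")
    return {"columns": columns,
            "rows": [row for src in inputs for row in src["rows"]]}

t1 = {"columns": ["MATNR", "WERKS"], "rows": [("M1", "P1")]}
t2 = {"columns": ["MATNR", "WERKS"], "rows": [("M2", "P2")]}
```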
1. Select distinct:
If we check this box, we can avoid duplicate records coming from the source.
If we check this option, we can remove duplicate records before loading the data into the target, but the condition is that a key must be enabled.
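The effect of removing duplicates while keeping the first occurrence of each record can be sketched in Python:

```python
def select_distinct(rows):
    # Drop duplicate records while preserving source order,
    # like ticking the "select distinct" option described above.
    seen = set()
    out = []
    for row in rows:
        if row not in seen:
            seen.add(row)
            out.append(row)
    return out
```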
3. Join rank
4. No. of loaders
5. Degree of parallelism
Using this option we can define the number of parallel loaders for the target table.
Default value: 1
Maximum value: 5
12. What is rows per commit?
Using this option we can define the number of records sent to the target table in a single commit.
Using this option we can define the number of records fetched from the source table in a single call.
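The rows-per-commit idea (commit once per batch of records instead of once per record) amounts to chunking the row stream; a minimal Python sketch:

```python
def batches(rows, size):
    # Yield fixed-size chunks: with rows per commit = size, the loader
    # would issue one database commit per chunk rather than per row.
    for i in range(0, len(rows), size):
        yield rows[i:i + size]
```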
Using this option we can split a single dataflow into multiple parallel dataflows.
For example, with a degree of parallelism of 3, all the operations inside the dataflow are executed in 3 parallel instances.
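Running the same transform over the rows with three workers side by side can be sketched in Python; the `transform` function below is an invented stand-in for the dataflow's logic:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(row):
    # Stand-in for the work done inside the dataflow.
    return row * 10

def run_dataflow(rows, degree_of_parallelism=3):
    # Each worker processes a share of the rows in parallel;
    # map() still returns the results in input order.
    with ThreadPoolExecutor(max_workers=degree_of_parallelism) as pool:
        return list(pool.map(transform, rows))
```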
We give a high join rank to the table that has a huge amount of data and a lower join rank to the table that has less data.
While performing joins, the engine considers the highest-ranked table as the main (driving) table and joins the other tables to it.
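This join-rank advice matches a classic hash-join pattern: build a lookup on the small (low-rank) table and stream the large (high-rank) table through it. A Python sketch with invented column names:

```python
def hash_join(big_rows, small_rows, key):
    # Build a hash table on the smaller input (low join rank), then
    # drive the join from the larger input (high join rank).
    lookup = {row[key]: row for row in small_rows}
    joined = []
    for row in big_rows:
        match = lookup.get(row[key])
        if match is not None:
            joined.append({**row, **match})
    return joined
```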
LPAD:
If we want to add anything at the start of a column value, we can use the LPAD function.
RPAD:
If we want to add anything at the end of a column value, we can use the RPAD function.
LTRIM:
The LTRIM function is used to trim a given value from the left side of a column.
RTRIM:
The RTRIM function is used to trim a given value from the right side of a column.
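Rough Python equivalents of these four functions, for illustration only (the BODS built-ins have their own signatures, and the stdlib methods used here differ in edge cases):

```python
def lpad(value, size, pad):
    # Pad on the left up to the requested size, e.g. lpad('7', 3, '0').
    return value.rjust(size, pad)

def rpad(value, size, pad):
    # Pad on the right up to the requested size.
    return value.ljust(size, pad)

def ltrim(value, chars):
    # Strip the given characters from the left side of the value.
    return value.lstrip(chars)

def rtrim(value, chars):
    # Strip the given characters from the right side of the value.
    return value.rstrip(chars)
```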
If we want to display a result based on conditions, we can use this function.
LTRIM_BLANKS:
This function is used to trim blanks from the left side of a column.
RTRIM_BLANKS:
This function is used to trim blanks from the right side of a column.
Using this function we can extract part of a string according to a given position.
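The blank-trimming and position-based substring behaviour can likewise be approximated in Python (note that the BODS substring function counts positions from 1, so the sketch shifts to Python's 0-based indexing):

```python
def ltrim_blanks(value):
    # Remove leading spaces only.
    return value.lstrip(" ")

def rtrim_blanks(value):
    # Remove trailing spaces only.
    return value.rstrip(" ")

def substr(value, start, length):
    # 1-based start position, as in BODS: substr('MATERIAL', 1, 3).
    return value[start - 1:start - 1 + length]
```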
This function is used to display a value from another table, based on some key values.
In the Query editor, go to Functions and select lookup_ext; a pop-up appears in the top left corner, where we select the table from which we need to get the data.
After that, in the input column we give the condition (the matching value), and in the output we select the value we need to populate. Then click Finish, and it will generate the syntax automatically.
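Functionally, this returns a value from another table where a key matches; the table and column names below are invented to sketch that behaviour in Python:

```python
def lookup(lookup_rows, key_col, result_col, key_value, default=None):
    # Return result_col from the first row whose key_col matches
    # key_value, like looking up a description for a code.
    for row in lookup_rows:
        if row[key_col] == key_value:
            return row[result_col]
    return default

plants = [{"WERKS": "1000", "NAME1": "Hamburg"}]
```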
1. Designer
2. Repository
3. Job
4. Workflow
5. Dataflow
6. Datastore
7. Transforms
25. What are the types of joins used in BODS and SQL?
Mandatory validation:
In mandatory validation, we take the primary key column; the primary key column should not be null.
For example, take the MARA table, in which the material number is the primary key. While doing the validation, we first choose the material number column, tick its "is not null" option, and then click OK.
Other than primary key columns, there are some columns that are important for IDocs; for those columns we can implement the same validation process.
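A not-null rule like this splits rows into a pass set and a fail set; a Python sketch, with `MATNR` standing in for the material number column:

```python
def validate_not_null(rows, key_col):
    # Pass rows where the primary-key column is filled; fail the rest,
    # like a validation rule stating the key "is not null".
    passed = [r for r in rows if r.get(key_col) not in (None, "")]
    failed = [r for r in rows if r.get(key_col) in (None, "")]
    return passed, failed
```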
Check table validation:
In check table validation we do plant validation. For example, take the MARC table; for that we need to extract the T001 table and keep it in a temporary table. Using this table we can check the plant numbers. While doing the validation, we first choose the MARC.WERKS column, tick the "exists in table" option, give the temporary table name, and then click OK.
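The "exists in table" rule reduces to a membership test against the check table's values; a Python sketch with invented plant numbers:

```python
def validate_exists_in(rows, col, check_values):
    # Pass rows whose value exists in the check table (held here as a
    # set of valid values); fail the others.
    valid = set(check_values)
    passed = [r for r in rows if r.get(col) in valid]
    failed = [r for r in rows if r.get(col) not in valid]
    return passed, failed
```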
Custom validation:
In the MARC table there are two columns: minimum lot size and maximum lot size.
The minimum lot size should always be less than the maximum lot size.
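A custom rule comparing the two columns can be sketched in Python; `BSTMI` and `BSTMA` are assumed here as the MARC column names for minimum and maximum lot size:

```python
def validate_lot_sizes(rows, min_col="BSTMI", max_col="BSTMA"):
    # Custom rule: minimum lot size must be less than maximum lot size.
    # Column names are assumptions; adjust to the actual table layout.
    passed = [r for r in rows if r[min_col] < r[max_col]]
    failed = [r for r in rows if not (r[min_col] < r[max_col])]
    return passed, failed
```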
We have to take all the validated preload tables and connect each table to a Query transform. Inside the query we map the input schema to the output schema according to the IDoc segments. After that, all the queries are connected to a single Query transform, in which a new header data table is generated; in this table we map the IDoc parameters. This Query transform is then connected to the IDoc.
Client, IDoc type, message type, message function, sender port, sender partner type, sender partner number, receiver port, receiver partner type, receiver partner number.