1. Transactional applications:
2. Analytical applications:
It is responsible for analysis and review to understand or improve any product, any
process, any business.
It generates reports according to previous (historical) data.
Ex. Suppose I have an Airtel plan of Rs. 598. In this plan I get 84 days of unlimited calls and 1.5 GB
data per day. Suppose I recharge my phone through the PhonePe app; then I get a 5%
discount on this Rs. 598 plan. The analysis shows that if the company sells this plan with a 5% discount, then
customers increase by 7%. Also, if the company sells this plan with a 10% discount, then customers
increase by 10-15%. According to this type of analysis, the company can improve its business
process.
Only the business team uses analytical applications.
Ex. call log analysis (Worldometers.info/corona)
Q…Why do we need a data warehouse??
1. Performance may be hampered if the reports are created from the transactional
application database:
While using any application, performance is the primary concern. So it is not advisable
to create reports and run the fundamental business process from one database; it
may hamper the performance of the system. To overcome this problem, we use
another database, which is also called the data warehouse / analytical application
database. This database is totally dedicated to reporting purposes.
The data from the transactional application database is copied to another database, i.e. the data
warehouse. The analysis workload and the transactional workload are separated into two different
databases, and the different databases hold different data. (Here, a server is nothing but the database.)
2. Development and design are time-consuming and difficult when we have multiple
data stores:
If the data lives in multiple places, creating a report is difficult and time-
consuming. The standard and good practice is to create a central repository, i.e. a data
warehouse, and store all the data in it. Then we have all the data in a
single place, and creating reports from that place is not difficult. Data can be
stored in different databases, but creating reports from different databases or multiple
data sources is difficult, and the development and design are time-consuming.
3. Data is not fit for analysis:
We are not changing the meaning of the data; we are only changing its format. While storing
data in the data warehouse, we store it in a single uniform format. That is, we convert
the data into one uniform format before storing it in the data warehouse, and we
manipulate the data according to our needs by putting it into the one uniform format we want. This
conversion of the data is called data transformation. For data transformation, i.e. converting and
manipulating the data, we need the data warehouse. The data warehouse stores the data in one single
uniform format, and that is what is displayed in the report.
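The idea of converting every source's format into one uniform format can be sketched in a few lines. This is a minimal illustration, not any specific ETL tool; the source systems, column name and date formats are made up:

```python
from datetime import datetime

# Hypothetical source rows: the same kind of date arrives in a different
# format from each transactional system.
source_rows = [
    {"system": "sales",   "order_date": "05/04/2023"},   # dd/mm/yyyy
    {"system": "finance", "order_date": "2023-04-05"},   # yyyy-mm-dd
    {"system": "crm",     "order_date": "05-Apr-2023"},  # dd-Mon-yyyy
]

KNOWN_FORMATS = ["%d/%m/%Y", "%Y-%m-%d", "%d-%b-%Y"]

def to_uniform(date_str):
    """Convert any known source format to one uniform yyyy-mm-dd format."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(date_str, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError("Unknown date format: " + date_str)

transformed = [to_uniform(r["order_date"]) for r in source_rows]
print(transformed)  # all three rows now carry the same uniform date
```

In a real warehouse this kind of conversion happens in the staging area before the load, but the principle is the same: many source formats in, one uniform format out.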
Q…What is the architecture / technical flow / data flow / high-level design of a data warehouse??
There are four layers in DWH architecture:
1. Data Source Layer
2. Data Staging Area
3. Data Storage Layer
4. Reporting Layer
Data Source Layer: refers to various data stores in multiple formats like relational
databases, flat files, Excel files, XML files etc. These stores hold business data like sales,
customer, finance, product etc.
After that, the next step is Extract, where the required data from the data source layer is extracted and
put into the data staging area.
Data Staging Area: the intermediate layer between the data source layer and the data storage layer,
used for processing data during the ETL process. It minimises the chances of data loss.
The staging area is basically used to hold the data and to perform data transformations before
loading the data into the data warehouse.
The actual transformation of transactional data into analytical data is done in the data staging
area.
Data Storage Layer: i.e. the data warehouse, the place where the successfully cleaned, integrated,
transformed and ordered data is stored in a multi-dimensional environment.
Now the data is available for analysis and query purposes.
Reporting Layer: in this layer, the data in the data storage layer is used to create various types of
management reports, from which users can take business decisions for planning, designing,
forecasting etc.
Meta Data Repository: metadata is nothing but data about data. The metadata repository is used
to store metadata about the data which is actually present in the data warehouse, i.e. the data storage layer.
The metadata repository works like an index.
Data Mart: a data mart can be defined as a subset of the data warehouse. A data mart is focused on
a single functional area, e.g. product, customers, employees, sales, payment etc. It is a subject-
oriented database.
(The same flow is sometimes described with three layers:)
- Staging Layer: in the staging layer, or source layer, the data extracted from
multiple data sources is stored.
- Data Integration Layer: the integration layer plays the role of transforming data from the
staging layer and loading it into the database layer.
- Access Layer: also called the dimension layer, it allows users to retrieve data for
analytical reporting and information retrieval.
Partial dependency:
When a non-key column depends on only a part of the (composite) primary key, so the same
data ends up stored in the table again and again.
Redundancy:
Saving one type of data again and again.
Partial dependency will create redundancy.
Q…What is Normalization?
Normalization is the process of efficiently organizing the data in the database. It is done by the
database architect.
Normalization is used to minimize redundancy. It is also used to eliminate undesirable
characteristics like insertion, updation and deletion anomalies.
Normalization divides a larger table into smaller tables and links them using relationships.
Ex. the following table is in denormalised form
1st Normal Form (1NF): it states that an attribute of a table cannot hold multiple values. It
must hold only a single value.
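A small sketch of bringing a table into 1NF, using Python's built-in sqlite3 module purely for illustration (the customer/phone table and its data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized table: the phones column holds multiple values in one cell,
# which violates 1NF.
cur.execute("CREATE TABLE customer_raw (cust_id INTEGER, name TEXT, phones TEXT)")
cur.execute("INSERT INTO customer_raw VALUES (1, 'Asha', '9876500001,9876500002')")
cur.execute("INSERT INTO customer_raw VALUES (2, 'Ravi', '9876500003')")

# 1NF version: one phone number per row.
cur.execute("CREATE TABLE customer_phone (cust_id INTEGER, phone TEXT)")
for cust_id, name, phones in cur.execute("SELECT * FROM customer_raw").fetchall():
    for phone in phones.split(","):
        conn.execute("INSERT INTO customer_phone VALUES (?, ?)", (cust_id, phone))

rows = cur.execute("SELECT cust_id, phone FROM customer_phone ORDER BY phone").fetchall()
print(rows)  # each attribute now holds a single value
```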
Q…Have you ever seen de-normalized data and its normalized data??
Yes,
De-normalized data:
its normalized data:
OLTP vs OLAP:
Schema: logical partition in database
Default schema name: dbo
Data models :
- Data model tells how the logical structure of a database is designed.
- Data models define how data is connected to each other and how it will be processed and stored
inside the system.
- Types of Data Models:
i. Conceptual Data Model
ii. Logical Data Model
iii. Physical Data Model
2. The logical data model is created by the data architect and business analyst; the physical data
model is created by the database administrator and developer.
3. The objective of the logical data model is a technical map of rules and data structure; the
objective of the physical data model is to implement the actual database.
4. The logical data model is simpler; the physical data model is more complex than the logical one.
1..Star schema:
It is the simplest form of dimensional model.
In a star schema design, the central table is called the fact table and the radially connected tables are
called dimension tables.
It is known as a star schema because the entity-relationship diagram looks like a star.
Dimension tables in a star schema are in de-normalized form.
A star schema is good for data marts with simple relationships.
In the following star schema example, the fact table is at the center; it contains keys to every
dimension table, like Dealer_ID, Model_ID, Date_ID, Product_ID, Branch_ID, and other attributes
like units sold and revenue.
2..Snowflake schema:
The process of normalizing the dimension tables is called snowflaking.
The ER diagram of this schema looks like a snowflake, so it is called a snowflake
schema. The snowflake schema is an extension of the star schema.
Dimension tables are in normalized form.
Advantages
1. Data integrity is maintained because of the structured data.
2. Data is highly structured, so it requires little disk space.
3. Updating or maintaining snowflaked tables is easy.
Disadvantages
1. Snowflaking reduces the space consumed by dimension tables, but the space
saved is usually insignificant compared with the entire data warehouse.
2. Due to the number of tables added, you may need complex joins to
perform a query, which will reduce query performance.
4. In a star schema, both the fact table and the dimension tables are in de-normalized form; in a
snowflake schema, the fact table is in de-normalized form but the dimension tables are in
normalized form.
5. A star schema contains one dimension table for each dimension; a snowflake schema can contain
more than one dimension table for each dimension.
We can categorise facts according to how the facts in the table behave with the dimensions in the
table:
1…Additive facts:
Additive facts are facts that can be summed up across all of the dimensions in the fact table.
2…Non-additive facts:
Non-additive facts are facts that cannot be summed up across any of the dimensions present in the
fact table. A ratio is an example of a non-additive fact.
3…Semi-additive facts:
Semi-additive facts are facts that can be summed up across some of the dimensions in the
fact table, but not all.
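The additive/semi-additive distinction can be shown on a toy fact table. A classic semi-additive fact is an account balance: it can be summed across accounts for one day, but summing it across dates double-counts the same money. The fact table below is made up for illustration:

```python
# Toy fact table: daily account balances. 'balance' is semi-additive,
# 'deposits' is fully additive.
fact_rows = [
    {"date": "2023-05-01", "account": "A", "balance": 100, "deposits": 100},
    {"date": "2023-05-01", "account": "B", "balance": 200, "deposits": 200},
    {"date": "2023-05-02", "account": "A", "balance": 150, "deposits": 50},
    {"date": "2023-05-02", "account": "B", "balance": 200, "deposits": 0},
]

# Valid: sum the semi-additive fact across the account dimension for one day.
total_on_day2 = sum(r["balance"] for r in fact_rows if r["date"] == "2023-05-02")

# Valid: sum the additive fact across every dimension.
total_deposits = sum(r["deposits"] for r in fact_rows)

# Invalid for a semi-additive fact: summing balance across dates
# double-counts the same money, so this number is meaningless.
wrong_total = sum(r["balance"] for r in fact_rows)

print(total_on_day2, total_deposits, wrong_total)  # 350 350 650
```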
Q…Types of Dimensions
Dimensions are categorised according to how the data is stored in the data
warehouse / data mart / database.
SCD0:
- In type 0, no special action is performed upon dimensional changes.
- Once we enter data into the table, it cannot be changed. If we try to change the data,
it will show an error. The error will be shown while executing the ETL code.
SCD1:
- Old data is replaced with the new data. History data is not stored.
- This type is easy to maintain and is used for data whose changes are caused by processing
corrections (e.g. removing special characters, correcting spelling errors).
SCD2:
- A new row is created for the new data in the same table.
- Current data and history data are present in the same table.
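An SCD2 load can be sketched in a few lines. This is a minimal illustration using Python's sqlite3 module, not any specific ETL tool; the dim_customer table, its columns and the '9999-12-31' open end date are assumed for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
    customer_id  INTEGER,                            -- business key
    city         TEXT,
    start_date   TEXT,
    end_date     TEXT,
    is_current   INTEGER)""")

def scd2_load(customer_id, city, load_date):
    """Type-2 load: expire the current row and add a new row on change."""
    row = conn.execute(
        "SELECT customer_key, city FROM dim_customer "
        "WHERE customer_id = ? AND is_current = 1", (customer_id,)).fetchone()
    if row and row[1] == city:
        return  # no change, nothing to do
    if row:  # expire the old version
        conn.execute("UPDATE dim_customer SET end_date = ?, is_current = 0 "
                     "WHERE customer_key = ?", (load_date, row[0]))
    conn.execute("INSERT INTO dim_customer "
                 "(customer_id, city, start_date, end_date, is_current) "
                 "VALUES (?, ?, ?, '9999-12-31', 1)", (customer_id, city, load_date))

scd2_load(101, "Pune", "2023-01-01")
scd2_load(101, "Mumbai", "2023-06-01")   # city change: new row, old row expired
history = conn.execute("SELECT city, is_current FROM dim_customer "
                       "WHERE customer_id = 101 ORDER BY customer_key").fetchall()
print(history)  # current and history rows live in the same table
```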
SCD3:
- A new column is added.
- Current and history data are kept in the same table and the same row.
- History is limited: it stores history, but only the previous value.
- Storing of history data depends on the column structure maintained in the table.
- This is the least commonly needed technique.
SCD4:
- Separate tables are there for current data and history data.
- The 'main' table keeps only the new data (current data).
- A separate historical table is used to store all historical changes for each dimension.
SCD6:
- It is a combination of types 1, 2 and 3 (1+2+3=6).
- In this type we have additional columns in dimension table such as
Current_Address, Current_Year : for keeping current value of the attribute.
Previous_Address, Previous_Year : for keeping historical value of the attribute.
Current_Flag : for keeping information about the most recent record.
Q…In your current project, which SCDs are used?
SCD1 & SCD2. But I am aware of the other SCDs also.
2..Conformed dimension:
Conformed dimensions are dimensions which have been designed in such a way that
the dimension can be used across many fact tables (data marts) in different subject areas of the data
warehouse. That means shared dimensions are called conformed dimensions.
3..Degenerate dimension:
Degenerate dimensions are dimensions which are directly present in the fact table, not in a
separate dimension table.
4..Junk dimension:
When a group of independent dimensions is stored in one separate dimension table, that
combined dimension is called a junk dimension.
Business key:
A business key uniquely identifies a row in the table.
Business keys are a good way of avoiding duplicate records.
When we don't want the data in the table to change, we use a business key.
Natural key :
1. When a primary key is made up of real data in the table, that primary key is called a natural primary key.
2. For example, in an HR database, the Employees table has an 'Employee_id' column which is unique
and not null, and we can call this real data because every employee is identified by their
Employee_id.
3. Here we can make Employee_id the primary key in the Employees table, and this primary key is
called a natural primary key.
Surrogate key:
1. Sometimes in a database table we cannot make a primary key from the real data.
2. In this situation, we have to add one artificial column to the table which has unique and not
null values, and make this column the primary key of the table.
3. This primary key, generated from an artificial column, is called a surrogate key.
Ex.
If we have to maintain history in the employee table, then Emp_id should not be the primary key
column in the table, because there are chances of duplicates in the Emp_id column.
To resolve this problem, we add an EMP_KEY column which has unique, not null
values. We make EMP_KEY the primary key of the employee table, and this primary key
is called a surrogate key.
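The EMP_KEY idea from the example above, sketched with sqlite3 (the employee history table and its data are made up; in SQLite an autoincrement column plays the role of the surrogate key):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Emp_id (the natural/business key) can repeat once we keep history,
# so the surrogate emp_key column becomes the primary key instead.
conn.execute("""CREATE TABLE employee_hist (
    emp_key INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
    emp_id  INTEGER,                            -- natural key, may repeat
    dept    TEXT)""")
conn.execute("INSERT INTO employee_hist (emp_id, dept) VALUES (7, 'Sales')")
# Same employee again with a new department: allowed, because the
# primary key is the surrogate, not Emp_id.
conn.execute("INSERT INTO employee_hist (emp_id, dept) VALUES (7, 'Finance')")

rows = conn.execute("SELECT emp_key, emp_id, dept FROM employee_hist").fetchall()
print(rows)  # [(1, 7, 'Sales'), (2, 7, 'Finance')]
```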
30 APRIL
Loading types:
1. Initial load
2. Incremental load
3. Full load
1..Initial load: when the source data is loaded into the target system for the first time, that type of
load is called an initial load.
2..Incremental load: only newly added data and existing modified data are loaded into the target
system.
3..Full load: first truncate all the data from the target table and then reload it from the source. It is
used when most of the records have changed recently. It is a rarely used type of loading.
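The difference between the initial and incremental load can be sketched with sqlite3. The src/trg tables, the modified_on column and the dates are assumed for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (id INTEGER, val TEXT, modified_on TEXT)")
cur.execute("CREATE TABLE trg (id INTEGER, val TEXT, modified_on TEXT)")

# Day 1: initial load copies everything from source to target.
cur.executemany("INSERT INTO src VALUES (?,?,?)",
                [(1, "a", "2023-05-01"), (2, "b", "2023-05-01")])
cur.execute("INSERT INTO trg SELECT * FROM src")

# Day 2: incremental load picks up only rows newer than the last load.
cur.execute("INSERT INTO src VALUES (3, 'c', '2023-05-02')")
last_load = "2023-05-01"
cur.execute("INSERT INTO trg SELECT * FROM src WHERE modified_on > ?", (last_load,))

count = cur.execute("SELECT COUNT(*) FROM trg").fetchone()[0]
print(count)  # target now holds all 3 rows without re-copying day-1 data
```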
04 may
ETL code:
Extract means reading data from the source.
After extracting data from the source, the ETL code is executed for data transformation.
ETL code is executed when the source is not busy with other work.
Ex. during banking working hours the ETL code is not executed
(similar to how a WhatsApp backup runs at night, around 2 am).
Suppose data arrives in the source on day 1; then it is transferred to the target on day 2.
ETL Job:
For automatic execution of the ETL code, another piece of code is written; that code is called an ETL job.
The ETL job is made by the developer.
The environment in which the developer works is called the dev environment.
The environment in which the tester works is called the test environment.
The developer and tester work on data called test data.
After that the application goes to the end user, and that environment is called the production / live
environment.
The data created by end users is called live data or real data.
Real data of one company is not given to another company because of security reasons.
05 may
Mapping document: (also called the God's document / data lineage document)
- The mapping document is made by the architect with the help of the BA (business analyst).
- The mapping document contains complete information about the source and the target.
The template of the mapping document contains the following info:
1. Mapping no: for unique identification of each row
2. Source table
3. Source column
4. Data type
5. Target table
6. Target column
7. Data type
8. Transformation logic: the logic required for converting data from the format of the source
system into the required format of the destination system
When the data from the source is transferred directly into the target without any transformation
logic, it is called direct mapping / straight mapping / 1:1 mapping. Ex. Created_on, Modified_on
Reference table: to identify from which table the row/data is coming
ETLupdatetime: at what time the data was updated in the target
Created_on: at what time the data was created in the source
Modified_on: when the data was last modified in the source
2. In ETL testing, we validate the data which is loaded from the transactional system into the
analytical system.
1..Metadata testing:
- In metadata testing, we have to validate the physical model against its logical model.
- Metadata testing involves verification of :
Table Name
Column name
Column data type
Column data length
Constraints
-- In data completeness testing, when there is an incremental load (SCD2) in the target system, the
count of the source and target tables will not match. In this case we have to check only the latest
records of the target table:
SELECT COUNT(*) FROM CUSTOMER_TRG
WHERE CUSTOMERKEY IN (SELECT MAX(CUSTOMERKEY)
                      FROM CUSTOMER_TRG GROUP BY CUSTOMERID);
--- In data completeness testing, when we have more than one source table and we have to load all
of these source tables' data into one target table, we use the mapping document, in which the
reference table name is given, for matching the count of source records and target records.
There are two tables, table 1 and table 2, having a PK and FK in the source. The data in table 1
will be loaded into the target first, due to the presence of the primary key, and then table 2's data
will be loaded into the target, due to the foreign key. The foreign key refers to the data in the table
where the primary key is defined. Table 1 is loaded by job 1 and table 2 by job 2, taking
5 min and 7 min respectively. As the data from table 1 must be loaded first, and table 2 only after
job 1 completes, it takes time to load the data into the target.
Due to the presence of the primary key and foreign key in the tables:
- We can't insert invalid data.
- We can't update to invalid data.
- We can't delete valid data that is still referenced.
When there is criticality and a time concern, we remove the foreign key and load the data into
both tables in parallel, with or without sequence. Any data can be inserted at any time into any
table. This saves time, and the two jobs do not have to wait for each other.
But now we might insert data into the child table whose parent row is not present.
(With the foreign key applied, we cannot make any changes using invalid references.)
If a primary key and foreign key are present, it takes time to run the job, i.e. to insert data into the
tables. If there are thousands of records, it will take even more time to insert the
data, because of the primary key and foreign key checks and because we need to add the data
sequentially to the tables. After removing the foreign key there is no waiting time for data
insertion, and no need to add the data sequentially either. So we can insert all the data into
the tables, and after loading all the data we can re-apply the foreign key. This saves
time and removes the need to follow the sequence. If in any case a foreign key value is present but
its parent key is missing, it will show an error, and we cannot apply the foreign key constraint in
this condition.
Those records whose parent is not available in the parent table are called orphan records, and we
can find these records with the query below.
Calculation:
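The orphan-record query itself is missing from the notes; a minimal sketch of the usual LEFT JOIN approach, run through sqlite3 (the customer/orders tables and their data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE orders (order_id INTEGER, cust_id INTEGER)")
cur.executemany("INSERT INTO customer VALUES (?)", [(1,), (2,)])
# Order 103 references customer 9, which does not exist: an orphan record.
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(101, 1), (102, 2), (103, 9)])

# LEFT JOIN keeps every child row; a NULL parent key marks the orphans.
orphans = cur.execute("""
    SELECT o.order_id
    FROM orders o
    LEFT JOIN customer c ON o.cust_id = c.cust_id
    WHERE c.cust_id IS NULL""").fetchall()
print(orphans)  # the one order whose parent customer is missing
```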
Every customer in the system should have an SSN; it is mandatory and should be not
null. If this rule is applied in the source, then it is fine. But if it is introduced only in the target, we
need to observe it very carefully. These data validation rules can be handled through constraints or
through code, and that code can be written on the database side or on the ETL side. It is not
necessary to cover these in metadata testing. Sometimes, if some restriction or validation is there,
you need to put in test data and observe the result to check whether it is enforced or not.
5. Performance Testing: Verifying that data loads into the data warehouse within
predetermined timelines to ensure speed and scalability.
The template of a test case contains the following (how is your organization's test case template?):
Test case ID: unique identification id for each test case
Type: whether it is positive or negative test case
Description/Objective: describes the test objective in brief
Prerequisite: conditions that must be met before the test case can be run, e.g. the user must be logged
in
Actions/Steps: list all the test execution steps in detail
Test data: the test data used as input for this test case; the list of variables and possible values used
in the test case. Examples: login id = {valid login id, invalid login id, valid email, invalid
email, empty}; password = {valid, invalid, empty}
Data validation query :
Expected result : what should be the system output after test execution. Describe the expected
result in detail including the message/error that should be displayed on the screen
Actual result : the actual test result should be filled after test execution. Describe the test
behaviour after test execution
Status : if the actual result is not as per the expected result then mark this test as failed.
Otherwise , update it as passed
22 may……
Environments in software development and testing:
1. Development /local environment
2. Test environment / SIT / QA (quality assurance) / Staging environment
3. UAT-Alpha environment
4. UAT-Beta/ pre-production environment
5. Production/live environment
1….Development environment:
- Development environment is also called local environment and Software Development
Environment (SDE).
- The development environment is used by developers to build the application.
- In development environment, developers are writing the code, testing the same so that it
can be deployed at Test Environment.
- This is organization level environment
- Testing done by developer is called WBT(white box testing)
- Entry criteria for development environment is,
1. Requirement gathering
2. Review
3. HLD (high level design)
4. LLD(low level design)
- Exit criteria for developer environment
Coding is completed and code review, WBT is done by development team
2….testing environment:
- The testing environment is used by software testers to test the application.
- Software builds are deployed in test environment by developers.
- Testing Environment is also called as Software Integration testing environment (SIT
environment) or Staging environment.
- Once the testers completed the software testing, software build is deployed to UAT
environment.
- This is organization level environment
- Entry criteria for testing environment is,
Coding is completed and code review, WBT is done by development team
- Exit criteria from testing environment is,
Test execution is completed with defect fix impact
3….UAT-Alpha environment:
- The UAT environment is used by the client-side testing team to test the application.
- The client is going to test the application before moving it to the next environment.
- Once the UAT-Alpha testing is done by the client, the application build is deployed to the next
environment.
- This is a client-side environment.
- Testing is done by a technical team and professional testers.
- Entry criteria for UAT-Alpha environment is,
Test execution is completed with defect fix impact
- Exit criteria from UAT-Alpha environment is,
Test execution is completed with defect fix impact in UAT-Alpha env
4….UAT-beta environment:
- It is also called the pre-production environment.
- It is used by the client / beta users / end-user-like people to test the application.
- The client is going to test the application before moving it to the production environment.
- Once the testing in the pre-production environment is done by the client, the application build is
deployed to the production environment.
- This is a client-side environment.
- Testing is done by non-professional testers.
- Entry criteria for UAT-beta testing environment is,
Test execution is completed with defect fix impact in UAT-Alpha env
- Exit criteria from testing environment is,
Test execution is completed with defect fix impact in UAT-beta env
5….production environment:
- The production environment is where users access the final code after all of the updates
and testing in all the previous environments; this one is the most important.
- The production environment is the final stage; it is used by end users.
- This is client side environment.
- Entry criteria for the production environment is,
Test execution is completed with defect fix impact in UAT-Beta env.
* The application server and database server are different for different environments.
Defect leakage: The defect which is missed from previous environment is called defect leakage
A big project is divided into different parts; those parts are called releases.
DF- defect fix, TE- test execution
Onshore projects & Offshore projects:
If the development, testing and UAT of one project are given to different organizations, then the
quality of the product increases and no dependency is created. But more money is required
and more time is consumed.
Q…Are you involved in UAT? / If there is a defect in UAT, then what will be your
approach as a tester?
If there is a defect at UAT (client side), then I will test for that same defect at SIT (software
integration testing). If after testing the defect is there, then I will inform the developer to fix it.
After the defect is fixed by the developer, I will retest it in SIT. If after retesting the defect is still
not fixed, the defect is sent to the developer again. If the defect is fixed, then I will ask the
developer to redeploy the new build at UAT.
The 2nd scenario is that if there is a defect at UAT, I will test for that same defect at SIT. If after
testing at SIT no defect is found, then I will inform the developer that the defect is not present
at SIT. Then the same defect is tested again at UAT. If the defect is there, the developer is
informed to fix it; once it is fixed, the developer is asked to redeploy the new build at UAT. In this
scenario the defect is raised because of a misunderstanding in requirements / an invalid case / junk data.
Defect life cycle:
If a defect is found, log the defect into JIRA / HP ALM. When the defect is logged, the status of
the defect is New. When the developer starts working on the defect, the status of the defect is Open.
At this stage there are 4 possible outcomes from the developer:
1. Fixed: the defect is fixed and ready for testing. If the defect is not actually fixed, it is reopened
and the developer is informed to fix it. If the defect is fixed, it is closed.
2. Rejected: the defect is rejected for any of several reasons, like duplicate defect, not a defect, or
not reproducible.
3. Duplicate: if one defect is reported 2 times, then that defect is marked as duplicate by the
developer.
4. Deferred: when the defect is not addressed in that particular cycle, it is deferred to a future
release.
Retesting:
- After defect is detected and fixed, the software should be retested to confirm that the
original defect has been effectively removed. This is called Re-Testing or Confirmation
Testing.
- In retesting, we have to run previously failed test cases again on the updated build to
verify whether the defects posted earlier are fixed or not.
- In simple words, Retesting is testing a specific bug after it was fixed.
- Defect verification comes under retesting.
- It is carried out before regression testing.
Regression testing:
- Regression testing is done to ensure that previously developed and tested code still performs as
expected after a change.
- Changes that may require regression testing include bug fixes, enhancements, and changes
because of new features.
- In regression, we re-run previously passed test cases on the updated build to
verify that the code changes don't affect any other part of the system.
- Every regression testing is retesting but every retesting is not regression testing.
Q..How do we identify the scope of regression testing? (where to do regression testing, how will
you identify it)
In regression testing we are checking the side effects of changing code, so the code changes are the
scope of regression testing. Code changes happen because of bug fixes, enhancements, and changes
because of new features.
Suppose we have three functionalities A, B and C, where B and C depend on each
other. In iteration 1, round 1 testing is done: functionalities A & C pass but B
fails. So in iteration 2, retesting is done only for functionality B; after retesting, B passes.
During the fix for B, changes in the code happen. So, to check the side effects of the changed
code, we do regression testing in iteration 3. In ideal regression testing we would check
only the B & C functionality, because B & C depend on each other. But in practical regression
testing we check the A, B and C functionality.
Smoke testing:
- Smoke Testing is nothing but the verification of basic functionality + troubleshooting the
defects in software build.(Need to check the major functionality of an application)
- No documents/test case for this kind of testing
- No defect is raised
- To check the stability of the software build
Ex1..login window: able to move to the next window with a valid username and password
on clicking the submit button
Ex2..user unable to sign out from the webpage
Sanity testing:
- Sanity Testing is a software testing technique which does a quick verification of the
quality of the software build to determine whether it is testable or not.
- Sanity testing is also called as level zero testing/ build health check up testing/ dry run /
build stability testing
- Sanity testing helps in quick identification of defect in core/basic functionality of an
application
- No documents/test cases for this kind of testing
- No defect is raised
- To Check the stability of the software build Sanity Testing involves :
Functional testing -
1. URL
2. User credentials
3. Broken Image
4. Broken links
5. Add data, Update data, Delete data, Search data
6. Pages Title
7. Main menu, submenu
- If sanity testing passes -- > detailed-level testing / start executing the test cases.
If sanity testing fails -- > the build is rejected and sent back to the dev team.
STLC(software testing life cycle):
ETL Testing Challenges :
Data loss during the ETL process.
Large volume of data.
Invalid and duplicate data at target system.
A large number of source data stores.
Source to target mapping information may not be provided to ETL Tester.
Does not have permission to execute ETL code/Job.
Unstable Testing environment.
ETL testing-
1) Check source file/table, target table
2) Check ETL code run / ETL job run.
3) Check record count between source and target system
BI Testing-
1) Report dashboard
2) Report Name
3) Reports fields
4) Report Type
Q…Difference between regression testing, smoke testing and sanity testing
- Regression testing: we do regression testing to check that previously passed test cases are still
working as expected after new code changes.
- Smoke testing: smoke testing is nothing but verification of basic functionality + troubleshooting
the defects in the software build.
- Sanity testing: sanity testing is a software testing technique which does a quick verification of the
quality of the software build to determine whether it is testable or not.
1..Stored procedure
What is Stored Procedure?
A stored procedure is a set of Structured Query Language (SQL) statements with an assigned
name, which is stored in a relational database management system (RDBMS) so it can be reused
and shared by multiple programs.
For ex: if you enter 'Pune' in the search box of RedBus and click on the search button, in the
backend a query is fired for this action, such as "select * from <table name> (having data of
Pune)", and sometimes a long query may be fired for this action. This is an action performed on
the frontend whose impact shows on the backend. The query for this may be long, so instead of
the long query we use the procedure name and execute it in place of the long query.
For ex.
Within one schema, a procedure name must be unique. We create a stored procedure for a
particular query and then use its name to call that set of statements. (This is without a parameter.)
Ex. with parameter:
We can pass a dynamic value into the procedure. We have to define the parameter with a data type.
For delete:
Drop procedure <procedure name>;
Have you ever interacted with stored procedures? Are you aware of stored procedures?
Yes, I have interacted with stored procedures, but indirectly. Some of the ETL code (migration
code) is written as stored procedures, and our task is to execute it. If there is any error in that
code, we inform the developer.
Exec Cust_search ;
drop procedure Cust_search ;
exec Cust_search ;
--
--2) With Parameter
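The with-parameter example is left blank in the notes; a minimal T-SQL sketch might look like the following. The procedure name Cust_search comes from the notes above, while the Customer table and its City column are assumed purely for illustration:

```sql
-- Hypothetical search procedure with one parameter (T-SQL sketch)
CREATE PROCEDURE Cust_search @City VARCHAR(50)
AS
BEGIN
    SELECT * FROM Customer WHERE City = @City;
END;

-- Call it with a dynamic value instead of writing the long query each time
EXEC Cust_search @City = 'Pune';
```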
Trigger:
What is a trigger?
A trigger is a special stored procedure that runs automatically when various events happen (e.g.
update, insert, delete).
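A trigger can be demonstrated in a few lines with sqlite3 (the emp/emp_audit tables and the salary values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, salary INTEGER)")
conn.execute("CREATE TABLE emp_audit (id INTEGER, old_salary INTEGER, new_salary INTEGER)")

# The trigger fires automatically on every UPDATE; nobody calls it explicitly.
conn.execute("""
    CREATE TRIGGER trg_salary AFTER UPDATE ON emp
    BEGIN
        INSERT INTO emp_audit VALUES (OLD.id, OLD.salary, NEW.salary);
    END""")

conn.execute("INSERT INTO emp VALUES (1, 50000)")
conn.execute("UPDATE emp SET salary = 60000 WHERE id = 1")

audit = conn.execute("SELECT * FROM emp_audit").fetchall()
print(audit)  # the audit row was written by the trigger, not by our code
```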
Views:
What is View?
A view is a virtual table based on the result-set of an SQL statement. We can treat a view as a table.
Types of views:
1) Simple view: created from only one table. We cannot use group functions like max(), count()
etc., and it does not contain grouped data. DML operations can be performed through a simple view.
2) Complex view: created from one or more tables. We can use group functions like max(),
count() etc., and it can contain grouped data. DML operations cannot always be performed through a
complex view.
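Both kinds of views can be shown with sqlite3 (the sales table and its data are made up; note that the complex view wraps an aggregate, which is exactly what makes DML through it impossible):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100), ("East", 150), ("West", 200)])

# Simple view: built from one table, no group functions.
conn.execute("CREATE VIEW v_east AS SELECT * FROM sales WHERE region = 'East'")
# Complex view: uses an aggregate and GROUP BY.
conn.execute("""CREATE VIEW v_totals AS
               SELECT region, SUM(amount) AS total FROM sales GROUP BY region""")

# Query both views exactly as if they were tables.
east = conn.execute("SELECT COUNT(*) FROM v_east").fetchone()[0]
totals = conn.execute("SELECT * FROM v_totals ORDER BY region").fetchall()
print(east, totals)
```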
Stored procedure vs trigger (continued):
3. We cannot write a trigger within a stored procedure, but we can call a stored procedure in a
trigger.
4. Stored procedures can be written for the database, while triggers are written for tables.
Stored procedure vs view:
2. A stored procedure cannot be used as a building block in a larger query; a view can be used as a
building block in a larger query.
3. A stored procedure can contain statements like if, else and loops; a view can contain only select
statements.
4. A stored procedure can perform modifications to tables; a view cannot perform modifications to
tables.
5. A stored procedure cannot be used as a target for insert, update or delete queries; a view can
sometimes be used as a target for insert, update or delete queries.
Q…What are the conditions under which you use dynamic cache and static cache in
connected and unconnected transformations?
- In order to update the master table and slowly changing dimensions (SCD) type 1, it is
necessary to use the dynamic cache.
- In the case of flat files, a static cache is used.
- Design Test Cases and Preparing Test Data: Step three includes designing ETL mapping
scenarios, developing SQL scripts, and defining transformation rules. Lastly, verifying
the documents against business needs to make sure they cater to those needs. As soon as
all the test cases have been checked and approved, the pre-execution check is performed.
All three steps of our ETL processes - namely extracting, transforming, and loading - are
covered by test cases.
- Test Execution with Bug Reporting and Closure: this process continues until the exit
criteria (business requirements) have been met. If any defects were found in the previous
step, they are sent to the developer for fixing, after which retesting is
performed; moreover, regression testing is performed in order to prevent the introduction
of new bugs during the fix of an earlier bug.
- Summary Report and Result Analysis: At this step, a test report is prepared, which lists
the test cases and their status (passed or failed). As a result of this report, stakeholders or
decision-makers will be able to properly maintain the delivery threshold by
understanding the bug and the result of the testing process.
Q…What is the difference between the STLC (Software Testing Life Cycle) and SDLC
(Software Development Life Cycle)?
- SDLC deals with development/coding of the software while STLC deals with validation
and verification of the software
Q…Using SSIS ( SQL Server Integration Service) what are the possible ways to update
table?
- To update table using SSIS the possible ways are:
Use a SQL command
Use a staging table
Use Cache
Use the Script Task
Use full database name for updating if MSSQL is used
Q…Explain what is data source view?
- A data source view allows us to define the relational schema which will be used in the
analysis services databases. Dimensions and cubes are created from data source views
rather than directly from data source objects.
Q…Explain what is the difference between OLAP tools and ETL tools?
- The difference between ETL and OLAP tool is that
- ETL is used for extraction of data from source system and load the data into target system
with some process.
- Example: Visual studio, Data stage, Informatica etc.
- While OLAP is meant for reporting purposes; in OLAP, data is available in a multi-dimensional
model.
- Example: Tableau etc.
Q…What are the various tools used in ETL?
- We have used only visual studio along with tableau for reporting purpose.