
WATERFALL MODEL (also called Basic Model, Traditional Model or Sequential Model)

Advantages:
1. Since Requirement changes are not allowed, the downward flow of defects is less.
2. Quality of the s/w will be good.
3. It is a very simple model.
4. Advantageous for Developers (less recoding).
5. Initial investment is low.

Disadvantages:
1. Requirement changes are not allowed.
2. Developers were involved in testing.
3. If there is a defect in the Requirement, it will flow till the end of Development.
4. Requirement collection and design are not tested.
5. It is not flexible because Requirement changes are not allowed.
6. Total investment is high.
7. Here the customer has to wait for a long period to see the software.

Usage:
1. Small and short term projects.
2. When the customer says they won't change the Requirement.

SPIRAL MODEL (also called Incremental Model or Iterative Model)

Advantages:
1. Requirement changes are allowed after each cycle.
2. The customer gets to see the s/w after each cycle.
3. Testing is done in every cycle.
4. The Spiral model is a controlled model.
5. Investment is done in a proper way.

Disadvantages:
1. Requirement and design are not tested.
2. Developers were involved in testing.
3. Requirement changes are not allowed in the middle of a cycle.
4. Every cycle looks like a waterfall model.

Usage:
1. When there is dependency between modules.
2. When the customer gives Requirements in stages.

V & V Model 1.Total time taken is less since testing starts at early 1.Initial investment is less. 1.Complex Projects.
(Verification and stage. 2.Documentation is more. 2.Long term projects.
Validation) 2.Requirement changes are allowed. 3.Managing interaction between test engr and 3.When customer want high quality
3.Requirement and design are tested. Developer is difficult. software within short period of time.
4.Total investment is less.
5.Each and every stage is tested.
6.Testing starts at early stage downward flow of
defects is less.
7.Deliverables are parallel.

PROTOTYPE MODEL (also called Dummy Model)

Advantages:
1. The customer will get to know how the s/w looks at an early stage.
2. There will be improved communication between the customer and the Developers, and it will be helpful for the Developer to develop according to the customer Requirement.
3. We can set high expectations for the customer.
4. Requirement changes are allowed.
5. It is easy to handle Requirement changes.

Disadvantages:
1. Total time taken is more.
2. Total investment is high.
3. There will be a delay in releasing the s/w.
4. Requirement and design are not tested.

Usage:
1. When the customer is new to the business.
2. When the customer is not aware of the complete Requirement.
3. When the Developers are new to the business.
HYBRID MODEL
Combining two or more models into a single model is a hybrid model.

Usage:
1. Spiral + V&V
   a) When there is dependency between modules.
   b) When the customer gives Requirements in stages.
   c) Complex projects.
   d) Long term projects.

2. Spiral + Prototype
   a) When there is dependency between modules.
   b) When the customer gives Requirements in stages.
   c) When the customer is new to the business.
   d) When the customer is not aware of the complete Requirement.

AGILE MODEL

Principles of Agile:
1. Our highest priority is customer satisfaction through quick delivery of s/w.
2. Req can be changed at any point of the development process.
3. Releases should be short.
4. There will be good communication b/w the customer, BA, Development & TE, and they communicate very frequently.
5. It is a simple model to adapt.
6. Developers, TE & BA will have frequent meetings to improve the process.

Disadvantages:
1. For critical & complex projects it is difficult to give an effort estimation.
2. Less scope for documentation.
3. If the customer is not aware of the req, then we will mess up the project.
4. To work in the agile process we require experienced resources.
WATERFALL MODEL vs SPIRAL MODEL
1. Waterfall: Req changes are not allowed. Spiral: Req changes are allowed after each cycle.
2. Waterfall: Testing is done after coding is completed. Spiral: Testing is done in each cycle.
3. Waterfall: Customer gives req at once. Spiral: Customer gives req in stages.
4. Waterfall: Customer can see the s/w only after complete coding and testing. Spiral: Customer can see the s/w after each cycle.
5. Waterfall: No iteration, as the complete s/w is developed in one cycle. Spiral: There is iteration in each cycle.
6. Waterfall: It's a simple model. Spiral: It's a controlled model.
7. Waterfall: Total investment is high. Spiral: Investment is done in a proper way.

WATERFALL MODEL vs V&V MODEL
1. Waterfall: Req changes are not allowed. V&V: Req changes are allowed.
2. Waterfall: Developers were involved in testing. V&V: Test engineers do the testing.
3. Waterfall: Not much documentation. V&V: Documentation is more.
4. Waterfall: TE work only after coding is completed. V&V: Testing is done at each and every stage.
5. Waterfall: Req and design are not tested. V&V: Req and design are tested.
6. Waterfall: It is not flexible. V&V: It is flexible.
7. Waterfall: Initial investment is low. V&V: Initial investment is high.
8. Waterfall: Testing is done after coding is completed. V&V: Testing is done at each and every stage.
9. Waterfall: Short term projects. V&V: Long term and complex projects.

WHITE BOX TESTING vs BLACK BOX TESTING (a minimal sketch follows the table)
1. WBT: Code is visible. BBT: Code is not visible.
2. WBT: Developers do WBT. BBT: Test engineers do BBT.
3. WBT: Here the dev checks the logic of the code. BBT: The test engineer checks the functionality of the application.
4. WBT: The Developer gives input to the source code and checks the output according to the customer req. BBT: The test engineer gives input to the application and checks the output according to the customer req.
5. WBT: Programming knowledge is required. BBT: Programming knowledge is not required.
6. WBT: The person should have knowledge of the internal design. BBT: The TE does not require knowledge of the internal design.
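To make the difference concrete, here is a minimal, hypothetical sketch in Python: apply_discount() and its discount rule are invented for illustration, not taken from any real application. The white-box tests target the branches visible in the code, while the black-box test is written purely from the stated requirement.

```python
# Hypothetical component: members get 10% off orders of 100 or more.
def apply_discount(amount: float, is_member: bool) -> float:
    if is_member and amount >= 100:        # branch 1: discount applies
        return round(amount * 0.9, 2)
    return amount                          # branch 2: full price

# White-box view: the tester sees the code, so each branch gets a test.
def test_member_branch_applies_discount():
    assert apply_discount(100.0, is_member=True) == 90.0

def test_non_member_branch_pays_full_price():
    assert apply_discount(100.0, is_member=False) == 100.0

# Black-box view: only the requirement ("members get 10% off from 100 onwards")
# is known; inputs and expected outputs come from the requirement alone.
def test_requirement_member_gets_ten_percent_off():
    assert apply_discount(200.0, is_member=True) == 180.0
```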

SMOKE vs SOAK
1. Smoke: Testing the basic & critical features of an application before thorough or rigorous testing is known as Smoke testing. Soak: Testing the stability and response time of an application by applying load continuously for some period of time.
2. Smoke: It's a type of functional testing. Soak: It's a type of non-functional testing.
3. Smoke: We do smoke testing as soon as we get the build. Soak: It is done when all other types of testing are completed.
FUNCTIONAL vs NON-FUNCTIONAL
Functional: Smoke, FT, IT, ST, Adhoc, Exploratory, Regression, Acceptance, Alpha/Beta/Gamma testing.
Non-functional: Performance testing, Usability, GUI, Web security, Compatibility, Globalization, Migration.

ALPHA/BETA vs ACCEPTANCE
1. Alpha/Beta: Product based companies follow this testing. Acceptance: Service based companies follow this testing.
2. Alpha/Beta: The company will take ownership of the product. Acceptance: The customer will take ownership of the product.
3. Alpha/Beta: Here we do not have control over the product, i.e. we will not be aware of the number of users using the software. Acceptance: Here we have control over the product.

QA vs QC
1. QA is a set of activities done to improve the quality of the process. QC is a set of activities done to improve the quality of the product.
2. QA is process oriented. QC is product oriented.
3. QA is done to prevent defects. QC is done to identify defects.
4. The main goal of QA is to improve the development & testing process so that defects do not arise once the s/w is developed. The main goal of QC is to identify the defects.
5. QA is a proactive process. QC is a reactive process.
6. QA means planning. QC means execution.
7. Verification is an example of QA. Validation is an example of QC.
8. In QA we identify weaknesses in the process and try to improve them. In QC we identify defects and get them fixed by the developer.

STATIC vs DYNAMIC
1. Static: Static testing involves verification activities. Dynamic: Dynamic testing involves validation activities.
2. Static: Verification includes reviews, inspection, auditing and walkthrough. Dynamic: Validation includes the types of testing that we do on the application.
3. Static: It is done before coding is completed. Dynamic: It is done once coding is completed.
4. Static: Here we ensure that we are building the product right. Dynamic: Here we ensure that we are building the right product.
5. Static: It is done to prevent the defects. Dynamic: It is done to identify the defects.
6. Static: It is less cost effective. Dynamic: It is more cost effective because we might use tools for automation.
7. Static: Documentation. Dynamic: Execution.
What type of testing will you do for a web application?
1. A web application consists of basic and critical features, so we will do smoke testing.
2. A web application consists of many components, so we do FT.
3. A web application has data flow, so we do IT.
4. A web application has end to end scenarios, so we do ST.
5. A web application may be used randomly by customers, so we do adhoc testing.
6. Chances are there that customers might use the web application on different platforms, so we do compatibility testing.
7. Suppose the application is developed for multiple languages, then we do Globalization testing.
8. Since a web application is developed for multiple users, chances are there that the application might not withstand the load, so we do performance testing.
9. We check whether the application is user friendly or not by doing usability testing.
10. Chances are there that the application might be attacked by hackers, so we do web security testing.
11. In a web application chances are there that a new feature might impact an old feature, so we do regression testing.
12. I will compare one application with another similar kind of application by doing comparison testing.

What happens if a Developer is involved in testing?
1. Chances are there that Developers will utilize testing time for coding.
2. Developers will be concerned more about coding rather than testing.
3. Developers are overconfident.
4. If Developers find any defect, they may neglect it and won't fix it.
5. Developers think only from a positive point of view.

What happens if a test engineer is involved in fixing defects?
1. Chances are there that fixing one defect may introduce many defects.
2. The time taken by a test engineer to fix a defect is more compared to a Developer.
3. The test engineer might utilize testing time for coding.

When to release the s/w to the customer?
1. When all the features required by the customer are ready.
2. Once after doing FT, IT and ST.
3. Once all end to end scenarios are thoroughly tested.
4. When the product is functionally stable.
5. Once the s/w is tested in a testing server similar to the production server.
6. We can have some minor defects which do not affect the customer's business work flow; they should be within the customer's acceptable limits.
7. When we are about to meet the deadline.

When do we go for Migration Testing (when the customer wants to change the technology or move from one database to another)?
1. When the technology is old.
2. If the application or software is facing performance issues.
3. If the application or s/w is facing severe issues.
4. If the application is complex and we are not able to add any feature.
TEST CASES
A step by step procedure to test a feature, which gives all possible scenarios for one particular requirement, is called a Test case.

What are the drawbacks of not writing Test Cases?
1. There will be no consistency in testing if we test by looking into the requirement.
2. We will miss a lot of scenarios and defects.
3. Testing depends on the mood of the TE.
4. Testing depends on the memory power of the TE.
5. Chances are there that we might end up testing the same thing again & again.
6. The quality of testing varies from person to person.
7. Testing varies from person to person.
8. Testing coverage will not be good if we look into the application and test.

Why should we write Test Cases?
1. To have better test coverage.
2. To have better consistency while testing.
3. To avoid training the new TE on the same project/req.
4. To depend on the process rather than the person.
5. If we test the s/w by looking into the req, we will miss a lot of scenarios and defects; that's why we write test cases.
6. Test cases are the only proof for the TE to show to customers & developers stating that we have tested the application by covering all the scenarios.
7. Test cases are the base for automation.
8. Testing can be done in an organised way by looking into the Test cases.

When to write Test cases?
1. When the customer gives a new requirement.
2. When the customer wants to add a new feature or an extra feature, we should write Test cases.
3. When the customer wants to modify an existing feature, then we should write Test cases.
4. While testing, if the Test Engineer comes up with any creative scenarios, then he should write Test cases.
5. While testing, if we find any defect and a test case is not available, then we have to write test cases.

Test case Design Techniques (Black box techniques / Design Techniques / Testing Techniques) - a minimal sketch of two of these techniques follows the list below.

Error guessing
Equivalence class Partitioning
Decision table technique
Boundary value analysis
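As referenced above, here is a minimal pytest sketch showing two of these techniques applied to a hypothetical requirement ("a valid age is between 18 and 60"); is_valid_age() and the limits are invented for illustration only.

```python
import pytest

# Hypothetical requirement: a valid age is between 18 and 60 (inclusive).
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence Class Partitioning: one representative value per class
# (below the range, inside the range, above the range).
@pytest.mark.parametrize("age, expected", [
    (10, False),   # invalid class: below range
    (35, True),    # valid class: inside range
    (70, False),   # invalid class: above range
])
def test_age_equivalence_classes(age, expected):
    assert is_valid_age(age) == expected

# Boundary Value Analysis: values on and just around both boundaries.
@pytest.mark.parametrize("age, expected", [
    (17, False), (18, True), (19, True),
    (59, True), (60, True), (61, False),
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```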

What are the qualities/characteristics of a good Test case (or the advantages of Test cases)?
1. A test case should be written in the test case template.
2. A test case should be written by applying test case design techniques.
3. Test case coverage should be good.
4. Test case coverage should be good with fewer steps.
5. Test cases should be consistent.
6. If a TC is given to any new TE, he should be able to execute it without asking any questions.
7. A TC should be simple to understand.
8. A TC should be able to catch the defects.
9. TCs should consist of both positive and negative scenarios.
10. TCs should not have any duplicates.
11. TCs should be easy to convert into automation scripts.
TYPES OF TESTING (what, why, when, how, advantages, types)
SMOKE TESTING
(Also called Sanity testing, Dry run testing, Skim testing, Health check of the product, Build verification testing, Confidence testing)

What: Testing the basic and critical features of an application before doing thorough or rigorous testing is called Smoke testing.

Why:
1. To check whether the s/w is testable or not.
2. If we find blocker defects on the first day itself, the dev will get sufficient time to fix the defects.
3. By doing smoke testing we indirectly ensure whether the s/w is installed properly or not.
4. When the dev gives a new build it means he has done some changes, which might impact old features, so we do smoke testing.
5. Smoke testing is the health check of the product.
6. Smoke testing is the build verification testing; here we check whether the build is broken or not.
7. Before FT, IT and ST we do smoke. Before acceptance we do smoke, and once the s/w is installed in the production server we do smoke.

When:
1. As soon as we get the build, the TE does smoke testing.
2. When the developer gives the build to the customer, chances are there he might miss a few files to copy, so the customer does smoke testing.
3. The release/build engineer will do smoke testing to check whether the s/w is installed properly or not.
4. Developers do smoke testing before giving the build for testing.

How (see the sketch after this block):
1. We test only the basic and critical features.
2. Here we test the basic and critical features for one or two scenarios.
3. Here we do only positive testing.
4. For the first time while doing smoke testing we will not know the basic and critical features; we will get to know them once we gain product knowledge.

Advantages:
1. The TE can find blocker defects at an early stage.
2. The Developer will get sufficient time to fix the defect.
3. The test cycle will not get postponed.
4. The release will not be delayed.

Types:
1. Formal
2. Informal
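As referenced in the How section, here is a minimal sketch of separating smoke checks from the rest of a pytest suite. The login/balance functions are invented placeholders for an application's basic and critical features, and the "smoke" marker name is an assumption (it would normally be registered in pytest.ini).

```python
import pytest

# Placeholder "application" functions standing in for critical features.
def login(user: str, password: str) -> bool:
    return user == "admin" and password == "secret"

def get_balance(user: str):
    return 1000 if user == "admin" else None

# Smoke tests: only basic/critical features, one or two positive scenarios each.
@pytest.mark.smoke
def test_login_works_with_valid_credentials():
    assert login("admin", "secret") is True

@pytest.mark.smoke
def test_balance_is_visible_for_logged_in_user():
    assert get_balance("admin") == 1000

# Not part of smoke: negative/detailed scenarios belong to functional testing.
def test_login_rejects_wrong_password():
    assert login("admin", "wrong") is False
```

Running `pytest -m smoke` as soon as the build arrives executes only the marked checks, which matches the idea of verifying the build before thorough testing.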

FUNCTIONAL TESTING (Component Testing)

What: Testing each and every component of a s/w application thoroughly/rigorously is known as Functional Testing.

Why: To ensure each and every component works according to the requirement.

When:
1. The customer gives the requirement.
2. The Developer will give the code.
3. The Developer does WBT.
4. The s/w is installed in the testing server.
5. Resources should be available.
6. Smoke testing is done.
7. After writing functional test scenarios and test cases.

How: By entering all possible inputs according to the requirement (positive) as well as negative inputs for each and every component (see the sketch after this block).

Types:
1. Over Testing
2. Under Testing
3. Optimized Testing
a) Positive Testing
b) Negative Testing
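As referenced in the How section, here is a minimal sketch of positive vs negative inputs for one component; transfer_amount() and its rules are hypothetical and only illustrate testing a component against and beyond its requirement.

```python
import pytest

# Hypothetical component: transfer an amount out of a balance.
def transfer_amount(balance: float, amount: float) -> float:
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Positive testing: inputs the requirement allows.
def test_valid_transfer_reduces_balance():
    assert transfer_amount(500.0, 200.0) == 300.0

# Negative testing: inputs the requirement forbids must be rejected.
def test_transfer_above_balance_is_rejected():
    with pytest.raises(ValueError):
        transfer_amount(100.0, 200.0)

def test_zero_or_negative_amount_is_rejected():
    with pytest.raises(ValueError):
        transfer_amount(100.0, 0)
```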

INTEGRATION TESTING

What: Testing the data flow between two or more modules of an application is known as Integration Testing.

Why: To check the data flow between the modules.

When:
1. The customer gives the requirement.
2. The Developer will give the code.
3. The Developer does WBT.
4. The s/w is installed in the testing server.
5. Resources should be available.
6. Smoke testing is done.
7. Functional testing is done.
8. After writing integration test scenarios and test cases.

Types:
1. Incremental Integration Testing
   a) Top-down
   b) Bottom-up
   c) Sandwich
2. Non-Incremental Integration Testing (Big Bang method)

SYSTEM TESTING

What: System Testing is an end to end testing wherein we test in a testing server similar to the production server.

Why: We go for system testing to check that all the end to end scenarios are working fine in a testing server similar to the production server.

When:
1. Smoke testing is done.
2. Functional testing is done.
3. Integration testing is done.
4. A minimum bunch of modules is ready.
5. The product is relatively stable.
6. The basic functionality of the application is working fine.
7. A testing server similar to the production server is available.
8. When we start getting a smaller number of defects.
9. After writing system test scenarios and test cases.

How: We test the customer's business work flow.
ACCEPTANCE TESTING

What: Acceptance Testing is an end to end testing done by end users/customers wherein they use the s/w to run the business for a particular period of time to check whether the s/w is able to handle all the real-time business scenarios and situations.

Why:
1. To ensure that the s/w meets the business req.
2. To get confidence in the software.
3. To make sure that the software company has not developed any wrong feature.
4. Chances are there that under business pressure the company might have missed some of the defects.

When: It is done once the s/w is released to the customer.

How:
1. IT engineers at the customer's place do the acceptance testing.
2. Employees at the customer's place do acceptance testing.
3. Test Engineers travel to the customer's place and do acceptance testing.
4. The customer travels to the company along with the business document and asks the TE to look into the business doc and do acceptance testing.

Types:
1. User acceptance testing
2. Operational acceptance testing
3. Contract acceptance testing
4. Compliance acceptance testing

GLOBALIZATION TESTING

What: Developing s/w for multiple languages is known as Globalization. Testing the s/w which is developed for multiple languages is known as Globalization Testing.

Internationalization (I18N):
1. To check whether the right language is displayed.
2. To check whether the right content is displayed in the right place.

When: Once smoke, FT, IT and ST are completed, and if the s/w is developed for multiple languages, we go for I18N testing.

How (see the sketch after this block):
1. To check that the right language is displayed, we go to the property files, add a prefix and ensure that the right language is displayed.
2. To ensure that the right content is displayed, we add both a prefix and a suffix in the property files.

Defects:
1. Right content is not displayed.
2. Right language is not displayed.
3. Alignment problems.
4. Tool tip defects.

Localization (L10N): Testing the s/w or application to check whether it is developed as per the country standards/country culture is called Localization Testing. We check:
a) Currency format
b) Date format
c) Pincode format
d) Image format
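As referenced in the How section, here is a minimal sketch of an I18N check, assuming the translations live in per-locale resource bundles; the keys, locales and strings below are invented for illustration rather than taken from real property files.

```python
# Hypothetical resource bundles standing in for per-locale property files.
MESSAGES = {
    "en": {"login.title": "Login", "login.button": "Sign in"},
    "fr": {"login.title": "Connexion", "login.button": "Se connecter"},
    "de": {"login.title": "Anmeldung", "login.button": "Anmelden"},
}

def test_every_locale_has_every_key():
    # Right content in the right place: no locale may be missing a message key.
    expected_keys = set(MESSAGES["en"])
    for locale, bundle in MESSAGES.items():
        assert set(bundle) == expected_keys, f"missing keys in {locale}"

def test_translations_are_not_english_fallbacks():
    # Right language displayed: non-English locales must not reuse English text.
    for locale, bundle in MESSAGES.items():
        if locale == "en":
            continue
        for key, text in bundle.items():
            assert text != MESSAGES["en"][key], f"{locale}:{key} is not translated"
```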

ADHOC TESTING (Monkey Testing / Gorilla Testing)

What: Testing an application/software randomly without looking into the requirement is known as Adhoc Testing.

Why:
1. Chances are there that customers or end users might use the application randomly and might find defects; to avoid that we should do adhoc testing.
2. Developers will always develop the s/w according to the req and the TE tests according to the req, so the chance of finding a defect is less; the TE should therefore come up with creative scenarios and test the application.
3. Since it is -ve testing, we should do adhoc testing.
4. Since the requirements are not followed, we should do adhoc.
5. The intention of doing adhoc is to somehow break the product (find defects).

When:
1. As soon as we get the build, first we do smoke (+ve) and check whether the s/w is testable or not.
2. While doing FT, IT and ST, in between or at the end, if we have some time we do adhoc; if we don't have time we document adhoc test scenarios and test cases.
3. Once the s/w is completely tested as per the req, then we go for adhoc.
4. Once the product has been tested for 15 to 20 cycles and the product gets stable, then we go for adhoc.
5. While doing FT, IT or ST, if we get a creative scenario we can do adhoc testing, and after that immediately switch back to FT, IT, ST.

How: By entering inputs which are not according to the requirement. It can be done in two ways:
1. The TE will come up with creative scenarios and test the application like a monkey.
2. Test the application like a monkey without applying any logic.

Types:
1. Buddy Testing (TE + Dev)
2. Pair Testing (2 TEs)
3. Monkey Testing (test randomly like a monkey)
COMPATIBILITY TESTING
(Also called Configuration Testing, Portability Testing, Cross-browser Testing, Cross-platform Testing)

What: Testing the functionality of an application or software on different hardware and software platforms is called Compatibility Testing.

Why:
1. The TE tests the s/w on one platform and releases the s/w to the customer; the customer might use the s/w on a different platform where it may not work, a bad name spreads in the market regarding the company and customer usage goes down, so we do compatibility testing.
2. To ensure that each and every feature works consistently on all the platforms on which compatibility testing is to be done.

When:
1. Only after the s/w is tested on the base platform do we test it on different platforms, i.e. only then do we do compatibility testing.
2. The company will do market research; based on the market research we will get to know which platform is extensively used by the customers, and we consider that the base platform.

How: I will do compatibility testing on different h/w and s/w platforms, wherein I test the application on different OSes, browsers and browser versions (see the sketch after this block).

Defects:
1. Scattered content
2. Alignment issues
3. Broken frames
4. Change in look and feel
5. Object overlapping
6. Scroll bar issues

Types:
1. Hardware Compatibility Testing
   a) Test for different processors [Intel, AMD]
   b) Test for different RAM [HP, Sony]
   c) Test for different motherboards [Acer, Intel]
   d) Test for different VGA cards [NVIDIA, ATI, Foxconn]
2. Software Compatibility Testing: different OSes, their versions, service packs, browsers.
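As referenced in the How section, here is a minimal cross-browser sketch, assuming Selenium WebDriver is installed and the listed browsers are available on the machine; the URL and expected title are placeholders, not a real application.

```python
import pytest
from selenium import webdriver

# Hypothetical compatibility matrix: the same check runs on every browser.
BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

@pytest.mark.parametrize("browser", sorted(BROWSERS))
def test_home_page_loads_on_every_browser(browser):
    driver = BROWSERS[browser]()            # launch the browser under test
    try:
        driver.get("https://example.com")   # placeholder application URL
        assert "Example" in driver.title    # same expected result on each platform
    finally:
        driver.quit()
```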

EXPLORATORY TESTING

What: Understanding the application, identifying all possible scenarios and testing the application by looking into those scenarios is known as Exploratory Testing.

Why: By doing exploratory testing I will be able to find all the blocker and critical defects, although I might miss some of the minor defects. When the s/w is given to the customer, our customers will be able to use the s/w and run the business without facing any blocker defects; that is the reason we do exploratory testing even when we don't have the req.

When:
1. When we don't have the requirement.
2. When there is a req but we don't have time to understand the requirement.
3. When there is a requirement but it is very complex to understand.

How:
1. I will understand the application by entering all possible inputs into each and every component; in this way I do exploratory testing.
2. I will understand the application by understanding how each and every feature works, and test the data flow between the features.
3. I will explore the application and try to cover all possible scenarios by doing exploratory testing.

PERFORMANCE TESTING
(Also called Benchmark testing, Threshold testing, Bottleneck testing, Baseline testing)

What: Testing the stability and response time of an application by applying load is called Performance testing (see the sketch after this block).

Types:
1. Load Testing
2. Stress Testing
3. Scalability Testing
4. Volume Testing
5. Soak/Endurance Testing
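As referenced above, here is a minimal sketch of measuring response time under repeated load, assuming the `requests` library is available; the URL, request count and 2-second benchmark are placeholders. Dedicated tools (JMeter, LoadRunner, etc.) apply concurrent load; this only illustrates the idea.

```python
import statistics
import time

import requests

URL = "https://example.com"        # placeholder application URL
REQUESTS_TO_SEND = 50              # placeholder load
MAX_AVERAGE_SECONDS = 2.0          # placeholder response-time benchmark

def test_average_response_time_is_within_benchmark():
    timings = []
    for _ in range(REQUESTS_TO_SEND):
        start = time.perf_counter()
        response = requests.get(URL, timeout=10)
        timings.append(time.perf_counter() - start)
        assert response.status_code == 200   # stability: every request must succeed
    assert statistics.mean(timings) < MAX_AVERAGE_SECONDS
```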

REGRESSION TESTING

What: Testing the unchanged features to make sure that changes like adding a feature, modifying a feature, deleting a feature or fixing defects are not introducing any defects in the old or unchanged features is called Regression Testing.

Why: To make sure that the unchanged features or modules are not impacted by the changes.

When:
1. When changes are done in the s/w.
2. When the platform or environment is changed.

Full Regression Testing:
1. When the changes are more, don't waste time in impact analysis meetings; just go for full regression.
2. When changes are done in the root of the product, then we should go for full regression testing.

How to identify impacted areas?
1. The customer informs the TE & Dev about the impacted areas.
2. The BA will mention the impacted areas in the SRS.
3. The Sr. Dev/Dev will give the list of impacted areas.
4. The Test Mgr/TL/Sr. TE, based on their product knowledge, give the list of impacted areas.
5. The TE will conduct an impact analysis meeting.

Drawbacks of doing regression manually:
1. More time consuming.
2. More resource utilization.
3. Tedious job.
4. No consistency in testing.
5. Turnaround time is more.
6. Less accuracy.

Types:
1. Unit Regression Testing
2. Regional Regression Testing
3. Full Regression Testing
What mistakes will a TE find while reviewing TCs?
1. The TE will find wrong scenarios and missing scenarios while reviewing.
2. Check whether the test cases are written in the proper TC template or not.
3. Check whether they are simple to understand or not.
4. When a TE copy-pastes TCs, chances are there that he might not change the header; check whether the header is relevant or not.
5. Check whether the TCs are written by applying TC design techniques or not.
6. In the header, check whether all attributes are present or not.
7. In the header, check whether all the fields have proper data or not.
8. Check whether the flow of the TCs is good or not.
9. Check whether the TC coverage is good or not.
10. Check for spelling mistakes & sentence formation.
11. By looking at the body of the TC, check whether all the steps, inputs and expected results are present or not.
12. Check whether the footer is present with proper data or not.

Procedure to write Test cases

Req, system study, identify all possible scenarios, brainstorm meeting, write TCs, review TCs, fix review comments, TC approval, TC repository.

SMOKE vs SANITY
1. Smoke: It is shallow testing. Sanity: Narrow and deep testing.
2. Smoke: Smoke is +ve testing. Sanity: Sanity is both +ve and -ve testing.
3. Smoke: We write smoke test scenarios and test cases. Sanity: Here we don't write sanity test cases.
4. Smoke: We can go for automation. Sanity: We can't go for automation.
5. Smoke: Smoke is done by dev and TE. Sanity: Sanity is done by the TE.
6. Smoke: Testing the basic features of an application before thorough and rigorous testing is called smoke testing. Sanity: Sanity testing is a subset of acceptance testing and regression testing; when the build is deployed to the production/testing server we do sanity testing and check whether the build is stable & the testing environment is stable or not.

RETESTING vs REGRESSION TESTING
1. Retesting: Whenever the developer gives the build, checking whether the defect is fixed or not is called retesting. Regression: Testing the unchanged features to make sure that changes like adding, modifying or deleting a feature have not impacted the old features is called regression testing.
2. Retesting: Retesting is done for failed TCs. Regression: Regression testing is done for passed TCs.
3. Retesting: Retesting is planned testing. Regression: Regression is generic testing.
4. Retesting: Retesting takes the highest priority. Regression: Regression takes less priority than retesting.
5. Retesting: We can't go for automation. Regression: We can go for automation.

When to go for automation (a minimal sketch of an automated test case follows this list)
1. When the product is functionally stable.
2. Once the s/w has been manually tested for one or two releases.
3. When there are no critical and blocker defects.
4. When we have a large number of regression test cases.
5. When there are no major changes being done by the customer.
6. Once the s/w has been manually tested for 10-15 test cycles, then we can go for automation.
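As referenced above, here is a minimal sketch of turning a manual regression test case into an automated script, assuming Selenium WebDriver and Chrome are installed; the URL, locators, credentials and expected text are all hypothetical placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_registered_user_can_log_in():
    driver = webdriver.Chrome()
    try:
        # Step 1: open the login page (placeholder URL).
        driver.get("https://example.com/login")
        # Steps 2-3: enter username and password (placeholder locators/data).
        driver.find_element(By.NAME, "username").send_keys("demo_user")
        driver.find_element(By.NAME, "password").send_keys("demo_pass")
        # Step 4: submit the form.
        driver.find_element(By.ID, "login-button").click()
        # Expected result: the home page greets the user.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```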

TEST PLAN ATTRIBUTES

Objective
Scope
Testing Methodologies
Approach
Assumptions
Risks
Backup plan/Mitigation plan
Roles and responsibilities
Scheduling
Defect tracking
Test environment
Entry and exit criteria
Test automation
Deliverables
Templates

DEFECT IS INVALID/REJECTED/NOT A DEFECT
1. Because of misunderstanding the req.
2. When the s/w is wrongly configured.
3. Because of referring to an old req.
4. Because of adding an extra feature.

DEFECT IS DUPLICATE
We find duplicates because of testing common features.
Assume one TE works for 2 years and leaves the company; another test engineer joins, and since the defects logged earlier were not fixed, the new TE logs the same defects again.
How to avoid duplicates:
1. When TEs find defects and send them to the DL, they should keep the TL and the TEs working on the same project in cc.
2. Log in to the DTT and search for the defect before tracking the defect.
3. The TE can cross check with team members, and sometimes with developers, before tracking the defect.

DEFECT CANNOT BE FIXED / WON'T FIX
1. If the technology itself is not supporting it.
2. If the TE finds a defect in the root of the product.
3. If the cost of fixing the defect is more than the cost of the defect itself.

ISSUE NOT REPRODUCIBLE
1. Platform mismatch (OS, browser, browser version, browser settings).
2. Because of an improper defect report.
3. Because of data mismatch.
4. Because of build mismatch.
5. Because of an inconsistent defect.

POSTPONE/ON HOLD/FIX IN FUTURE RELEASE
1. The TE finds a minor defect at the end of the release.
2. If the TE finds a minor defect which is exposed only to internal users.
3. If the TE finds a defect in a feature which is not required by the customer in the current release.
4. If the TE finds a defect in a feature wherein the customer is planning to make many req changes for that particular feature.
