
Software Testing

Error, Fault, Failure


• An error
• a human action that produces an incorrect result (a mistake).
• We all make mistakes when developing software.

• Fault –
• The result of an error being made is a fault.
• It is something that is wrong in the software
• (source code or documentation – specifications, manuals, etc.).
• Faults are also known as defects or bugs.

• Failure –
• When a system or piece of software
• produces an incorrect result or does not perform the correct action.
• Failures are caused by faults in the software.
Testing Principles
Testing Objectives
• Testing is

• the process of exercising a program

• with the specific intent of finding errors

• prior to delivery to the end user.


Testing Principles
1. All the tests should be traceable to customer requirements

2. Tests should be planned

3. The Pareto principle applies: roughly 80% of errors uncovered during testing can be traced to 20% of program components

4. Testing should begin “in the small” & progress toward


testing “in the large”

5. Testing should be conducted by – independent third party


• The developer of the software conducts testing
• For large projects, independent test groups
also assist the developers

• Testing & debugging are different activities

• but debugging must be accommodated
• in any testing strategy


Testing Strategy

• We begin by ‘testing-in-the-small’
• And
• move toward ‘testing-in-the-large’
Testing Strategies for Conventional Software
1. Unit testing

2. Integration testing
• Non-incremental integration
- big-bang approach
• Incremental integration
- top-down testing, bottom-up testing, regression testing, smoke
testing

3. Validation testing
- acceptance testing
- alpha testing, beta testing

4. System testing
- recovery testing, security testing, stress testing,
performance testing
Phases of Testing
1. Unit Testing
• A unit is the smallest testable part of software.

• It usually has one or a few inputs and usually a single output.

• In procedural programming
- a unit may be an individual program, function, procedure,
etc.

• In object-oriented programming,
- the smallest unit is a method, which may belong to
- a base/ super class, abstract class or derived/ child class.
• Individual components are tested independently

• Driver and stub software are used in unit testing

• Driver –
- a program
- that accepts the test data, passes it to the module & prints the relevant results

• Stub –
- a subprogram
- that uses the module's interfaces
- and
- performs minimal data manipulation
- if required
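The driver/stub idea can be sketched in Python. Everything here is hypothetical and chosen only to illustrate the two roles: `apply_discount` stands in for the module under test, `RateServiceStub` replaces a module it depends on, and `driver` feeds test data in and prints results.

```python
# Module under test: applies a discount rate obtained from another module.
def apply_discount(price, rate_source):
    rate = rate_source.get_rate()          # dependency on another module
    return round(price * (1 - rate), 2)

# Stub: replaces the real rate module, honouring its interface
# while performing only minimal data manipulation.
class RateServiceStub:
    def get_rate(self):
        return 0.10                        # fixed, predictable value

# Driver: accepts test data, calls the module, prints the results.
def driver():
    stub = RateServiceStub()
    for price, expected in [(100.0, 90.0), (59.99, 53.99)]:
        result = apply_discount(price, stub)
        status = "PASS" if result == expected else "FAIL"
        print(f"price={price} -> {result} [{status}]")

if __name__ == "__main__":
    driver()
```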
Unit Testing

(Diagram: the software engineer derives test cases, applies them to the
module to be tested, and reviews the results.)
Unit Testing

(Diagram: unit-test cases exercise the module's interface, local data
structures, boundary conditions, independent paths and
error handling paths.)
BENEFITS (Advantages) of Unit Testing
• Unit testing
- increases confidence in changing/ maintaining code.

• If good unit tests are written
- & if they are run every time any code is changed,
- it's possible to promptly catch any defects
- introduced due to the change.

• Code becomes more reusable:
well-tested, modular code is easier to reuse.
2. Integration Testing
• is a level of software testing
- where individual units are combined and
tested as a group.

• The purpose of this level of testing is


- to expose
- faults in the interaction
- between integrated units. 
Integration Testing
• Groups of dependent components are tested together

• Uncovers errors in –

- Design and construction of software architecture.

- Integrated functions and operation at system level

- Interfaces and interactions between them

- Resource integration and environmental integration


When is Integration Testing performed?

• Integration Testing is performed after


Unit Testing and before System Testing.

• Who performs Integration Testing?


• Either Developers themselves or independent
Testers perform Integration Testing.
Integration Testing Strategies

Options:
• the “big bang” approach
• an incremental construction strategy
Approaches of Integration Testing
• Non incremental integration
- big-bang approach

• Incremental integration
- top down testing
- Bottom up testing
- Regression testing
- Smoke testing
Non Incremental Integration
(Big-bang Approach)
• Steps –

- All components are combined in advance

- The entire program is tested as a whole

- A large set of errors is usually encountered

- once these errors are corrected, new ones appear
- and the process seems endless
• Advantages –
- simple

• Disadvantages –
- Hard to debug
- difficult to isolate errors while testing
Incremental Integration

- top down integration testing


- Bottom up integration testing
- Regression testing
- Smoke testing
Top Down Integration

(Diagram: a module hierarchy; integration begins with the top-level
control module and proceeds downward through subordinate modules
such as B, F, G, D and E.)
Bottom-Up Integration

(Diagram: the same module hierarchy; integration begins with the
lowest-level modules such as D and E and proceeds upward.)
Regression Testing
• re-execution of some subset of tests
• that have already been conducted
- to ensure that
- changes have not propagated
- unintended side effects or additional errors

• Whenever software is corrected,


- some aspect of the software configuration is changed.
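A regression run can be sketched with Python's standard `unittest` module. The `slugify` function and its "correction" are hypothetical: after the change that strips punctuation, the earlier tests are re-executed to confirm the fix has not propagated unintended side effects.

```python
import unittest

# Function that was just "corrected": it now also strips punctuation.
def slugify(title):
    cleaned = "".join(c for c in title.lower() if c.isalnum() or c == " ")
    return "-".join(cleaned.split())

# Existing tests, re-executed after the change to catch side effects.
class SlugifyRegressionTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):        # behaviour that must not break
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_clean_input(self):
        self.assertEqual(slugify("testing"), "testing")

    def test_punctuation_removed(self):          # test added for the new fix
        self.assertEqual(slugify("Smoke, then test!"), "smoke-then-test")

if __name__ == "__main__":
    # Re-run the regression subset; any unintended side effect fails here.
    suite = unittest.TestLoader().loadTestsFromTestCase(SlugifyRegressionTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
```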
Smoke Testing

• A common approach for creating “daily builds” for product software

• steps:

– Software components that have been translated into code


– are integrated into a “build.”
• A build includes all data files, libraries, reusable modules, etc.

– The build is integrated with other builds


– and the entire product is smoke tested daily.
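A daily smoke test can be sketched as a short script that exercises the build's critical path end-to-end. The three functions below are hypothetical stand-ins for real build components; the point is only that the build is probed as a whole, and the run stops immediately if anything basic is broken.

```python
# Hypothetical components of the daily "build".
def load_data():
    return [3, 1, 2]                    # stands in for reading a data file

def process(items):
    return sorted(items)                # stands in for the core logic

def report(items):
    return f"{len(items)} items, first={items[0]}"

def smoke_test():
    data = load_data()                  # are the build's data files usable?
    result = process(data)              # does the core logic run at all?
    summary = report(result)            # do the components talk to each other?
    assert summary == "3 items, first=1", summary
    return "smoke test passed"

if __name__ == "__main__":
    print(smoke_test())
```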
4. System Testing
System Testing
• The complete, integrated software is tested.

• Definition by ISTQB
system testing:
- The process of testing an integrated
system
- to verify that it meets specified
requirements.
• The system test is

- a series of tests conducted


- to fully test
- the computer based system
- (As a single unit)
• When is it performed?
- System Testing is performed
- after Integration Testing and before
Acceptance Testing.

• Who performs it?


- Normally,
- independent Testers perform System Testing.
Types of System Tests
• Recovery testing
• Security testing
• Stress testing
• Performance or load testing

• Interoperability testing
• Scalability testing
Types of System Tests
• Recovery testing
- tests the system's ability to recover from failures.

- The s/w is forced to fail & it is then verified
- whether the system recovers properly.

- Reinitialization, checkpoint mechanisms, data recovery & restart
- are verified.

• Security testing
- Verifies the protection mechanism,
– unauthorized internal or external access,
- or willful damage.
• Stress testing
- Software is tested at loads higher than the normal
working environment
- i.e. the system is executed with
resources in abnormal quantity, frequency or volume

• Performance testing
- runtime performance of the software is evaluated.
- resource utilization is measured,
- such as CPU load, throughput, response time,
memory usage.
- often coupledled with stress testing.
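Response time and throughput can be measured with Python's standard library alone; `handle_request` below is a hypothetical stand-in for the operation under test, and the request counts are arbitrary.

```python
import time

def handle_request(payload):
    # Hypothetical operation standing in for the code under test.
    return sum(i * i for i in range(payload))

def measure(n_requests=1000, payload=500):
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_request(payload)
    elapsed = time.perf_counter() - start
    return {
        "avg_response_ms": 1000 * elapsed / n_requests,  # response time
        "throughput_rps": n_requests / elapsed,          # requests per second
    }

if __name__ == "__main__":
    stats = measure()
    print(f"avg response: {stats['avg_response_ms']:.3f} ms, "
          f"throughput: {stats['throughput_rps']:.0f} req/s")
```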
Acceptance Testing
Acceptance Testing

• Conducted to ensure that –


- The s/w works correctly
- In the user work environment

• Can be conducted over a period of weeks or months


Types of Acceptance Testing
• Alpha Testing (Internal Acceptance Testing)
- complete s/w is tested by customer
- under supervision of developer
- at developer’s site
- in controlled environment

• Beta Testing (External Acceptance Testing)


- complete s/w is tested by customer
- without developer being present
- at customer’s site
- no control of developer
• System is tested for acceptability.

• The purpose of this test is


- to evaluate the system’s compliance
- with the business requirements
- and assess whether it is acceptable for
delivery.
Acceptance Testing
• Determines - whether the system is doing the right things.

• Acceptance tests represent


- the customer’s interests.

• The acceptance tests give the customer confidence


- that
- the application has the required features and
- that they behave correctly.

• In theory when all the acceptance tests pass


- the project is done.
METHOD

• Usually, Black Box Testing method is used in


Acceptance Testing. 

• Testing does not normally follow a strict


procedure

• and is not scripted but is rather ad-hoc.


• When is it performed?

• Acceptance Testing is performed

• after System Testing and before making the


system available for actual use.
• Who performs it?

• Internal Acceptance Testing (Also known as Alpha Testing)


is performed by
- members of the organization that developed the software
- but who are not directly involved in the project
(development or testing).

• Usually,
- it is the members of Product Management, Sales and/or
Customer Support.
• External Acceptance Testing
• is performed by people
• who are not employees of the organization that developed the
software.

– Acceptance Testing is performed by


– the customers

• User Acceptance Testing (Also known as Beta Testing)


– is performed by the end users of the software.
– They can be the customers themselves or the customers’ customers.
How Acceptance Testing Is Done

• The customer writes stories.

• The development team and the customer


have conversations about the story
• to flesh out the details
• and make sure there is mutual understanding.
GUI Testing
What is GUI ?

• There are two types of interfaces for a computer application.

• Command Line Interface


• is where you type text
• and computer responds to that command.

• Graphical User Interface


• where you interact with the computer
• using images rather than text.

• Following are GUI elements which can be used for
interaction between the user and the application:
menus, buttons, icons, toolbars, menu bars, dialog boxes,
windows, text boxes, checkboxes and radio buttons.
GUI Testing
is validation of the above elements.
What is GUI Testing?

• checking the screens with the controls like -


• menus, buttons, icons, and all types of bars - toolbar, menu bar, dialog boxes
and windows, etc.

• GUI is what the user sees.

• For example, if you visit snjb.org,
• the home page you see
• is the GUI of the site.
• A user does not see the source code.

• Only the interface is visible to the user.
• The focus is especially on the design structure.
What do you Check in GUI Testing?

• Check all the GUI elements for size, position,


width, length and acceptance of characters or
numbers.
- For instance, you must be able to
provide inputs to the input fields.

• Check Error Messages are displayed correctly


• Check the alignment of text & images is proper

• Check the size & Color of the font

• Check that the images have good clarity

• Check the positioning of GUI elements for
different screen resolutions.
Approach of GUI Testing

• Manual Based Testing


• Record and Replay
• Model Based Testing
Manual Based Testing

• Under this approach,
- graphical screens are checked
- manually
- by testers

• to confirm they conform to the customer requirements

Record and Replay

• GUI testing can be done using automation tools.

• This is done in 2 parts.

• During record,
- test steps are captured by the automation tool.

• During playback,
- the recorded test steps are executed
- on the Application Under Test.
- An example of such a tool is QTP.
Model Based Testing
• A model is a graphical description of a system's
behavior.

• It helps us to understand and predict the


system behavior.

• Models help in a generation of efficient test


cases using the system requirements.
Following needs to be considered for this model
based testing:
• Build the model
• Determine Inputs for the model
• Calculate expected output for the model
• Run the tests
• Compare the actual output with the expected output
• Decision on further action on the model
• Some of the modeling techniques from which test cases can be derived:
• State charts - depict the state of a system and check the state after some input.
• Decision Tables - Tables used to determine results for each input applied
• Model based testing is an evolving technique for generating test cases
from the requirements. Its main advantage, compared to the above two methods,
is that it can determine undesirable states that your GUI can attain.
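A decision table can be turned into test cases mechanically. The sketch below assumes a hypothetical login form with two boolean inputs; each row of the table becomes one generated test case.

```python
from itertools import product

# Decision table for a hypothetical login form:
# each input combination maps to an expected result.
def expected_outcome(valid_user, valid_password):
    if valid_user and valid_password:
        return "login"
    if valid_user and not valid_password:
        return "wrong-password error"
    return "unknown-user error"

# Derive one test case per row of the decision table.
def generate_test_cases():
    cases = []
    for valid_user, valid_password in product([True, False], repeat=2):
        cases.append({
            "valid_user": valid_user,
            "valid_password": valid_password,
            "expected": expected_outcome(valid_user, valid_password),
        })
    return cases

if __name__ == "__main__":
    for case in generate_test_cases():
        print(case)
```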
Following are open source tools available to conduct GUI Testing.

Product             Licensed Under
AutoHotkey          GPL
Selenium            Apache
Sikuli              MIT
Robot Framework     Apache
Watir               BSD
Dojo Toolkit        BSD
GUI Testing Test Cases
• Testing the size, position, width, height of the elements.
• Testing of the color & symbols of error messages and
hyperlinks.
• Testing the different sections of the screen, like scrollbars.
• Testing of the font size, color and headings.
• Testing of the screen in different resolutions.
• Testing the alignment of the texts, images, icons, buttons, etc.
• Testing whether the image has good clarity or not.
• Testing of the spelling.
• The user must not get frustrated while using the system
interface.
• Testing of the disabled fields if any.
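Several of these checks (size, position, alignment) can be automated when the test tool reports element geometry. The element records below are hypothetical examples of what such a tool might return; the check functions are a minimal sketch, not any particular tool's API.

```python
# Hypothetical element geometry as a GUI automation tool might report it.
elements = [
    {"name": "Source Folder", "x": 20, "y": 40,  "width": 200, "height": 24},
    {"name": "Package",       "x": 20, "y": 80,  "width": 200, "height": 24},
    {"name": "Name",          "x": 20, "y": 120, "width": 200, "height": 24},
]

def check_left_alignment(elems):
    """All elements should share the same left edge (x position)."""
    left_edges = {e["x"] for e in elems}
    return len(left_edges) == 1

def check_sizes(elems, min_w=50, min_h=16):
    """Each element should be large enough to accept input."""
    return all(e["width"] >= min_w and e["height"] >= min_h for e in elems)

if __name__ == "__main__":
    print("alignment ok:", check_left_alignment(elements))
    print("sizes ok:", check_sizes(elements))
```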
Demo: How to conduct GUI Test

• Here we will use some sample test cases for


the following screen.
Following are example test cases,
which consist of UI and usability test scenarios.

• TC 01- Verify that the text box with the label "Source Folder" is aligned properly.

• TC 02 - Verify that the text box with the label "Package" is aligned properly.

• TC 03 – Verify that the label with the name "Browse" is a button located at the end of the text box
with the name "Source Folder".

• TC 04 – Verify that the label with the name "Browse" is a button located at the end of the text box
with the name "Package".

• TC 05 – Verify that the text box with the label "Name" is aligned properly.

• TC 06 – Verify that the label "Modifiers" consists of 4 radio buttons with the name public, default,
private, protected.

• TC 07 – Verify that the label "Modifiers" consists of 4 radio buttons which are aligned properly in a
row.
• TC 08 – Verify that the label "Superclass" under the label "Modifiers" consists of a
dropdown which must be proper aligned.

• TC 09 – Verify that the label "Superclass" consists of a button with the label
"Browse" on it which must be properly aligned.

• TC 10 – Verify that on clicking any radio button, the default mouse pointer
changes to a hand pointer.

• TC 11 – Verify that user must not be able to type in the dropdown of "Superclass."

• TC 12 – Verify that there must be a proper error generated if something has been
mistakenly chosen.

• TC 13 - Verify that the error must be generated in the RED color wherever it is
necessary.

• TC 14 – Verify that proper labels must be used in the error messages.


• TC 15 – Verify that exactly one radio button is selected by
default every time.

• TC 16 – Verify that the TAB key works properly when
jumping from one field to the next.

• TC 17 – Verify that all the pages must contain the proper title.

• TC 18 – Verify that the page text is properly aligned.

• TC 19 – Verify that after updating any field a proper confirmation


message must be displayed.

• TC 20 - Verify that only one radio button can be selected at a time, while
more than one checkbox may be selected.
GUI Testing Tools

• Selenium
• QTP
• Cucumber
• SilkTest
• TestComplete
• Squish GUI Tester
Web Based Application Testing
Components of Web Based Testing
• Functionality Testing
- Tests whether all functionalities of the web application are
working properly

• Usability Testing
- Testing from the user's perspective
- Content checking

• Interface Testing
- Tests Interfacing of web application with other systems
- Testing of data flow to other system

• Compatibility Testing
- Tests Compatibility of web application
- with different software, operating systems, hardware etc.
What is Verification?

• The process of evaluating software to determine
- whether the products of a given development phase
- satisfy the conditions
- imposed at the start of that phase.


Verification

• is a static practice of verifying documents, design, code and program.

• includes all the activities associated with producing high quality
software:
- inspection, design analysis and specification analysis.

• determines whether the software is of high quality,
- but it will not ensure that the system is useful.

• is concerned with whether the system is well-engineered and error-free.
What is Validation?

• The process of evaluating software


- during or at the end of the development process
- to determine whether it satisfies specified requirements.

• the process of evaluating the final product


- to check
- whether the software meets the customer expectations
and requirements.

• It is a dynamic mechanism of validating and testing the


actual product.
• Verification
• is the process of checking that
• the software meets the specification.
• “Did I build what I said I would?”
• Are we building the system right?

• Validation
• is the process of checking whether
• the specification captures the customer’s needs.
• “Did I build what I need?”
• Are we building the right system?
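The distinction can be shown in miniature: verification inspects the artifact without running it, while validation executes it and checks its behaviour. The `add_tax` "product" below is hypothetical, and the static check (does the function have a docstring?) is just one stand-in for inspection-style activities.

```python
import ast

# Hypothetical "product": a small function held as source text, so it can
# be inspected statically (verification) and executed (validation).
SOURCE = '''
def add_tax(price):
    """Return price plus 18% tax."""
    return round(price * 1.18, 2)
'''

# Verification: static inspection of the artifact; no code is executed.
def verify_has_docstring(source):
    func_def = ast.parse(source).body[0]
    return ast.get_docstring(func_def) is not None

# Validation: the product is actually executed and its behaviour
# compared with what the customer expects.
def validate_output(source):
    namespace = {}
    exec(source, namespace)                 # run the product
    return namespace["add_tax"](100.0) == 118.0

if __name__ == "__main__":
    print("verification passed:", verify_has_docstring(SOURCE))
    print("validation passed:", validate_output(SOURCE))
```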
Verification vs. Validation

1. Verification is a static practice of verifying documents,
design, code and program.
   Validation is a dynamic mechanism of validating and testing
the actual product.

2. Verification does not involve executing the code.
   Validation always involves executing the code.

3. Verification is human-based checking of documents and files.
   Validation is computer-based execution of the program.

4. Verification uses methods like inspections, reviews,
walkthroughs, and desk-checking etc.
   Validation uses methods like black box (functional) testing,
gray box testing, and white box (structural) testing etc.

5. Verification is to check whether the software conforms to
specifications.
   Validation is to check whether the software meets the customer
expectations and requirements.

6. Verification can catch errors that validation cannot catch;
it is a low-level exercise.
   Validation can catch errors that verification cannot catch;
it is a high-level exercise.

7. The target of verification is the requirements specification,
application and software architecture, high-level design, complete
design, and database design etc.
   The target of validation is the actual product: a unit, a module,
a set of integrated modules, and the final product.

8. Verification is done by the QA team to ensure that the software
is as per the specifications in the SRS document.
   Validation is carried out with the involvement of the testing team.

9. Verification generally comes first, before validation.
   Validation generally follows after verification.
