
MANUAL TESTING

UNIT 01 - FUNDAMENTALS OF TESTING

INTRODUCTION
What is Software Testing?
• Software Testing is a process of executing a program or application with the intent of finding software bugs.
• It can also be stated as the process of validating and verifying that a software program, application, or product meets the requirements that guided its design and development.
• It can also be defined as the process of checking the completeness, correctness, quality, and security of any software product.
• Testing is a process, not a single activity.
• Testing takes place throughout the SDLC.

How to test Software?


• We test the software based on the following:
a. Business Requirements
b. Technical Requirements
c. Client Requirements
d. Common Sense

Why is software testing necessary?


• Software testing is required to point out the defects and errors that were made during the development phases.
• It is essential because it ensures the customer's reliability on, and satisfaction with, the application.
• It is very important for ensuring the quality of the product; a quality product delivered to the customers helps in gaining their confidence.
• It is important to ensure that the application does not result in failures, because failures can be very expensive to fix in the future or in the later stages of development.

Software Testing Objective –


1. Finding & Preventing defects.
2. To make sure that the end result meets the business and user requirements.
3. To gain the confidence of the customers by providing them a quality product.

Defect or Bug
• A bug or defect is an error or flaw in the application.
• When the actual result deviates from the expected result while testing a software application or product, it results in a defect.
• When the result of the software application or product does not meet the end user's expectations or the software requirements, it results in a bug or defect.

What is Failure?
• If, under a certain environment and situation, defects in the application or product get executed, then the system will produce wrong results, causing a failure.

How do defects arise?


• Errors in the specification, design and implementation of the software and system.
• Errors in use of the system
• Environmental conditions
• Intentional damage

PRINCIPLES OF TESTING
There are 7 principles of Testing
1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early Testing
4. Defect clustering
5. Pesticide Paradox
6. Testing is context dependent
7. Absence of errors fallacy

Principle 1 - Testing shows presence of defects


• Testing can show that defects are present, but cannot prove that there are no defects.
• Testing only reduces the number of undetected defects.
• It is not a PROOF of correctness.

Principle 2 - Exhaustive testing is impossible


• Testing everything, including all combinations of inputs and preconditions, is not possible.
• It is impossible to test any software 100%.

Principle 3 - Early Testing


• We should start testing as soon as possible.

Principle 4 - Defect clustering


• The distribution of defects is not even across the application, but rather concentrated in limited sections of it.
• Most issues are found in a few modules, or even one module.

Principle 5 - Pesticide paradox


• If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer be able to find
any bugs.

Principle 6 - Testing is context dependent


• How you test the software depends on the type of software.

Principle 7 - Absence of errors fallacy


• If the system built is unusable and does not fulfil the user’s needs and expectations, then finding and fixing defects does
not help.

What is Quality?
• Degree of excellence.
• The ISO 8402-1986 standard defines quality as "the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs."

Key Aspects of Quality Software


1. Good Design
2. Good Functionality
3. Reliable
4. Consistency
5. Durable
6. Value for Money

UNIT 02 – SOFTWARE DEVELOPMENT LIFE CYCLE

Software Development Life Cycle


• SDLC stands for Software Development Life Cycle.
• It is the complete process of developing a software product.

Phases of SDLC
1. Requirement gathering and analysis
2. Design
3. Implementation
4. Testing
5. Deployment
6. Maintenance
Verification
• It is the process that starts in the early phases of the SDLC, where we check the requirements, design, and code.
• Done by review meetings, walkthroughs, or inspections.
• Are we building the product right?
• The process of evaluating the work products of a development phase to determine whether they meet the specified requirements for that phase.

Validation
• The process of evaluating software during or at the end of the development process to determine whether it satisfies the specified business requirements.
• Done by Testing.
• Are we building the right product?

SDLC MODELS
• SDLC models are methodologies used to develop software.
• Models we are going to cover:
1. Waterfall Model
2. Agile Development
3. V Model

Waterfall Model

Advantages
• Simple and Easy to Understand & Implement
• Easy to Manage due to Rigidity of Model
• Phases are processed & completed one at a time
• Good for shorter time projects where requirements are well understood.

Disadvantages
• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty
• Not good for complex projects
• Cannot accommodate changing requirements
How to Test in SDLC
1. Requirements
2. Design
3. Implementation
4. Testing
5. Deployment
6. Maintenance

UNIT 03 – LEVELS OF TESTING

Different Levels of Testing


1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing

UNIT TESTING
• Also known as Component Testing, Module Testing and Program Testing.
• A unit is the smallest testable part of an application like functions, classes, procedures, interfaces.
• Unit testing is a method by which individual units of source code are tested to determine if they are fit for use.
• Unit tests are basically written and executed by software developers to make sure that code meets its design and
requirements and behaves as expected.
• Unit Testing can include Functional and Non-Functional Testing (Memory leak, Performance)
• When should unit testing be done?
• By whom should unit testing be done?

Advantages of Unit Testing


• Issues are found at an early stage.
• Unit testing helps in maintaining and changing the code.
• Since bugs are found early, unit testing also helps in reducing the cost of bug fixes.
• Unit testing helps in simplifying the debugging process: if a test fails, only the latest changes made to the code need to be debugged.
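The ideas above can be sketched with a minimal unit test in Python's unittest framework. The `add` function and its tests are hypothetical, purely for illustration of a developer-written test for the smallest testable unit:

```python
import unittest

def add(a, b):
    """The unit under test: the smallest testable piece (hypothetical example)."""
    return a + b

class TestAdd(unittest.TestCase):
    """Unit tests, typically written and executed by the developer."""
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# Run the suite programmatically so the outcome can be inspected.
suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the unit is isolated, a failure here points directly at the latest change to `add`, which is exactly the debugging advantage described above.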

INTEGRATION TESTING
• Integration testing is the process in which we test the interfaces between components.
• It tests how an interface interacts with the OS, file system, hardware, or another interface.
• Integration testing is done by a specific integration tester or test team.
Levels of Integration testing
There are 2 levels of Integration Testing
1. Component Integration Testing
2. System Integration Testing

Component integration testing


• It tests the interactions between software components
• It is carried out after component testing.

System integration testing


• It tests the integration between different systems
• It may be done after System Testing.

SYSTEM TESTING
• In system testing the behavior of whole system/product is tested as defined by the scope of the development project or
product.
• It may include tests based on risks and/or requirement specifications, business process, use cases, or other high-level
descriptions of system behavior, interactions with the operating systems, and system resources.
• System testing is most often the final test to verify that the system to be delivered meets the specification and its purpose.
• System testing is carried out by specialist testers or independent testers.
• System testing should investigate both functional and non-functional requirements of the system.
• System testing requires a controlled test environment.

USER ACCEPTANCE TESTING


• It is the process to determine whether the software is ready to deliver, and it meets the requirement specification.
• We ask the following questions of our product:
• Can we release the product?
• Do we have any business risk?
• Has development completed what was agreed?
• After the system test has corrected all or most defects, the system will be delivered to the user or customer for acceptance
testing.
• Acceptance testing is basically done by the user or customer although other stakeholders may be involved as well.
• The goal of acceptance testing is to establish confidence in the system.
• Acceptance testing is most often focused on validation-type testing.
• This testing requires a test environment almost the same as production.

Different forms of UAT


1. Alpha Testing
2. Beta Testing
The above types of testing happen when the software is created for a large number of users.
For example: gaming software, social networking sites, browsers.

ALPHA TESTING
• This takes place at the developer’s site. Developers observe the users and note problems.
• Alpha testing is testing of an application when development is about to complete. Minor design changes can still be made
as a result of alpha testing.
• Alpha testing is typically performed by a group that is independent of the design team, but still within the company,
e.g. in-house software test engineers, or software QA engineers.
• Alpha testing is the final testing before the software is released to the general public.

BETA TESTING
1. It takes place at the customer's site. The system is sent to users who install it and use it under real-world working conditions.
2. A beta test is the second phase of software testing in which a sampling of the intended audience tries the product out.
3. The goal of beta testing is to place your application in the hands of real users outside of your own engineering team, to discover any flaws or issues from the user's perspective that you would not want in your final, released version of the application.

UNIT 04 – TYPES OF TESTING

Types of Testing
• Testing of function (Functional Testing)
• Testing of Software product characteristics (Non- functional Testing)
• Testing of Software structure/architecture (Structural Testing)
• Testing related to changes.

FUNCTIONAL TESTING
• Functional testing is the testing in which we test the functional aspects of the system.
• We test what a system does.
• The system should do what it is supposed to do, whether or not it is written in the requirements.
• This type of testing can be carried out based on the functional requirement specification document or use cases.
• This type of testing can be done at any test level.
• According to ISO 9126, functional testing focuses on suitability, interoperability, security, accuracy, and compliance.

NON-FUNCTIONAL TESTING
• Non-functional testing is the testing in which we test the non-functional aspect of the system.
• We test how well things are done.
• Non-functional testing covers following testing:
a. Performance Testing
b. Usability Testing

USER INTERFACE
• Check the User Interface of any system
• Color coding is proper
• Position of all elements is proper
• Font sizes
• Changing the mode

USABILITY TESTING
• Usability testing tests the ease with which the user can use the system. It tests whether the application or the product built is user-friendly or not.
• Test the following things:
• How easy is it to use the software?
• How easy is it to learn the software?
• How convenient is the software for the end user?
• Components we test in usability testing:
• Learnability: How easy the system is to learn.
• Efficiency: How fast an experienced user can work with the system.
• Memorability: How easily the user remembers the system.
• Errors: How many errors the user makes.
• Satisfaction: How satisfied the user is.

PERFORMANCE TESTING
• Performance testing is performed to determine how fast some aspect of a system performs under a particular workload.
• This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions.

LOAD TESTING
• A load test is a type of test conducted to understand the behavior of the application under a specific expected load.
• Load testing is performed to determine a system's behavior under both normal and peak conditions.
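A load test can be sketched in a few lines of Python. This is only a toy illustration of the idea (firing many concurrent requests and measuring total time); `handle_request` is a hypothetical stand-in for the real system, which in practice would be exercised with a dedicated tool such as JMeter:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Hypothetical stand-in for the system under load."""
    time.sleep(0.01)  # simulate the work done per request
    return "ok"

def run_load(num_requests, concurrency):
    """Fire num_requests requests with the given concurrency and time them."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(handle_request, range(num_requests)))
    elapsed = time.perf_counter() - start
    return results, elapsed

# Expected load: 50 requests, 10 at a time.
results, elapsed = run_load(num_requests=50, concurrency=10)
assert all(r == "ok" for r in results)  # every request succeeded under load
```

Raising `num_requests` toward the system's breaking point turns the same sketch into a stress test.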

STRESS TESTING
• It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
• It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.

Internationalization and Localization


• Internationalization is a process of designing a software application so that it can be adapted to various languages and
regions without any changes.
• Whereas Localization is a process of adapting internationalized software for a specific region or language by adding local
specific components and translating text.

SECURITY TESTING
• Checks whether the application or the product is secure.
• Checks whether the application is vulnerable to attacks, and whether anyone can hack the system or log in to the application without authorization.
• Checks confidentiality, integrity, authentication, availability, authorization.

COMPATIBILITY TESTING
• To ensure compatibility of the system/application/website built with various Software platforms.

CONFIGURATION TESTING
• To ensure compatibility of the system/application/website built with various Hardware platforms

CONFIRMATION TESTING (RE-TESTING)


• In this type of testing, we check whether a defect has been fixed.
• We retest under the exact same conditions in which we found the defect.

REGRESSION TESTING
• In this testing we check whether any unchanged modules are affected by the changed modules.
• To verify that modifications in the software have not caused unintended adverse side effects.

INTEROPERABILITY TESTING
• Interoperability testing is a type of testing that checks whether software can inter-operate with other software components, software, or systems.
• This type of testing is carried out to ensure that components or devices from different vendors work together correctly.

MAINTAINABILITY TESTING
• This type of testing covers how easy it is to maintain the system.
• How easy it is to modify the system.

RELIABILITY TESTING
• Reliability testing is a type of testing to verify that software is capable of performing a failure-free operation for a specified
period of time in a specified environment.
• The purpose of reliability testing is to determine product reliability.
• The following factors decide reliability:
• Operations should be failure-free.
• How long operations remain failure-free.

PORTABILITY TESTING
• In this testing type we check how easy it is to move the software from one environment/platform to another.
• We usually check the amount of effort needed to move between different hardware and software platforms.
AD-HOC TESTING
• A testing process that is NOT organized.
• It is a black box testing process which does not follow any formal process.
• The intent of ad hoc testing is to find issues after the formal rounds of testing are done.
• Only people who have complete knowledge of the system can do ad hoc testing.

STRUCTURAL TESTING
• This is the testing in which we test the internal architecture of the system.
• Also known as White Box or Glass Box testing.
• We usually do this testing at Unit or Integration testing level.

What is Static testing


• Static testing is the testing of software work products manually, without executing the code.
• It starts early in the life cycle, and so it is done during the verification process.
• It does not need a computer, as the testing of the program is done without executing the program. For example: reviews, walkthroughs, inspections, etc.

Use of Static Testing


• Early feedback on quality issues
• As defects are detected at an early stage, the rework cost is most often relatively low.
• Development productivity is likely to increase

Types of Defects in STT


• The types of defects that are easier to find during static testing are: deviations from standards, missing requirements, design defects, and non-maintainable code.

Informal Review
• Informal Process of Review
• Usually involves 2 members
• Things are not documented.

Formal Review
• Well-structured format of review
• Consists of 6 stages:
a. Planning
b. Kick Off
c. Preparation
d. Review Meeting
e. Rework
f. Follow Up

Roles and Responsibility in Review

1. The Moderator
• Review Leader
• Perform entry check
• Follow-up on the rework
• Schedule the meeting
• Coaches other team members
• Leads the possible discussions and stores the data that is collected.

2. The Author
• Illuminates the unclear areas and understands the defects found
• The basic goal should be to learn as much as possible with regard to improving the quality of the document.

3. The Scribe
• The scribe is a separate person who does the logging of the defects found during the review.
4. The reviewers
• Also known as checkers or inspectors.
• Check any material for defects, mostly prior to the meeting.
• The manager can also be involved in the review depending on his or her background.

5. The managers
• Manager decides on the execution of review
• Allocates time in project schedules and determines whether review process objectives have been met.

UNIT 05 – TESTING TECHNIQUES

STATIC TEST TECHNIQUES

What is Static Testing?


• Static testing is the testing of software work products manually, without executing the code.
• It starts early in the life cycle, and so it is done during the verification process.
• It does not need a computer, as the testing of the program is done without executing the program. For example: reviews, walkthroughs, inspections, etc.

Types of Review
1. Walkthrough
2. Technical Review
3. Inspection

Walkthrough
• It is not a formal process/review
• It is led by the author
• The author guides the participants through the document according to his or her thought process, to achieve a common understanding and to gather feedback.
• Useful for people who are not from the software discipline and are not used to, or cannot easily understand, the software development process.
• It is especially useful for higher-level documents like the requirement specification, etc.
• The goals of a walkthrough:
• To present the documents both within and outside the software discipline in order to gather the information regarding
the topic under documentation.
• To explain or do the knowledge transfer and evaluate the contents of the document
• To achieve a common understanding and to gather feedback.
• To examine and discuss the validity of the proposed solutions.

Technical Review
• It is a less formal review
• It is led by a trained moderator but can also be led by a technical expert
• It is often performed as a peer review without management participation
• Defects are found by the experts (such as architects, designers, key users) who focus on the content of the document.
• In practice, technical reviews vary from quite informal to very formal
• The goals of the technical review are:
• To ensure at an early stage that the technical concepts are used correctly
• To assess the value of technical concepts and alternatives in the product
• To have consistency in the use and representation of technical concepts
• To inform participants about the technical content of the document.

Inspection
• It is the most formal review type
• It is led by the trained moderators
• During inspection the documents are prepared and checked thoroughly by the reviewers before the meeting
• It involves peers to examine the product
• A separate preparation is carried out, during which the product is examined and defects are found
• The defects found are documented in a logging list or issue log
• A formal follow-up is carried out by the moderator applying exit criteria
• The goals of inspection are:
• It helps the author to improve the quality of the document under inspection
• It removes defects efficiently and as early as possible
• It improves product quality
• It creates a common understanding by exchanging information
• Participants learn from the defects found, preventing the occurrence of similar defects.

DYNAMIC TESTING TECHNIQUES

BLACK BOX TESTING


• In black box testing the tester is concentrating on what the software does, not how it does it.
• Also Known as Specification-based
• Also Known as behavioral testing
• Also Known as Input/Output driven testing

Black Box Testing Techniques


1. Equivalence partitioning
2. Boundary value analysis
3. Decision tables
4. State transition testing

Equivalence Class Partitioning


• The basic idea is to divide the problem (a set of test conditions) into small parts.
• We need to test only one condition from each partition.
• We assume that all the conditions in one partition will be treated in the same way by the software.
• We divide conditions into valid classes and invalid classes.
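As a runnable sketch, consider a hypothetical rule that accepts ages 18 to 60. There are three partitions, and one representative value from each is enough:

```python
def is_valid_age(age):
    """Hypothetical rule under test: accept ages 18 to 60 inclusive."""
    return 18 <= age <= 60

# One representative value per equivalence partition:
assert is_valid_age(10) is False  # invalid partition: below 18
assert is_valid_age(35) is True   # valid partition: 18 to 60
assert is_valid_age(70) is False  # invalid partition: above 60
```

Three test cases cover the whole input space, on the assumption that every value inside a partition is handled the same way.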

Boundary Value Analysis


• Testing at the boundaries.
• The probability of finding bugs is high at the boundaries.
• Test the boundaries of both valid and invalid classes.
• Test boundary - 1, the boundary, and boundary + 1.
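Using the same hypothetical age rule (accept 18 to 60), boundary value analysis tests boundary - 1, the boundary, and boundary + 1 at each edge:

```python
def is_valid_age(age):
    """Hypothetical rule under test: accept ages 18 to 60 inclusive."""
    return 18 <= age <= 60

# Lower boundary: 17 / 18 / 19
assert is_valid_age(17) is False  # boundary - 1: invalid
assert is_valid_age(18) is True   # boundary: valid
assert is_valid_age(19) is True   # boundary + 1: valid

# Upper boundary: 59 / 60 / 61
assert is_valid_age(59) is True   # boundary - 1: valid
assert is_valid_age(60) is True   # boundary: valid
assert is_valid_age(61) is False  # boundary + 1: invalid
```

An off-by-one bug (e.g. writing `18 < age` instead of `18 <= age`) would be caught by exactly these boundary tests, which is why the defect probability is highest here.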

Decision table
• Deals with combinations of inputs.
• More focused on business logic or business rules.
• Decision tables provide a systematic way of stating complex business rules, which is useful for developers as well as for testers.

Decision Table Example
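A decision table can be sketched in code. Here a hypothetical login rule with two conditions collapses into three distinct rules, and each rule (column of the table) becomes one test case:

```python
# Hypothetical login rule expressed as a decision table:
#   Rule 1: valid user,   valid password   -> grant access
#   Rule 2: valid user,   invalid password -> show error
#   Rule 3: invalid user, any password     -> show error
def login_outcome(valid_user, valid_password):
    if valid_user and valid_password:
        return "grant access"
    return "show error"

# One test case per rule:
assert login_outcome(True, True) == "grant access"   # Rule 1
assert login_outcome(True, False) == "show error"    # Rule 2
assert login_outcome(False, True) == "show error"    # Rule 3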


State Transition Diagram
• A black-box testing technique in which outputs are triggered by changes to the input conditions or changes to the 'state' of the system.
• Used when a sequence of events occurs.

State Transition Example
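A classic state transition example is PIN entry with a limited number of attempts. This hypothetical sketch has states "start", "granted", and "locked", with transitions driven by the sequence of inputs:

```python
class PinEntry:
    """Hypothetical state machine: PIN entry allowing 3 attempts.
    start --correct PIN--> granted
    start --3 wrong PINs--> locked"""
    def __init__(self, correct_pin):
        self.correct_pin = correct_pin
        self.attempts = 0
        self.state = "start"

    def enter_pin(self, pin):
        if self.state != "start":
            return self.state        # granted/locked are terminal states
        if pin == self.correct_pin:
            self.state = "granted"
        else:
            self.attempts += 1
            if self.attempts >= 3:
                self.state = "locked"
        return self.state

# Test a sequence of events, checking the state after each transition:
atm = PinEntry("1234")
assert atm.enter_pin("0000") == "start"   # 1st wrong attempt
assert atm.enter_pin("0000") == "start"   # 2nd wrong attempt
assert atm.enter_pin("0000") == "locked"  # 3rd wrong attempt locks the card
```

The same input ("0000") produces different outputs depending on the current state, which is exactly what this technique is designed to exercise.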

WHITE BOX TESTING


• In white box testing the tester is concentrating on how the software does it.
• Also known as Structure-based testing
• Also known as Glass Box testing
• Also known as Structural testing

White box Testing Techniques


1. Statement Coverage
2. Decision Coverage
3. Path Coverage

Statement Coverage
• Statement coverage covers only the true conditions.
• In this process, each and every line of code needs to be checked and executed.
Example
int a
int b
if (a > b)
    print a
To achieve 100% statement coverage, we need 1 test case (any input where a > b).

Example
int a
int b
if (a > b)
    print a
else
    print b
To achieve 100% statement coverage, we need 2 test cases (one where a > b and one where a <= b).
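The second example can be made runnable. This sketch (the `max_of` function is hypothetical) shows why the if/else version needs two test cases for 100% statement coverage:

```python
def max_of(a, b):
    """Toy function under test, mirroring the if/else example above."""
    if a > b:
        result = a   # executed only when a > b (true branch)
    else:
        result = b   # executed only when a <= b (false branch)
    return result

# One test with a > b executes only the true branch: partial coverage.
assert max_of(5, 3) == 5
# A second test with a <= b executes the else branch too,
# bringing statement coverage to 100% with 2 test cases.
assert max_of(2, 7) == 7
```

A coverage tool such as coverage.py would report the first assertion alone as leaving the `else` line unexecuted.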
Path Coverage
• To cover every path at least once.

UNIT 06 – SOFTWARE TESTING LIFE CYCLE

SOFTWARE TESTING LIFE CYCLE


1. Test Planning
2. Test Designing
3. Test Implementation
4. Test Execution
5. Test Closure & Reporting
6. Postmortem Review

Test Planning
• Test planning involves scheduling and estimating the system testing process, establishing process standards, and describing
the tests that should be carried out.
• Managers allocate resources and estimate testing schedules.
• The final outcome of planning is the Test Plan.

Test Plan
• The test plan is the project plan for the testing work to be done.
• It is the plan used to track the progress of the testing project.

Test Plan Template


• Test plan identifier: Unique identifying reference.
• Introduction: A brief introduction about the project and to the document.
• Test deliverables: The deliverables that are delivered as part of the testing process, such as test plans, test specifications
and test summary reports
• Test tasks: All tasks for planning and executing the testing
• Test items: A test item is a software item that is the application under test.
• Environment needs: Defining the environmental requirements such as hardware, OS, network configurations, tools
required.
• Features To Be Tested: The features that need to be tested on the testware.
• Responsibilities: Lists the roles and responsibilities of the team members.
• Features Not to Be Tested: Identify the features and the reasons for not including as part of testing.
• Staffing And Training Needs: Captures the actual staffing requirements and any specific skills and training requirements
• Approach: Describes the overall approach to testing.
• Schedule: States the important project delivery dates and key milestones.
• Item Pass/Fail Criteria: Documents whether a software item has passed or failed its test.
• Risks And Contingencies: High-level project risks and assumptions and a mitigating plan for each identified risk.
• Suspension And Resumption Criteria: Suspension criteria specify the criteria to be used to suspend all or a portion of the
testing activities while resumption criteria specify when testing can resume after it has been suspended.
• Approvals: Captures all approvers of the document, their titles, and the sign off date.

Test Designing
• This phase defines "HOW" to test.
• Identify and get the test data.
• Identify and set up the test environment.
• Create the requirement traceability matrix (RTM).

Requirement Traceability Matrix (RTM)


• A process of documenting the links between the requirements and the work products developed to implement and verify
those requirements.
• RTM captures all requirements and their traceability in a single document delivered at the conclusion of the life cycle.
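An RTM can be modeled as a simple mapping from requirements to the test cases that verify them. All IDs below are hypothetical, purely to illustrate how the matrix exposes coverage gaps:

```python
# Hypothetical requirement traceability matrix: requirement IDs mapped
# to the test case IDs that verify them.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # requirement not yet covered by any test case
}

# The matrix makes untested requirements easy to spot:
uncovered = [req for req, tests in rtm.items() if not tests]
assert uncovered == ["REQ-003"]
```

In practice the same links are usually maintained in a test management tool or spreadsheet, but the principle is identical.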
Test Implementation
• Creation of the detailed test cases.
• Carry out the review to ensure the correctness of the test cases.
• If automation is also a part of the project, then identify the candidate test cases for automation, proceed with scripting them, and review the scripts.

Test Case Document


• A test case document has a set of test data, preconditions, expected results, and postconditions, developed for a particular test scenario in order to verify compliance against a specific requirement.

Test Case Template


1. S. No.
2. Test Case ID
3. Module Name
4. Test Case Description
5. Steps to Execute / Test Steps
6. Prerequisite
7. Test Data
8. Test Environment
9. Expected Result
10. Actual Result
11. Status
12. Created By
13. Executed By
14. Date of Creation
15. Date of Execution
16. Comment

Test Execution
• This is the Software Testing Life Cycle phase where the actual execution takes place.
• Before you start your execution, make sure that your entry criteria are met.
• Entry Criteria for Execution:
a. Verify if the Test environment is available and ready for use.
b. Verify if test tools installed in the environment are ready for use.
c. Verify if Testable code is available.
d. Verify if Test Data is available and validated for correctness of Data.

Bug / Defect
• A bug or defect is an error or flaw in the application.
• When the actual result deviates from the expected result while testing a software application or product, it results in a defect.
• When the result of the software application or product does not meet the end user's expectations or the software requirements, it results in a bug or defect.

Failure
• If, under a certain environment and situation, defects in the application or product get executed, then the system will produce wrong results, causing a failure.

Defect Template
1. Defect ID
2. Project
3. Module Name
4. Summary
5. Description
6. Steps to reproduce
7. Actual Result
8. Expected Result
9. Attachments
10. Severity
11. Priority
12. Resolution
13. Component
14. Fix Version
15. Reported By
16. Assigned To
17. Status
18. Environment

Bug Life Cycle

Test Closure & Reporting


• Collect data from completed test activities to consolidate experience, test ware, facts, and numbers
• Exit Criteria:
• Verify if All tests planned have been run.
• Verify if the level of requirement coverage has been met.
• Verify if there are NO Critical or high severity defects that are left outstanding.
• Verify if all high-risk areas are completely tested.
• Verify if software development activities are completed within the projected cost.
• Verify if software development activities are completed within the projected timelines.

Postmortem Review
• It is important for project managers and team members to take stock at the end of a project and develop a list of lessons learned, so that they don't repeat their mistakes in the next project.
• Reexamine the complete process for Future improvements.

UNIT 07 – TEST MANAGEMENT


Test Management

Roles & Responsibilities of Test Leader


• Involved in the planning, monitoring, and control of the testing activities and tasks.
• Devise the test objectives, organizational test policies, test strategies and test plans with help of other people.
• Estimate the testing to be done & Resource Allocation.
• Recognize when test automation is appropriate.
• Lead, guide and monitor the analysis, design, implementation and execution of the test cases, test procedures and test
suites.
• During test execution, and as the project winds down, they write summary reports on test status.
• Sometimes also known as Test Manager or Test Coordinator.

Roles & Responsibilities of a Tester


• In the planning and preparation phases of the testing, testers should review and contribute to test plans, as well as
analyzing, reviewing, and assessing requirements and design specifications. They may be involved in or even be the primary
people identifying test conditions and creating test designs, test cases, test procedure specifications and test data, and
may automate or help to automate the tests.
• Set up the test environments or assist system administration and network management staff in doing so.
• As test execution begins, the number of testers often increases, starting with the work required to implement tests in the
test environment.
• Testers execute and log the tests, evaluate the results, and document problems found.
• They monitor the testing and the test environment, often using tools for this task, and often gather performance metrics.
• Throughout the testing life cycle, they review each other's work, including test specifications, defect reports and test
results.

Test monitoring
• Give the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and
improve the testing and the project.
• Provide the project team with visibility about the test results.
• Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test work
is done.

Test control
• Guiding and corrective actions to try to achieve the best possible outcome for the project
• For example: a portion of the software under test will be delivered late, but market conditions dictate that we cannot change the release date. In that case, test control might involve re-prioritizing the tests so that we start testing against what is available now.

Risk in S/W Testing


• Risks are the possible problems that might endanger the objectives of the project
• It is the possibility of a negative or undesirable outcome.

Types of Risk
1. Product Risk
2. Project Risk

Product Risk
• Possibility that the system or software might fail to satisfy or fulfill some reasonable expectation of the customer, user, or
stakeholder.
• They are risks to the quality of the product.
• Also Known as Quality Risk
• The product risks that can put the product or software in danger are:
• If the software skips some key function that the customers specified, the users required, or the stakeholders were
promised.
• If the software is unreliable and frequently fails to work.
• If software fails in ways that cause financial or other damage to a user or the company that user works for.
• If the software has problems related to a particular quality characteristic, which might not be functionality, but rather
security, reliability, usability, maintainability, or performance.

Project Risk
• Project risks are risks that endanger the project itself.
• Risks such as the late delivery of the test items to the test team, or availability issues with the test environment.
• There are also indirect risks, such as expensive delays in repairing defects found in testing, or problems getting professional system administration support for the test environment.

Risk based testing


• Testing carried out for the project based on risks.
• Risk-based testing is the idea that we can organize our testing efforts in a way that reduces the residual level of product
risk when the system is deployed.
• Risk based testing involves testing the functionality which has the highest impact and probability of failure.
