
UNIT 3

LEVELS OF TESTING
Need For Testing

• Validate and Verify


• Check whether the system meets its functional requirements or not.
• Enhance the quality of software
• Security
• Customer Satisfaction
Levels of Testing
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
• A unit is a small-sized component; it may even be purchased from
an outside vendor.
• E.g.:
– Procedure-sized reusable components
– Classes/objects/methods
– Procedures and functions
– After completion of design and design reviews, programmers concentrate on coding to
physically construct the software.
– In this phase programmers conduct unit-level testing on the programs through
white box testing (WBT).
– This WBT is classified into 3 parts:
– Execution testing:
• Basis path coverage
• Loop coverage
• Program technique coverage
– Operations testing: whether a program runs on the customer's expected platform or not.
• Mutation testing: a small change is made in the program logic;
programmers follow this technique to estimate the completeness and
correctness of program testing.
• Advantages:
– Easy to design, execute, record and analyze
– It is easy to identify faults and repair them
– It compiles and executes cleanly
– Unit test history is easy to maintain and record

Unit test planning:


– It defines what to test, how to test, when to test and who will test
– The test plan is a project-level document
– Before unit testing, the plan must be developed
– It may be a stand-alone plan or part of the master test plan
– The necessary input documents for a unit test plan are:
1. project plan
2. requirements
3. specification
4. design documents (previous-level documents)
• The unit test plan is developed in 3 phases:
– Describe unit test methods, approaches and risks
– Identify the unit features to be tested
– Add the details to the plan

– Phase 1: Describe Unit Test Approach and Risks

• In this phase of unit test planning the general approach to unit testing is outlined.
The test planner:

(i) identifies test risks;
(ii) describes techniques to be used for designing the test cases for the units;
(iii) describes techniques to be used for data validation and recording of test results;
(iv) describes the requirements for test harnesses;
(v) identifies the termination conditions;
(vi) estimates the requirements and resources needed.
Phase 2: Identify Unit Features to be Tested

This phase requires information from the unit specification and detailed
design description.
The planner determines which features of each unit will be tested, for
example: functions, performance requirements, states, and state transitions,
control structures, messages, and data flow patterns.
If some features will not be covered by the tests, they should be
mentioned and the risks of not testing them assessed.
Input/output characteristics associated with each unit should also be
identified, such as variables with an allowed range of values and
performance at a certain level.
Phase 3: Add Detail to the Plan
•In this phase the planner refines the plan from previous two phases.
•The planner adds new details to the approach, resource, and scheduling
portions of the unit test plan.
•Unit availability and integration scheduling information should be included in
the revised version of the test plan.
•The planner must be sure to include a description of how test results will be
recorded.
•Test-related documents that will be required for this task, for example, test logs,
and test incident reports, should be described, and references to standards for
these documents provided. Any special tools required for the tests are also
described.
Designing the unit tests:
•Unit test design is a part of preparation work for unit test. It is important to
specify:
Test cases
Test procedures
Test cases:
•It’s a tabularized form of test case data
•It is easy to use and reuse
•Test case components are arranged in the form of a semantic network; the component
parts are object id, test case id and purpose.
•A test case design specification includes: a list of relevant states, messages,
exceptions, interrupts.
•Test case design at unit level is of 2 types: black box tests and white box tests

Select test cases → Execute test cases → Analyze test results


•Test procedures:
It is a project-level document.
Test procedures and test suites are reused from past projects when the
organization has stored them.
Both BBT and WBT are useful for designing test cases for functions and
procedures.
WBT gives the opportunity to check:
•Data flow sequences
•Use of mutation analysis
•The integrity of the unit
Test harness:
•It is a unit test framework in software testing
•It is used by developers
•It supports testing the individual components or units of software
•A test harness provides stubs and drivers, which are small programs that interact
with the software under test
•Test environment + test bed = test harness
• The auxiliary code developed to support testing of units and
components is called a test harness. The harness consists of drivers
that call the target code and stubs that stand in for the modules it calls.
• Stubs and drivers collect information from the test case repository
and the actual results.
• Stubs: stand in for modules called by the code under test.
• Drivers: call the target code.
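The driver/stub relationship can be sketched in a few lines. This is a hypothetical example (the tax-rate service and its names are invented for illustration): the stub replaces a module that is not yet available, and the driver invokes the unit under test and collects results.

```python
# Test harness sketch: the DRIVER calls the target code; the STUB stands in
# for a module the target depends on but which is not yet available.

def tax_rate_stub(region):
    """Stub: replaces the real tax-rate service with canned answers."""
    canned = {"EU": 0.20, "US": 0.07}
    return canned.get(region, 0.0)

def compute_total(amount, region, tax_rate=tax_rate_stub):
    """Unit under test: depends on a tax-rate lookup, injected for testing."""
    return round(amount * (1 + tax_rate(region)), 2)

def driver():
    """Driver: invokes the unit under test and collects the actual results."""
    return {r: compute_total(100.0, r) for r in ("EU", "US", "XX")}
```

Injecting the stub as a parameter keeps the unit testable before the real dependency exists; in practice a mocking library can play the same role.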
Running the Unit Tests and Recording Results
Unit tests can begin when

(i) the units become available from the developers (an estimate
of availability is part of the test plan),

(ii) the test cases have been designed and reviewed, and

(iii) the test harness, and any other supplemental supporting tools, are
available.
Integration test:

Integration testing tests the integration of, or interfaces between, components,
and their interaction with different parts of the system such as the operating
system, file system and hardware interfaces.
•It is done by dedicated integration testers and the test team.
Objectives:
•Test code execution against target system/ dependencies
•Verify methods as they will be used in released code.
•Validate/ invalidate unit test results
•Identify vulnerabilities in execution chain
•To find errors in interfaces of units
Goals:
 To test interfaces between the units or modules.
Types: 4 approaches
Top- down approach
Bottom up approach
Hybrid approach
Big-bang approach
Top-down approach:
Testing takes place from top to bottom
Lower-level components or systems are substituted by stubs
Developers write "stub" programs to stand in for modules not yet constructed
Advantages:
The tested product is very consistent
Stubs can be written in less time
Disadvantages:
Basic functionality is tested only at the end of the cycle
Bottom-up testing approach:
Testing takes place from the bottom of the control flow upwards
Higher-level components or systems are substituted by drivers

[Diagram: Module 6 at the top calls Modules 4 and 5, which in turn call Modules 3, 2 and 1 at the bottom.]

Developers use a temporary program, called a "driver", instead of constructing the
main module.
Advantages:
•Development and testing can be done together.
Disadvantages:
•Key interface defects are caught only at the end of the cycle.
•Test drivers must be created for modules at all levels.
[Diagram: a driver program stands in for the main module and invokes the sub module.]
Hybrid integration testing :
•It is a combination/ construction of top- down and bottom- up testing approaches
Advantages:
• Easily combine modules(sub modules and main modules)
• It helps to developers effectively
Disadvantages:
• It is a temporary one
• Unstructured one

[Diagram: a driver program stands in for the main module and calls sub programs 1 and 2, which in turn use stubs (e.g. stub prog 3) for lower modules.]
Big bang integration approach:
All components or modules are integrated simultaneously, after which everything is
tested as a whole.
Advantages:
Everything is finished before integration testing starts
Disadvantages:
Time consuming
Difficult to trace the cause of failures

[Diagram: the main module and sub modules 1-5 are all integrated simultaneously.]
DESIGNING INTEGRATION TEST:
Integration tests for procedural software can be designed using a
• Black box approach
• white box approach
• both
Test engineers must concentrate on:
• Input/output parameters
• Calling relationships (function/procedure)
• Test engineers must check whether the parameters are in the correct order and
of the correct type
•As an example, consider integrating 2 procedures (procedure X and procedure Y):
•Procedure X integrates with procedure Y through input parameters 3 and 4.
•Input parameters 3 and 4 are used by procedure Y, which returns a value through the output
parameters.
•Lhs and rhs are variables.
•Designing integration tests for a conventional system:
• In conventional systems, input/output parameters and calling relationships are described in
a structure chart.
•Here black box testing is applied to check all functionality, ability,
performance, etc.
•Designing integration test for object oriented system:
•Integration testing of clusters of classes also involves building test harness which
in this case are special classes of objects built especially for testing.
•At the cluster level inter class method integration is tested
Integration Test Planning

•It is part of project planning

•It is a planned activity
•After the completion of design (high-level design), integration test planning
is started and the system architecture is defined
•Integration test planning requires the following documents:
• Requirements document
• User manual
• Working sequence
• Design document
• Design test plan
•During integration test planning, the tester identifies the critical and risky
modules.
•These critical/risky modules are identified by whether they:
• Address many software requirements
• Have a high level of control
• Are complex
• Are error prone
• Have poor performance, etc.
Integration test plan (ITP)
1. Test items
2. Features to be tested
 Integration with dispatcher software
 Integration with client software
 Integration with agent software
3. Test deliverables
4. Testing tasks
5. Environment needs
6. Test case pass/fail criteria
SCENARIO TESTING:
A testing activity that uses scenarios based on a hypothetical story to help a person
think through a complex problem or system in a testing environment.
This testing process is performed by the testing teams.
It is stated as a collection of realistic user actions which are developed for
knowing, checking and evaluating the system.
Methods:
 System scenario
 Use case scenario (or) role based
1. System scenarios
It helps to cover many components in the system
Following approaches are used for developing scenario testing
• Story line: It combines various activities of the product.
• Life cycle/ state transition: scenarios are derived from various objects
and its transition
• Development/ implementation stories from customer
• Business verticals
• Battle ground
Use cases:
It is a step-by-step procedure describing how a user intends to use a system,
with different user roles.
It includes,
• Stories
• Pictures
• Deployment details
DEFECT BASH ELIMINATION:
• A defect bash is ad hoc testing where people in different roles in
an organization test the product together at the same time.
• Ad hoc testing: it is performed without planning and documentation.
The tester tries to break the system by randomly trying the system
functionality.
• Defect bash/ad hoc testing is fully based on individual decisions and
creativity.
• It lets everyone in the organization use the product before delivery.
• All the activities in the defect bash are planned activities except for what
is to be tested.
• Step 1: choosing frequency and duration of defect bash
• Step 2: choosing right build
• Step 3: communicating the objective of each defect bash to
everyone
• Step 4: setting up and monitoring the lab for defect bash
SYSTEM TESTING AND ITS TYPES:

• The process of testing an integrated hardware and software system to verify
that the system meets its specified requirements.

• After the completion of all possible module integration, a separate testing
team validates the build through a set of black box testing techniques.

• These techniques are classified into 4:

• Usability testing
• Functional testing
• Performance testing
• Security testing
Usability testing:
• In general, system-level testing starts with usability testing.
• It is used in user-centered interaction design to evaluate a product by
testing it on users.
• Here the testing teams follow 2 techniques:
UI testing (user interface testing)
Ease of use, look and feel, interface speed
Manual support testing

It involves testing all functions performed by people while preparing
data and using this data with the automated system.
Eg: help documents
Functional testing:
During this test, the testing team concentrates on whether the build "meets customer requirements".

[Diagram: input test data from the input domain feeds the system under test, which produces output test data in the output domain.]
1. Functionality testing:
It is also called as “requirement testing”
It validates “correctness of every functionality of the system”
It covers following coverage,
Behavioral coverage, error handling coverage, input domain
coverage, service level coverage
2. Input domain testing:
• It is a part of functionality testing.
• Boundary value analysis
• Equivalence class partitions
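The two input-domain techniques just listed can be sketched for a hypothetical field that accepts ages 18 to 60 (the field and its limits are invented for illustration):

```python
# Input domain testing sketch for a hypothetical age field accepting 18..60.

def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence class partitioning: one representative per class is enough.
equivalence_cases = {17: False, 35: True, 99: False}   # below / inside / above

# Boundary value analysis: test at and around each boundary.
boundary_cases = {17: False, 18: True, 19: True,
                  59: True, 60: True, 61: False}
```

Equivalence partitioning cuts the test count by picking one value per class; boundary value analysis then adds the values where off-by-one defects typically hide.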
3. Recovery testing:
• It is also called "reliability testing"
• During this, test engineers validate whether our application build returns from an
abnormal state to a normal state or not.
4. Compatibility testing:
• It is also called portability testing
• It tests whether our application build runs on the customer's expected platforms or not.
5. Configuration testing:
• It is also called as “hardware compatibility testing”
• It tests whether our application build run on different technology hardware devices
or not?
6. Inter system testing:
• It is also called “end to end testing”
• It checks whether our application build coexists with other existing
software to share common resources or not.
7. Sanitation testing:
• It is also called as garbage testing.
• During this test, engineers look for extra (unspecified) functionality in the build
8. Installation testing:
• It also called as “Implementation testing”.
• It checks whether application is successfully installed and working .
9.Parallel testing:
• It is also called as “comparative testing”.
• It finds the competitiveness of our application product by comparing it with
other competitive products in the market
Performance testing:
It is an expensive testing division of black box testing.
Teams concentrate on the speed of processing in the application build.
It is classified into:
 Load testing: checks the customer's expected configuration under the expected load to
estimate performance.
 Stress testing: applies load beyond the expected peak to estimate performance.
 Storage testing: tests under a huge amount of resources to estimate the storage
limit.
 Data volume testing: estimates the volume of data, in terms of number of
records, the build can handle.
Security testing:
Concentrates on the "privacy of user operations" in our application.
It is classified into subtests:
• Authorization: is the user authorized or not?
• Access control: does a valid user have permission to use a specific service
or not?
• Encryption/decryption: is information transmitted and received in the correct
encrypted/decrypted format?
• It checks:
• Does the sender perform encryption?
• Does the receiver perform decryption?
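The three security subtests can be sketched as checks against a toy model. Everything here is invented for illustration, and the XOR "cipher" is only a stand-in for a real encryption algorithm and must never be used for actual security:

```python
# Security testing sketch (toy model; names and the XOR "cipher" are
# illustrative stand-ins, not a real security implementation).

USERS = {"alice": {"role": "admin"}, "bob": {"role": "viewer"}}
PERMISSIONS = {"admin": {"read", "write"}, "viewer": {"read"}}

def is_authorized(user):                        # authorization subtest
    return user in USERS

def has_access(user, action):                   # access-control subtest
    role = USERS.get(user, {}).get("role")
    return action in PERMISSIONS.get(role, set())

def xor_crypt(data: bytes, key: int) -> bytes:  # encrypt == decrypt for XOR
    return bytes(b ^ key for b in data)
```

A security test suite would assert that unknown users are rejected, that a viewer cannot write, and that an encrypt/decrypt round trip restores the original message while the ciphertext differs from it.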
Acceptance testing:
• It is performed to determine whether or not the software system has
met the requirement specifications.
• Purpose:
• To evaluate the systems compliance with business requirements and
verify if it has met required criteria for delivery
• Who ?
• It is done by users or customers, with the help of the testing and
development people.
• Goals:
• To establish confidence in the system
• To focus on validation-type testing
• To demonstrate and prove accurate working, rather than to find defects
Benefits:
• Functions and features to be tested are known
• Details of tests are known
• It permits regression testing
• Acceptability criteria are known
Acceptance criteria:
•Functional correctness and completeness
•Data integrity
•Data conversion
•Usability
•Performance
•Timeliness
•Scalability
•Documentation
Performance testing:
• It is the process of determining the speed or effectiveness of a
computer, network, software program or device.
• It measures quality attributes of the system, such as scalability, reliability
and resource usage.
• Factors:
– Throughput: measures the capacity of the system
– Latency: delay introduced by the application or OS
– Tuning: performance is explored by giving many values to product
parameters
– Benchmarking: comparing with competitive products
– Capacity planning: knowing the required resources and configurations
– Performance testing techniques:
• Load testing
• Stress testing
• Soak testing
• Spike testing,etc
Performance testing process:

1. Requirement collection
2. Preparing the test cases
3. Automating the performance test cases
4. Execution of performance test cases (with a feedback loop for re-identifying test cases)
5. Analysis of performance test results
6. Performance tuning
7. Performance benchmarking
8. Recommending the right configuration to customers
Step 1: Requirement collection
• It requires elaborate documentation and environment setup.
• It must include the factors required for performance testing.
• It includes a comparison between our product and the market-leading
product.
• 2 types of requirements are needed:
– Generic requirements: general ideas applying to all products
– Specific requirements: depend on the particular product implementation
Step 2: Preparing the test cases:
• Test cases must be repeatable
• Test cases must contain:
– A collection of positive and negative transactions
– Transaction steps
– Loading pattern
– Required resources
– Expected results
– Required tools
Step 3: Automating performance test cases:
• Performance test cases should be automated, because:
– Performance testing is a repeatable process
– Automation is efficient and effective
– It must produce accurate results for users
– Computation of response time and throughput must be accurate
Step 4: Execution of performance testing:
– Execution takes the automated scripts and invokes them as a process
– Data collection is an important task during test execution
– Data collected includes:
• Starting/ending time of test cases
• Resource utilization of particular test cases at particular time intervals
• Log and trace files of the product and OS
• Configuration of all environment factors
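The data-collection step above can be sketched with a small measurement harness. This is an illustrative sketch (the transaction and its names are hypothetical): start and end times are recorded per transaction, then latency percentiles and throughput are derived.

```python
# Performance data-collection sketch: record start/end times per transaction
# and derive latency percentiles and overall throughput.
import time

def run_transactions(transaction, n=1000):
    latencies = []
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        transaction()                                # the unit of work under test
        latencies.append(time.perf_counter() - t0)   # per-transaction latency
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "throughput_per_s": n / elapsed,
        "p50_s": latencies[n // 2],          # median latency
        "p95_s": latencies[int(n * 0.95)],   # tail latency
    }
```

Real performance tools also sample CPU, memory and I/O at intervals, as the step above notes, but the timing skeleton is the same.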
Step 5: Analysis of performance test results
• Multidimensional thinking is necessary for performance test result
analysis. It requires:
1. Product knowledge
2. Analytical thinking
3. Statistical background
4. Tool usage
5. Automation
6. Process knowledge

Step 6: Performance tuning:


• It needs a high degree of skill in identifying the list of parameters and their
contribution to performance
• Two ways to reach the optimum mileage from performance tuning
– Tuning the product parameters
– Tuning the operating system parameters
Step 7: Performance benchmarking
• It is a process of comparing the performance of product transaction with
that of the competitors
• End user transaction/ scenarios could be one approach for comparison
• Types of deployment and customers also differ between the 2 products.
• Performance benchmarking- steps
– 1. finding the transaction/ scenario and test configuration
– 2. comparing the performance of products
– 3. parameters tuning
– 4. giving the results of performance benchmarking

Challenges:
• Relevant skill is a major problem when doing a performance testing.
• It requires large amount of resources
• Test results must reflect user expectations and the real-life environment
• Relevant tool selection process is a challenged one
• Lack of seriousness of performance tests.
Regression testing:
The re-execution of tests on a modified build to ensure that bug fixes work
and to check for possible side effects.
Purpose:
The purpose of regression testing is to confirm that a recent program
or code change has not adversely affected existing features.
Needs:
• Change in requirements and code
• New features is added to the software
• Defect fixing
• Performance issue fix
Types:
1. Normal/regular regression tests: to verify that the build has not broken any
other parts of the application.
2. Final regression tests: to validate the final build, which has remained unchanged
for a period of time.
When?
• Whenever any code is changed
• When a bug/defect is reopened
• When a particular number of defects are found repeatedly
• When fixed defects create additional defects/bugs
• When one functionality depends on another functionality
Regression steps:

1. Doing a smoke or sanity test
2. Understanding the criteria for choosing the test cases
3. Categorizing the test cases
4. Methods for choosing the test cases
5. Resetting the test cases for the test execution
6. Confirming the outputs of the regression cycle

Step 1:
Smoke testing:
• It is performed to ascertain whether the critical functionalities of the program are
working or not.
• It is used to verify the stability of the system.
Sanity testing:
• It is used to check that new functionality works and bugs have been fixed.
• It verifies the rationality of the system in order to proceed with more rigorous
testing.
Step 2: understand the criteria to choose the test cases
2 methods to select the test cases
1. A constant collection of regression tests are employed to run for each
build
2. Test cases are dynamically selected by making judicious choices of the
test cases.
Step 3: categorizing the test cases:
Test cases are classified into many types
Eg:
1. Functionality test cases
2. UI test cases
3. Performance test cases
4. Usability test cases
5. DB test cases
6. Unit test cases
The above test cases are prioritized based on 2 factors:
1. Importance
2. Customer usage
A popular 3-level priority categorization scheme is used for regression testing:
Priority 0: allocated to all tests that must be executed in any case
Priority 1: allocated to tests which are executed only when time
permits
Priority 2: allocated to tests which, even if not executed, will not cause big
upsets
Step 4: Methods to choose test cases
Once test cases are prioritized, test cases can be selected.
There could be several approaches to regression testing, which need to be
decided on a case-by-case basis.
Case 1: if the criticality and impact of the defect are low, it is enough to select a few
test cases from the test case database; these can fall under any priority.
Case 2: if the criticality and impact of the bug fixes are medium, we need to
execute all priority-0 and priority-1 test cases.
Case 3: if the criticality and impact of the bug fixes are high, we need to execute
all priority-0 and 1 test cases and carefully selected priority-2 test cases.
Step 5: Resetting the test cases for execution
• Resetting of the test cases needs to be done with the following considerations:
• When there is a major change in the product
• When results differ compared to previous stages/cycles
• Whenever existing application functionality is removed, the related test
cases can be removed
• When there is a change in the build procedure which affects the product
• In large release cycles where some tests were not executed for a long time
• When you are in the final regression test cycle with a few selected test cases
Step 6: Confirming the results of regression testing
• Regression testing uses only one build for testing
• It is expected that 100% of test cases pass using the same build
• In situations where the pass percentage is not 100, the test manager checks the
previous results and concludes accordingly.
INTERNATIONALIZATION TESTING (I18N TESTING)
• It is non-functional testing.
• It is the process of designing a software application so that it
can be adapted to various languages and regions without code
changes.
• It is also called "globalization" testing.

Purpose:
• It is used to check whether the code can handle all international
support without breaking functionality, which might cause data
loss or data integrity issues.
[Diagram: the testing phases 1. Enabling testing, 2. Locale testing, 3. I18N testing, 4. Fake language testing, 5. Language testing and 6. Localization testing run alongside the development activities: enable the code → message consolidation → message translation → include messages into products, leading first to the release of the English version and then to the release of the international version.]
1. Enabling testing:
• It is a white box testing methodology
• It ensures that the source code used in the software allows I18N
• An activity of code review or code inspection, mixed with some unit test
cases, with the objective of catching I18N defects, is called
"enabling testing"
• It uses a checklist:
• Check the code for API/function calls
• Check the code for hard-coded date/currency formats and ASCII character
constants
• Check the code to see that no computations are done on date
variables
• Check the dialogue boxes and screens
• Ensure all messages are consolidated
• Ensure no string operations are performed in code
2. Locale testing:
• Changing to different locales using the system settings or environment
variables and testing the software functionality and the number, date, time and
currency formats.
• It uses a checklist:
• Hot keys, function keys and help screens are tested across application
locales
• Date/time formats are in line with the defined locale and language
• Time zone information and daylight saving time calculations are correct
• Currency is in line with the selected locale and language
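Locale-format checks like those in the checklist can be sketched against a toy locale table. This is purely illustrative (real code would use the platform's locale facilities, and the German currency entry here ignores the comma-as-decimal convention):

```python
# Locale testing sketch: verify that date and currency formatting follow the
# selected locale (toy locale table, invented for illustration).

LOCALES = {
    "en_US": {"date": "{m:02d}/{d:02d}/{y}", "currency": "${amt:,.2f}"},
    "de_DE": {"date": "{d:02d}.{m:02d}.{y}", "currency": "{amt:,.2f} EUR"},
}

def format_date(loc, y, m, d):
    return LOCALES[loc]["date"].format(y=y, m=m, d=d)

def format_currency(loc, amt):
    return LOCALES[loc]["currency"].format(amt=amt)
```

A locale test would run the same functional checks with each locale selected and assert that only the formatting, never the behaviour, changes.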
3. I18N testing and validation
It is different from enabling testing.
The objectives of I18N testing are:
Software is tested for functionality with ASCII, DBCS and European characters
Software handles string operations, sorting and sequencing correctly
Software display handles non-ASCII characters correctly in GUIs and
menus
Checklist:
• Functionality in all languages and locales is the same
• Sorting/sequencing of items follows the conventions of the language and
locale
• Input to the software can contain non-ASCII or special characters
• Non-ASCII characters are displayed as entered
• Cut/copy and paste of non-ASCII characters retain their style after pasting
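The "displayed as entered" and round-trip items in the checklist boil down to tests like the following sketch (the store/load pair is a hypothetical stand-in for any persistence layer):

```python
# I18N validation sketch: non-ASCII input must survive a store/load round
# trip without corruption.

def store(text: str) -> bytes:
    return text.encode("utf-8")   # persist as UTF-8, never assume ASCII

def load(raw: bytes) -> str:
    return raw.decode("utf-8")
```

The same round-trip assertion would be repeated for DBCS, European and other non-ASCII samples, exactly as the objectives above require.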
4. Fake language testing
It helps software translators catch translation and localization issues.
It helps to identify issues proactively before the product is localized.
Checklist:
Ensure software functionality is tested for single-byte and double-byte character sets
Ensure all strings are displayed properly
Ensure the screen width, size, pop-ups and dialogue boxes render correctly
5. Language testing
• It is also called language compatibility testing
• It ensures that the functionality of the software is not broken under other
language settings and is still compatible
• Check list are
• Check the functionality on English, one non English and double byte
language platform combination
• Check the performance of key functionality on different language
platforms
6. Localization testing
• It performed to verify the quality of a software's localization for a
particular target culture/locale and is executed only on localized version
of the product
• It helps to check resources attributes
• To find typographical errors
• To verify the system adherence to input and display environment
standards
AD-HOC TESTING
It is performed without planning and documentation.
The tester tries to break the system by randomly trying the system's functionality.
Issues with planned tests that motivate ad hoc testing:
Lack of clarity
Lack of skills for doing tests
Lack of time for test design
Ad hoc testing vs planned testing:

Planned testing            | Ad hoc testing
---------------------------|---------------------------------
Requirement analysis       | Analysis of existing test cases
Test planning              | Test planning
Test case design           | Test execution
Test execution             | Test report generation
Test support generation    | Test case design
Forms of Ad hoc testing
• Buddy testing
• Pair testing
• Exploratory testing
• Iterative testing
• Agile/Extreme testing
• Defect seeding
Buddy testing:
• Two buddies, one from the development team and one from the test team,
mutually work on identifying defects in the same module.
• This kind of testing usually happens after completing unit
testing.
• It helps the tester develop better test cases, while the development team
can make design changes early.
• It is adopted in unplanned testing situations.
Pair testing:
• Two testers are assigned the same modules and they share ideas
and work on the same systems to find defects. One tester executes
the tests while another tester records the notes on their findings.
• Its done by two testers working simultaneously on the same
machine to find defects in the product
Exploratory testing:
• Exploratory testing is a testing approach that allows you to
apply your ability and skill as a tester in a powerful way.
• Testers have to understand the application first by exploring it,
and based on this understanding they come up with the test
scenarios.
• After that, actual testing of the application starts.
• Testers follow this approach when knowledge of the application is lacking.
• It can be done in any phase.
• It can be used to test software that is untested, unknown or
unstable.
• Testers execute the tests based on past experience with similar
products, similar domains or products in a technology.
Iterative testing:
• This model is used where requirements keep coming and the
product is developed iteratively for each increment.
• It requires repetitive testing
• Majority of these tests are executed manually
• A defect found in one iteration may be fixed in same build or
carried forward, based on the priority decided by customer
• Each iteration, unit test cases are added, edited or deleted to keep
up with revised requirement for the current phase
Agile/Extreme testing:
• Testing that follows the principles of the agile
manifesto.
• It is done by QA team members.
• This testing starts with a meeting called the "stand-up meeting".
• It takes processes to the extreme to ensure that customer requirements
are met.
 During the meeting, any clarifications or concerns are brought up by
the team, discussed and resolved.
1. Develop and understand the user story
2. Prepare acceptance tests
3. Test plan and estimation
4. Code
5. Test
6. Refactor
7. Automate
8. Acceptance and delivery


6. Monkey testing:
• It is also called "chimpanzee testing"
• It covers/checks the main activities of the build during testing
7. Defect seeding:
• Error seeding is also known as "bebugging". It acts as a reliability
measure for the release of the product.
• Usually one group of members injects the defects and another group
tests to remove them.
Purpose
• While finding the known seeded defects, the unseeded defects may
also be uncovered
• It acts as guide to check the efficiency of inspection or testing
process
• It serves as a confidence measure to know the % of defect removed
rates
• When tests are automated defects can be seeded any time
ALPHA BETA TESTING:

Alpha Testing vs Beta Testing:
• Alpha testing is performed by testers who are usually internal employees of the
organization; beta testing is performed by clients or end users who are not
employees of the organization.
• Reliability and security testing are not performed in depth during alpha testing;
reliability, security and robustness are checked during beta testing.
• Alpha testing involves both white box and black box techniques; beta testing
typically uses black box testing.
• Alpha testing requires a lab or testing environment; in beta testing the software
is made available to the public, i.e. a real-time environment.
• A long execution cycle may be required for alpha testing; only a few weeks of
execution are required for beta testing.
• Alpha testing ensures the quality of the product before moving to beta testing;
beta testing ensures that the product is ready for real-time users.
TESTING OBJECT ORIENTED SYSTEMS:
Testing an object oriented system must deal with tightly integrated
data and algorithms.
It covers the following topics:
1. Unit testing a class: the smallest testable unit is the encapsulated
class or object; operations cannot be tested in isolation from one
another.
2. Putting classes to work together: focuses on groups of classes that
collaborate or communicate in some manner. The object oriented
approach lacks the hierarchical control structure assumed by
conventional top-down and bottom-up integration.
3. System testing: different classes may be combined together, and
this may lead to new defects.
4. Regression testing: changes to one component may have side
effects on another.
5. Tools for testing object oriented systems: use cases, class
diagrams, sequence diagrams, activity diagrams, state diagrams.
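Point 1 above can be illustrated with a short `unittest` example. The `Stack` class here is a hypothetical class under test, not from the text; note that its operations are exercised together, since they only make sense against the object's shared state:

```python
import unittest

# Hypothetical class under test: the smallest testable unit in an
# object-oriented system is the encapsulated class, not a lone function.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def size(self):
        return len(self._items)

class TestStack(unittest.TestCase):
    # push/pop/size are tested in combination, because each operation's
    # effect is only observable through the others.
    def test_push_then_pop(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)
        self.assertEqual(s.size(), 1)

    def test_pop_empty_raises(self):
        self.assertRaises(IndexError, Stack().pop)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestStack)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```
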
Use cases
• Use cases represent various tasks that a user will perform when
interacting with the system.
Class diagrams
• A class diagram represents different entities and the relationships
among those entities.
• A few parts of class diagrams are:
– Boxes
– Association
– Generalization
Sequence diagrams
• A sequence diagram is an interaction diagram that shows how
processes operate with one another and in what order.
Activity diagrams
• Activity diagrams are graphical representations of workflows of
stepwise activities and actions with support for choice, iteration
and concurrency.
State diagrams
• A state diagram describes the behaviour of a single object in response
to a series of events in a system.
CONFIGURATION TESTING
• It is also called hardware compatibility testing.
• Testers validate whether the application build runs on hardware
devices of different technologies or not.
• Eg: printers, LAN cards of different technologies, different LAN
topologies, etc.
• Different configuration possibilities are:
• PC, components, peripherals, interfaces, options, memory and
device drivers.
Sizing up the job:
• There is a huge number of display cards, sound cards and modems
available in the market. It is not possible to test every combination,
because the total number of combinations may run into billions.
This sizing-up problem is handled by:
1. Equivalence partitioning
2. Boundary value analysis
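A quick calculation shows why equivalence partitioning is needed here. The inventory sizes and class names below are made-up illustrations, not real figures:

```python
import itertools

# Hypothetical hardware inventory: testing every combination explodes.
printers, video_cards, sound_cards, drivers = 60, 40, 30, 50
exhaustive = printers * video_cards * sound_cards * drivers
print(exhaustive)  # 3,600,000 combinations -- clearly untestable

# Equivalence partitioning: group hardware that should behave the same
# (e.g. by chipset or driver family) and test one representative per class.
printer_classes = ["laser-PCL", "inkjet", "thermal"]
video_classes = ["integrated", "discrete"]
sound_classes = ["onboard", "USB"]
configs = list(itertools.product(printer_classes, video_classes, sound_classes))
print(len(configs))  # 12 representative configurations
```

Each of the 12 configurations stands in for thousands of concrete hardware combinations that are assumed to behave equivalently.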
APPROACHING THE TASK:
1. Decide the types of hardware you will need.
2. Decide what hardware brands, models and device drivers are
available.
3. Decide which hardware features, modes and options are possible.
4. Pare down the identified hardware configurations to a manageable
set.
5. Identify your software's unique features that work with the
hardware configurations.
6. Design the test cases to run on each configuration.
7. Execute the tests on each configuration.
8. Rerun the tests until the results satisfy your team.
Obtaining the hardware:
• Every tester on the team should have a different hardware setup.
• Create and maintain good relationships with manufacturers.
• Collect all the required hardware already available in your team, and
purchase the remaining hardware cheaply.
Configuration testing of other hardware:
• It is done in 3 steps:
• Create equivalence class partitions of the hardware based on inputs
from people who work with the equipment, your project manager
or your sales people.
• Develop test cases, collect the selected hardware and run the
tests.
• Follow the configuration testing approaches.
TESTING THE DOCUMENTATION
• It is a non-functional testing activity.
• It involves testing of documented artifacts that are usually developed
before or during the testing of software.
• It helps to estimate the testing effort required, test coverage, requirement
tracking/tracing, etc.
Why?
• It provides step-by-step process descriptions.
• Documented software is more easily tested.
• Improved usability
• Improved reliability
• Lower support costs
Overview:
• Documentation is now a major part of a software system.
• It might exceed the amount of source code.
• It must be integrated into the software.
• The tester has to cover both the code and the documentation.
Classes of software documentation:
1. Packing text and graphics
2. Marketing material, ads and other
3. Warranty/ registration
4. End user license agreement
5. Users manual
6. Online help
Loosely coupled to the code:
• Apply techniques used in specification testing and software inspection.
• Think of it as technical editing or proofreading.
Eg: a user manual
Tightly coupled to the code:
• Apply techniques such as black box and white box testing.
Eg: documents that are an integral part of the software, such as an online
training system
Documentation testing checklist
1. General areas (audience, terminology and content)
2. Terminology
– Is it suitable for the audience?
– Are terms used consistently?
– Are abbreviations and acronyms defined?
3. Content and subject matter
– Are appropriate subjects covered?
– Are any subjects missing?
– Is the depth of coverage proper?
4. Just the facts
– Is all information technically correct?
– Are the table of contents, index and chapter references correct?
– Are website URLs and phone numbers correct?
5. Step by step
– Are any steps missing?
6. Figures and screen captures
– Are they accurate and precise?
– Are they from the latest version of the software?
7. Samples and examples
8. Spelling and grammar
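Parts of this checklist can be partially automated. As one example for item 4 ("correct website URLs"), the sketch below pulls every URL out of a documentation file so a tester can verify each one; the regex, sample text and cleanup rule are illustrative assumptions:

```python
import re

# Checklist item 4: extract every URL from documentation text so each
# can be verified by hand or fed to an automated HTTP checker.
URL_RE = re.compile(r"https?://[^\s)>\"']+")

doc_text = """See https://example.com/manual for setup.
Support: https://example.com/help."""

# Strip trailing sentence punctuation that the regex swallows.
urls = [u.rstrip(".,") for u in URL_RE.findall(doc_text)]
print(urls)  # ['https://example.com/manual', 'https://example.com/help']
```

The same extract-then-verify pattern applies to phone numbers, chapter references and index entries.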
WEBSITE TESTING
• Website testing is a software testing technique exclusively adopted to test
applications that are hosted on the web, in which the application interfaces
and other functionalities are tested.
• Website testing is done for 3-tier applications:
– Browser: monitors (presents) the data
– Web server: manipulates the data
– DB server: stores the data
Concerns:
• Browser compatibility
• Functional correctness
• Integration
• Usability
• Security
• Performance
• Verification of code
Web page fundamentals:
• A web page contains:
• Text of different sizes
• Fonts and colors
• Graphics and photos
• Hyperlinked text and images
• Rotating ads
• Text fields
Website creation is also complex, because of:
• Customizable layouts
• Customized contents
• Dynamic changes of text
• Dynamic layout
• Dynamic changes of drop-down selection boxes
Different testing types:
• Black box testing: used to test text, hyperlinks, forms, graphics, etc.
• Gray box testing: a mixture of white and black box testing; it helps to test
the HTML code.
• White box testing: tests the internal parts of the source code; it needs
knowledge of the website's system structure.
• Configuration/compatibility testing: must cover the possible hardware
and software configurations.
• Usability testing: it verifies the appearance, look and feel of the website.
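A typical black-box check on a web page is verifying its hyperlinks. The sketch below is a minimal link extractor using Python's standard `html.parser`; the sample page is made up, and a real checker would additionally issue HTTP requests to confirm each link resolves:

```python
from html.parser import HTMLParser

# Minimal black-box link check: collect every hyperlink on a page and
# flag empty or placeholder targets.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []   # every href found
        self.broken = []  # suspicious targets ("" or "#")

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            self.links.append(href)
            if href in ("", "#"):
                self.broken.append(href)

page = '<a href="https://example.com">ok</a> <a href="#">placeholder</a>'
parser = LinkExtractor()
parser.feed(page)
print(len(parser.links), len(parser.broken))  # 2 links, 1 suspicious
```
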
COMPATIBILITY TESTING
• It is also called portability testing. During this test, test engineers
validate whether the application build runs on the customer's expected
platforms or not.
• Software compatibility testing covers the following requirements:
– OS, web browser, operating environment
– Compatibility standards and guidelines
– The types of data used to share information with other platforms and software
Backward and forward compatibility:
• Backward compatible: the build will work with previous versions of the
software/OS (eg: VB on UNIX).
• Forward compatible: the build will work with future versions of the
software/OS (eg: Oracle on Windows 98 and XP).
• Example: a file Mydata.txt can be opened by Notepad running on MS-DOS,
Windows 2000, XP, 7, 8 or 10; the plain-text format is both backward and
forward compatible.
Impact of testing multiple versions:
• Programmers have made numerous bug fixes and performance
improvements and have added many new features to the code.
• There could be tens or hundreds of thousands of existing programs
written for current versions of the OS.
Standards and guidelines:
1. High-level standards and guidelines
These provide a guide for the general operation of your product (its look
and feel, its supported features).
2. Low-level standards and guidelines
These provide the nitty-gritty details of the product.
Data sharing and compatibility
• Sharing of data among applications is what really gives software its
power.
• A well-tuned program that supports and allows users to easily transfer
data to and from other software is a great, compatible product.
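Data-sharing compatibility is often checked with a round-trip test: write data in a common interchange format and read it back as a second application would. The sketch below uses CSV via Python's standard library; the records are illustrative:

```python
import csv
import io

# Round-trip check: one "application" writes CSV, another reads it back.
rows = [{"name": "Ada", "id": "1"}, {"name": "Linus", "id": "2"}]

buf = io.StringIO()  # stands in for a file handed to the other program
writer = csv.DictWriter(buf, fieldnames=["name", "id"])
writer.writeheader()
writer.writerows(rows)

buf.seek(0)
round_tripped = list(csv.DictReader(buf))
assert round_tripped == rows  # the receiving program sees identical data
```

The same pattern applies to any shared format (XML, JSON, clipboard data): export, re-import, and compare against the original.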