
SOFTWARE ENGINEERING 

UNIT-IV

Prepared by
S.Shashikanth
Asst. Professor (C), IT Dept
JNTUH College of Engineering Jagtial
UNIT-IV SYLLABUS
• Testing Strategies : A strategic approach to
software testing, test strategies for conventional
software, Black-Box and White-Box testing,
Validation testing, System testing, the art of
Debugging.
• Product metrics : Software Quality, Metrics for
Analysis Model, Metrics for Design Model, Metrics
for source code, Metrics for testing, Metrics for
maintenance.
• What is Testing?

• What is Debugging?
A Strategic Approach to Software Testing

• Software Testing
 One of the important phases of software development
 Testing is the process of executing a program with the intention of finding errors
 Typically involves about 40% of total project cost
• A software testing strategy provides guidelines that outline the following:
• What steps to follow in order to carry out a test?
• When to plan and work out the steps?
• How much time, input & resources must be utilized?
A Strategic Approach to Software Testing

• Testing Strategy
 A road map that incorporates test planning, test case design, test execution and resultant data collection and evaluation
 Verification refers to the set of activities that ensures that the software correctly implements a specific function
 Validation refers to a different set of activities that ensures that the software is traceable to the customer requirements
 Verification & validation encompass a wide array of Software Quality Assurance activities
A Strategic Approach to Software Testing

• Perform Formal Technical Reviews (FTR) to uncover errors during software development
• Begin testing at the component level and move outward to integration of the entire component-based system
• Adopt testing techniques relevant to each stage of testing
A Strategic Approach to Software Testing
• Testing can be done by the software developer or by an independent testing group
• Testing and debugging are different activities. Debugging follows testing
• Low-level tests verify small code segments
• High-level tests validate major system functions against customer requirements
• Advantages of the s/w developer:
• More knowledge of the software
• They are not paid additionally for testing
• Disadvantages of the s/w developer:
• They are less effective at testing their own work
• They may overlook errors in their own code, which may later be discovered by users
• Advantages of an Independent Testing Group:
• Trained & motivated for testing
• Can comment on s/w reliability before it is shipped to the user
• Disadvantages of an Independent Testing Group:
• Must be paid additionally for testing
• May lead to project delay (due to repetition of work)
Testing Strategies for Conventional Software

• Spiral Representation for Conventional Software

• S/w development begins at system engineering, which defines the role of s/w, and ends at coding
• Testing begins at the coding phase and ends at the system engineering phase
• At each level of testing a different testing method is adopted
• Those methods are Unit Testing, Integration Testing, Validation Testing and System Testing
1) Unit Testing: After developing the code, each unit or component in it is tested individually.
- It ensures that maximum errors are detected & the entire code is tested.

2) Integration Testing: The units in the developed code are combined or integrated to result in a single system.
- This test ensures that the integration does not introduce any more bugs.

3) Validation Testing: Requirements gathered in the requirements phase are validated against the code developed in the coding phase.

4) System Testing: The system engineer combines the validated s/w with other system components like databases and hardware and performs system testing.
- It ensures that all system components work together.

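Unit testing, the first level above, can be sketched with Python's `unittest` module. The function under test here, `apply_discount`, is a hypothetical example invented for illustration, not something from the slides:

```python
import unittest

# Hypothetical unit under test (invented for illustration): a discount calculator.
def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid input."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return price * (1 - percent / 100)

# Unit testing: the component is exercised individually, before integration.
class ApplyDiscountTest(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(apply_discount(200, 50), 100)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50, 0), 50)

    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            apply_discount(-1, 10)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests passed:", result.wasSuccessful())
```

Integration testing would then combine such tested units and re-test the assembled system.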
Criteria for Completion of Software Testing
• Nobody is absolutely certain that software will not fail
• Based on statistical modeling and software reliability models, testing can stop when we can say with 95 percent confidence that the probability of 1000 CPU hours of failure-free operation is at least 0.995
Software Testing
• Two major categories of software testing
* Black-box testing
1) Equivalence partitioning
2) Boundary Value Analysis
3) Graph-based testing method
* White-box testing
Black Box Testing

• Black-box testing
• Treats the system as a black box whose behavior can be determined by studying its inputs and related outputs
• Not concerned with the internal structure of the program
• It focuses on the functional requirements of the software
• It is also called functional testing
Black Box Testing
• i.e., it enables the s/w engineer to derive a set of input conditions that fully exercise all the functional requirements for that program.
• Concerned with functionality, not implementation
1) Equivalence partitioning
2) Boundary Value Analysis
3) Graph-based testing method
Black Box Testing

Black-box testing methods: Equivalence partitioning, Boundary Value Analysis, Graph-based testing method
1. Equivalence Partitioning
• Divides all possible inputs into classes such that there is a finite number of equivalence classes
• Equivalence class
-- A set of objects that can be linked by a relationship
• Reduces the cost of testing
1. Equivalence Partitioning
Example
• Input consists of values 1 to 10
• Then the classes are n<1, 1<=n<=10, n>10
• Choose one valid class, with a value within the allowed range, and two invalid classes, where values are greater than the maximum value and smaller than the minimum value.
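The example above can be sketched directly in code. `accept` is a hypothetical unit under test that allows integers 1 to 10; one representative value is chosen per equivalence class:

```python
# Hypothetical unit under test: accepts integers in the range 1..10.
def accept(n):
    return 1 <= n <= 10

# One representative per equivalence class: n < 1, 1 <= n <= 10, n > 10.
representatives = [
    (0, False),    # invalid class: below the minimum
    (5, True),     # valid class: within the allowed range
    (11, False),   # invalid class: above the maximum
]

for value, expected in representatives:
    assert accept(value) == expected
print("one representative per class exercised all three classes")
```

Three test cases cover all possible inputs, which is how equivalence partitioning reduces the cost of testing.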
2. Boundary Value Analysis
• Select inputs from the equivalence classes such that the input lies at the edge of the equivalence classes
• The set of data lies on the edge or boundary of a class of input data, or generates data that lies at the boundary of a class of output data
2. Boundary Value Analysis

Example
• If 0.0 <= x <= 1.0
• Then test cases (0.0, 1.0) for valid input and (-0.1, 1.1) for invalid input
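As a minimal sketch of this example, with `in_range` as a hypothetical unit under test:

```python
# Hypothetical unit under test: accepts x with 0.0 <= x <= 1.0.
def in_range(x):
    return 0.0 <= x <= 1.0

# Boundary value analysis: test exactly at the edges, and just outside them.
assert in_range(0.0) and in_range(1.0)           # valid boundary inputs
assert not in_range(-0.1) and not in_range(1.1)  # invalid inputs just beyond the edges
print("boundary cases behave as expected")
```

Off-by-one and wrong-comparison errors (e.g. `<` written instead of `<=`) cluster at these edges, which is why boundary values make stronger test cases than arbitrary in-range values.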
3. Graph-Based Testing
• It is also called state-based testing
• Draw a graph of objects and relations
• Devise test cases to cover the graph such that each object and its relationships are exercised
3. Graph-Based Testing
• A graph is a collection of nodes that represent objects
• The relationships between these objects are represented by links
• In graphical representations nodes are represented by circles
• Nodes are connected by the following kinds of links
3. Graph-Based Testing
• Directed link: Describes a relationship between the nodes that is unidirectional
• Undirected (bidirectional) link: The relationship between the nodes exists in both directions
• Parallel links: Used to describe more than one type of relationship between the nodes
3. Graph-Based Testing

Fig. A: Three objects (#1, #2, #3) connected by a directed link, an undirected link and parallel links; nodes and links may carry weights.
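The derivation of test cases from such a graph can be sketched as follows. The objects and link types mirror Fig. A, but the graph data itself is invented for illustration:

```python
# Illustrative only: a toy object graph like Fig. A, with one test case
# derived per link so every object and relationship is exercised.
links = [
    ("object #1", "object #2", "directed link"),
    ("object #2", "object #3", "undirected link"),
    ("object #3", "object #1", "parallel link"),
    ("object #3", "object #1", "parallel link"),
]

def derive_test_cases(links):
    """One test case per link: exercise that relationship between objects."""
    return [
        f"TC{i}: traverse {kind} from {src} to {dst}"
        for i, (src, dst, kind) in enumerate(links, start=1)
    ]

cases = derive_test_cases(links)
for case in cases:
    print(case)

# Sanity check: every node appears in at least one derived test case.
nodes = {n for src, dst, _ in links for n in (src, dst)}
assert all(any(n in c for c in cases) for n in nodes)
```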
Advantages & Disadvantages of Black-Box Testing

• Advantages
• More effective compared to white-box testing
• Test cases are generated from the user's point of view
• Knowledge of the programming language is not needed to perform a test accurately
• Disadvantages
• Takes more time to test each & every input
• Test cases may be repeated
• Many program paths may remain untested
White-Box Testing
• Also called glass-box testing or open-box testing
• Involves knowing the internal workings of a program
• Guarantees that all independent paths will be exercised at least once
• Exercises all logical decisions on their true and false sides
White-Box Testing

White-box testing methods: Statement Testing, Decision Testing, Condition Testing
White Box testing
• The different methods to perform white box
testing are
• 1) Statement testing
• 2) Decision testing
• 3) Condition testing
• 1) Statement testing: Test values are provided
to check whether each statement in the module
is executed at least once.
White-Box Testing
• 2) Decision testing: Tests are performed to check whether each branch of a decision is executed at least once
• A decision requires two test values (true and false) in standard Boolean decision testing
• But in the case of nested decisions the number of Boolean decision values can be greater than two
• 3) Condition testing: Tests are performed to check whether each condition in a decision takes all possible outcomes at least once
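The three methods can be contrasted on one small hypothetical function with a compound decision:

```python
# Hypothetical unit under test: a compound decision with two atomic conditions.
def in_window(x, lo, hi):
    if lo <= x and x <= hi:
        return "inside"
    return "outside"

# Statement testing: every statement executes at least once; here that
# requires inputs that reach both return statements.
assert in_window(5, 0, 10) == "inside"
assert in_window(11, 0, 10) == "outside"

# Decision (branch) testing: the decision evaluates both True and False.
assert in_window(5, 0, 10) == "inside"    # decision True
assert in_window(-1, 0, 10) == "outside"  # decision False

# Condition testing: each atomic condition takes both outcomes at least once.
assert in_window(-1, 0, 10) == "outside"  # lo <= x is False
assert in_window(11, 0, 10) == "outside"  # x <= hi is False
print("statement, decision and condition cases all pass")
```

Note that condition testing needed one more case than decision testing: a single "decision False" input cannot show that *each* condition in the compound decision can independently fail.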
Advantages & Disadvantages of White-Box Testing

• Advantages
• Testing is effective because of internal knowledge of the program code
• The code is optimized
• Unnecessary lines of code, which can result in hidden errors, are removed
• Disadvantages
• A very expensive form of testing
Validation Testing
• At the end of s/w development the product is usually delivered to the customer.
• Validation succeeds when the software functions in a manner that can be reasonably expected by the customer.
• In this we discuss the following things:
1) Validation Test Criteria
2) Configuration Review
3) Alpha and Beta Testing
• 1) Validation Test Criteria
• The testing is carried out using two things
• - Test plan: the classes of tests are included in the test plan
• - Test procedure: the classes of test cases are included in the test procedure
• The test plan & test procedure are designed keeping performance, requirements, correct documentation, usability and maintainability in focus
• 2)Configuration Review
• Also known as “S/w Audit”
• It is the most essential element of s/w
validation
• It ensures that the s/w configuration elements
are appropriately developed and recorded
• 3) Alpha and Beta Testing
• Acceptance testing: At the end of s/w development the product is usually delivered to the customer.
• The user may not understand some written instructions, or may not be comfortable with the outputs even though those outputs satisfy the developers.
• This form of testing by the customer is called acceptance testing
• Acceptance testing proves to be an efficient form of testing the developed product for a single user
• If there are multiple users then we need to use other forms of testing:
• Alpha testing
• Beta testing
• Alpha testing
• It is done at the developer's site
• During this testing several users are called to the developer's site, where they explain their problems
• Under the guidance of the software engineer the users use the product and their problems are rectified
• Beta testing
• It is done at the customer's site
• In the absence of the s/w engineer the user tests the product and prepares a report
• The report contains all the problems faced by the user while using the product
• After analyzing the report the s/w engineer makes suitable modifications to the s/w and supplies it back to the customer
System Testing
• In a computer-based system there is more than s/w, i.e., h/w, OS etc. Thus once the s/w is developed it must be integrated with these other elements and tested.
• This type of testing is called system testing.
• The system engineer combines the validated s/w with other system components like databases and hardware and performs system testing
- It ensures that all system components work together
System Testing
• Its primary purpose is to test the complete software. Types of system testing:
1) Recovery Testing
2) Security Testing
3) Stress Testing
4) Performance Testing
System Testing
• 1) Recovery Testing
• If the system is affected by faults or failures, it should resume processing within a stipulated period of time.
• Moreover, the system should be strong enough to overcome the failures & function normally.
• To know whether a system is fault-tolerant, try to make the system fail in all aspects & measure its recovery/resumption capabilities.
• This type of system testing is called recovery testing.
System Testing

• 2) Security Testing
• Computer systems are prone to security attacks.
• Penetration into these systems can come from hackers or from employees of the organization
• In order to carry out security testing the tester acts as the penetrator and tries all possibilities of breaking the security.
System Testing
• 3) Stress Testing
• When the developed s/w is given 20 interrupts/second where the average rate is 3 interrupts/second, with increasing data rates, requests for maximum memory, huge amounts of data to be searched on disk, etc., the system is being stress tested.
• In stress testing the aim of the tester is to design test cases that can overwhelm the program.
• Stress testing is used to find the maximum load a system can handle before it fails
System Testing
• 4) Performance Testing
• It ensures that the s/w meets the performance requirements
• It is performed during the entire process of testing, from the lowest level to the highest level (unit testing to system testing)
The Art of Debugging
• Once testing is successfully carried out & bugs are uncovered, the next step is the removal of these bugs, which is named "debugging"
• Debugging Strategies
1) Brute Force Method
2) Backtracking
3) Cause Elimination
4) Automated Debugging
The Art of Debugging
• Brute force
-- Most common and least efficient
-- Applied when all else fails
-- Memory dumps are taken
-- Tries to find the cause in the resulting load of information
• Backtracking
-- Common debugging approach
-- Useful for small programs
-- Beginning at the site where the symptom has been uncovered, the source code is traced backward until the site of the cause is found.
The Art of Debugging
• Cause Elimination
-- Based on the concept of binary partitioning
-- A list of all possible causes is developed and tests are conducted to eliminate each
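Binary partitioning can be sketched as follows. This is an invented scenario (a list of changes with a simulated probe), not a specific tool, but it shows how halving the list of candidate causes on each test finds the defect far faster than checking candidates one by one:

```python
# Illustrative only: binary partitioning over a list of candidate causes.
changes = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8"]
FIRST_BAD = 5  # pretend the change at index 5 introduced the defect

def fails_with(k):
    """Simulated probe: does the program fail once changes[0..k] are applied?"""
    return k >= FIRST_BAD

def find_cause(n_changes):
    """Halve the list of possible causes on each probe (binary partitioning)."""
    lo, hi = 0, n_changes - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if fails_with(mid):
            hi = mid       # cause lies in the first half (up to mid)
        else:
            lo = mid + 1   # cause lies in the second half
    return lo

assert changes[find_cause(len(changes))] == "c6"
print("first failing change:", changes[find_cause(len(changes))])
```

Eight candidates need only three probes instead of up to eight, and the gap widens as the candidate list grows.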
The Art of Debugging
The Debugging Process

Fig: Test cases are executed and the results are evaluated; debugging of suspected causes leads to identified causes and corrections; additional tests confirm the fix, and regression tests ensure the corrections break nothing else.
Software Quality
• Conformance to explicitly stated functional and
performance requirements, explicitly documented
development standards, and implicit characteristics that
are expected of all professionally developed software.
• Factors that affect software quality can be categorized in
two broad groups:
1. Factors that can be directly measured (e.g. defects
uncovered during testing)
2. Factors that can be measured only indirectly (e.g.
usability or maintainability)
Software Quality- McCall’s quality factors:

• McCall’s quality factors: This model classifies all software requirements into 11
software quality factors. The 11 factors are grouped into three categories – product
operation, product revision, and product transition factors.
Product Operation
• According to McCall’s model, product operation category includes five software quality factors, which deal
with the requirements that directly affect the daily operation of the software. They are as follows −
• Correctness
• These requirements deal with the correctness of the output of the software system. They include −
• The output mission
• The required accuracy of output, which can be negatively affected by inaccurate data or inaccurate calculations.
• Reliability
• Reliability requirements deal with service failure.
• Efficiency
• It deals with the hardware resources needed to perform the different functions of the software system. It
includes processing capabilities (given in MHz), its storage capacity (given in MB or GB) and the data
communication capability (given in MBPS or GBPS).
• Integrity
• This factor deals with software system security, that is, preventing access by unauthorized persons and distinguishing between the groups of people to be given read as well as write permissions.
• Usability
• Usability requirements deal with the staff resources needed to train a new employee and to operate the
software system.
Product Revision Quality Factors
• According to McCall’s model, three software quality factors are
included in the product revision category. These factors are as follows −
• Maintainability
• This factor considers the efforts that will be needed by users and
maintenance personnel to identify the reasons for software failures, to
correct the failures, and to verify the success of the corrections.
• Flexibility
• This factor deals with the capabilities and efforts required to support
adaptive maintenance activities of the software.
• Testability
• Testability requirements deal with the testing of the software system as
well as with its operation.
Product Transition Software Quality Factor

• According to McCall’s model, three software quality factors are included in the
product transition category that deals with the adaptation of software to other
environments and its interaction with other software systems. These factors are as
follows −
• Portability
• Portability requirements address the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.
• Reusability
• This factor deals with the use of software modules originally designed for one project
in a new software project currently being developed.
• Interoperability
• Interoperability requirements focus on creating interfaces with other software systems
or with other equipment firmware.
Software Quality-ISO 9126 Quality Factors

• The ISO 9126-1 software quality model identifies 6 main quality characteristics, namely:
• Functionality
Functionality is the essential purpose of any product or service. It reflects working nature.
• Reliability
• Reliability requirements deal with service failure.
• Usability
• Usability requirements deal with the staff resources needed to train a new employee and to operate the software
system.
• Efficiency
• It deals with the hardware resources needed to perform the different functions of the software system. It
includes processing capabilities (given in MHz), its storage capacity (given in MB or GB) and the data
communication capability (given in MBPS or GBPS).
• Maintainability
• This factor considers the efforts that will be needed by users and maintenance personnel to identify the reasons
for software failures, to correct the failures, and to verify the success of the corrections.
• Portability
• Portability requirements address the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.
Product metrics
• Product metrics for computer software helps us to
assess quality.
• Measure
-- Provides a quantitative indication of the extent, amount,
dimension, capacity or size of some attribute of a
product or process
• Metric (IEEE 93 definition)
-- A quantitative measure of the degree to which a system, component or process possesses a given attribute
• Indicator
-- A metric or a combination of metrics that provide insight
into the software process, a software project or a product
itself
Product Metrics for analysis,Design,Test
and maintenance
• Product metrics for the Analysis Model
Function Point Metric
1. First proposed by Albrecht
2. Measures the functionality delivered by the system
3. FP is computed from the following parameters:
1) Number of external inputs (EIs)
2) Number of external outputs (EOs)
3) Number of external inquiries (EQs)
4) Number of internal logical files (ILFs)
5) Number of external interface files (EIFs)
Each parameter is classified as simple, average or complex and weights are assigned accordingly.
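A sketch of the computation, using the commonly published complexity weights for the five parameters and the standard adjustment formula FP = count-total × [0.65 + 0.01 × ΣFi]; the counts and the value-adjustment sum in the example are invented for illustration:

```python
# Commonly published weights per parameter: (simple, average, complex).
WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}
COMPLEXITY = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, complexity, sum_fi):
    """FP = count_total x (0.65 + 0.01 x sum of the 14 adjustment factors Fi)."""
    count_total = sum(
        counts[p] * WEIGHTS[p][COMPLEXITY[complexity[p]]] for p in counts
    )
    return count_total * (0.65 + 0.01 * sum_fi)

# Invented example: all parameters rated "average", adjustment factors sum to 46.
counts = {"EI": 32, "EO": 60, "EQ": 24, "ILF": 8, "EIF": 2}
complexity = {p: "average" for p in counts}
print(function_points(counts, complexity, sum_fi=46))
```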
Metrics for Design Model
• DSQI (Design Structure Quality Index)
• The US Air Force designed the DSQI
• Compute S1 to S7 from the data and architectural design
• S1: Total number of modules
• S2: Number of modules whose correct function depends on the data input
• S3: Number of modules whose function depends on prior processing
• S4: Number of database items
Metrics for Design Model
• S5: Number of unique database items
• S6: Number of database segments
• S7: Number of modules with single entry and exit
• Calculate D1 to D6 from S1 to S7 as follows:
• D1 = 1 if a standard design method was followed, otherwise D1 = 0
Metrics for Design Model
• D2 (module independence) = 1 - (S2/S1)
• D3 (modules not dependent on prior processing) = 1 - (S3/S1)
• D4 (database size) = 1 - (S5/S4)
• D5 (database compartmentalization) = 1 - (S6/S4)
• D6 (module entry/exit characteristics) = 1 - (S7/S1)
• DSQI = ΣWiDi
Metrics for Design Model
• i = 1 to 6, where Wi is the relative weight assigned to Di and ΣWi = 1
• If all Di are weighted equally, then each Wi = 0.167
• The DSQI of the present design should be compared with past DSQI values. If the DSQI is significantly lower than the average, further design work and review are indicated
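The computation above can be sketched directly; the structural counts in the example are invented for illustration:

```python
# DSQI from the structural counts S1..S7, following the formulas above.
def dsqi(s, weights=None):
    """s holds the counts; weights (summing to 1) default to equal 1/6 each."""
    d = [
        1.0 if s["standard_design"] else 0.0,  # D1: standard design followed
        1 - s["s2"] / s["s1"],                 # D2: module independence
        1 - s["s3"] / s["s1"],                 # D3: no prior-processing dependence
        1 - s["s5"] / s["s4"],                 # D4: database size
        1 - s["s6"] / s["s4"],                 # D5: database compartmentalization
        1 - s["s7"] / s["s1"],                 # D6: module entry/exit characteristics
    ]
    w = weights or [1 / 6] * 6                 # equal weights, ~0.167 each
    return sum(wi * di for wi, di in zip(w, d))

# Invented counts for a 50-module design.
counts = {"standard_design": True, "s1": 50, "s2": 10,
          "s3": 5, "s4": 40, "s5": 30, "s6": 6, "s7": 45}
print(round(dsqi(counts), 3))  # compare against past DSQI values
```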
METRIC FOR SOURCE CODE
• HSS (Halstead Software Science)
• Primitive measures that may be derived after the code is generated, or estimated once design is complete:
• n1 = the number of distinct operators that appear in a program
• n2 = the number of distinct operands that appear in a program
• N1 = the total number of operator occurrences
• N2 = the total number of operand occurrences
• The overall program length N can be estimated as:
• N = n1 log2 n1 + n2 log2 n2
• The volume is:
• V = N log2 (n1 + n2)
METRIC FOR TESTING
• n1 = the number of distinct operators that appear in a program
• n2 = the number of distinct operands that appear in a program
• N1 = the total number of operator occurrences
• N2 = the total number of operand occurrences
• Program Level and Effort
• PL = 1 / [(n1/2) x (N2/n2)]
• e = V/PL
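A sketch of the Halstead computations with invented counts (n1, n2, N1, N2 are made up for illustration, not taken from a real program):

```python
import math

# Invented Halstead counts for a hypothetical small program.
n1, n2 = 10, 8    # distinct operators, distinct operands
N1, N2 = 40, 30   # total operator occurrences, total operand occurrences

N_est = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length N
V = (N1 + N2) * math.log2(n1 + n2)               # volume V = N log2(n1 + n2)
PL = 1 / ((n1 / 2) * (N2 / n2))                  # program level PL
E = V / PL                                       # effort e = V / PL

print(f"N = {N_est:.2f}, V = {V:.2f}, PL = {PL:.4f}, e = {E:.2f}")
```

Note that the actual length N1 + N2 is used for the volume, while N_est is Halstead's prediction of that length from the distinct counts alone.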
METRICS FOR MAINTENANCE

• Mt = the number of modules in the current release
• Fc = the number of modules in the current release that have been changed
• Fa = the number of modules in the current release that have been added
• Fd = the number of modules from the preceding release that were deleted in the current release
• The Software Maturity Index, SMI, is defined as:
• SMI = [Mt - (Fa + Fc + Fd)] / Mt

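The SMI formula above in code, with invented release figures for illustration:

```python
# Software Maturity Index: approaches 1.0 as the product begins to stabilize.
def smi(mt, fa, fc, fd):
    """SMI = [Mt - (Fa + Fc + Fd)] / Mt"""
    return (mt - (fa + fc + fd)) / mt

# Invented example: 120 modules in the current release;
# 4 added, 10 changed, 2 deleted since the preceding release.
print(round(smi(mt=120, fa=4, fc=10, fd=2), 3))  # → 0.867
```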