
Software Review

Dr. Aprna Tripathi


Software Review
• A software review is a complete process that results
in the careful examination of a software product in a
meeting or other event.
• This process is usually undertaken by project
personnel, managers, users, customers, or user
representatives.
• In software engineering, the term refers to the review
of any work product by trained personnel, who inspect
the software in order to determine the positive and
negative aspects of a program.
Purpose of Review
• The purpose is to find errors before they are
passed on to another software engineering
activity or released to the customer.
• Software engineers (and others) conduct
formal technical reviews (FTRs) for software
quality assurance.
• Using formal technical reviews (walkthroughs
or inspections) is an effective means of
improving software quality.
Review Guidelines
• Review the product, not the producer
• Set an agenda and maintain it
• Limit debate and rebuttal
• Enunciate problem areas, but don’t attempt to solve every
problem noted
• Take written notes
• Limit the number of participants and insist upon advance
preparation
• Develop a checklist for each product that is likely to be reviewed
• Allocate resources and schedule time for FTRs
• Conduct meaningful training for all reviewers
• Review your early reviews
Review stages
Verification
• There are a few techniques available to verify that the
detailed design is consistent with the system design.
• Focus is on showing that the detailed design meets the
specifications laid down in the system design.
• Three verification methods:
– Design walkthroughs
– Critical design review
– Consistency checkers
Design Walkthroughs

• A design walkthrough is a manual method of verification.
• The definition and use of walkthroughs change from organization to
organization.
• A design walkthrough is done in an informal meeting called by the
designer or the leader of the designer's group.
• The walkthrough group is usually small and contains, along with the
designer, the group leader and/or another designer of the group.
• In a walkthrough the designer explains the logic step by step, and the
members of the group ask questions, point out possible errors or seek
clarification.
Design Walkthroughs

• Benefit: in the process of articulating and explaining the design in detail,
the designer himself can uncover some of the errors.
• Walkthroughs are essentially a form of peer review. Due to their informal
nature, they are usually not as effective as a design review.
• Who should do a walkthrough, and when?
– If you're designing a small piece of the interface on your own, you can do your
own, informal, "in your head" walkthroughs to monitor the design as you
work.
– Periodically, as larger parts of the interface begin to combine, it's useful to get
together with a group of people, including other designers and users, and do a
walkthrough for a complete task.
Design Walkthroughs

• What's needed before you can do a walkthrough?
– You need a description or a prototype of the interface. It doesn't have to be
complete, but it should be fairly detailed. Things like exactly what words are in
a menu can make a big difference.
– You need a task description. The task should usually be one of the
representative tasks you're using for task-centered design, or some piece of that
task.
– You need a complete, written list of the actions needed to complete the task
with the interface.
– You need an idea of who the users will be and what kind of experience they'll
bring to the job. This is an understanding you should have developed through
your task and user analysis.
Critical Design Review
• Purpose: is to ensure that the detailed design satisfies the
specifications laid down during system design.
• The critical design review process is the same as the inspection
process, in which a group of people get together to discuss the
design with the aim of revealing design errors or undesirable
properties.
• The review group includes:
– Author of the detailed design
– Member of the system design team
– Programmer responsible for ultimately coding the module(s)
under review
– Independent software quality engineer
Critical Design Review
• While doing a design review, it should be kept in mind that the aim
is to uncover design errors, not to fix them. Fixing is done
later.
• The use of checklists
– Is considered important for the success of the review.
– It is a means of focusing the discussion or the "search" of
errors.
– It can be used by each member during private study of the
design and during the review meeting.
• For best results, the checklist should be tailored to the
project at hand, to uncover project specific errors.
Critical Design Review
• A Sample Checklist
– Does each of the modules in the system design exist in detailed design?
– Are there analyses to demonstrate that the performance requirements can be
met?
– Are all the assumptions explicitly stated, and are they acceptable?
– Are all relevant aspects of system design reflected in detailed design?
– Have the exceptional conditions been handled?
– Are all the data formats consistent with the system design?
– Is the design structured, and does it conform to local standards?
– Are the sizes of data structures estimated? Are provisions made to guard
against overflow?
Critical Design Review
– Is each statement specified in natural language easily
codable?
– Are the loop termination conditions properly specified?
– Are the conditions in the loops OK?
– Are the conditions in the if statements correct?
– Is the nesting proper?
– Is the module logic too complex?
– Are the modules highly cohesive?
Consistency Checkers
• Design reviews and walkthroughs are manual processes; the people
involved in the review and walkthrough determine the errors in the
design.
• If the design is specified in Program Design Language (PDL) or
some other formally defined design language, it is possible to
detect some design defects by using consistency checkers.
• Consistency checkers are essentially compilers that take
as input the design specified in a design language.
• Clearly, they cannot produce executable code because the
inner syntax of PDL allows natural language and many
activities are specified in the natural language.
Consistency Checkers

• However, the module interface specifications (which belong to outer
syntax) are specified formally.
• A consistency checker can ensure that any modules invoked or used
by a given module actually exist in the design and that the interface
used by the caller is consistent with the interface definition of the
called module.
• It can also check if the used global data items are indeed defined
globally in the design.
• Depending on the precision and syntax of the design language,
consistency checkers can produce other information as well.
Consistency Checkers
• These tools can be used to compute the complexity of modules and
other metrics, because these metrics are based on alternate and
loop constructs, which have a formal syntax in PDL.
• The trade-off here is that the more formal the design language, the
more checking can be done during design, but the cost is that the
design language becomes less flexible and tends towards a
programming language.
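
To make the interface and global-data checks described above concrete, here is a minimal sketch in Python of the kind of verification a consistency checker might perform over a design model. It is a toy illustration, not a real PDL tool: the ModuleDesign class, its fields, and the example modules are all hypothetical.

```python
# Toy consistency check over a hand-written design model (illustrative only).
from dataclasses import dataclass, field

@dataclass
class ModuleDesign:
    name: str
    params: list                                     # declared interface (parameter names)
    calls: list = field(default_factory=list)        # (callee name, number of arguments passed)
    globals_used: list = field(default_factory=list) # global data items the module relies on

def check_consistency(modules, global_data):
    """Report calls to missing modules, arity mismatches, and undeclared globals."""
    by_name = {m.name: m for m in modules}
    problems = []
    for m in modules:
        for callee, n_args in m.calls:
            if callee not in by_name:
                problems.append(f"{m.name}: calls undefined module '{callee}'")
            elif n_args != len(by_name[callee].params):
                problems.append(
                    f"{m.name}: calls '{callee}' with {n_args} args, "
                    f"interface declares {len(by_name[callee].params)}")
        for g in m.globals_used:
            if g not in global_data:
                problems.append(f"{m.name}: uses undeclared global '{g}'")
    return problems

design = [
    ModuleDesign("read_input", ["path"]),
    ModuleDesign("main", [], calls=[("read_input", 1), ("report", 2)],
                 globals_used=["config"]),
]
for problem in check_consistency(design, global_data={"config"}):
    print(problem)   # flags the call to the undefined module 'report'
```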
Software Metric, Measurement
and Indicator
Five Views of Quality
Measures
• When you obtain/observe/measure a value of a
directly observable property of an entity, you have
a measure.
• Each measure has a standard unit of measure
(UOM), such as seconds, meters, kilograms, etc.
• In software development and software testing, the
most commonly used measures are:
– Number of Defects found in a system or component
– Lines of Code (LOC, kLOC)
– Number of Test Cases
Metric
• A metric, in contrast, is a derived value which
cannot be observed/measured directly.
• It is a number derived from one or more measures
by a formula (or estimation).
• The best-known metrics in software development and
software testing are:
– Number of defects found per kLOC, which serves as
an estimate of code quality
– Productivity, i.e. Size / Effort
– Defect Density, i.e. number of defects relative to size
(a computation sketch follows)
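
A minimal sketch of how the metrics above are derived from directly observed measures; the numbers are invented purely for illustration.

```python
# Deriving common metrics from directly observed measures (illustrative values).
defects_found = 48      # measure: number of defects found
size_kloc     = 12.0    # measure: thousands of lines of code
effort_pm     = 6.0     # measure: effort in person-months

defect_density = defects_found / size_kloc   # defects per kLOC
productivity   = size_kloc / effort_pm       # kLOC per person-month

print(f"Defect density: {defect_density:.1f} defects/kLOC")
print(f"Productivity:   {productivity:.1f} kLOC/person-month")
```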
Types of Metrics
• There are two types of metrics: objective and
subjective.
– Objective metrics can be quantified and are readily
available.
– Subjective metrics rely on opinions, gut feelings, personal
attitudes, etc.
• An example of a subjective metric is CSAT (customer
satisfaction). Though objective metrics are more reliable
than subjective ones, the reliability of subjective metrics
can be improved by having checklists and guidelines.
• For example, a survey question needs to have probe areas,
facets and scale definitions before an option can be chosen.
Indicator
• An indicator is “a thing that indicates the state or level
of something”.
• Thus it can simply be a number showing the value of a
particular measure or metric.
• A better indicator could be a chart comparing two
measures/metrics or showing how a measure/metric
developed over a time period.
• A semaphore where red means bad and green
means good is also a very simple indicator, which can
be helpful in a particular class of situations (a minimal
sketch follows).
• Thus, an indicator is the most general of these terms.
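
The red/green semaphore idea above can be made concrete with a tiny comparison against a baseline. This is a minimal sketch; the metric name, baseline value and 10% tolerance band are assumptions chosen for illustration, not part of the slides.

```python
# A minimal "semaphore" indicator: compare a metric against a baseline and map
# it to red/amber/green. The 10% tolerance is an assumed, illustrative threshold.
def defect_density_indicator(defect_density, baseline, tolerance=0.10):
    """Green at or below baseline, amber within the tolerance band, red beyond it."""
    if defect_density <= baseline:
        return "green"
    if defect_density <= baseline * (1 + tolerance):
        return "amber"
    return "red"

print(defect_density_indicator(defect_density=4.2, baseline=4.0))   # -> amber
```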
Quality Measure
• To measure is to ascertain or appraise by
comparing to a standard.
• A measure gives very little or no information
in the absence of a trend to follow or an
expected value to compare against.
• A measure does not provide enough information
to make meaningful decisions.
Quality Metric
• A metric is a quantitative measure of the degree to
which a system, component, or process possesses
a given attribute.
• A metric is a comparison of two or more measures
like defects per thousand source lines of code.
• Software quality metrics are used throughout the
development cycle to assess whether the software
quality requirements are being met.
Quality Indicator
• An indicator usually compares a metric with a
baseline or expected result.
• Indicators help decision-makers make a
quick comparison that can provide a perspective
as to the “health” of a particular aspect of the
project.
• Software quality indicators act as a set of tools to
improve the management capabilities of
personnel responsible for monitoring software
development projects.
Quality Measure Metric & Indicator
• A measure does not provide enough information to
make meaningful decisions.
• Software quality metrics are used throughout the
development cycle to assess whether the software
quality requirements are being met.
• Software quality indicators act as a set of tools to
improve the management capabilities of
personnel responsible for monitoring software
development projects.
Software Quality Indicators | 1
• 1) Progress:
– Measures the amount of work accomplished by the developer in each phase.
– This measure flows through the development life cycle, with the number of
requirements defined and baselined, then the amount of preliminary and
detailed design completed, then the amount of code completed, and various
levels of tests completed.
• 2) Stability:
– Assesses whether the products of each phase are sufficiently stable to allow the
next phase to proceed.
– This measures the number of changes to requirements, design, and
implementation.
• 3) Process compliance:
– Measures the developer’s compliance with the development procedures
approved at the beginning of the project.
– Captures the number of procedures identified for use on the project versus
those complied with on the project.
Software Quality Indicators | 2
• 4) Quality evaluation effort:
– Measures the percentage of the developer’s effort that is being spent on internal
quality evaluation activities.
– Percent of time developers are required to deal with quality evaluations and
related corrective actions.
• 5) Test coverage:
– Measures the amount of the software system covered by the developer’s testing
process.
– For module testing, this counts the number of basis paths executed/covered, and
for system testing it measures the percentage of functions tested.
• 6) Defect detection efficiency:
– Measures how many of the defects detectable in a phase were actually
discovered during that phase.
– Starts at 100% and is reduced as defects are uncovered at a later development
phase.
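
A minimal sketch of the defect detection efficiency count from item 6. The convention that "detectable" defects are those found in the phase plus those that escape to later phases is an assumption about how the bookkeeping is done; the example numbers are invented.

```python
# Defect detection efficiency for a phase: defects found in that phase divided by
# all defects detectable in it (found there plus those that escaped to later
# phases), expressed as a percentage.
def detection_efficiency(found_in_phase, found_later):
    detectable = found_in_phase + found_later
    return 100.0 * found_in_phase / detectable if detectable else 100.0

# Example: a design review finds 30 design defects; 10 more surface during test.
print(f"{detection_efficiency(30, 10):.0f}%")   # -> 75%
```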
Software Quality Indicators | 3
• 7) Defect removal rate:
– Measures the number of defects detected and resolved over a period of time.
– Number of opened and closed system problem reports (SPR) reported through
the development phases.
• 8) Defect age profile:
– Measures the number of defects that have remained unresolved for a long
period of time.
– Monthly reporting of SPRs remaining open for more than a month’s time.
• 9) Defect density:
– Detects defect-prone components of the system. Provides a measure of SPRs per
Computer Software Component (CSC) to determine which CSC is the most
defect-prone.
• 10) Complexity:
– Measures the complexity of the code.
– Collects basis path counts (cyclomatic complexity) of code modules to
determine how complex each module is.
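
The basis path count named in item 10 is McCabe's cyclomatic complexity, which can be computed directly from the control-flow graph. A minimal sketch; the edge and node counts in the example are invented.

```python
# Cyclomatic complexity V(G) = E - N + 2P for a control-flow graph with E edges,
# N nodes and P connected components (P = 1 for a single routine). V(G) equals
# the number of independent basis paths through the module.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# Example: a module whose control-flow graph has 9 edges and 7 nodes.
print(cyclomatic_complexity(edges=9, nodes=7))   # -> 4 basis paths
```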
Types of Indicators
• Process Indicator
• P
Exercise
• Company “ABC” stated that they can remove 50% of the
defects within a week of their arrival, while company
“PQR” stated that they can remove up to 50 defects
in a week. For the following data, sketch the BMI
graph (a computation sketch follows the table). Assume
that 4 weeks are equal to 1 month. (Evaluate)
Week    No. of Defect Arrivals
W1 20
W2 60
W3 43
W4 67
W5 51
W6 42
W7 78
W8 34
W9 56
W10 49
W11 34
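
One way to set up the computation for this exercise is sketched below. BMI (Backlog Management Index) is taken here as defects closed in a period divided by defects arriving in that period, times 100. The closure models for ABC and PQR are one reading of the problem statement, and the last "month" covers only three weeks of data; adjust both to your own interpretation before plotting.

```python
# A hedged sketch for the BMI exercise: BMI = (defects closed / defects arrived) * 100
# per month (4 weeks). The closure models below are assumptions drawn from the
# problem statement, not the only possible reading.
arrivals = [20, 60, 43, 67, 51, 42, 78, 34, 56, 49, 34]   # W1..W11 from the table

# ABC: closes 50% of each week's arrivals within that same week.
abc_closed = [0.5 * a for a in arrivals]

# PQR: closes at most 50 defects per week, working off the accumulated backlog.
pqr_closed, backlog = [], 0
for a in arrivals:
    backlog += a
    closed = min(50, backlog)
    pqr_closed.append(closed)
    backlog -= closed

def monthly_bmi(arrived, closed, weeks_per_month=4):
    """Aggregate weekly figures into months and return BMI per month."""
    bmi = []
    for start in range(0, len(arrived), weeks_per_month):
        a = sum(arrived[start:start + weeks_per_month])
        c = sum(closed[start:start + weeks_per_month])
        bmi.append(round(100.0 * c / a, 1))
    return bmi

print("ABC BMI per month:", monthly_bmi(arrivals, abc_closed))   # plot these values
print("PQR BMI per month:", monthly_bmi(arrivals, pqr_closed))   # to sketch the graph
```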
