
CHAPTER – 6

Software Configuration Management, Quality Assurance and Maintenance

Following are the categories of risk:

1. Project risk
 If project risk becomes real, it is likely that the project schedule will slip and the cost of the
project will increase.
 It identifies potential schedule, resource, stakeholder, and requirements problems and their
impact on a software project.
2. Technical risk
 If a technical risk becomes real, implementation may become difficult or impossible.
 It identifies potential design, implementation, interface, verification and maintenance problems.
3. Business risk
If a business risk becomes real, it threatens the viability of the software to be built and may harm the
project or the product.

There are five sub-categories of the business risk:

1. Market risk - Creating an excellent system that no one really wants.


2. Strategic risk - Building a product that no longer fits into the overall business strategy of the
company.
3. Sales risk - Building a product that the sales force does not understand how to sell.
4. Management risk - Losing the support of senior management because of a change in focus or a
change in people.
5. Budget risk - Losing budgetary or personnel commitment.

Other risk categories


These categories were suggested by Charette.

1. Known risks : These risks are uncovered after careful evaluation of the project plan and the
environment in which the project will be developed.
2. Predictable risks : These risks are extrapolated from previous project experience.
3. Unpredictable risks : These risks are unknown and are extremely tough to identify in
advance.

Risk Projection:-
Risk projection, also called risk estimation, attempts to rate each risk in two ways—the
likelihood or probability that the risk is real and the consequences of the problems associated
with the risk, should it occur.
The project planner, along with other managers and technical staff, performs four risk projection
activities:
(1) Establish a scale that reflects the perceived likelihood of a risk.
(2) Delineate the consequences of the risk.
(3) Estimate the impact of the risk on the project and the product.
(4) Note the overall accuracy of the risk projection so that there will be no misunderstandings.

Developing a Risk Table


A risk table provides a project manager with a simple technique for risk projection.

Steps in Setting up Risk Table


(1) The project team begins by listing all risks in the first column of the table.
This can be accomplished with the help of risk item checklists.
(2) Each risk is categorized in the second column.
(e.g. PS implies a project size risk, BU implies a business risk).
(3) The probability of occurrence of each risk is entered in the next column of the table.
The probability value for each risk can be estimated by team members individually.
(4) Individual team members are polled in round-robin fashion until their assessment of risk
probability begins to converge.
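A minimal sketch of such a risk table in Python is shown below; the risk names, categories, probabilities and impact values are hypothetical, not taken from the text (impact is assumed to use the common 1 = catastrophic to 4 = negligible scale).

    # Risk table: each row holds a risk description, category, probability (0-1),
    # and impact (1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible).
    risk_table = [
        {"risk": "Size estimate may be significantly low", "category": "PS", "probability": 0.60, "impact": 2},
        {"risk": "Staff turnover will be high",            "category": "ST", "probability": 0.70, "impact": 2},
        {"risk": "End users resist the system",            "category": "BU", "probability": 0.40, "impact": 3},
        {"risk": "Delivery deadline will be tightened",    "category": "BU", "probability": 0.50, "impact": 2},
    ]

    # Sort so the most probable and most severe risks rise to the top; a cutoff line
    # can then be drawn and only the risks above it managed actively.
    risk_table.sort(key=lambda row: (row["probability"], -row["impact"]), reverse=True)

    for row in risk_table:
        print(f'{row["risk"]:<45} {row["category"]}  P={row["probability"]:.2f}  impact={row["impact"]}')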

Assessing Risk Impact


Nature of the risk - the problems that are likely if it occurs.
e.g. a poorly defined external interface to customer hardware (a technical risk) will preclude
early design and testing and will likely lead to system integration problems late in a project.
Scope of a risk - combines the severity with its overall distribution (how much of the project
will be affected or how many customers are harmed?).
Timing of a risk - when and how long the impact will be felt.
Overall risk exposure, RE, is determined using:
RE = P x C
where P is the probability of occurrence for a risk, and
C is the cost to the project should the risk occur.
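As a quick worked example (hypothetical figures, not from the text): suppose a risk has an estimated probability of 0.70 and would add roughly $20,000 of rework cost if it occurred. Then RE = 0.70 x $20,000 = $14,000. Computing RE for every risk in the risk table and summing the values gives a rough risk-adjusted cost buffer for the project.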
A risk management strategy can be included in the software project plan, or the risk management
steps can be organized into a separate Risk Mitigation, Monitoring and Management (RMMM) Plan.
The RMMM plan documents all work performed as part of risk analysis and is used by the
project manager as part of the overall project plan.
Some software teams do not develop a formal RMMM document. Rather, each risk is documented
individually using a risk information sheet (RIS). In most cases, the RIS is maintained using a
database system, so that creation and information entry, priority ordering, searches, and other
analysis may be accomplished easily.
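A minimal sketch of what one RIS record might hold if kept in a small database; the text does not prescribe a schema, so the field names and values below are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class RiskInformationSheet:
        """One RIS record: identification, projection, and RMMM details for a single risk."""
        risk_id: str                  # e.g. a project-specific identifier
        date: str
        probability: float            # likelihood the risk becomes real (0-1)
        impact: str                   # e.g. "catastrophic", "critical", "marginal", "negligible"
        description: str
        refined_risks: list = field(default_factory=list)   # sub-risks uncovered during refinement
        mitigation: list = field(default_factory=list)       # mitigation / monitoring steps
        contingency: str = ""                                 # management / contingency plan
        status: str = "open"

    ris = RiskInformationSheet(
        risk_id="R-017",
        date="2024-01-15",
        probability=0.80,
        impact="critical",
        description="Key reusable components will not be available on schedule.",
        mitigation=["Recompute reuse estimates", "Plan for extra development effort"],
    )

Keeping such records in a database makes the priority ordering, searching and analysis mentioned above straightforward.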
Once RMMM has been documented and the project has begun, risk mitigation and monitoring
steps commence. As we have already discussed, risk mitigation is a problem avoidance activity.
Risk monitoring is a project tracking activity with three primary objectives:
(1) to assess whether predicted risks occur.
(2) to ensure that risk aversion steps defined for the risk are being properly applied; and
(3) to collect information that can be used for future risk analysis.

An effective strategy must consider three issues:

 risk avoidance
 risk monitoring
 risk management and contingency planning
A proactive approach to risk uses an avoidance strategy: develop a risk mitigation plan. For example,
to mitigate the risk of high staff turnover:
 Meet with current staff to determine causes for turnover.
 Mitigate those causes that are under our control before the project starts.
 Organize project teams so that information about each development activity is widely
dispersed.
 Define documentation standards and establish mechanisms to be sure that documents are
developed in a timely manner.
The project manager monitors the likelihood of the risk and the effectiveness of the risk mitigation
steps. Risk management and contingency planning assumes that mitigation efforts have failed and
that the risk has become a reality. Note that RMMM steps incur additional project cost.

THE RMMM PLAN


The Risk Mitigation, Monitoring and Management Plan (RMMM) documents all work performed as
part of risk analysis and is used by the project manager as part of the overall project plan. The RIS is
maintained using a database system, so that creation and information entry, priority ordering,
searches, and other analysis may be accomplished easily. Risk monitoring is a project tracking
activity with three primary objectives:
 Assess whether predicted risks do, in fact, occur
 Ensure that risk aversion steps defined for the risk are being properly applied
 Collect information that can be used for future risk analysis.

Example:
Let us understand RMMM with the help of an example of high staff turnover.
Risk Mitigation:
To mitigate this risk, project management must develop a strategy for reducing turnover. The
possible steps to be taken are:
 Meet the current staff to determine causes for turnover (e.g., poor working conditions, low
pay, competitive job market).
 Mitigate those causes that are under our control before the project starts.
 Once the project commences, assume turnover will occur and develop techniques to
ensure continuity when people leave.
 Organize project teams so that information about each development activity is widely
dispersed.
 Define documentation standards and establish mechanisms to ensure that documents are
developed in a timely manner.
 Assign a backup staff member for every critical technologist.
Risk Monitoring:
As the project proceeds, risk monitoring activities commence. The project manager monitors
factors that may provide an indication of whether the risk is becoming more or less likely. In
the case of high staff turnover, the following factors can be monitored:
 General attitude of team members based on project pressures.
 Interpersonal relationships among team members.
 Potential problems with compensation and benefits.
 The availability of jobs within the company and outside it.
Risk Management:
Risk management and contingency planning assumes that mitigation efforts have failed and
that the risk has become a reality. Continuing the example, the project is well underway, and a
number of people announce that they will be leaving. If the mitigation strategy has been
followed, backup is available, information is documented, and knowledge has been dispersed
across the team. In addition, the project manager may temporarily refocus resources (and
readjust the project schedule) to those functions that are fully staffed, enabling newcomers
who must be added to the team to “get up to speed”.

Software Quality Assurance (SQA) is a set of activities for ensuring quality in software
engineering processes. It ensures that developed software meets and complies with the defined
or standardized quality specifications. SQA is an ongoing process within the Software
Development Life Cycle (SDLC) that routinely checks the developed software to ensure it
meets the desired quality measures.
SQA practices are implemented in most types of software development, regardless of the
underlying software development model being used. SQA incorporates and implements
software testing methodologies to test the software. Rather than checking for quality after
completion, SQA processes test for quality in each phase of development, until the software is
complete. With SQA, the software development process moves into the next phase only once
the current/previous phase complies with the required quality standards. SQA generally works
on one or more industry standards that help in building software quality guidelines and
implementation strategies.
It includes the following activities −

 Process definition and implementation


 Auditing
 Training
Processes could be −

 Software Development Methodology


 Project Management
 Configuration Management
 Requirements Development/Management
 Estimation
 Software Design
 Testing, etc.
Once the processes have been defined and implemented, Quality Assurance has the following
responsibilities −

 Identify the weaknesses in the processes


 Correct those weaknesses to continually improve the process
Software Quality Assurance Plan
Abbreviated as SQAP, the software quality assurance plan comprises the procedures, techniques,
and tools that are employed to make sure that a product or service aligns with the requirements
defined in the SRS (Software Requirement Specification).

The plan identifies the SQA responsibilities of a team and lists the areas that need to be reviewed
and audited. It also identifies the SQA work products.
SQA Tasks
The structure of the SQA unit varies by the type and size of the organization. The sections below
describe the roles and responsibilities of the head of the SQA unit and its sub-units.
Tasks Performed by the Head of the SQA Unit
The head of the SQA unit is responsible for all the quality assurance tasks performed by the
SQA unit and its sub-units. These tasks can be classified into the following categories −

 Planning tasks
 Management of the unit
 SQA professional activities
Planning Tasks
 Preparation of the proposed annual activity program and budget for the unit
 Planning and updating the organization’s software quality management system
 Preparation of the recommended annual SQA activities programs and SQA systems
development plans for the software development and maintenance departments
Management Tasks
 Management of the SQA team’s activities
 Monitoring implementation of the SQA activity program
 Nomination of team members, SQA committee members and SQA trustees
 Preparation of special and periodic reports, e.g., the status of software quality issues
within the organization and monthly performance reports
Project Life Cycle Control Tasks
 Follow-up of development and maintenance team's compliance with SQA procedures
and work instructions
 Approval or recommendation of software products according to the relevant procedures
 Monitoring delivery of software maintenance services to internal and external customers
 Monitoring customer satisfaction and maintaining contact with customer's quality
assurance representatives
Participation Tasks
These tasks include participation in −
 Contract reviews
 Preparation and updating of project development and quality plans
 Formal design reviews
 Subcontractors’ formal design reviews
 Software testing, including customer acceptance tests
 Software acceptance tests of subcontractors’ software products
 Installation of new software products

 What is Software Quality Metrics?


 The word 'metrics' refers to standards of measurement. Software quality metrics means the
measurement of attributes pertaining to software quality, along with the process by which it is
developed.
 The term "software quality metrics" suggests measuring software quality by recording the
number of defects or security loopholes present in the software. However, quality measurement
is not restricted to counting defects or vulnerabilities; it also covers other aspects of quality such
as maintainability, reliability, integrity, usability, customer satisfaction, etc.
Software metrics can be classified into three categories −
 Product metrics − Describes the characteristics of the product such as size, complexity,
design features, performance, and quality level.
 Process metrics − These characteristics can be used to improve the development and
maintenance activities of the software.
 Project metrics − These metrics describe the project characteristics and execution.
Examples include the number of software developers, the staffing pattern over the life
cycle of the software, cost, schedule, and productivity.
Some metrics belong to multiple categories. For example, the in-process quality metrics of a
project are both process metrics and project metrics.
Product Quality Metrics
These metrics include the following −

 Mean Time to Failure


 Defect Density
 Customer Problems
 Customer Satisfaction
Mean Time to Failure
It is the mean time between failures. This metric is mostly used with safety-critical systems such as
airline traffic control systems, avionics, and weapons.
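A tiny illustration with made-up numbers: if a system has been observed for 2,000 hours of operation and failed 4 times in that period, a simple MTTF estimate is 2000 / 4 = 500 hours.

    operating_hours = 2000      # hypothetical total observed operating time
    failure_count = 4           # hypothetical number of failures observed
    mttf = operating_hours / failure_count
    print(f"MTTF = {mttf} hours")   # -> MTTF = 500.0 hours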
Defect Density
It measures the defects relative to the software size, expressed as lines of code or function points;
i.e., it measures code quality per unit of size. This metric is used in many commercial software
systems.
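For example (hypothetical figures): a release containing 90 defects found in 120 KLOC has a defect density of 90 / 120 = 0.75 defects per KLOC.

    defects_found = 90          # hypothetical defect count for the release
    size_kloc = 120             # hypothetical size in thousands of lines of code
    defect_density = defects_found / size_kloc
    print(f"Defect density = {defect_density:.2f} defects/KLOC")   # -> 0.75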
Customer Problems
It measures the problems that customers encounter when using the product. It contains the
customer’s perspective towards the problem space of the software, which includes the non-
defect oriented problems together with the defect problems.
Customer Satisfaction
Customer satisfaction is often measured by customer survey data through the five-point scale −

 Very satisfied
 Satisfied
 Neutral
 Dissatisfied
 Very dissatisfied
Satisfaction with the overall quality of the product and its specific dimensions is usually
obtained through various methods of customer surveys. Based on the five-point-scale data,
several metrics with slight variations can be constructed and used, depending on the purpose of
analysis. For example −

 Percent of completely satisfied customers


 Percent of satisfied customers
 Percent of dissatisfied customers (dissatisfied and very dissatisfied)
 Percent of non-satisfied customers (neutral, dissatisfied, and very dissatisfied)
Usually, the percent satisfaction is used.
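A small sketch showing how the different percentage metrics are derived from the same five-point-scale data; the survey counts below are hypothetical.

    # Hypothetical survey responses on the five-point scale
    responses = {"very satisfied": 120, "satisfied": 200, "neutral": 50,
                 "dissatisfied": 20, "very dissatisfied": 10}
    total = sum(responses.values())

    pct_completely_satisfied = 100 * responses["very satisfied"] / total
    pct_satisfied = 100 * (responses["very satisfied"] + responses["satisfied"]) / total
    pct_dissatisfied = 100 * (responses["dissatisfied"] + responses["very dissatisfied"]) / total
    pct_non_satisfied = 100 * (responses["neutral"] + responses["dissatisfied"]
                               + responses["very dissatisfied"]) / total

    print(f"Satisfied: {pct_satisfied:.1f}%  Non-satisfied: {pct_non_satisfied:.1f}%")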
In-process Quality Metrics
In-process quality metrics deal with the tracking of defect arrivals during formal machine
testing in some organizations. These metrics include −

 Defect density during machine testing


 Defect arrival pattern during machine testing
 Phase-based defect removal pattern
 Defect removal effectiveness
Defect density during machine testing
Defect rate during formal machine testing (testing after code is integrated into the system
library) is correlated with the defect rate in the field. Higher defect rates found during testing are
an indicator that the software has experienced higher error injection during its development
process, unless the higher testing defect rate is due to an extraordinary testing effort.
This simple metric of defects per KLOC or function point is a good indicator of quality, while
the software is still being tested. It is especially useful to monitor subsequent releases of a
product in the same development organization.
Defect arrival pattern during machine testing
The overall defect density during testing will provide only the summary of the defects. The
pattern of defect arrivals gives more information about different quality levels in the field. It
includes the following −
 The defect arrivals, or defects reported during the testing phase, by time interval (e.g.,
week). Not all of these will be valid defects.
 The pattern of valid defect arrivals when problem determination is done on the reported
problems. This is the true defect pattern.
 The pattern of defect backlog over time. This metric is needed because development
organizations cannot investigate and fix all the reported problems immediately. This is a
workload statement as well as a quality statement. If the defect backlog is large at the
end of the development cycle and a lot of fixes have yet to be integrated into the system,
the stability of the system (hence its quality) will be affected. Retesting (regression test)
is needed to ensure that targeted product quality levels are reached.
Phase-based defect removal pattern
This is an extension of the defect density metric during testing. In addition to testing, it tracks
the defects at all phases of the development cycle, including the design reviews, code
inspections, and formal verifications before testing.
Because a large percentage of programming defects is related to design problems, conducting
formal reviews or functional verifications to enhance the defect removal capability of the
process at the front end reduces errors in the software. The pattern of phase-based defect
removal reflects the overall defect removal ability of the development process.
With regard to the metrics for the design and coding phases, in addition to defect rates, many
development organizations use metrics such as inspection coverage and inspection effort for in-
process quality management.
Defect removal effectiveness
It can be defined as follows −
DRE = (Defects removed during a development phase / Defects latent in the product) × 100%
This metric can be calculated for the entire development process, for the front-end before code
integration and for each phase. It is called early defect removal when used for the front-end
and phase effectiveness for specific phases. The higher the value of the metric, the more
effective the development process and the fewer the defects passed to the next phase or to the
field. This metric is a key concept of the defect removal model for software development.
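A small worked sketch with hypothetical counts: if a review phase removes 40 defects and 10 more defects originating in that phase are found later (so 50 were latent in total), then DRE = 40 / 50 × 100% = 80%.

    defects_removed_in_phase = 40   # hypothetical: defects found and fixed during the phase
    defects_found_later = 10        # hypothetical: defects from this phase found in later phases or the field
    defects_latent = defects_removed_in_phase + defects_found_later
    dre = 100 * defects_removed_in_phase / defects_latent
    print(f"DRE = {dre:.0f}%")      # -> DRE = 80%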
Software Reliability
Software Reliability is the probability of failure-free software operation for a specified period of
time in a specified environment. Software Reliability is also an important factor affecting system
reliability. It differs from hardware reliability in that it reflects the design perfection, rather than
manufacturing perfection. The high complexity of software is the major contributing factor of
Software Reliability problems.

Software Reliability is an important attribute of software quality, together with functionality,
usability, performance, serviceability, capability, installability, maintainability, and
documentation. Software Reliability is hard to achieve because the complexity of software tends
to be high. Any system with a high degree of complexity, including software, is hard to bring to a
certain level of reliability, yet system developers tend to push complexity into the software layer,
encouraged by the rapid growth of system size and the ease of doing so by upgrading the
software. For example, large next-generation aircraft will have over one million source lines of
software on-board; next-generation air traffic control systems will contain between one and two
million lines; the upcoming International Space Station will have over two million lines on-board
and over ten million lines of ground support software; several major life-critical defense systems
will have over five million source lines of software. While the complexity of software is
inversely related to software reliability, it is directly related to other important factors in software
quality, especially functionality, capability, etc. Emphasizing these features tends to add more
complexity to software.

Software failure mechanisms

Software failures may be due to errors, ambiguities, oversights or misinterpretation of the


specification that the software is supposed to satisfy, carelessness or incompetence in writing
code, inadequate testing, incorrect or unexpected usage of the software or other unforeseen
problems. While it is tempting to draw an analogy between Software Reliability and Hardware
Reliability, software and hardware have basic differences that make them different in failure
mechanisms. Hardware faults are mostly physical faults, while software faults are design faults,
which are harder to visualize, classify, detect, and correct. Design faults are closely related to
fuzzy human factors and the design process, of which we do not have a solid understanding. In
hardware, design faults may also exist, but physical faults usually dominate. In software, we can
hardly find a strict counterpart of the hardware "manufacturing" process, unless the simple action
of uploading software modules into place counts. Therefore, the quality of software will not
change once it is uploaded into storage and starts running. Trying to achieve higher reliability by
simply duplicating the same software modules will not work, because design faults cannot be
masked off by voting.

A partial list of the distinct characteristics of software compared to hardware is listed below :

 Failure cause: Software defects are mainly design defects.


 Wear-out: Software does not have an energy-related wear-out phase. Errors can occur
without warning.
 Repairable system concept: Periodic restarts can help fix software problems.
 Time dependency and life cycle: Software reliability is not a function of operational
time.
 Environmental factors: Do not affect Software reliability, except it might affect
program inputs.
 Reliability prediction: Software reliability can not be predicted from any physical basis,
since it depends completely on human factors in design.
 Redundancy: Can not improve Software reliability if identical software components are
used.
 Interfaces: Software interfaces are purely conceptual rather than visual.
 Failure rate motivators: Usually not predictable from analyses of separate statements.
 Built with standard components: Well-understood and extensively tested standard parts
will help improve maintainability and reliability. But in the software industry, we have not
observed this trend. Code reuse has been around for some time, but only to a very limited
extent. Strictly speaking, there are no standard parts for software, except some
standardized logic structures.

The bathtub curve for Software Reliability

Over time, hardware exhibits the failure characteristics shown in Figure 1, known as the bathtub
curve. Periods A, B and C stand for the burn-in phase, useful life phase and end-of-life phase. A
detailed discussion about the curve can be found in the topic Traditional Reliability.

Factors Influencing Software Reliability

1. The number of faults present in the software


2. The way users operate the system

 Reliability Testing is one of the keys to better software quality. This testing helps discover
many problems in the software design and functionality.
 The main purpose of reliability testing is to check whether the software meets the
customer's reliability requirements.
 Reliability testing is performed at several levels. Complex systems are tested at the
unit, assembly, subsystem and system levels.

Why to do Reliability Testing

Reliability testing is done to test the software performance under the given conditions.

The objectives behind performing reliability testing are:

1. To find the pattern of repeating failures.


2. To find the number of failures occurring in a specified amount of time.
3. To discover the main cause of failures.
4. To conduct performance testing of various modules of the software application after fixing
defects.

After the release of the product too, we can minimize the possibility of occurrence of defects and
thereby improve software reliability. Some of the tools useful for this are Trend Analysis,
Orthogonal Defect Classification, formal methods, etc.

Figure 1. Bathtub curve for hardware reliability

Software reliability, however, does not show the same characteristics as hardware. A possible
curve is shown in Figure 2 if we project software reliability on the same axes. [RAC96] There are
two major differences between the hardware and software curves. One difference is that in the
last phase, software does not have an increasing failure rate as hardware does. In this phase,
software is approaching obsolescence; there is no motivation for any upgrades or changes to the
software. Therefore, the failure rate will not change. The second difference is that in the
useful-life phase, software will experience a drastic increase in failure rate each time an upgrade
is made. The failure rate levels off gradually, partly because of the defects found and fixed after
the upgrades.
Figure 2. Revised bathtub curve for software reliability

Formal Technical Reviews
 Formal Technical Review (FTR) is a software quality assurance activity performed by software
engineers.
Objectives of FTR

1. FTR is useful to uncover errors in logic, function and implementation for any
representation of the software.
2. The purpose of FTR is to ensure that the software meets specified requirements.
3. It also ensures that the software is represented according to predefined standards.
4. It helps to achieve uniformity in the software development process.
5. It makes the project more manageable.
 Besides the above mentioned objectives, the purpose of FTR is to enable junior engineers
to observe the analysis, design, coding and testing approaches more closely.
 Each FTR is conducted as a meeting and is considered successful only if it is properly
planned, controlled and attended.
Steps in FTR
1. The review meeting

 Every review meeting should be conducted by considering the following constraints-


1. Involvement of people
Between 3 and 5 people should be involved in the review.
2. Advance preparation
Advance preparation should occur, but it should be brief: at most 2 hours of work for each person.
3. Short duration
The duration of the review meeting should be less than two hours.

 Rather than attempting to review the entire design, walkthroughs are conducted for
individual modules or for small groups of modules.
 The focus of the FTR is on a work product (a software component to be reviewed). The
review meeting is attended by the review leader, all reviewers and the producer.
 The review leader is responsible for evaluating the product for its readiness for review. Copies
of the product material are then distributed to the reviewers. The producer organises a
“walkthrough” of the product, explaining the material, while the reviewers raise issues
based on their advance preparation.
 One of the reviewers becomes the recorder, who records all the important issues raised during
the review. When errors are discovered, the recorder notes each one.
 At the end of the review, the attendees decide whether to accept the product or not, with
or without modification.
2. Review reporting and record keeping

 During the FTR, the recorder actively records all the issues that have been raised.
 At the end of the meeting, all the raised issues are consolidated and a review issue list is
prepared.
 Finally, a formal technical review summary report is produced.
3. Review guidelines

 Guidelines for the conducting of formal technical review must be established in advance.
These guidelines must be distributed to all reviewers, agreed upon, and then followed.
 For example,
Guideline for review may include following things

1. Concentrate on the work product only. That means review the product, not the producer.
2. Set an agenda for the review and maintain it.
3. When certain issues are raised, debate or argument should be limited. Reviews
should not ultimately result in hard feelings.
4. Find out problem areas, but don't attempt to solve every problem noted.
5. Take written notes (for record-keeping purposes).
6. Limit the number of participants and insist upon advance preparation.
7. Develop a checklist for each product that is likely to be reviewed.
8. Allocate resources and schedule time for FTRs in order to keep them on schedule.
9. Conduct meaningful training for all reviewers in order to make reviews effective.
10. Review earlier reviews, which serve as the basis for the current review being conducted.
Six Sigma for Software Engineering
Six Sigma is the most widely used strategy for statistical quality assurance. It has three core steps:
1. Define customer requirements, deliverables, and project goals via well-defined methods of
customer communication.
2. Measure each existing process and its output to determine current quality performance
(e.g., compute defect metrics).
3. Analyze defect metrics and determine the vital few causes.
For an existing process that needs improvement:
1. Improve the process by eliminating the root causes of defects.
2. Control future work to ensure that future work does not reintroduce the causes of defects.
If new processes are being developed:
1. Design each new process to avoid the root causes of defects and to meet customer requirements.
2. Verify that the process model will avoid defects and meet customer requirements.
Six Sigma is a highly disciplined process that helps us focus on developing and delivering near-
perfect products and services.
Features of Six Sigma
 Six Sigma's aim is to eliminate waste and inefficiency, thereby increasing customer
satisfaction by delivering what the customer is expecting.
 Six Sigma follows a structured methodology, and has defined roles for the participants.
 Six Sigma is a data driven methodology, and requires accurate data collection for the
processes being analyzed.
 Six Sigma is about putting results on Financial Statements.
 Six Sigma is a business-driven, multi-dimensional structured approach for −
o Improving Processes
o Lowering Defects
o Reducing process variability
o Reducing costs
o Increasing customer satisfaction
o Increased profits
The word Sigma is a statistical term that measures how far a given process deviates from
perfection.
The central idea behind Six Sigma: if you can measure how many "defects" you have in a
process, you can systematically figure out how to eliminate them and get as close to "zero
defects" as possible. Specifically, Six Sigma means a failure rate of 3.4 defects per million
opportunities, or 99.9997% perfect.
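A small sketch of that arithmetic, with hypothetical inspection numbers: the 3.4 figure is conventionally expressed as defects per million opportunities (DPMO), so a measured DPMO can be compared against it.

    units_inspected = 50_000        # hypothetical number of units (e.g. delivered features)
    opportunities_per_unit = 10     # hypothetical defect opportunities per unit
    defects_observed = 17           # hypothetical defects found

    dpmo = defects_observed / (units_inspected * opportunities_per_unit) * 1_000_000
    yield_pct = 100 - dpmo / 10_000   # percentage of defect-free opportunities

    print(f"DPMO = {dpmo:.1f}, yield = {yield_pct:.4f}%")   # -> DPMO = 34.0, yield = 99.9966%
    print("Meets Six Sigma (3.4 DPMO)?", dpmo <= 3.4)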
Key Concepts of Six Sigma
At its core, Six Sigma revolves around a few key concepts.
 Critical to Quality − Attributes most important to the customer.
 Defect − Failing to deliver what the customer wants.
 Process Capability − What your process can deliver.
 Variation − What the customer sees and feels.
 Stable Operations − Ensuring consistent, predictable processes to improve what the
customer sees and feels.
 Design for Six Sigma − Designing to meet customer needs and process capability.
Our Customers Feel the Variance, Not the Mean. So Six Sigma focuses first on reducing
process variation and then on improving the process capability.
Myths about Six Sigma
There are several myths and misunderstandings surrounding Six Sigma. Some of them are
given below −

 Six Sigma is only concerned with reducing defects.


 Six Sigma is a process for production or engineering.
 Six Sigma cannot be applied to engineering activities.
 Six Sigma uses difficult-to-understand statistics.
 Six Sigma is just training.
Benefits of Six Sigma
Six Sigma offers six major benefits that attract companies −

 Generates sustained success


 Sets a performance goal for everyone
 Enhances value to customers
 Accelerates the rate of improvement
 Promotes learning and cross-pollination
 Executes strategic change

Software Configuration Management

What is Software Configuration Management?

Configuration Management helps organizations to systematically manage, organize, and control


the changes in the documents, codes, and other entities during the Software Development Life
Cycle. It is abbreviated as the SCM process. It aims to control cost and work effort involved in
making changes to the software system. The primary goal is to increase productivity with
minimal mistakes.

Why do we need Configuration management?

The primary reasons for Implementing Software Configuration Management System are:

 There are multiple people working on software that is continually being updated


 It may be the case that multiple versions, branches and authors are involved in a software
project, and the team is geographically distributed and works concurrently
 Changes in user requirements, policy, budget and schedule need to be accommodated
 Software should be able to run on various machines and operating systems
 Helps to develop coordination among stakeholders
 SCM process is also beneficial to control the costs involved in making changes to a
system

Any change in the software configuration Items will affect the final product. Therefore, changes
to configuration items need to be controlled and managed.
Tasks in SCM process
Configuration Identification
Baselines
Change Control
Configuration Status Accounting
Configuration Audits and Reviews
Configuration Identification:

Configuration identification is a method of determining the scope of the software system; you
cannot manage or control an item if you do not know what it is. Each configuration item is given a
description that contains the CSCI type (Computer Software Configuration Item), a project
identifier and version information.

Activities during this process:

 Identification of configuration Items like source code modules, test case, and
requirements specification.
 Identification of each CSCI in the SCM repository, by using an object-oriented approach
 The process starts with basic objects, which are grouped into aggregate objects. Details of
what, why, when and by whom changes are made are recorded
 Every object has its own features and a name that distinguishes it from all other objects
 List of resources required such as the document, the file, tools, etc.

Example:

Instead of naming a file login.php, it should be named login_v1.2.php, where v1.2 stands for the
version number of the file.

Instead of naming a folder "Code", it should be named "Code_D", where D indicates that the code
should be backed up daily.

Baseline:

A baseline is a formally accepted version of a software configuration item. It is designated and


fixed at a specific time while conducting the SCM process. It can only be changed through
formal change control procedures.

Activities during this process:

 Facilitate construction of various versions of an application


 Defining and determining mechanisms for managing various versions of these work
products
 The functional baseline corresponds to the reviewed system requirements
 Widely used baselines include functional, developmental, and product baselines

In simple words, baseline means ready for release.

Change Control:

Change control is a procedural method which ensures quality and consistency when changes are
made to a configuration object. In this step, a change request is submitted to the software
configuration manager.

Activities during this process:

 Control ad-hoc changes to build a stable software development environment. Changes are
committed to the repository
 The request is checked on the basis of technical merit, possible side effects and overall
impact on other configuration objects
 It manages changes and makes configuration items available throughout the software
lifecycle
Configuration Status Accounting:

Configuration status accounting tracks each release during the SCM process. This stage involves
tracking what each version has and the changes that lead to this version.

Activities during this process:

 Keeps a record of all the changes made to the previous baseline to reach a new baseline
 Identify all items to define the software configuration
 Monitor status of change requests
 Complete listing of all changes since the last baseline
 Allows tracking of progress to next baseline
 Allows previous releases/versions to be extracted for testing

Configuration Audits and Reviews:

Software configuration audits verify that the software product satisfies the baseline requirements.
They ensure that what is built is what is delivered.

Activities during this process:

 Configuration auditing is conducted by auditors by checking that defined processes are


being followed and ensuring that the SCM goals are satisfied.
 To verify compliance with configuration control standards by auditing and reporting the
changes made
 SCM audits also ensure that traceability is maintained during the process.
 Ensures that changes made to a baseline comply with the configuration status reports
 Validation of completeness and consistency

Participant of SCM process:

Following are the key participants in SCM


1. Configuration Manager

 The Configuration Manager is the head who is responsible for identifying configuration
items.
 The CM ensures the team follows the SCM process.
 He/She needs to approve or reject change requests.

2. Developer

 The developer needs to change the code as per standard development activities or change
requests, and is responsible for maintaining the configuration of the code.
 The developer should check the changes and resolve conflicts

3. Auditor

 The auditor is responsible for SCM audits and reviews.


 Need to ensure the consistency and completeness of release.

4. Project Manager:

 Ensure that the product is developed within a certain time frame


 Monitors the progress of development and recognizes issues in the SCM process
 Generate reports about the status of the software system
 Make sure that processes and policies are followed for creating, changing, and testing

5. User

The end user should understand the key SCM terms to ensure he has the latest version of the
software

Software Configuration Management Plan


The SCMP (Software Configuration Management Plan) process begins in the early phases of a
project. The outcome of the planning phase is the SCM plan, which may be extended or revised
during the project.

 The SCMP can follow a public standard like IEEE 828 or an organization-specific
standard
 It defines the types of documents to be managed and a document naming scheme, for example
Test_v1
 The SCMP defines the person who will be responsible for the entire SCM process and the
creation of baselines
 It fixes policies for version management & change control
 It defines the tools which can be used during the SCM process
 It specifies the configuration management database for recording configuration information

Software Configuration Management Tools

Any Change management software should have the following 3 Key features:

Concurrency Management:

When two or more tasks are happening at the same time, it is known as concurrent operation.
Concurrency in the context of SCM means that the same file is being edited by multiple people at
the same time.

If concurrency is not managed correctly with SCM tools, then it may create many pressing
issues.

Version Control

Version control combines procedures and tools to manage different versions of configuration
objects that are created during the software process.
A version control system implements, or is directly integrated with, four major capabilities:

a project database that stores all relevant configuration objects,


a version management capability that stores all versions of a configuration object,
a make facility that enables the software engineer to collect all relevant configuration objects, and
a facility to construct a specific version of the software.
A number of version control systems establish a change set – a collection of all changes (to some
baseline configuration) that are required to create a specific version of the software.
A change set captures all changes to all files in the configuration, along with the reason for the
changes and details of who made the changes and when.

A number of named change sets can be identified for an application or system. This enables a
software engineer to construct a version of the software by specifying the change sets (by name)
that must be applied to the baseline configuration.

To accomplish this, a system modelling approach is applied. The system model contains:

a template that includes a component hierarchy and a build order for the components, describing
how the system must be constructed,
construction rules, and
verification rules.

Change Control
Change control is a manual step in the software lifecycle. It combines human procedures and
automated tools.

A change request is submitted and evaluated to assess its technical merit, potential side effects,
overall impact on other configuration objects and system functions, and the projected cost of the
change.

The results of the evaluation are presented as a change report, which is used by the change control
authority (CCA) – a person or group who makes the final decision on the status and priority of the
change.

An engineering change order (ECO) is generated for each approved change. The ECO describes
the change to be made, the constraints that must be respected, and the criteria for review and
audit.

The object to be changed can be placed in a directory that is controlled by the software engineer
making the change. As an alternative, the object to be changed can be “checked out” of the
project database, the change is made, and appropriate SQA activities are applied.

The object is then “checked in” to the database and appropriate version control mechanisms are
used to create the next version of the software.

The check-in and check-out mechanisms require two important elements:

Access Control
Synchronization Control

The access control mechanism gives the software engineer the authority to access and modify a
specific configuration object.

The synchronization control mechanism allows parallel changes, or changes made by two different
people, without one overwriting the other's work.
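A much-simplified sketch of how check-out/check-in with access and synchronization control might look, using a hypothetical in-memory repository and a simple exclusive lock (real SCM tools are far more sophisticated and often merge parallel edits rather than locking).

    class Repository:
        """Toy project database: version history of one configuration object plus a lock."""
        def __init__(self, initial_content):
            self.versions = [initial_content]   # version control: history of the object
            self.locked_by = None               # synchronization control: current lock holder

        def check_out(self, engineer):
            # Access control would verify that 'engineer' is authorized before this point.
            if self.locked_by is not None:
                raise RuntimeError(f"Object already checked out by {self.locked_by}")
            self.locked_by = engineer
            return self.versions[-1]            # engineer works on a copy of the latest version

        def check_in(self, engineer, new_content):
            if self.locked_by != engineer:
                raise RuntimeError("Only the engineer who checked the object out may check it in")
            self.versions.append(new_content)   # create the next version of the object
            self.locked_by = None               # release the lock for others

    repo = Repository("login module v1")
    work_copy = repo.check_out("alice")
    repo.check_in("alice", work_copy + " + bug fix")   # creates version 2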
Version control and change control systems often implement an issue tracking (also called bug
tracking) capability that enables the team to record and track the status of all outstanding issues
associated with each configuration object.
Chapter 4
Software Design

Introduction to design process


 The main aim of design engineering is to generate a model which shows firmness, delight
and commodity.
 Software design is an iterative process through which requirements are translated into the
blueprint for building the software.
Software quality guidelines
 A design should use recognizable architectural styles, should be composed of components that
exhibit good design characteristics, and should be implementable in an evolutionary manner to
facilitate testing.
 The design of the software must be modular, i.e. the software must be logically partitioned into
elements.
 In a design, the representation of data, architecture, interface and components should be
distinct.
 A design must use appropriate data structures and recognizable data patterns.
 Design components must show independent functional characteristics.
 A design should create interfaces that reduce the complexity of connections between the
components.
 A design must be derived using a repeatable method.
 Notations that effectively communicate the design's meaning should be used.
Quality attributes
The design quality attributes known as 'FURPS' are as follows:

Functionality:
It evaluates the feature set and capabilities of the program.

Usability:
It is assessed by considering factors such as human factors, overall aesthetics, consistency
and documentation.

Reliability:
It is evaluated by measuring parameters like the frequency and severity of failure, output result
accuracy, the mean-time-to-failure (MTTF), recovery from failure and the program's
predictability.

Performance:
It is measured by considering processing speed, response time, resource consumption,
throughput and efficiency.

Supportability:
It combines the ability to extend the program (extensibility), adaptability and serviceability; these
three terms define maintainability.
It also includes testability, compatibility and configurability, the ease with which a system can be
installed, and the ease with which problems can be localized.
Supportability also covers further attributes such as compatibility, extensibility, fault
tolerance, modularity, reusability, robustness, security, portability and scalability.

Design concepts

The set of fundamental software design concepts are as follows:

1. Abstraction
 At the highest level of abstraction, a solution is stated in broad terms using the language of the
problem environment.
 Lower levels of abstraction provide a more detailed description of the solution.
 A sequence of instructions that has a specific and limited function is referred to as a procedural
abstraction.
 A collection of data that describes a data object is a data abstraction.
2. Architecture
 The complete structure of the software is known as software architecture.
 Structure provides conceptual integrity for a system in a number of ways.
 The architecture is the structure of program modules and the way in which they interact with
each other.
 It also encompasses the structure of the data used by the components.
 The aim of the software design is to obtain an architectural framework of a system.
 The more detailed design activities are conducted from the framework.
3. Patterns
 A design pattern describes a design structure that solves a particular design problem in a
specified context.
4. Modularity
 Software is divided into separately named and addressable components, sometimes called
modules, which are integrated to satisfy the problem requirements.
 Modularity is the single attribute of software that permits a program to be managed easily.
5. Information hiding
Modules must be specified and designed so that the information (algorithms and data) contained
within a module is inaccessible to other modules that have no need for that information.
6. Functional independence
 Functional independence is a concept of separation and is related to the concepts of
modularity, abstraction and information hiding.
 Functional independence is assessed using two criteria, i.e. cohesion and coupling.

Cohesion
 Cohesion is an extension of the information hiding concept.
 A cohesive module performs a single task and requires little interaction with other
components in other parts of the program.
Coupling
Coupling is an indication of interconnection between modules in a structure of software.

7. Refinement
 Refinement is a top-down design approach; it is a process of elaboration.
 A hierarchy is established by decomposing a macroscopic statement of function in a stepwise
manner until programming language statements are reached.
 In each step, one or several instructions of a given program are decomposed into more detailed
instructions.
 Abstraction and refinement are complementary concepts.
 Refinement is a generic term of computer science that encompasses various approaches for
producing correct computer programs and simplifying existing programs to enable their
formal verification.

8. Refactoring
 It is a reorganization technique which simplifies the design of components without changing
their function or behaviour.
 Refactoring is the process of changing a software system in such a way that it does not change
the external behaviour of the code, yet improves its internal structure.
9. Design classes
 The model of software is defined as a set of design classes.
 Every class describes an element of the problem domain, focusing on features of the
problem that are user visible.
10. Separation of Data
Known as separation of concerns, this principle states that the software code must be
separated into two sections called layers and components. To ensure proper implementation,
the two sections must have little to no overlap between them and must have a defined purpose
for each component. This principle allows each component to be developed, maintained, and
reused independently of one another. This eases the development stage and benefits future
updates of software since it allows the code to be modified without needing to know the
specifics of other components.
11. Data Hiding
Also known as information hiding, data hiding allows modules to pass only the required
information between themselves without sharing the internal structures and processing. The
specific purpose of hiding the internal details of individual objects has several benefits.
Increased efficiency, in general, allows for a much lower level of error while ensuring high-
quality software. It is important to note that data hiding only hides class data components.
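A brief sketch of data hiding in Python (underscore-prefixed attributes are hidden by convention rather than enforced; the Account class below is a hypothetical example, not from the text).

    class Account:
        """Exposes operations while keeping the balance representation hidden."""
        def __init__(self, opening_balance):
            self._balance = opening_balance      # internal data, not part of the public interface

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._balance += amount

        def balance(self):
            return self._balance                 # callers see a value, not the internal structure

    acct = Account(100)
    acct.deposit(50)
    print(acct.balance())   # -> 150; callers never touch _balance directly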
Software Engineering | Coupling and Cohesion
Introduction: The purpose of the design phase in the Software Development Life Cycle is to
produce a solution to the problem given in the SRS (Software Requirement Specification)
document. The output of the design phase is the Software Design Document (SDD).
Basically, design is a two-part iterative process. The first part is conceptual design, which tells the
customer what the system will do. The second is technical design, which allows the system
builders to understand the actual hardware and software needed to solve the customer's problem.

Conceptual design of system:


 Written in simple language i.e. customer understandable language.
 Detailed explanation of system characteristics.
 Describes the functionality of the system.
 It is independent of implementation.
 Linked with requirement document.
Modularization: Modularization is the process of dividing a software system into multiple
independent modules where each module works independently. There are many advantages
of Modularization in software engineering. Some of these are given below:
 Easy to understand the system.
 System maintenance is easy.
 A module can be reused many times as per requirements; there is no need to write it again and
again.

Cohesion
Cohesion is a measure that defines the degree of intra-dependability within elements of a
module. The greater the cohesion, the better is the program design.

There are seven types of cohesion, namely –


 Co-incidental cohesion - It is unplanned and random cohesion, which might be the
result of breaking the program into smaller modules for the sake of modularization.
Because it is unplanned, it may cause confusion for programmers and is generally
not accepted.
 Logical cohesion - When logically categorized elements are put together into a
module, it is called logical cohesion.
 Temporal Cohesion - When elements of module are organized such that they are
processed at a similar point in time, it is called temporal cohesion.
 Procedural cohesion - When elements of module are grouped together, which are
executed sequentially in order to perform a task, it is called procedural cohesion.
 Communicational cohesion - When elements of module are grouped together, which
are executed sequentially and work on same data (information), it is called
communicational cohesion.
 Sequential cohesion - When elements of module are grouped because the output of
one element serves as input to another and so on, it is called sequential cohesion.
 Functional cohesion - It is considered to be the highest degree of cohesion, and it is
highly expected. Elements of module in functional cohesion are grouped because they
all contribute to a single well-defined function. It can also be reused.

Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a
program. It tells at what level the modules interfere and interact with each other. The lower
the coupling, the better the program.
There are five levels of coupling, namely -
 Content coupling - When a module can directly access or modify or refer to the
content of another module, it is called content level coupling.
 Common coupling- When multiple modules have read and write access to some
global data, it is called common or global coupling.
 Control coupling- Two modules are called control-coupled if one of them decides the
function of the other module or changes its flow of execution.
 Stamp coupling- When multiple modules share common data structure and work on
different part of it, it is called stamp coupling.
 Data coupling- Data coupling is when two modules interact with each other by
means of passing data (as parameter). If a module passes data structure as parameter,
then the receiving module should use all its components.
Ideally, no coupling is considered to be the best.
Cohesion
 Cohesion is the indication of the relationships within a module.
 Cohesion shows the module’s relative functional strength.
 Cohesion is a degree (quality) to which a component / module focuses on the single
thing.
 While designing you should strive for high cohesion i.e. a cohesive component/ module
focus on a single task (i.e., single-mindedness) with little interaction with other modules
of the system.
 Cohesion is a kind of natural extension of data hiding, for example, a class having all
members visible within a package that has default visibility. Cohesion is an intra-module
concept.
Coupling
 Coupling is the indication of the relationships between modules.
 Coupling shows the relative independence among the modules.
 Coupling is a degree to which a component / module is connected to the other modules.
 While designing you should strive for low coupling i.e. dependency between modules
should be less
 Making fields and methods private and classes non-public provides loose coupling.
 Coupling is an inter-module concept. A short illustrative example is given below.
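A short illustrative contrast in Python (the functions are hypothetical, not from the text): the first pair communicates through a shared global (common coupling), while the second passes only the data it needs as parameters (data coupling), which also keeps each function focused on a single task (high cohesion).

    # Common (global) coupling: both functions depend on a shared mutable global.
    order_total = 0

    def add_item_global(price):
        global order_total
        order_total += price

    def apply_discount_global(rate):
        global order_total
        order_total *= (1 - rate)

    # Data coupling: modules interact only through parameters and return values.
    def add_item(total, price):
        return total + price

    def apply_discount(total, rate):
        return total * (1 - rate)

    total = apply_discount(add_item(0, 100.0), 0.10)
    print(total)   # -> 90.0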

Architectural design
Introduction: The software needs the architectural design to represents the design of
software. IEEE defines architectural design as “the process of defining a collection of
hardware and software components and their interfaces to establish the framework for the
development of a computer system.” The software that is built for computer-based systems
can exhibit one of these many architectural styles.
Each style will describe a system category that consists of :
 A set of components(eg: a database, computational modules) that will perform a
function required by the system.
 The set of connectors will help in coordination, communication, and cooperation
between the components.
 Conditions on how components can be integrated to form the system.
 Semantic models that help the designer to understand the overall properties of the
system.
 An architectural design performs the following functions:
1. It defines an abstraction level at which the designers can specify the functional and
performance behaviour of the system.
2. It acts as a guideline for enhancing the system (whenever required) by describing
those features of the system that can be modified easily without affecting the system
integrity.
3. It evaluates all top-level designs.
4. It develops and documents top-level design for the external and internal interfaces.
5. It develops preliminary versions of user documentation.
6. It defines and documents preliminary test requirements and the schedule for
software integration.
 The sources of architectural design are listed below:
1. Information regarding the application domain for the software to be developed
2. Data-flow diagram representations
3. Availability of architectural patterns and architectural styles
Software Architecture :
Software architecture defines the fundamental organization of a system and, more simply, defines
a structured solution. It defines how the components of a software system are assembled, their
relationships and the communication between them. It serves as a blueprint for the software
application and a development basis for the developer team.
Software architecture defines a number of things which make many activities in the software
development process easier.

 A software architecture defines structure of a system.


 A software architecture defines behavior of a system.
 A software architecture defines component relationship.
 A software architecture defines communication structure.
 A software architecture balances stakeholder’s needs.
 A software architecture influences team structure.
 A software architecture focuses on significant elements.
 A software architecture captures early design decisions.
Characteristics of Software Architecture :
Architects separate architecture characteristics into broad categories such as operational,
structural, and cross-cutting (rarely appearing) requirements. Some important
characteristics which are commonly considered are explained below.
 Operational Architecture Characteristics :
1. Availability
2. Performance
3. Reliability
4. Fault tolerance
5. Scalability
 Structural Architecture Characteristics :
1. Configurability
2. Extensibility
3. Supportability
4. Portability
5. Maintainability
 Cross-Cutting Architecture Characteristics :
1. Accessibility
2. Security
3. Usability
4. Privacy
5. Feasibility

Architectural Design Representation

Architectural design can be represented using the following models.

1. Structural model: Illustrates architecture as an ordered collection of program components.
2. Dynamic model: Specifies the behavioral aspect of the software architecture and indicates
how the structure or system configuration changes as the function changes due to change in
the external environment
3. Process model: Focuses on the design of the business or technical process, which must be
implemented in the system
4. Functional model: Represents the functional hierarchy of a system
5. Framework model: Attempts to identify repeatable architectural design patterns encountered
in similar types of application. This leads to an increase in the level of abstraction.
The use of architectural styles is to establish a structure for all the components of the system.
Taxonomy of Architectural styles:
1. Data centred architectures:
 A data store will reside at the center of this architecture and is accessed frequently
by the other components that update, add, delete or modify the data present within
the store.
 The figure illustrates a typical data-centered style. The client software accesses a
central repository. A variation of this approach transforms the repository into a
blackboard that sends notifications to client software when data of interest to a
client changes.
 This data-centered architecture promotes integrability. This means that existing
components can be changed and new client components can be added to the
architecture without concern about other clients.
 Data can be passed among clients using the blackboard mechanism.
2. Data flow architectures:
 This kind of architecture is used when input data is to be transformed into output
data through a series of computational or manipulative components.
 The figure represents a pipe-and-filter architecture: it has a set of components, called
filters, connected by pipes (a small pipe-and-filter sketch follows this list).
 Pipes are used to transmit data from one component to the next.
 Each filter will work independently and is designed to take data input of a certain
form and produces data output to the next filter of a specified form. The filters
don’t require any knowledge of the working of neighboring filters.
 If the data flow degenerates into a single line of transforms, then it is termed as
batch sequential. This structure accepts the batch of data and then applies a series
of sequential components to transform it.

3. Call and Return architectures: It is used to create a program that is easy to scale and
modify. Many sub-styles exist within this category. Two of them are explained below.
 Remote procedure call architecture: The components of a main program or
subprogram architecture are distributed across multiple computers on a network.
 Main program or subprogram architectures: The main program structure
decomposes into a number of subprograms or functions organized into a control hierarchy.
The main program invokes a number of subprograms, which can in turn invoke other
components.
4. Object Oriented architecture: The components of a system encapsulate data and the
operations that must be applied to manipulate the data. The coordination and
communication between the components are established via the message passing.
5. Layered architecture:
 A number of different layers are defined, with each layer performing a well-defined
set of operations. Progressively, the operations of each layer become closer
to the machine instruction set.
 At the outer layer, components service user interface operations; at the
inner layers, components perform operating system
interfacing (communication and coordination with the OS).
 Intermediate layers provide utility services and application software functions.
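The following minimal sketch (all function names are hypothetical) illustrates the data-flow, pipe-and-filter style from the taxonomy above: each filter works independently on its input, and the pipe simply passes the output of one filter to the next.

# Hypothetical pipe-and-filter sketch in Python.
def strip_whitespace(lines):
    return [line.strip() for line in lines]

def drop_blank_lines(lines):
    return [line for line in lines if line]

def to_upper(lines):
    return [line.upper() for line in lines]

def pipeline(data, filters):
    """The pipe: feeds the output of one filter into the next filter."""
    for f in filters:
        data = f(data)
    return data

raw = ["  hello ", "", " world  "]
print(pipeline(raw, [strip_whitespace, drop_blank_lines, to_upper]))
# ['HELLO', 'WORLD']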

Importance of software architecture

Following are the reasons for the importance of software architecture.

1. The representation of software architecture enables communication between all
stakeholders and the developers.
2. The architecture highlights early design decisions that have a profound impact on all
subsequent software engineering work and on the ultimate success of the system.
3. The software architecture constitutes a small, intellectually graspable model of the system.
4. This model helps in integrating the components and shows how the components
work together.
Architectural design
 The architectural design starts when the software to be developed is put into context.
 The information is obtained from the requirements model and from other information collected
during requirements engineering.
Representing the system in context

All of the following entities communicate with the target system through interfaces, shown as the
small rectangles in the accompanying context figure.

Superordinate systems
These systems use the target system as part of some higher-level processing scheme.

Subordinate systems
These systems are used by the target system and provide data necessary to complete the target
system's functionality.

Peer-level systems
These systems interact with the target system on a peer-to-peer basis, i.e. information is produced
and consumed by both the target system and its peers.

Actors
These are entities (people, devices) that interact with the target system by producing or consuming
information that is necessary for requisite processing.
Defining Archetypes
An archetype is a class or pattern that represents a core abstraction that is critical to the design
of an architecture for the target system. In general, a relatively small set of archetypes is
required to design even relatively complex systems. The target system architecture is
composed of these archetypes, which represent stable elements of the architecture but may be
instantiated many different ways based on the behavior of the system.
In many cases, archetypes can be derived by examining the analysis classes defined as part of
the requirements model. Continuing the discussion of the SafeHome home security function,
you might define the following archetypes:
• Node. Represents a cohesive collection of input and output elements of the home security
function. For example, a node might be comprised of (1) various sensors and (2) a variety of
alarm (output) indicators.
• Detector. An abstraction that encompasses all sensing equipment that feeds information into
the target system.
• Indicator. An abstraction that represents all mechanisms (e.g., alarm siren, flashing lights,
bell) for indicating that an alarm condition is occurring.
• Controller. An abstraction that depicts the mechanism that allows the arming or disarming
of a node. If controllers reside on a network, they have the ability to communicate with one
another.

Each of these archetypes is depicted using UML notation as shown in the figure. Recall that the
archetypes form the basis for the architecture but are abstractions that must be further refined
as architectural design proceeds. For example, Detector might be refined into a class
hierarchy of sensors.
Refining the Architecture into Components
As the software architecture is refined into components, the structure of the system begins to
emerge. But how are these components chosen? In order to answer this question, you begin
with the classes that were described as part of the requirements model. These analysis classes
represent entities within the application (business) domain that must be addressed within the
software architecture. Hence, the application domain is one source for the derivation and
refinement of components. Another source is the infrastructure domain. The architecture must
accommodate many infrastructure components that enable application components but have no
business connection to the application domain. For example, memory management
components, communication components, database components, and task management
components are often integrated into the software architecture.
Continuing the SafeHome home security function example, you might define the set of top-
level components that address the following functionality:
• External communication management—coordinates communication of the security function
with external entities such as other Internet-based systems and external alarm notification.
• Control panel processing—manages all control panel functionality.
• Detector management—coordinates access to all detectors attached to the system.
• Alarm processing—verifies and acts on all alarm conditions.

Architectural Mapping Using Data Flow

• A mapping technique, called structured design, is often characterized as a data flow-oriented
design method because it provides a convenient transition from a data flow
diagram to software architecture.
• The transition from information flow to program structure is accomplished as part of a
six step process:
(1) The type of information flow is established,
(2) Flow boundaries are indicated,
(3) The DFD is mapped into the program structure,
(4) Control hierarchy is defined,
(5) The resultant structure is refined using design measures.
(6) The architectural description is refined and elaborated.
As a brief example of data flow mapping, consider a step-by-step “transform” mapping for a
small part of the SafeHome security function. In order to perform the mapping, the
type of information flow must be determined. One type of information flow is called
transform flow and exhibits a linear quality. Data flows into the system along an
incoming flow path where it is transformed from an external world representation into
internalized form. Once it has been internalized, it is processed at a transform center.
Finally, it flows out of the system along an outgoing flow path that transforms the data
into external world form.

Component-Based Architecture
Component-based architecture focuses on the decomposition of the design into individual
functional or logical components that represent well-defined communication interfaces
containing methods, events, and properties. It provides a higher level of abstraction and
divides the problem into sub-problems, each associated with component partitions.
The primary objective of component-based architecture is to ensure component reusability.
A component encapsulates functionality and behaviors of a software element into a reusable
and self-deployable binary unit. There are many standard component frameworks such as
COM/DCOM, JavaBean, EJB, CORBA, .NET, web services, and grid services. These
technologies are widely used in local desktop GUI application design, for example graphic
JavaBean components, MS ActiveX components, and COM components, which can be
reused through simple drag-and-drop operations.
Component-oriented software design has many advantages over the traditional object-
oriented approaches, such as −
 Reduced time to market and reduced development cost by reusing existing components.
 Increased reliability with the reuse of the existing components.
What is a Component?
A component is a modular, portable, replaceable, and reusable set of well-defined
functionality that encapsulates its implementation and exports it as a higher-level
interface.
A component is a software object, intended to interact with other components, encapsulating
certain functionality or a set of functionalities. It has a clearly defined interface and
conforms to a recommended behavior common to all components within an architecture.
A software component can be defined as a unit of composition with a contractually specified
interface and explicit context dependencies only. That is, a software component can be
deployed independently and is subject to composition by third parties.
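A minimal sketch, assuming Python, of what "a unit of composition with a contractually specified interface and explicit context dependencies" can look like; the PaymentService and CardPaymentComponent names are invented purely for the illustration.

from abc import ABC, abstractmethod

class PaymentService(ABC):
    """Provided interface: the only contract visible to other components."""
    @abstractmethod
    def pay(self, amount: float) -> str: ...

class CardPaymentComponent(PaymentService):
    """The implementation stays hidden; the logger is an explicit context dependency."""
    def __init__(self, logger):
        self._logger = logger                # required interface / explicit dependency

    def pay(self, amount: float) -> str:
        self._logger("charging " + str(amount))
        return "OK"                          # internal details are encapsulated

# A third party can compose the component without knowing its internals.
component: PaymentService = CardPaymentComponent(logger=print)
assert component.pay(25.0) == "OK"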
Views of a Component
A component can have three different views − object-oriented view, conventional view, and
process-related view.
Object-oriented view
A component is viewed as a set of one or more cooperating classes. Each problem domain
class (analysis) and infrastructure class (design) is elaborated to identify all attributes and
operations that apply to its implementation. It also involves defining the interfaces that
enable classes to communicate and cooperate.
Conventional view
It is viewed as a functional element or a module of a program that integrates the processing
logic, the internal data structures that are required to implement the processing logic and an
interface that enables the component to be invoked and data to be passed to it.
Process-related view
In this view, instead of creating each component from scratch, the system is built from
existing components maintained in a library. As the software architecture is formulated,
components are selected from the library and used to populate the architecture.
 A user interface (UI) component includes grids and buttons, referred to as controls;
utility components expose a specific subset of functions used in other components.
 Other common types of components are those that are resource intensive, not
frequently accessed, and must be activated using the just-in-time (JIT) approach.
 Many components are invisible which are distributed in enterprise business
applications and internet web applications such as Enterprise JavaBean (EJB), .NET
components, and CORBA components.
Characteristics of Components
 Reusability − Components are usually designed to be reused in different situations in
different applications. However, some components may be designed for a specific
task.
 Replaceable − Components may be freely substituted with other similar components.
 Not context specific − Components are designed to operate in different environments
and contexts.
 Extensible − A component can be extended from existing components to provide
new behavior.
 Encapsulated − A component exposes interfaces that allow the caller to use
its functionality, and does not expose details of its internal processes or any internal
variables or state.
 Independent − Components are designed to have minimal dependencies on other
components.

Principles of Component-Based Design
A component-level design can be represented by using some intermediary representation
(e.g. graphical, tabular, or text-based) that can be translated into source code. The design of
data structures, interfaces, and algorithms should conform to well-established guidelines to
help us avoid the introduction of errors.
 The software system is decomposed into reusable, cohesive, and encapsulated
component units.
 Each component has its own interface that specifies required ports and provided
ports; each component hides its detailed implementation.
 A component should be extended without the need to make internal code or design
modifications to the existing parts of the component.
 Components should depend on abstractions and not on other concrete components,
because concrete dependencies make extensibility harder.
 Connectors connect components, specifying and governing the interaction among
components. The interaction type is specified by the interfaces of the components.
 Components interaction can take the form of method invocations, asynchronous
invocations, broadcasting, message driven interactions, data stream communications,
and other protocol specific interactions.
 For a server class, specialized interfaces should be created to serve major categories
of clients. Only those operations that are relevant to a particular category of clients
should be specified in the interface.
 A component can extend to other components and still offer its own extension points.
It is the concept of plug-in based architecture. This allows a plugin to offer another
plugin API.
Component-Level Design Guidelines
Create naming conventions for components that are specified as part of the architectural
model, and then refine or elaborate them as part of the component-level model.
 Attains architectural component names from the problem domain and ensures that
they have meaning to all stakeholders who view the architectural model.
 Extracts the business process entities that can exist independently without any
associated dependency on other entities.
 Recognizes and discover these independent entities as new components.
 Uses infrastructure component names that reflect their implementation-specific
meaning.
 Models any dependencies from left to right and inheritance from top (base class) to
bottom (derived classes).
 Model any component dependencies as interfaces rather than representing them as a
direct component-to-component dependency.
Conducting Component-Level Design
Recognizes all design classes that correspond to the problem domain as defined in the
analysis model and architectural model.
 Recognizes all design classes that correspond to the infrastructure domain.
 Describes all design classes that are not acquired as reusable components, and
specifies message details.
 Identifies appropriate interfaces for each component and elaborates attributes and
defines data types and data structures required to implement them.
 Describes processing flow within each operation in detail by means of pseudo code or
UML activity diagrams.
 Describes persistent data sources (databases and files) and identifies the classes
required to manage them.
 Develops and elaborates behavioral representations for a class or component. This can
be done by elaborating the UML state diagrams created for the analysis model and
by examining all use cases that are relevant to the design class.
 Elaborates deployment diagrams to provide additional implementation detail.
 Demonstrates the location of key packages or classes of components in a system by
using class instances and designating specific hardware and operating system
environment.
The final decision can be made by using established design principles and guidelines.
Experienced designers consider all (or most) of the alternative design solutions
before settling on the final design model.
Advantages
 Ease of deployment − As new compatible versions become available, it is easier to
replace existing versions with no impact on the other components or the system as a
whole.
 Reduced cost − The use of third-party components allows you to spread the cost of
development and maintenance.
 Ease of development − Components implement well-known interfaces to provide
defined functionality, allowing development without impacting other parts of the
system.
 Reusable − The use of reusable components means that they can be used to spread
the development and maintenance cost across several applications or systems.
 Modification of technical complexity − Complexity is managed through the use of a
component container and its services.
 Reliability − The overall system reliability increases since the reliability of each
individual component enhances the reliability of the whole system via reuse.
 System maintenance and evolution − Easy to change and update the
implementation without affecting the rest of the system.
 Independent − Components are independent and flexibly connected; they can be
developed in parallel by different groups, which improves productivity both for current
development and for future software development.

CHAPTER 5.

SOFTWARE TESTING
BASIC CONCEPT AND TERMINOLOGY

Software testing is a critical element of software quality assurance and represents the
ultimate review of specification, design, and code generation. The increasing visibility of
software as a system element and the attendant "costs" associated with a software failure
are motivating forces for well-planned, thorough testing. It is not unusual for a software
development organization to expend between 30 and 40 percent of total project effort on
testing. In the extreme, testing of human-rated software (e.g., flight control, nuclear
reactor monitoring) can cost three to five times as much as all other software engineering
steps combined!
Once source code has been generated, software must be tested to uncover (and correct) as
many errors as possible before delivery to customer. Your goal is to design a series of test
cases that have a high likelihood of finding errors but how? That’s where software testing
techniques enter the picture. These techniques provide systematic guidance for designing
tests that (1) exercise the internal logic of software components, and (2) exercise the input
and output domains of the program to uncover errors in program function, behavior and
performance.
Reviews and other SQA activities can and do uncover errors, but they are not sufficient.
Every time the program is executed, the customer tests it! Therefore, you have to execute
the program before it gets to the customer with the specific intent of finding and removing
all errors. In order to find the highest possible number of errors, tests must be conducted
systematically.

Unit Testing
Unit Testing is a software testing technique by means of which individual units of
software, i.e. groups of computer program modules, usage procedures, and operating
procedures, are tested to determine whether they are suitable for use. It is a testing
method in which every independent module is tested for defects by the developer
himself, and it is concerned with the functional correctness of the independent modules.
Unit Testing is defined as a type of software testing where individual components of the
software are tested. Unit Testing of the software product is carried out during the
development of an application, and an individual component may be an individual
function or a procedure. Unit Testing is typically performed by the developer. In the
SDLC or V-Model, unit testing is the first level of testing and is done before integration
testing. Although unit testing is usually performed by developers, quality assurance
engineers also perform it when developers are reluctant to test.
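As a small, hypothetical illustration, a unit test for a single independent function might look like the sketch below (the function and test names are invented for the example):

import unittest

# Hypothetical unit under test: an independent function with no external dependencies.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percentage")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()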

Integration testing
Integration testing is the process of testing the interface between two software units or
modules. It focuses on determining the correctness of the interface. The purpose of
integration testing is to expose faults in the interaction between integrated units. Once all
the modules have been unit tested, integration testing is performed.
Integration test approaches – There are four types of integration testing approaches.
Those approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where
all the modules are combined and the functionality is verified after the completion of
individual module testing. In simple words, all the modules of the system are simply put
together and tested. This approach is practicable only for very small systems. If an error
is found during the integration testing, it is very difficult to localize the error as the error
may potentially belong to any of the modules being integrated. So, debugging errors
reported during big bang integration testing is very expensive to fix.
Advantages:
 It is convenient for small systems.
Disadvantages:
 There will be quite a lot of delay because you would have to wait for all the modules
to be integrated.
 High risk critical modules are not isolated and tested on priority since all modules
are tested at once.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at lower levels
is tested with higher modules until all modules are tested. The primary purpose of this
integration testing is that each subsystem tests the interfaces among various modules
making up the subsystem. This integration testing uses test drivers to drive and pass
appropriate data to the lower level modules.
Advantages:
 In bottom-up testing, no stubs are required.
 A principle advantage of this integration testing is that several disjoint subsystems
can be tested simultaneously.
Disadvantages:
 Driver modules must be produced.


 Testing becomes complex when the system is made up of a large number of small
subsystems.
3. Top-Down Integration Testing – In top-down integration testing, testing takes place
from top to bottom, and stubs are used to simulate the behaviour of the lower-level
modules that are not yet integrated. First, the high-level modules are tested, then the
low-level modules, and finally the low-level modules are integrated with the high-level
ones to ensure the system is working as intended.
Advantages:
 Separately debugged module.
 Few or no drivers needed.
 It is more stable and accurate at the aggregate level.
Disadvantages:
 Needs many Stubs.
 Modules at lower level are tested inadequately.
4. Mixed Integration Testing – A mixed integration testing is also called sandwiched
integration testing. A mixed integration testing follows a combination of top down and
bottom-up testing approaches. In the top-down approach, testing can start only after the
top-level modules have been coded and unit tested; in the bottom-up approach, testing can
start only after the bottom-level modules are ready. This sandwich or mixed approach
overcomes this shortcoming of the top-down and bottom-up approaches.
Advantages:
 Mixed approach is useful for very large projects having several sub projects.
 This Sandwich approach overcomes this shortcoming of the top-down and bottom-up
approaches.
Disadvantages:
 For mixed integration testing, it requires very high cost because one part has Top-
down approach while another part has bottom-up approach.
 This integration testing cannot be used for smaller systems with huge
interdependence between different modules.
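To make the role of drivers (bottom-up) and stubs (top-down) concrete, here is a minimal, hypothetical sketch; the module and function names are invented for the illustration.

# Hypothetical lower-level module that is already implemented (bottom-up scenario).
def calculate_tax(amount):
    return round(amount * 0.18, 2)

# Driver: temporary test code that calls the lower-level module with test data.
def tax_driver():
    for amount, expected in [(100.0, 18.0), (0.0, 0.0)]:
        assert calculate_tax(amount) == expected
    print("calculate_tax passed the driver checks")

# Stub: stands in for a lower-level module that is not yet integrated (top-down scenario).
def fetch_exchange_rate_stub(currency):
    return 1.0      # fixed canned response instead of a real service call

# Higher-level module under test, wired to the stub instead of the real dependency.
def convert_invoice_total(amount, currency, rate_provider=fetch_exchange_rate_stub):
    return round(amount * rate_provider(currency), 2)

if __name__ == "__main__":
    tax_driver()
    assert convert_invoice_total(50.0, "EUR") == 50.0
    print("top-down check with the stub passed")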

Validation Testing


The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
It answers the question: Are we building the right product?
Validation Testing - Workflow:
Validation testing can be best demonstrated using V-Model. The Software/product under
test is evaluated during this type of testing.

Difference between verification and validation testing

1. Verification checks whether we are building the product right; validation checks whether we have built the right product.
2. Verification is also known as static testing; validation is also known as dynamic testing.
3. Verification includes methods such as inspections, reviews, and walkthroughs; validation includes testing such as functional testing, system testing, integration testing, and user acceptance testing.
4. Verification is a process of checking the work-products (not the final product) of a development cycle to decide whether the product meets the specified requirements; validation is a process of checking the software during or at the end of the development cycle to decide whether the software follows the specified business requirements.
5. Quality assurance comes under verification testing; quality control comes under validation testing.
6. The execution of code does not happen in verification testing; in validation testing, the code is executed.
7. In verification testing, we can find bugs early in the development phase of the product; in validation testing, we find those bugs that were not caught in the verification process.
8. Verification testing is executed by the quality assurance team to make sure that the product is developed according to the customers' requirements; validation testing is executed by the testing team to test the application.
9. Verification is done before validation testing; validation testing takes place after verification.
10. In verification, we verify whether the inputs conform to the outputs or not; in validation, we validate whether the user accepts the product or not.

System Testing
System Testing is a type of software testing that is performed on a complete integrated
system to evaluate the compliance of the system with the corresponding requirements. In
system testing, components that have passed integration testing are taken as input. The goal of
integration testing is to detect any irregularity between the units that are integrated
together. System testing detects defects within both the integrated units and the whole
system. The result of system testing is the observed behavior of a component or a system
when it is tested. System Testing is carried out on the whole system in the context of
either system requirement specifications or functional requirement specifications or in
the context of both. System testing tests the design and behavior of the system and also
the expectations of the customer. It is performed to test the system beyond the bounds
mentioned in the software requirements specification (SRS). System Testing is basically
performed by a testing team that is independent of the development team, which helps to
test the quality of the system impartially. It covers both functional and non-functional
testing. System Testing is a black-box testing technique. System Testing is performed after the
integration testing and before the acceptance testing.

System Testing Process: System Testing is performed in the following steps:


 Test Environment Setup: Create testing environment for the better quality testing.
 Create Test Case: Generate test case for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Case: After the generation of the test case and the test data, test cases
are executed.
 Defect Reporting: Defects detected in the system are reported.
 Regression Testing: It is carried out to test the side effects of the testing process.
 Log Defects: Defects are logged and fixed in this step.
 Retest: If a test is not successful, the test is performed again.

Types of System Testing:


 Performance Testing: Performance Testing is a type of software testing that is
carried out to test the speed, scalability, stability and reliability of the software
product or application.


 Load Testing: Load Testing is a type of software Testing which is carried out to
determine the behavior of a system or software product under extreme load.
 Stress Testing: Stress Testing is a type of software testing performed to check the
robustness of the system under the varying loads.
 Scalability Testing: Scalability Testing is a type of software testing which is carried
out to check the performance of a software application or system in terms of its
capability to scale up or scale down the number of user request load.

Advantages of System Testing :


 The testers do not require deep knowledge of programming to carry out this testing.
 It will test the entire product or software so that we will easily detect the errors or
defects which cannot be identified during the unit testing and integration testing.
 The testing environment is similar to that of the real time production or business
environment.
 It checks the entire functionality of the system with different test scripts and also it
covers the technical and business requirements of clients.
 After this testing, the product will almost cover all the possible bugs or errors and
hence the development team will confidently go ahead with acceptance testing.
Disadvantages of System Testing :
 This testing is a more time-consuming process than other testing techniques since it
checks the entire product or software.
 The cost of the testing is high since it covers the testing of the entire software.
 It needs a good debugging tool, otherwise hidden errors will not be found.

Software Testing Techniques


Software testing techniques are the ways employed to test the application under test
against the functional or non-functional requirements gathered from business. Each
testing technique helps to find a specific type of defect. For example, Techniques which
may find structural defects might not be able to find the defects against the end-to-end
business flow. Hence, multiple testing techniques are applied in a testing project to
conclude it with acceptable quality.
Principles Of Testing
Below are the principles of software testing:
1. All the tests should meet the customer's requirements.
2. To make our software testing effective, it should be performed by a third party.
3. Exhaustive testing is not possible; we need an optimal amount of testing based
on the risk assessment of the application.
4. All the tests to be conducted should be planned before being implemented.
5. Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come from 20%
of the program components.
6. Start testing with small parts and extend it to larger parts.


Types Of Software Testing Techniques


There are two main categories of software testing techniques:
1. Static Testing Techniques are testing techniques which are used to find defects in
Application under test without executing the code. Static Testing is done to avoid
errors at an early stage of the development cycle and thus reducing the cost of fixing
them.
2. Dynamic Testing Techniques are testing techniques that are used to test the
dynamic behavior of the application under test, that is by the execution of the code
base. The main purpose of dynamic testing is to test the application with dynamic
inputs- some of which may be allowed as per requirement (Positive testing) and
some are not allowed (Negative Testing).
Each testing technique has further types as showcased in the below diagram. Each one of
them will be explained in detail with examples below.
Static Testing Techniques
As explained earlier, Static Testing techniques are testing techniques that do not require
the execution of a code base. Static Testing Techniques are divided into two major
categories:
1. Reviews: They can range from purely informal peer reviews between two
developers/testers on the artifacts (code/test cases/test data) to totally
formal Inspections which are led by moderators who can be internal/external to the
organization.
1. Peer Reviews: Informal reviews are generally conducted without any formal
setup. It is between peers. For Example- Two developers/Testers review each
other’s artifacts like code/test cases.
2. Walkthroughs: Walkthrough is a category where the author of work (code or
test case or document under review) walks through what he/she has done and the
logic behind it to the stakeholders to achieve a common understanding or for the
intent of feedback.
3. Technical review: It is a review meeting that focuses solely on the technical
aspects of the document under review to achieve a consensus. It has less or no
focus on the identification of defects based on reference documentation.
Technical experts like architects/chief designers are required for doing the
review. It can vary from Informal to fully formal.
4. Inspection: Inspection is the most formal category of reviews. The document
under review is thoroughly prepared before going for an inspection. Defects that
are identified in the inspection meeting are logged in the defect management tool
and followed up until closure. Discussion of defects during the meeting is avoided;
a separate discussion phase is used instead, which makes inspections a very
effective form of review.
2. Static Analysis: Static Analysis is an examination of requirements, code, or design
with the aim of identifying defects that may or may not cause failures. For example,
reviewing the code for adherence to coding standards: not following a standard is a defect


that may or may not cause a failure. There are many tools for Static Analysis that are
mainly used by developers before or during Component or Integration
Testing. Even Compiler is a Static Analysis tool as it points out incorrect usage of
syntax, and it does not execute the code per se. There are several aspects to the code
structure – Namely Data flow, Control flow, and Data Structure.
1. Data Flow: It means how the data trail is followed in a given program – How
data gets accessed and modified as per the instructions in the program. By Data
flow analysis, You can identify defects like a variable definition that never got
used.
2. Control flow: It is the structure of how program instructions get executed i.e
conditions, iterations, or loops. Control flow analysis helps to identify defects
such as Dead code i.e a code that never gets used under any condition.
3. Data Structure: It refers to the organization of data irrespective of code. The
complexity of data structures adds to the complexity of code. Thus, it provides
information on how to test the control flow and data flow in a given code.
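As a tiny, hypothetical illustration of defects that static analysis can flag without executing the code, the snippet below contains an unused variable definition (a data-flow defect) and an unreachable statement (a control-flow defect); a typical linter would report both.

def compute_total(prices):
    discount = 0.10              # data-flow defect: 'discount' is defined but never used
    total = 0
    for p in prices:
        total += p
    return total
    print("total computed")      # control-flow defect: dead code, never reached after return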

Dynamic Testing Techniques


Dynamic techniques are subdivided into three categories:
1. Structure-based Testing:
These are also called White box techniques. Structure-based testing techniques are
focused on how the code structure works and test accordingly. To understand Structure-
based techniques, We first need to understand the concept of code coverage.
Code Coverage is normally done in Component and Integration Testing. It establishes
what code is covered by structural testing techniques out of the total code written. One
drawback of code coverage is that- it does not talk about code that has not been written
at all (Missed requirement), There are tools in the market that can help measure code
coverage.
There are multiple ways to test code coverage:
1. Statement coverage: Number of Statements of code exercised/Total number of
statements. For Example, If a code segment has 10 lines and the test designed by you
covers only 5 of them then we can say that statement coverage given by the test is 50%.
2. Decision coverage: Number of decision outcomes exercised/Total number of
Decisions. For Example, If a code segment has 4 decisions (If conditions) and your test
executes just 1, then decision coverage is 25%
3. Conditional/Multiple condition coverage: It has the aim to identify that each
outcome of every logical condition in a program has been exercised.
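A minimal, hypothetical sketch showing the difference between statement and decision coverage: one test already executes every statement of the function below (100% statement coverage) but exercises only one of the two decision outcomes (50% decision coverage), so a second test is still needed.

def absolute(n):
    result = n          # statement 1
    if n < 0:           # statement 2 (the decision)
        result = -n     # statement 3
    return result       # statement 4

# Test 1: absolute(-5) runs statements 1-4 -> 100% statement coverage,
# but only the 'true' outcome of the decision -> 50% decision coverage.
assert absolute(-5) == 5

# Test 2: absolute(7) exercises the 'false' outcome -> decision coverage reaches 100%.
assert absolute(7) == 7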
2. Experience-Based Techniques:
These are techniques of executing testing activities with the help of experience gained
over the years. Domain skill and background are major contributors to this type of
testing. These techniques are used majorly for UAT/business user testing. These work on
top of structured techniques like Specification-based and Structure-based, and it
complements them. Here are the types of experience-based techniques:


1. Error guessing: It is used by a tester who has either very good experience in testing
or with the application under test and hence they may know where a system might have
a weakness. It cannot be an effective technique when used stand-alone but is really
helpful when used along with structured techniques.
2. Exploratory testing: It is hands-on testing where the aim is to have maximum
execution coverage with minimal planning. The test design and execution are carried out
in parallel without documenting the test design steps. The key aspect of this type of
testing is the tester’s learning about the strengths and weaknesses of an application under
test. Similar to error guessing, it is used along with other formal techniques to be useful.
3. Specification-based Techniques:
This includes both functional and nonfunctional techniques (i.e. quality characteristics).
It basically means creating and executing tests based on functional or non-functional
specifications from the business. Its focus is on identifying defects corresponding to
given specifications. Here are the types of specification-based techniques:
1. Equivalence partitioning: It is generally used together with boundary value analysis and can
be applied to any level of testing. The idea is to partition the input range of data into valid and
non-valid sections such that all values within one partition are considered “equivalent”. Once we have the partitions
identified, it only requires us to test with any value in a given partition assuming that all
values in the partition will behave the same. For example, if the input field takes the
value between 1-999, then values between 1-999 will yield similar results, and we need
NOT test with each value to call the testing complete.
Example:
To find a square root of a given number which:
Should be a whole number
Should be between 10-50
Should be a multiple of 10
Derived Equivalence Classes:
1 Number is a whole number Valid
2 Number is not a whole number Invalid
3 Number is between 10-50 Valid
4 Number is less than 10 Invalid
5 Number is greater than 50 Invalid
6 Number is a multiple of 10 Valid
7 Number is not a multiple of 10 Invalid
Test Case:
S.no Test Data Expected Result Classes covered
1 30 True 1,3,6
2 5 False 4,7
3 15.5 False 2
4 56 False 5
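A sketch of how the derived classes above might be exercised in code, assuming a hypothetical validation function that mirrors the three rules; one representative value per equivalence class is enough.

def is_valid_input(number):
    """Valid if the number is a whole number, lies between 10 and 50, and is a multiple of 10."""
    return (
        float(number).is_integer()    # rule 1: whole number
        and 10 <= number <= 50        # rule 3: between 10 and 50
        and number % 10 == 0          # rule 6: multiple of 10
    )

assert is_valid_input(30) is True     # covers classes 1, 3, 6
assert is_valid_input(5) is False     # covers classes 4, 7
assert is_valid_input(15.5) is False  # covers class 2
assert is_valid_input(56) is False    # covers class 5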


2. Boundary Value Analysis (BVA): This analysis tests the boundaries of the range-
both valid and invalid. In the example above, 0,1,999, and 1000 are boundaries that can
be tested. The reasoning behind this kind of testing is that more often than not,
boundaries are not handled gracefully in the code.
Example:
Check the salary of an Employee whose maximum salary is 20,000 and minimum is
10,000.
Boundary Value specification will be:
{9999, 10000, 20000, 20001}
Test Case:
S.No Test Data Expected Result
1 9999 False
2 10000 True
3 20000 True
4 20001 False
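The same boundaries expressed as a small, hypothetical check in code:

def is_valid_salary(salary):
    """Salary is valid if it lies within the inclusive range 10,000 to 20,000."""
    return 10000 <= salary <= 20000

# Boundary value analysis: test exactly on and just outside each boundary.
for value, expected in [(9999, False), (10000, True), (20000, True), (20001, False)]:
    assert is_valid_salary(value) is expected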

3. Decision Tables: These are a good way to test the combination of inputs. It is also
called a Cause-Effect table. In layman’s language, One can structure the conditions
applicable for the application segment under test as a table and identify the outcomes
against each one of them to reach an effective test.

Example:
If X stays at Y’s place, X has to pay rent to stay in Y’s house. If X does not have money to
pay, then X will have to leave the house.
Causes:
 C1 – X stays at Y’s place.
 C2 – X has money to pay rent.
Effects:
 E1 – X can stay at Y’s place.
 E2 – X has to leave the house.
Cause-Effect Graph:

Decision Table:
                                 Rule 1   Rule 2
C1 (X stays at Y’s place)           1        1
C2 (X has money to pay rent)        1        0
E1 (X can stay at Y’s place)        1        0
E2 (X has to leave the house)       0        1
Test Scenarios:
1. If X pays rent, X stays at Y’s house.
2. If X does not pay rent, X has to leave Y’s house.
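The decision table can be turned directly into checks; the sketch below (hypothetical function name) encodes the two rules and the two effects.

def tenancy_outcome(stays_at_y, has_money):
    """Rule 1: stays and has money -> E1. Rule 2: stays but has no money -> E2."""
    if stays_at_y and has_money:
        return "X can stay at Y's place"       # effect E1
    if stays_at_y and not has_money:
        return "X has to leave the house"      # effect E2
    return "rules do not apply"                # X does not stay at Y's place

assert tenancy_outcome(True, True) == "X can stay at Y's place"
assert tenancy_outcome(True, False) == "X has to leave the house"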

4. Use case-based Testing: This technique helps us to identify test cases that execute
the system as a whole- like an actual user (Actor), transaction by transaction. Use cases
are a sequence of steps that describe the interaction between the Actor and the system.
They are always defined in the language of the Actor, not the system. This testing is
most effective in identifying the integration defects. Use case also defines any
preconditions and postconditions of the process flow.
5. State Transition Testing: It is used where an application under test, or a part of it, can
be treated as an FSM (finite state machine). Taking a simplified ATM as an example,
we can say that the ATM flow has finite states and hence can be tested with the state
transition technique. There are 4 basic things to consider –
1. States a system can achieve
2. Events that cause the change of state
3. The transition from one state to other
4. Outcomes of change of state
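A minimal sketch of state transition testing for the simplified ATM flow, assuming a hypothetical transition table; the tests exercise valid transitions and one event that is not allowed in the current state.

# Hypothetical finite state machine for a simplified ATM card session.
TRANSITIONS = {
    ("idle", "insert_card"): "awaiting_pin",
    ("awaiting_pin", "valid_pin"): "menu",
    ("awaiting_pin", "invalid_pin"): "idle",
    ("menu", "eject_card"): "idle",
}

def next_state(state, event):
    """Return the new state, or raise if the event is not allowed in this state."""
    if (state, event) not in TRANSITIONS:
        raise ValueError("event '%s' not allowed in state '%s'" % (event, state))
    return TRANSITIONS[(state, event)]

# State transition tests.
assert next_state("idle", "insert_card") == "awaiting_pin"
assert next_state("awaiting_pin", "valid_pin") == "menu"
try:
    next_state("idle", "eject_card")          # invalid event for this state
    raise AssertionError("expected a ValueError")
except ValueError:
    pass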

WHITE BOX TESTING


White Box Testing is the testing of a software solution's internal coding and infrastructure.
It focuses primarily on strengthening security, the flow of inputs and outputs through the
application, and improving design and usability. White box testing is also known as clear
box testing, open box testing, logic-driven testing, path-driven testing, structural
testing, or glass box testing.
Using white-box testing methods, the software engineer can derive test cases that
(1) guarantee that all independent paths within a module have been exercised at least once,
(2) exercise all logical decisions on their true and false sides, (3) execute all loops at their
boundaries and within their operational bounds, and (4) exercise internal data structures to
ensure their validity.
Advantages of White Box Testing:
 Forces test developer to reason carefully about implementation.
 Reveals errors in "hidden" code.
 Spots the Dead Code or other issues with respect to best programming practices.
 It helps in optimizing the code.

 Extra lines of code can be removed which can bring in hidden defects.
 Due to the tester's knowledge about the code, maximum coverage is attained
during test scenario writing.
Disadvantages of White Box Testing:


 Expensive as one has to spend both time and money to perform white box testing.
 There is every possibility that a few lines of code are missed accidentally.
 In-depth knowledge about the programming language is necessary to perform
white box testing.
 Due to the fact that a skilled tester is needed to perform white box testing, the costs
are increased.
 Sometimes it is impossible to look into every nook and corner to find out hidden
errors that may create problems as many paths will go untested.
 It is difficult to maintain white box testing, as the use of specialized tools like code
analyzers and debugging tools is required.
Different white-box testing methods are
1. Basis path testing
2. Control structure testing
BASIS PATH TESTING

Basis path testing is a structural testing method that involves using the source code of a
program to attempt to find every possible executable path. The idea is that we are then
able to test each individual path in as many ways as possible in order to maximize the
coverage of each test case. This gives the best possible chance of discovering all faults
within a piece of code.
The fact that path testing is based upon the source code of a program means that it is a
white box testing method. The ability to use the code for testing means that there exists a
basis on which test cases can be rigorously defined. This allows for both the test cases and
their results to be analysed mathematically, resulting in more precise measurement.

Flow Graph Notation: Before the basis path method can be introduced, a simple notation
for the representation of control flow, called a flow graph (or program graph) must be
introduced. The flow graph depicts logical control flow using the notation illustrated in
Figure 1. Each structured construct has a corresponding flow graph symbol.
To illustrate the use of a flow graph, we consider the procedural design representation in
Figure 2(A). Here, a flowchart is used to depict program control structure. Figure 2(B)
maps the flowchart into a corresponding flow graph (assuming that no compound
conditions are contained in the decision diamonds of the flowchart). Referring to
Figure 2B, each circle, called a flow graph node, represents one or more procedural


statements. A sequence of process boxes and a decision diamond can map into a single
node. The arrows on the flow graph, called edges or links, represent flow of control and
are analogous to flowchart arrows. An edge must terminate at a node, even if the node
does not represent any procedural statements (e.g., see the symbol for the if-then-else
construct). Areas bounded by edges and nodes are called regions. When counting regions,
we include the area outside the graph as a region.
When compound conditions are encountered in a procedural design, the generation of a
flow graph becomes slightly more complicated. A compound condition occurs when one
or more Boolean operators (logical OR, AND, NAND, NOR) is present in a conditional
statement. Referring to Figure 3, the PDL segment translates into the flow graph shown.
Note that a separate node is created for each of the conditions a and b in the statement IF a
OR b. Each node that contains a condition is called a predicate node and is characterized
by two or more edges emanating from it.

Figure 1:- Flow graph notation


Figure 2:- Flow chart (A) and Flow graph (B)

Figure 3:- Compound logic


Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the
logical complexity of a program. When used in the context of the basis path testing
method, the value computed for cyclomatic complexity defines the number of independent
paths in the basis set of a program and provides us with an upper bound for the number of
tests that must be conducted to ensure that all statements have been executed at least once.
An independent path is any path through the program that introduces at least one new set
of processing statements or a new condition. When stated in terms of a flow graph, an
independent path must move along at least one edge that has not been traversed before the
path is defined. For example, a set of independent paths for the flow graph illustrated in
Figure 2B is
path 1: 1-11
path 2: 1-2-3-4-5-10-1-11
path 3: 1-2-3-6-8-9-10-1-11
path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge.


The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path


because it is simply a combination of already specified paths and does not traverse any
new edges.
Paths 1, 2, 3, and 4 constitute a basis set for the flow graph in Figure 2B. That is, if tests
can be designed to force execution of these paths (a basis set), every statement in the
program will have been guaranteed to be executed at least one time and every condition
will have been executed on its true and false sides. It should be noted that the basis set is
not unique. In fact, a number of different basis sets can be derived for a given procedural
design.
How do we know how many paths to look for? The computation of cyclomatic complexity
provides the answer.
Cyclomatic complexity has a foundation in graph theory and provides us with an
extremely useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph correspond to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as
V(G) = E - N + 2
where E is the number of flow graph edges, N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph, G, is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
Referring once more to the flow graph in Figure 2B, the cyclomatic complexity can be
computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
Therefore, the cyclomatic complexity of the flow graph in Figure 2B is 4.
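A small sketch (hypothetical helper) that computes V(G) with two of the formulas above and checks them against each other, using the counts from Figure 2B:

def cyclomatic_complexity(edges, nodes, predicate_nodes):
    """Compute V(G) two ways and make sure the formulas agree."""
    v_from_graph = edges - nodes + 2          # V(G) = E - N + 2
    v_from_predicates = predicate_nodes + 1   # V(G) = P + 1
    assert v_from_graph == v_from_predicates
    return v_from_graph

# Flow graph of Figure 2B: 11 edges, 9 nodes, 3 predicate nodes.
assert cyclomatic_complexity(11, 9, 3) == 4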
Deriving Test Cases
The basis path testing method can be applied to a procedural design or to source code. In
this section, we present basis path testing as a series of steps.
1. Using the design or code as a foundation, draw a corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
Each test case is executed and compared to expected results. Once all test cases have been
completed, the tester can be sure that all statements in the program have been executed at
least once.


It is important to note that some independent paths cannot be tested in stand-alone fashion.
That is, the combination of data required to traverse the path cannot be achieved in the
normal flow of the program. In such cases, these paths are tested as part of another path
test.
Graph Matrices
A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal
to the number of nodes on the flow graph. Each row and column corresponds to an
identified node, and matrix entries correspond to connections (an edge) between nodes. A
simple example of a flow graph and its corresponding graph matrix is shown in Fig.
Referring to the figure, each node on the flow graph is identified by numbers, while each
edge is identified by letters. A letter entry is made in the matrix to correspond to a
connection between two nodes. For example, node 3 is connected to node 4 by edge b.
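A small, hypothetical sketch of how such a graph matrix can be held in code; the graph is invented, but it is chosen so that, as in the description above, node 3 is connected to node 4 by edge b. Only the non-blank entries of the square matrix are stored.

# Hypothetical 4-node flow graph: edge a: 1->2, edge c: 2->3, edge d: 2->4, edge b: 3->4.
graph_matrix = {
    (1, 2): "a",
    (2, 3): "c",
    (2, 4): "d",
    (3, 4): "b",
}

def connecting_edge(row_node, column_node):
    """A letter entry means the row node is connected to the column node by that edge."""
    return graph_matrix.get((row_node, column_node))

assert connecting_edge(3, 4) == "b"    # node 3 is connected to node 4 by edge b
assert connecting_edge(4, 1) is None   # no edge from node 4 back to node 1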

Example:
/* C version of the example; the numeric labels mark the flow graph nodes used below. */
/* The example keeps the original 1-based indexing of the pseudocode. */
void fn_delete_element(int value, int array_size, int array[])
{
/* 1 */   int i;
          int location = array_size + 1;

/* 2 */   for (i = 1; i <= array_size; i++) {
/* 3 */       if (array[i] == value)
/* 4 */           location = i;
          }

/* 5 */   for (i = location; i <= array_size; i++) {
/* 6 */       array[i] = array[i + 1];
          }
/* 7 */   array_size--;
}
Steps to Calculate the independent paths and cyclomatic complexity.
Step 1 : Draw the Flow Graph of the Function/Program under consideration as shown
below:


Step 2 : Determine the independent paths.

Path 1: 1 - 2 - 5 - 7
Path 2: 1 - 2 - 5 - 6 - 7
Path 3: 1 - 2 - 3 - 2 - 5 - 6 - 7
Path 4: 1 - 2 - 3 - 4 - 2 - 5 - 6 - 7

Step 3: Determine the cyclomatic complexity


V(G) = 9 edges - 7 nodes + 2 = 4.
V(G) = 3 predicate nodes + 1 = 4.
Therefore, the cyclomatic complexity of the flow graph in Figure is 4.
CONTROL STRUCTURE TESTING
The basis path testing technique described in previous Section is one of a number of
techniques for control structure testing. Although basis path testing is simple and highly
effective, it is not sufficient in itself. In this section, other variations on control structure
testing are discussed. These broaden testing coverage and improve quality of white-box
testing.

1. Condition Testing
Condition testing is a test case design method that exercises the logical conditions
contained in a program module. A simple condition is a Boolean variable or a relational
expression, possibly preceded with one NOT (¬) operator. A relational expression takes
the form E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and <relational-operator> is one of the
following: <, ≤, =, ≠ (non equality), >, or ≥. A compound condition is composed of two or


more simple conditions, Boolean operators, and parentheses. We assume that Boolean
operators allowed in a compound condition include OR (|), AND (&) and NOT (¬). A
condition without relational expressions is referred to as a Boolean expression.
Therefore, the possible types of elements in a condition include a Boolean operator, a
Boolean variable, a pair of Boolean parentheses (surrounding a simple or compound
condition), a relational operator, or an arithmetic expression.
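A hedged sketch of condition testing for one compound condition: the tests are chosen so that each simple condition is driven to both its true and false outcomes at least once (the function is hypothetical).

# Hypothetical compound condition: (age >= 18) AND (has_id OR is_member).
def may_enter(age, has_id, is_member):
    return age >= 18 and (has_id or is_member)

# Tests chosen so that every simple condition takes both the true and the false outcome.
assert may_enter(20, True, False) is True     # age >= 18 true, has_id true
assert may_enter(16, True, False) is False    # age >= 18 false
assert may_enter(20, False, True) is True     # has_id false, is_member true
assert may_enter(20, False, False) is False   # is_member false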

2. Data flow testing


The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variables in the program.
To illustrate the data flow testing approach, assume that each statement in a program is
assigned a unique statement number and that each function does not modify its parameters
or global variables. For a statement with S as its statement number,
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
If statement S is an if or loop statement, its DEF set is empty and its USE set is based on
the condition of statement S. The definition of variable X at statement S is said to be live at
statement S' if there exists a path from statement S to statement S' that contains no other
definition of X.
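A small, hypothetical illustration of DEF and USE sets for three numbered statements, together with the definition-use pair that data flow testing would want a test path to cover:

# S1: x = a + b      -> DEF(S1) = {x},  USE(S1) = {a, b}
# S2: if x > 10:     -> DEF(S2) = {},   USE(S2) = {x}
# S3:     y = x * 2  -> DEF(S3) = {y},  USE(S3) = {x}
def_use = {
    "S1": ({"x"}, {"a", "b"}),
    "S2": (set(), {"x"}),
    "S3": ({"y"}, {"x"}),
}

# The definition of x at S1 is live at S3: the path S1 -> S2 -> S3 contains no other
# definition of x, so a test path covering the definition-use pair (S1, S3) is selected.
assert "x" in def_use["S1"][0] and "x" in def_use["S3"][1]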

3. Loop Testing
Loops are the cornerstone for the vast majority of all algorithms implemented in software.
And yet, we often pay them little heed while conducting software tests.
Loop testing is a white-box testing technique that focuses exclusively on the validity of
loop constructs. Four different classes of loops can be defined: simple loops, concatenated
loops, nested loops, and unstructured loops.

Simple loops: The following set of tests can be applied to simple loops, where n is the
maximum number of allowable passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n -1, n, n + 1 passes through the loop.
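A minimal sketch of this simple-loop test set applied to a hypothetical loop whose maximum number of passes is n = 5:

# Hypothetical loop under test: counts how many of the first n items are processed.
def process(items, n):
    count = 0
    for _ in items[:n]:
        count += 1
    return count

n = 5   # maximum number of allowable passes through the loop in this sketch
# Simple-loop tests: skip the loop, 1 pass, 2 passes, m < n passes, and n-1, n, n+1 passes.
for passes in [0, 1, 2, 3, n - 1, n, n + 1]:
    items = list(range(passes))
    assert process(items, n) == min(passes, n)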


Nested loops: If we were to extend the test approach for simple loops to nested loops, the
number of possible tests would grow geometrically as the level of nesting increases. This
would result in an impractical number of tests. Beizer suggests an approach that will help
to reduce the number of tests:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range
or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to "typical" values.
4. Continue until all loops have been tested.
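A minimal sketch of this strategy (the two-loop function and the concrete values are assumptions for illustration): hold the outer loop at its minimum while running the simple-loop tests on the inner loop, then work outward.

#include <assert.h>

int table_sum(int t[][4], int rows, int cols)
{
    int s = 0;
    for (int r = 0; r < rows; r++)         /* outer loop */
        for (int c = 0; c < cols; c++)     /* inner loop */
            s += t[r][c];
    return s;
}

int main(void)
{
    int t[3][4] = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}};
    /* steps 1-2: outer loop held at its minimum (rows = 1), inner loop varied */
    assert(table_sum(t, 1, 0) == 0);
    assert(table_sum(t, 1, 1) == 1);
    assert(table_sum(t, 1, 4) == 10);
    /* step 3: work outward, varying the outer loop with the inner at a typical value */
    assert(table_sum(t, 2, 4) == 36);
    assert(table_sum(t, 3, 4) == 78);
    return 0;
}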

Concatenated loops: Concatenated loops can be tested using the approach defined for
simple loops, if each of the loops is independent of the other. However, if two loops are
concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the
loops are not independent. When the loops are not independent, the approach applied to
nested loops is recommended.

Unstructured loops: Whenever possible, this class of loops should be redesigned to
reflect the use of the structured programming constructs.


BLACK-BOX TESTING
The technique of testing without having any knowledge of the interior workings of the
application is Black Box testing. The tester is oblivious to the system architecture and does
not have access to the source code. Typically, when performing a black box test, a tester
will interact with the system’s user interface by providing inputs and examining outputs
without knowing how and where the inputs are worked upon.
OR
Black-box testing, also called behavioral testing, focuses on the functional requirements of
the software. That is, black-box testing enables the software engineer to derive sets of
input conditions that will fully exercise all functional requirements for a program.
Black-box testing attempts to find errors in the following categories: (1) incorrect or
missing functions, (2) interface errors, (3) errors in data structures or external database
access, (4) behavior or performance errors, and (5) initialization and termination errors.
Unlike white-box testing, which is performed early in the testing process, black-box
testing tends to be applied during later stages of testing. Because black-box testing
purposely disregards control structure, attention is focused on the information domain.
Tests are designed to answer the following questions:
• How is functional validity tested?

• How is system behavior and performance tested?


• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system operation?
By applying black-box techniques, we derive a set of test cases that satisfy the following
criteria: (1) test cases that reduce, by a count that is greater than one, the number of
additional test cases that must be designed to achieve reasonable testing and (2) test cases
that tell us something about the presence or absence of classes of errors, rather than an
error associated only with the specific test at hand.
Advantages:
 Well suited and efficient for large code segments.
 Code Access not required.
 Clearly separates user’s perspective from the developer’s perspective through
visibly defined roles.
 Large numbers of moderately skilled testers can test the application with no
knowledge of implementation, programming language or operating systems.
Disadvantages:
 Limited Coverage since only a selected number of test scenarios are actually
performed.
 Inefficient testing, due to the fact that the tester only has limited knowledge about
an application.
 Blind Coverage, since the tester cannot target specific code segments or error
prone areas.
 The test cases are difficult to design.
Following are the different black-box testing methods
1 Graph-Based Testing Methods
Software testing begins by creating a graph of important objects and their relationships
and then devising a series of tests that will cover the graph so that each object and
relationship is exercised and errors are uncovered.
To accomplish these steps, the software engineer begins by creating a graph: a collection
of nodes that represent objects; links that represent the relationships between objects; node
weights that describe the properties of a node (e.g., a specific data value or state behavior);
and link weights that describe some characteristic of a link.

The symbolic representation of a graph is shown in Figure A. Nodes are represented as
circles connected by links that take a number of different forms. A directed link
(represented by an arrow) indicates that a relationship moves in only one direction. A
bidirectional link, also called a symmetric link, implies that the relationship applies in both
directions. Parallel links are used when a number of different relationships are established
between graph nodes.
As a simple example, consider a portion of a graph for a word-processing application
(Figure B) where
Object #1 = new file menu select
Object #2 = document window
Object #3 = document text
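A minimal sketch (the struct layout and the relationship labels are assumptions of our own, loosely following the word-processing example above) of how such a graph could be written down so that every node and link can be exercised by a test:

#include <stdio.h>

typedef struct { const char *name; } Node;                 /* node: an object            */
typedef struct { int from, to; const char *label; } Link;  /* directed, weighted link    */

int main(void)
{
    Node nodes[] = { {"new file menu select"}, {"document window"}, {"document text"} };
    Link links[] = {
        {0, 1, "menu select generates document window"},
        {1, 2, "document window contains document text"},
    };
    /* a graph-based test suite exercises every node and every link at least once */
    for (unsigned i = 0; i < sizeof links / sizeof links[0]; i++)
        printf("test link: %s -> %s (%s)\n",
               nodes[links[i].from].name, nodes[links[i].to].name, links[i].label);
    return 0;
}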
2 Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived. An ideal test case
single-handedly uncovers a class of errors that might otherwise require many cases to be
executed before the general error is observed.
Test case design for equivalence partitioning is based on an evaluation of equivalence
classes for an input condition. An equivalence class represents a set of valid or invalid
states for input conditions. Typically, an input condition is either a specific numeric value,
a range of values, a set of related values, or a Boolean condition. Equivalence classes may
be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence
class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
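For example (a hypothetical input condition of our own), an input that must lie in the range 1-100 gives one valid and two invalid equivalence classes, and one representative test case can be drawn from each:

#include <stdio.h>

int accept_quantity(int qty)
{
    if (qty < 1)   return -1;    /* invalid class: below the range */
    if (qty > 100) return -1;    /* invalid class: above the range */
    return 0;                    /* valid class: 1 <= qty <= 100   */
}

int main(void)
{
    printf("%d\n", accept_quantity(50));    /* valid class        */
    printf("%d\n", accept_quantity(-3));    /* invalid: too small */
    printf("%d\n", accept_quantity(250));   /* invalid: too large */
    return 0;
}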
3 Boundary Value Analysis
A greater number of errors tend to occur at the boundaries of the input domain rather than
in the "center." It is for this reason that boundary value analysis (BVA) has been
developed as a testing technique. Boundary value analysis leads to a selection of test cases
that exercise bounding values.
Boundary value analysis is a test case design technique that complements equivalence
partitioning. Rather than selecting any element of an equivalence class, BVA leads to the
selection of test cases at the "edges" of the class. Rather than focusing solely on input
conditions, BVA derives test cases from the output domain as well.
Guidelines for BVA are similar in many respects to those provided for equivalence
partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers. Values just above and below minimum and
maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature
vs. pressure table is required as output from an engineering analysis program. Test cases
should be designed to create an output report that produces the maximum (and minimum)
allowable number of table entries.
4. If internal program data structures have prescribed boundaries (e.g., an array has a
defined limit of 100 entries), be certain to design a test case to exercise the data structure
at its boundary.


Most software engineers intuitively perform BVA to some degree. By applying these
guidelines, boundary testing will be more complete, thereby having a higher likelihood for
error detection.
Example 1
Suppose you have a very important tool at the office that accepts a valid User Name and
Password to work on that tool, and the password must have a minimum of 8 and a maximum of 12 characters.
The valid range is 8-12 characters; the invalid ranges are 7 characters or fewer and 13 characters or more.
Write test cases for the valid partition values, the invalid partition values, and the exact boundary
values.
 Test Cases 1: Consider password length less than 8.
 Test Cases 2: Consider password of length exactly 8.
 Test Cases 3: Consider password of length between 9 and 11.
 Test Cases 4: Consider password of length exactly 12.
 Test Cases 5: Consider password of length more than 12.
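A minimal sketch of Example 1 (the function name and the use of assert are assumptions), with the boundary values 7, 8, 12 and 13 exercised explicitly:

#include <assert.h>
#include <stdbool.h>
#include <string.h>

bool password_length_ok(const char *pw)
{
    size_t len = strlen(pw);
    return len >= 8 && len <= 12;                   /* valid range: 8-12 characters */
}

int main(void)
{
    assert(!password_length_ok("abcdefg"));         /* 7 chars: just below the lower bound  */
    assert( password_length_ok("abcdefgh"));        /* 8 chars: lower bound                 */
    assert( password_length_ok("abcdefghij"));      /* 10 chars: inside the valid class     */
    assert( password_length_ok("abcdefghijkl"));    /* 12 chars: upper bound                */
    assert(!password_length_ok("abcdefghijklm"));   /* 13 chars: just above the upper bound */
    return 0;
}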
Comparison between the Three Testing Types

SOFTWARE MAINTENANCE
Software maintenance is a widely accepted part of the software development life cycle (SDLC)
nowadays. It stands for all the modifications and updates done after the delivery of the
software product. There are a number of reasons why modifications are required; some of
them are briefly mentioned below:


 Market Conditions - Policies which change over time, such as taxation, and newly
introduced constraints, such as how to maintain bookkeeping, may trigger the need
for modification.
 Client Requirements - Over time, the customer may ask for new features or
functions in the software.
 Host Modifications - If any of the hardware and/or platform (such as the operating
system) of the target host changes, software changes are needed to maintain
adaptability.
 Organization Changes - If there is any business-level change at the client end, such
as a reduction in organization strength, acquiring another company, or the organization
venturing into new business, the need to modify the original software may arise.
Types of maintenance
In a software lifetime, the type of maintenance may vary based on its nature. It may be just a
routine maintenance task, such as a bug discovered by some user, or it may be a large
event in itself based on the size or nature of the maintenance. Following are some types of
maintenance based on their characteristics:
 Corrective Maintenance - This includes modifications and updates done in
order to correct or fix problems which are either discovered by the user or concluded
from user error reports.
 Adaptive Maintenance - This includes modifications and updates applied to
keep the software product up to date and tuned to the ever-changing world of
technology and the business environment.
 Perfective Maintenance - This includes modifications and updates done in order
to keep the software usable over a long period of time. It includes new features,
new user requirements for refining the software, and improving its reliability and
performance.
 Preventive Maintenance - This includes modifications and updates to prevent
future problems with the software. It aims to attend to problems which are not
significant at this moment but may cause serious issues in the future.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating software
maintenance found that the cost of maintenance is as high as 67% of the cost of the entire
software process cycle.


On average, the cost of software maintenance is more than 50% of the cost of all SDLC phases.

Maintenance Activities

IEEE provides a framework for sequential maintenance process activities. It can be used
in an iterative manner and can be extended so that customized items and processes can be
included.

These activities go hand in hand with each of the following phases:


 Identification & Tracing - It involves activities pertaining to identification of the
requirement for modification or maintenance. The requirement may be generated by the
user, or the system itself may report it via logs or error messages. The maintenance type
is also classified here.
 Analysis - The modification is analyzed for its impact on the system, including
safety and security implications. If the probable impact is severe, an alternative solution
is looked for. A set of required modifications is then materialized into requirement
specifications. The cost of modification/maintenance is analyzed and an estimate is
concluded.
 Design - New modules, which need to be replaced or modified, are designed
against the requirement specifications set in the previous stage. Test cases are created
for validation and verification.
 Implementation - The new modules are coded with the help of the structured design
created in the design step. Every programmer is expected to do unit testing in
parallel.
 System Testing - Integration testing is done among the newly created modules.
Integration testing is also carried out between the new modules and the system.
Finally, the system is tested as a whole, following regression testing procedures.
 Acceptance Testing - After testing the system internally, it is tested for
acceptance with the help of users. If at this stage the user reports some issues, they
are addressed or noted to be addressed in the next iteration.
 Delivery - After the acceptance test, the system is deployed all over the organization,
either by a small update package or a fresh installation of the system. The final
testing takes place at the client end after the software is delivered.
A training facility is provided if required, in addition to the hard copy of the user
manual.
 Maintenance management - Configuration management is an essential part of
system maintenance. It is aided by version control tools to manage versions,
semi-versions, or patches.

SOFTWARE RE-ENGINEERING
Software Re-engineering is a process of software development which is done to
improve the maintainability of a software system. Re-engineering is the examination
and alteration of a system to reconstitute it in a new form. This process encompasses a
combination of sub-processes like reverse engineering, forward engineering,
reconstruction, etc.
Objectives of Re-engineering:
 To describe a cost-effective option for system evolution.
 To describe the activities involved in the software maintenance process.
 To distinguish between software and data re-engineering and to explain the problems
of data re-engineering.
Steps involved in Re-engineering:
1. Inventory Analysis
2. Document Reconstruction
3. Reverse Engineering
4. Code Reconstruction
5. Data Reconstruction
6. Forward Engineering

Diagrammatic Representation:

Re-engineering Cost Factors:


 The quality of the software to be re-engineered
 The tool support available for re-engineering
 The extent of the required data conversion
 The availability of expert staff for re-engineering

Advantages of Re-engineering:
 Reduced Risk: As the software already exists, the risk is less compared to new
software development. Development problems, staffing problems and specification
problems are among the many problems which may arise in new software development.
 Reduced Cost: The cost of re-engineering is less than the cost of developing new
software.
 Revelation of Business Rules: As a system is re-engineered, business rules that are
embedded in the system are rediscovered.
 Better use of Existing Staff: Existing staff expertise can be maintained and
extended to accommodate new skills during re-engineering.
Disadvantages of Re-engineering:
 There are practical limits to the extent of re-engineering.
 Major architectural changes or radical reorganizing of the system's data management
has to be done manually.
 A re-engineered system is not likely to be as maintainable as a new system developed
using modern software engineering methods.

REVERSE ENGINEERING
Reverse engineering, also called back engineering, is the process of
extracting knowledge or design information from anything man-made and reproducing it,
or reproducing anything, based on the extracted information.
Reverse engineering can extract design information from source code, but the abstraction
level, the completeness of the documentation, the degree to which tools and a human
analyst work together, and the directionality of the process are highly variable.
The abstraction level of a reverse engineering process and the tools used to effect it refers
to the sophistication of the design information that can be extracted from source code.
Ideally, the abstraction level should be as high as possible. That is, the reverse engineering
process should be capable of deriving procedural design representations (a low-level
abstraction), program and data structure information (a somewhat higher level of
abstraction), data and control flow models (a relatively high level of abstraction), and
entity relationship models (a high level of abstraction). As the abstraction level increases,
the software engineer is provided with information that will allow easier understanding of
the program.
The completeness of a reverse engineering process refers to the level of detail that is
provided at an abstraction level. In most cases, the completeness decreases as the
abstraction level increases. For example, given a source code listing, it is relatively easy to
develop a complete procedural design representation. Simple data flow representations
may also be derived, but it is far more difficult to develop a complete set of data flow
diagrams or entity-relationship models.


Completeness improves in direct proportion to the amount of analysis performed by the
person doing reverse engineering. Interactivity refers to the degree to which the human is
"integrated" with automated tools to create an effective reverse engineering process. In
most cases, as the abstraction level increases, interactivity must increase or completeness
will suffer.
If the directionality of the reverse engineering process is one way, all information
extracted from the source code is provided to the software engineer who can then use it
during any maintenance activity. If directionality is two way, the information is fed to a
reengineering tool that attempts to restructure or regenerate the old program.

Figure: A reverse engineering process


Q1. Consider the following ‘C’ function named
int compute_get(int x, int y)
{
    while (x != y)
    {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
Determine the cyclomatic complexity of the above problem and list different linearly independent
paths using control flow graph. S-14, 5M
Q2. Draw DFD (Level 0, 1 and 2) for the above-mentioned project and explain. S-14, 10M
Q3. What are the main advantages of using an object oriented approach to software design over a
function oriented approach? S-14, 10M
Q4. Explain the terms unit testing and integration testing. Also compare top-down and bottom-up
testing. S-14, 10M
Q5. Write short note on 1. Reverse engineering, 2. Test driven development S-14, 20M
Q6. What are the advantages of test driven development? W-14 10M
Q7. Explain different steps in requirement engineering. W-14 10M
Q8. What tests are carried out during verification and validation? Explain with an example. W-14, 10M
Q9. Explain different types of maintenance with suitable examples. W-14, 10M
Q10. List down and explain the activities of scheduling and tracking for library management
system. W-14, 10M
Q11. Write short note on 1. Reengineering, 2. Security engineering, 3. White box and black box
testing. W-14, 10M each
Q12. Compare validation and verification testing. S-15, 10M
Q13. Explain the different types of software maintenance. S-15, 10M
Q14. Compare black box and white box testing. Find the cyclomatic complexity of the following code. S-
15, 10M
IF A = 10 THEN
    IF B > C THEN
        A = B
    ELSE
        A = C
    END IF
END IF
PRINT A
PRINT B
PRINT C
Q15. Explain software reverse engineering in detail. S-15, 10M

1. Black box testing is a type of testing which is done without knowing about the internal
structure of the code, but in white box testing the internal structure of the code is
known to the tester who will test the software.
2. Black box testing is done by the tester, but white box testing is generally done by the
developer.
3. Knowledge of the code is not required in black box testing, but knowledge of the code is
required in white box testing.
4. Black box testing is done at higher levels of testing, like user acceptance testing or
system testing, but white box testing is done at lower levels of testing, like unit testing
or integration testing.
5. In black box testing the external test of the software takes place, but in white
box testing the internal, structural test of the software takes place.


6. In black box testing the functionality and UI of the application come under test, but
in white box testing the performance of the application also comes under test.
7. Black box testing is based on the requirement document of the client, but white
box testing is based on the detailed design document.
8. No tools are required to implement black box testing, but tools are required to do white
box testing.
Differences between Black Box Testing vs White Box Testing:
Black box testing: It is a way of software testing in which the internal structure of the program or the code is hidden and nothing is known about it.
White box testing: It is a way of testing the software in which the tester has knowledge about the internal structure of the code or the program of the software.

Black box testing: It is mostly done by software testers.
White box testing: It is mostly done by software developers.

Black box testing: No knowledge of implementation is needed.
White box testing: Knowledge of implementation is required.

Black box testing: It can be referred to as outer or external software testing.
White box testing: It is the inner or internal software testing.

Black box testing: It is a functional test of the software.
White box testing: It is a structural test of the software.

Black box testing: This testing can be initiated on the basis of the requirement specifications document.
White box testing: This type of testing of software is started after the detailed design document.

Black box testing: No knowledge of programming is required.
White box testing: It is mandatory to have knowledge of programming.

Black box testing: It is the behavior testing of the software.
White box testing: It is the logic testing of the software.

Black box testing: It is applicable to the higher levels of testing of software.
White box testing: It is generally applicable to the lower levels of software testing.

Black box testing: It is also called closed testing.
White box testing: It is also called clear box testing.

Black box testing: It is the least time consuming.
White box testing: It is the most time consuming.

Black box testing: It is not suitable or preferred for algorithm testing.
White box testing: It is suitable for algorithm testing.

Black box testing: It can be done by trial and error ways and methods.
White box testing: Data domains along with inner or internal boundaries can be better tested.

Black box testing example: search for something on Google by using keywords.
White box testing example: provide inputs to check and verify loops.

Types of Black Box Testing: A. Functional Testing, B. Non-functional Testing, C. Regression Testing.
Types of White Box Testing: A. Path Testing, B. Loop Testing, C. Condition Testing.

VERIFICATION AND VALIDATION


What is Verification?
Verification is a process of evaluating the intermediary work products of a software
development lifecycle to check whether we are on the right track to creating the final product.
Now the question here is: what are the intermediary products? Well, these can include
the documents which are produced during the development phases, like the requirements
specification, design documents, database table design, ER diagrams, test cases, etc. We
sometimes tend to neglect the importance of reviewing these documents, but we should
understand that reviewing itself can find many hidden anomalies which, if found or
fixed only in a later phase of the development cycle, can be very costly.
In other words, we can also state that verification is a process to evaluate the intermediary
products of software to check whether the products satisfy the conditions imposed at
the beginning of the phase.
What is Validation?
Validation is the process of evaluating the final product to check whether the software
meets the business needs. In simple words, the test execution which we do in our day-to-day
work is actually the validation activity, which includes smoke testing, functional testing,
regression testing, system testing, etc.
Verification: Are we building the system right?
Validation: Are we building the right system?
Verification is the process of evaluating products of a development phase to find out
whether they meet the specified requirements.
Validation is the process of evaluating software at the end of the development process to
determine whether the software meets the customer expectations and requirements.
The objective of Verification is to make sure that the product being developed is as per the
requirements and design specifications.
The objective of Validation is to make sure that the product actually meets the user's
requirements, and to check whether the specifications were correct in the first place.
The following activities are involved in Verification: reviews, meetings and inspections.
The following activities are involved in Validation: testing, like black box testing, white box
testing, gray box testing, etc.
Verification is carried out by the QA team to check whether the implemented software is
as per the specification document or not.
Validation is carried out by the testing team.
Execution of code does not come under Verification.
Execution of code comes under Validation.
The Verification process explains whether the outputs are according to the inputs or not.
The Validation process describes whether the software is accepted by the user or not.
Verification is carried out before Validation.
The Validation activity is carried out just after Verification.


The following items are evaluated during Verification: plans, requirement specifications,
design specifications, code, test cases, etc.
The following item is evaluated during Validation: the actual product or software under test.
The cost of errors caught in Verification is less than that of errors found in Validation.
The cost of errors caught in Validation is more than that of errors found in Verification.
Verification is basically manual checking of documents and files like requirement
specifications, etc.
Validation is basically checking of the developed program based on the requirement
specification documents and files.
