
SCHOOL OF COMPUTING
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
VTU R15 B.TECH – CSE

OBJECT ORIENTED SOFTWARE ENGINEERING

UNIT-1 INTRODUCTION
Introduction to Software Engineering - Software Development process models – Agile Development - Project &
Process - Project management - Process & Project metrics - Object Oriented concepts, Principles &
Methodologies.

Software

Software is:
(1) Instructions (computer programs) that when executed provide desired features, function, and performance;
(2) Data structures that enable the programs to adequately manipulate information, and
(3) Descriptive information in both hard copy and virtual forms that describes the operation and use of the
programs

Software has characteristics that are considerably different than those of hardware:

1. Software is developed or engineered; it is not manufactured in the classical sense. Although some
similarities exist between software development and hardware manufacturing, the two activities are
fundamentally different. In both activities, high quality is achieved through good design, but the
manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software.
2. Software doesn’t “wear out.”

The relationship, often called the "bathtub curve," indicates that hardware exhibits relatively high
failure rates early in its life (these failures are often attributable to design or manufacturing defects);
defects are corrected and the failure rate drops to a steady-state level (hopefully, quite low) for some
period of time. As time passes, however, the failure rate rises again as hardware components suffer from
the cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental
maladies. Stated simply, the hardware begins to wear out.
3. Although the industry is moving toward component-based construction, most software continues
to be custom built

As an engineering discipline evolves, a collection of standard design components is created. Standard
screws and off-the-shelf integrated circuits are only two of thousands of standard components that are
used by mechanical and electrical engineers as they design new systems.

A software component should be designed and implemented so that it can be reused in many different
programs. Modern reusable components encapsulate both data and the processing that is applied to the
data, enabling the software engineer to create new applications from reusable parts.
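
To make this concrete, here is a minimal sketch in Python (an illustration added here, not part of the original text): a small reusable component that encapsulates both its data and the processing applied to that data. The class name and window size are illustrative assumptions.

class MovingAverage:
    """A reusable component: encapsulates data (samples) and processing."""
    def __init__(self, window):
        self.window = window        # how many recent samples to average
        self.samples = []           # encapsulated data
    def add(self, value):
        """Record a new sample and return the current moving average."""
        self.samples.append(value)
        if len(self.samples) > self.window:
            self.samples.pop(0)     # discard the oldest sample
        return sum(self.samples) / len(self.samples)

# The same component can be reused, unchanged, by many different programs,
# e.g. a temperature monitor or a stock ticker.
monitor = MovingAverage(window=3)
for reading in [21.0, 22.5, 23.1, 22.8]:
    print(monitor.add(reading))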

Legacy Software

Legacy software systems . . . were developed decades ago and have been continually modified
to meet changes in business requirements and computing platforms. The proliferation of such
systems is causing headaches for large organizations that find them costly to maintain and risky
to evolve.

Software engineering is a layered technology

The bedrock that supports software engineering is a quality focus.

The foundation for software engineering is the process layer. The software engineering process is the glue that
holds the technology layers together and enables rational and timely development of computer software.
Process defines a framework that must be established for effective delivery of software engineering technology.
The software process forms the basis for management control of software projects and establishes the context in
which technical methods are applied, work products (models, documents, data, reports, forms, etc.) are
produced, milestones are established, quality is ensured, and change is properly managed.

Software engineering methods provide the technical how-tos for building software. Methods encompass a
broad array of tasks that include communication, requirements analysis, design modeling, program construction,
testing, and support. Software engineering methods rely on a set of basic principles that govern each area of the
technology and include modeling activities and other descriptive techniques. Software engineering tools provide
automated or semiautomated support for the process and the methods. When tools are integrated so that
information created by one tool can be used by another, a system for the support of software development,
called computer-aided software engineering, is established.
THE SOFTWARE PROCESS

A process is a collection of activities, actions, and tasks that are performed when some work product is to be
created.

A generic process framework for software engineering encompasses five activities:

Communication.

Before any technical work can commence, it is critically important to communicate and collaborate with the
customer (and other stakeholders). The intent is to understand stakeholders' objectives for the project and to
gather requirements that help define software features and functions.

Planning.

Any complicated journey can be simplified if a map exists. A software project is a complicated journey, and the
planning activity creates a "map" that helps guide the team as it makes the journey. The map—called a
software project plan—defines the software engineering work by describing the technical tasks to be conducted,
the risks that are likely, the resources that will be required, the work products to be produced, and a work
schedule.

Modeling.

Whether you're a landscaper, a bridge builder, an aeronautical engineer, a carpenter, or an architect, you work
with models every day. You create a "sketch" of the thing so that you'll understand the big picture—what it
will look like architecturally, how the constituent parts fit together, and many other characteristics. If required,
you refine the sketch into greater and greater detail in an effort to better understand the problem and how you're
going to solve it. A software engineer does the same thing by creating models to better understand software
requirements and the design that will achieve those requirements.

Construction.

This activity combines code generation (either manual or automated) and the testing that is required to uncover
errors in the code.

Deployment.

The software (as a complete entity or as a partially completed increment) is delivered to the customer who
evaluates the delivered product and provides feedback based on the evaluation.

Umbrella activities:

Software engineering process framework activities are complemented by a number of umbrella activities. In
general, umbrella activities are applied throughout a software project and help a software team manage and
control progress, quality, change, and risk. Typical umbrella activities include:

Software project tracking and control—allows the software team to assess progress against
the project plan and take any necessary action to maintain the schedule.
Risk management—assesses risks that may affect the outcome of the project or the
quality of the product.
Software quality assurance—defines and conducts the activities required to ensure software
quality.
Technical reviews—assesses software engineering work products in an effort to uncover and
remove errors before they are propagated to the next activity.
Measurement—defines and collects process, project, and product measures that assist the team
in delivering software that meets stakeholders' needs; can be used in conjunction with all other
framework and umbrella activities.
Software configuration management—manages the effects of change throughout the software
process.

General Principles

David Hooker has proposed seven principles that focus on software engineering practice

The First Principle: The Reason It All Exists


A software system exists for one reason: to provide value to its users. All decisions should be made with
this in mind.
Before determining the hardware platforms or development processes, ask yourself questions such as:
"Does this add real value to the system?" If the answer is "no," don't do it.
The Second Principle: KISS (Keep It Simple, Stupid!)
All design should be as simple as possible, but no simpler. It often takes a lot of thought and work over
multiple iterations to simplify. The payoff is software that is more maintainable and less error-prone.
The Third Principle: Maintain the Vision
A clear vision is essential to the success of a software project. Compromising the architectural vision of
a software system weakens and will eventually break even well-designed systems.
The Fourth Principle: What You Produce, Others Will Consume
Always specify, design, and implement knowing someone else will have to understand what you are
doing.
The Fifth Principle: Be Open to the Future
Systems must be ready to adapt to change. Systems that do this successfully are those that have
been designed this way from the start. Never design yourself into a corner. Always ask "what if," and
prepare for all possible answers by creating systems that solve the general problem, not just the specific
one.
The Sixth Principle: Plan Ahead for Reuse
Planning ahead for reuse reduces the cost and increases the value of both the reusable components and
the systems into which they are incorporated.
The Seventh principle: Think!
Placing clear, complete thought before action almost always produces better results.

CHAPTER 2 PROCESS MODELS

A process was defined as a collection of work activities, actions, and tasks that are performed when some
work product is to be created. Each of these activities, actions, and tasks reside within a framework or
model that defines their relationship with the process and with one another.

Five framework activities—communication, planning, modeling, construction, and deployment. In addition, a
set of umbrella activities—project tracking and control, risk management, quality assurance,
configuration management, technical reviews, and others—are applied throughout the process.

Process flow—describes how the framework activities and the actions and tasks that occur within each
framework activity are organized with respect to sequence and time

A linear process flow executes each of the five framework activities in sequence, beginning with
communication and culminating with deployment

An iterative process flow repeats one or more of the activities before proceeding to the next.

An evolutionary process flow executes the activities in a "circular" manner. Each circuit through the five
activities leads to a more complete version of the software.

A parallel process flow executes one or more activities in parallel with other activities (e.g.,
modeling for one aspect of the software might be executed in parallel with construction of another aspect of the
software).
Example:

For a small software project requested by one person (at a remote location) with simple, straightforward
requirements, the communication activity might encompass little more than a phone call or email with the
appropriate stakeholder. Therefore, the only necessary action is phone conversation, and the work tasks
(the task set) that this action encompasses are:

1. Make contact with stakeholder via telephone.
2. Discuss requirements and develop notes.
3. Organize notes into a brief written statement of requirements.
4. Email to stakeholder for review and approval.
(Figure: A software process framework)
IDENTIFYING A TASK SET
A process model provides a specific roadmap for software engineering work. It defines the flow of all activities,
actions and tasks, the degree of iteration, the work products, and the organization of the work that must be done.

The Waterfall Model

The waterfall model, sometimes called the classic life cycle, was the first process model to be introduced. It is
very simple to understand and use. In a waterfall model, each phase must be completed before the next phase
can begin, and there is no overlapping between phases. The waterfall model is the earliest SDLC approach used
for software development. Because the waterfall model illustrates the software development process in a linear
sequential flow, it is also referred to as a linear-sequential life cycle model.
Sequential Phases in the Waterfall Model
Requirements: The first phase involves understanding what is to be designed, along with its function,
purpose, and so on. Here, the specifications of the input and output of the final product are studied and
documented.
System Design: The requirement specifications from the first phase are studied in this phase and the
system design is prepared. System design helps in specifying hardware and system requirements
and also helps in defining the overall system architecture. The design prepared here guides the code
written in the next stage.
Implementation: With inputs from system design, the system is first developed in small programs
called units, which are integrated in the next phase. Each unit is developed and tested for its
functionality; this is referred to as unit testing.
Integration and Testing: All the units developed in the implementation phase are integrated into a
system after testing of each unit. The integrated software then undergoes continual testing to find
any flaws or errors. Testing is done so that the client does not face any
problem during the installation of the software.
Deployment of System: Once the functional and non-functional testing is done, the product is
deployed in the customer environment or released into the market.
Maintenance: This step occurs after installation, and involves making modifications to the system or
an individual component to alter attributes or improve performance. These modifications arise either
due to change requests initiated by the customer, or defects uncovered during live use of the system.
The client is provided with regular maintenance and support for the developed software.

Advantages of the Waterfall Model


The advantage of waterfall development is that it allows for departmentalization and control. A schedule can
be set with deadlines for each stage of development and a product can proceed through the development
process model phases one by one.
The waterfall model progresses through easily understandable and explainable phases and thus it is easy to
use.
It is easy to manage due to the rigidity of the model – each phase has specific deliverables and a review
process.
In this model, phases are processed and completed one at a time and they do not overlap. The
waterfall model works well for smaller projects where requirements are very well understood.

Disadvantages of Waterfall Model


It is difficult to estimate time and cost for each phase of the development process.
Once an application is in the testing stage, it is very difficult to go back and change something that was not
well-thought-out in the concept stage.
Not a good model for complex and object-oriented projects.
Not suitable for the projects where requirements are at a moderate to high risk of changing.
The V-model

The V-model is a type of SDLC model where the process executes in a sequential manner in a V shape. It is also
known as the Verification and Validation model. It is based on the association of a testing phase with each
corresponding development stage. Each development stage is directly associated with a testing phase. The
next phase starts only after completion of the previous phase, i.e., for each development activity, there is a
corresponding testing activity.

Verification: This involves static analysis techniques (reviews) done without executing code. It is the process of
evaluating each product development phase to find whether the specified requirements are met.

Validation: This involves dynamic analysis techniques (functional and non-functional testing) done by executing
code. Validation is the process of evaluating the software after the completion of the development phase to
determine whether the software meets the customer's expectations and requirements.

The V-model therefore contains verification phases on one side and validation phases on the other, joined by
the coding phase at the bottom of the V. This is why it is called the V-model.
Design Phase:
Requirement Analysis: This phase contains detailed communication with the customer to understand
their requirements and expectations. This stage is known as Requirement Gathering.
System Design: This phase covers the system design and the complete hardware and
communication setup for developing the product.
Architectural Design: The system design is broken down further into modules taking up different
functionalities. The data transfer and communication between the internal modules and with the outside
world (other systems) is clearly defined.
Module Design: In this phase the system is broken down into small modules. The detailed design of the
modules is specified; this is also known as Low-Level Design (LLD).
Testing Phases:
Unit Testing: Unit test plans are developed during the module design phase. These unit test plans are
executed to eliminate bugs at the code or unit level (a minimal sketch of such a test appears after this list).
Integration Testing: After completion of unit testing, integration testing is performed. In integration
testing, the modules are integrated and the system is tested. Integration testing corresponds to the
architectural design phase. This test verifies the communication of the modules among themselves.
System Testing: System testing tests the complete application with its functionality, interdependency,
and communication. It tests the functional and non-functional requirements of the developed
application.
User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the
production environment. UAT verifies that the delivered system meets the user's requirements and the
system is ready for use in the real world.
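
As a minimal sketch (not from the text), the Python code below shows what executing a unit test plan at the code level might look like; the add function and its test cases are hypothetical examples.

import unittest

def add(a, b):
    # Hypothetical unit under test, produced during the implementation phase.
    return a + b

class AddUnitTests(unittest.TestCase):
    # Test cases derived from the unit test plan written during module design.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)
    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()

Each failing assertion points to a defect at the unit level, which is exactly the defect class the V-model pairs with the module design phase.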
Advantages:
This is a highly disciplined model and Phases are completed one at a time.
V-Model is used for small projects where project requirements are clear.
Simple and easy to understand and use.
This model focuses on verification and validation activities early in the life cycle thereby
enhancing the probability of building an error-free and good quality product.
It enables project management to track progress accurately.
Disadvantages:
High risk and uncertainty.
It is not a good model for complex and object-oriented projects.
It is not suitable for projects where requirements are unclear or carry a high risk of changing.
This model does not support iteration of phases.
It does not easily handle concurrent events.

Incremental Process Models


The Incremental Model is a process of software development where requirements are broken down into multiple
standalone modules of the software development cycle. Incremental development is done in steps: analysis,
design, implementation, testing/verification, and maintenance.

Each iteration passes through the requirements, design, coding and testing phases. And each subsequent
release of the system adds function to the previous release until all designed functionality has been
implemented.

The system is put into production when the first increment is delivered. The first increment is often a core
product in which the basic requirements are addressed; supplementary features are added in subsequent
increments. Once the client has evaluated the core product, a plan is developed for the next increment.
Advantages and Disadvantages of Incremental Model

Advantages:
The software is generated quickly during the software life cycle.
It is flexible and less expensive to change requirements and scope.
Changes can be made throughout the development stages.
This model is less costly compared to others.
A customer can respond to each build.
Errors are easy to identify.

Disadvantages:
It requires good planning and design.
Problems may arise from the system architecture, because not all requirements are gathered up front for the
entire software life cycle.
Each iteration phase is rigid and does not overlap the others.
Rectifying a problem in one unit requires correction in all the units and consumes a lot of time.

Evolutionary Process Models


The evolutionary model is a combination of the iterative and incremental models of the software development
life cycle. Instead of delivering the system in a single big-bang release, it is delivered incrementally over time.
Some initial requirements gathering and architecture envisioning need to be done. The evolutionary model
suggests breaking down the work into smaller chunks, prioritizing them, and then delivering those chunks to the
customer one by one. The number of chunks is large, corresponding to the number of deliveries made to the
customer. The main advantage is that the customer's confidence increases, as he constantly gets quantifiable
deliverables from the beginning of the project with which to verify and validate his requirements. The model
allows for changing requirements as well, as all work is broken down into maintainable work chunks.
Application of Evolutionary Model:
1. It is used in large projects where you can easily find modules for incremental implementation.
Evolutionary model is commonly used when the customer wants to start using the core features instead
of waiting for the full software.
2. The evolutionary model is also used in object-oriented software development because the system can be
easily partitioned into units in terms of objects.
Advantages:
In the evolutionary model, a user gets a chance to experiment with a partially developed system.
It reduces errors because the core modules get tested thoroughly.
Disadvantages:
Sometimes it is hard to divide the problem into several versions that would be acceptable to the
customer which can be incrementally implemented and delivered.

Prototyping

1.Requirements gathering and analysis: A prototyping model begins with requirements analysis and the
requirements of the system are defined in detail. The user is interviewed in order to know the requirements of the
system.
2. Quick design: When requirements are known, a preliminary design or quick design for the system is created.
It is not a detailed design and includes only the important aspects of the system, which gives an idea of the
system to the user. A quick design helps in developing the prototype.
3. Build prototype: Information gathered from quick design is modified to form the first prototype, which
represents the working model of the required system.
4. User evaluation: Next, the proposed system is presented to the user for thorough evaluation of the prototype
to recognize its strengths and weaknesses such as what is to be added or removed. Comments and suggestions
are collected from the users and provided to the developer.
5. Refining prototype: If the user is not satisfied after evaluating the prototype, the current prototype is
refined according to the requirements. That is, a new prototype is developed with the additional information
provided by the user. The new prototype is evaluated just like the previous prototype. This process continues
until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a
final system is developed on the basis of the final prototype.
6. Engineer product: Once the requirements are completely met, the user accepts the final prototype. The final
system is evaluated thoroughly followed by the routine maintenance on regular basis for preventing large-scale
failures and minimizing downtime.
A spiral model

The spiral model is another important SDLC model that came into use once iteration was applied to product
development. The initial phase of the spiral model corresponds to the early stages of the waterfall life cycle
that are needed to develop a software product. This model supports risk handling, and the project is delivered
in loops. Each loop of the spiral represents a phase of the software development process.

Different Phases of the Spiral model


The spiral model has four quadrants, and each of them represents a specific stage of
software development. The functions of these four quadrants are listed below:

1. Planning objectives or identify alternative solutions: In this stage, requirements are collected from customers
and then the aims are recognized, elaborated as well as analyzed at the beginning of developing the project. If
the iterative round is more than one, then an alternative solution is proposed in the same quadrant.
2. Risk analysis and resolving: As the process moves to the second quadrant, all likely solutions are sketched, and
then the best solution among them is selected. Then the different types of risks linked with the chosen solution
are identified and resolved through the best possible approach. As the spiral reaches the end of this quadrant,
a project prototype is built for the best and most likely solution.
3. Develop the next level of product: As the development progress goes to the third quadrant, the well-known
and mostly required features are developed as well as verified with the testing methodologies. As this stage
proceeds to the end of this third quadrant, new software or the next version of existing software is ready to
deliver.
4. Plan the next Phase: As the development process proceeds in the fourth quadrant, the customers appraise the
developed version of the project and reports if any further changes are required. At last, planning for the
subsequent phase is initiated.
Advantages of the Spiral Model
The spiral model has some advantages compared to other SDLC models:

Suitable for large projects: Spiral models are recommended when the project is large, bulky or
complex to develop.
Risk Handling: Many projects have unanticipated risks involved with them. For such
projects, the spiral model is the best SDLC model to pursue because it supports analyzing as well as handling
risks at each phase of development.
Customer Satisfaction: Customers can witness the development of the product at every stage; thus, they can
become accustomed to the system and give feedback accordingly before the final product is made.
Requirements flexibility: All the specific requirements needed at later stages can be included
precisely if the development is done using this model.
AGILE DEVELOPMENT

Agile software engineering combines a philosophy and a set of development guidelines. The philosophy
encourages customer satisfaction and early incremental delivery of software; small, highly motivated project
teams; informal methods; minimal software engineering work products; and overall development simplicity. The
development guidelines stress delivery over analysis and design (although these activities are not discouraged),
and active and continuous communication between developers and customers.
It promotes adaptive planning, evolutionary development, and early delivery through highly iterative and
incremental approaches to software development.

In software development, the term 'agile' means 'the ability to respond to changes' – changes in
requirements, technology, and people.
It is an iterative and incremental process.
Direct collaboration with the customers.
Each iteration lasts from one to three weeks.
Delivers multiple Software Increments.
Engineering actions are carried out by cross-functional teams.

Agility Principles
The Agile Alliance (see [Agi03], [Fow01]) defines 12 agility principles for those who want to achieve agility:
1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable
software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the
customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the
shorter timescale.
4. Business people and developers must work together daily throughout the
project.
5. Build projects around motivated individuals. Give them the environment and
support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and
within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to
maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its
behavior accordingly.

Advantages of Agile Development Model


If any change request or improvement arises at any stage, it can be accommodated without
exceeding the budget.
Working results can be produced quickly.
The model can save both time and money.
It encourages teamwork and cross-training and needs minimal resources.
It suits both fixed and evolving requirements.
It is easy to manage and is flexible for developers.
Working software is delivered continuously, i.e., in weeks rather than months.
Regular (daily or weekly) interaction between business people and developers speeds up software
development.
It concentrates primarily on the deliverables and less on paperwork.
Customers, developers, and testers continuously interact with each other.

Disadvantages of Agile Development Model


If the customer is not clear about the end result they need from the project, the project can
easily get off track.
There is a high dependency on individual people, since minimal documentation is produced.
It is not ideal for managing complicated dependencies.
Transferring the technology to a new team is usually hard, because very little documentation
is produced.
It poses some difficulty for testing due to insufficient documentation.

- Project & Process - Project management.


Project management involves the planning, monitoring, and control of the people, process, and events that
occur as software evolves from a preliminary concept to full operational deployment.

THE MANAGEMENT SPECTRUM

Effective software project management focuses on the four Ps:


People,
Product,
Process, and
Project.

The People:

The People Capability Maturity Model defines the following key practice areas for software people: staffing,
communication and coordination, work environment, performance management, training, compensation,
competency analysis and development, career development, workgroup development, team/culture
development, and others. Organizations that achieve high levels of People-CMM maturity have a higher
likelihood of implementing effective software project management practices.
People build computer software, and projects succeed because well-trained, motivated people get things done.
The Stakeholders
The software process (and every software project) is populated by stakeholders who can be
categorized into one of five constituencies:
1. Senior managers who define the business issues that often have a significant influence on the project.
2. Project (technical) managers who must plan, motivate, organize, and control the practitioners who do
software work.
3. Practitioners who deliver the technical skills that are necessary to engineer a product or application.
4. Customers who specify the requirements for the software to be engineered and other stakeholders who
have a peripheral interest in the outcome.
5. End users who interact with the software once it is released for production use.

The Product

Before a project can be planned, product objectives and scope should be established, alternative
solutions should be considered, and technical and management constraints should be identified.
Software Scope
The first software project management activity is the determination of software scope. Scope is defined by
answering the following questions:
Context. How does the software to be built fit into a larger system, product, or business context, and what
constraints are imposed as a result of the context?
Information objectives. What customer-visible data objects are produced as output from the software?
What data objects are required for input?
Function and performance. What function does the software perform to transform input data into output?
Are any special performance characteristics to be addressed?

Software project scope must be unambiguous and
understandable at the management and technical levels. A statement of software scope must be bounded. That
is, quantitative data (e.g., number of simultaneous users, size of mailing list, maximum allowable response
time) are stated explicitly, constraints and/or limitations (e.g., product cost restricts memory size) are noted,
and mitigating factors (e.g., desired algorithms are well understood and available in Java) are described.

The Process
A software process provides the framework from which a comprehensive plan for software development can be
established. A small number of framework activities are applicable to all software projects, regardless of their
size or complexity.
Consider the framework activities—communication, planning, modeling, construction, and deployment—and
the task set for communication:
1. Review the customer request.
2. Plan and schedule a formal, facilitated meeting with all stakeholders.
3. Conduct research to specify the proposed solution and existing approaches.
4. Prepare a "working document" and an agenda for the formal meeting.
5. Conduct the meeting.
6. Jointly develop mini-specs that reflect data, function, and behavioral features of the
software. Alternatively, develop use cases that describe the software from the user's point
of view.
7. Review each mini-spec or use case for correctness, consistency, and lack of ambiguity.
8. Assemble the mini-specs into a scoping document.
9. Review the scoping document or collection of use cases with all concerned.
10. Modify the scoping document or use cases as required.
The Project

A five-part commonsense approach to software projects:

1. Start on the right foot. This is accomplished by working hard (very hard) to understand the problem that is
to be solved and then setting realistic objectives and expectations for everyone who will be involved in the
project. It is reinforced by building the right team and giving the team the autonomy, authority, and technology
needed to do the job.
2. Maintain momentum. Many projects get off to a good start and then slowly disintegrate. To maintain
momentum, the project manager must provide incentives to keep turnover of personnel to an absolute
minimum, the team should emphasize quality in every task it performs, and senior management should do
everything possible to stay out of the team's way.
3. Track progress. For a software project, progress is tracked as work products (e.g., models, source code,
sets of test cases) are produced and approved (using technical reviews) as part of a quality assurance activity.
In addition, software process and project measures can be collected and used to assess progress against averages
developed for the software development organization.
4. Make smart decisions. In essence, the decisions of the project manager and the software team
should be to "keep it simple." Whenever possible, decide to use commercial off-the-shelf software or existing
software components or patterns, decide to avoid custom interfaces when standard approaches are available,
decide to identify and then avoid obvious risks, and decide to allocate more time than you think is needed to
complex or risky tasks (you'll need every minute).
5. Conduct a postmortem analysis. Establish a consistent mechanism for extracting lessons learned for each
project. Evaluate the planned and actual schedules, collect and analyze software project metrics, get feedback
from team members and customers, and record findings in written form.

PROCESS AND PROJECT METRICS

Software process and project metrics are quantitative measures that enable us to gain insight into the efficiency
of the software process and the projects that are conducted using the process as a framework. Basic quality and
productivity data are collected. These data are then analyzed, compared against past averages, and assessed to
determine whether quality and productivity improvements have occurred.

Metrics are also used to pinpoint problem areas so that remedies can be developed and the software process
can be improved.

Software process metrics can provide significant benefits as an organization works to improve its overall
level of process maturity.
Managers and Practitioners as they institute a process metrics program:
• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who collect measures and metrics.
• Don't use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and metrics that will be used to achieve them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered "negative." These data are merely an
indicator for process improvement.

Key Project & Process Metric Groups


Process Metrics
• These relate to process quality. They are used to measure the efficiency and effectiveness of various
processes.
Project Metrics
• These relate to project quality. They are used to quantify defects, cost, schedule, productivity, and the
estimation of various project resources and deliverables.
Product Metrics
• They are used to measure cost, quality, and the product's time-to-market.
Organizational Metrics
• These cover organizational economics, employee satisfaction, communication, and organizational
growth factors of the project.
Software Development Metrics
• These cover the quality of the software, the productivity of the development team, code complexity,
customer satisfaction, the agile process, and operational metrics.

Project Metrics

Schedule Variance:
• Any difference between the scheduled completion of an activity and the actual completion is known
as Schedule Variance.
• Schedule variance = ((Actual calendar days – Planned calendar days) + Start variance) / Planned
calendar days x 100.

Effort Variance:
• The difference between the planned effort and the effort actually required to undertake the task is
called Effort Variance.
• Effort variance = (Actual effort – Planned effort) / Planned effort x 100.

Size Variance:
• The difference between the estimated size of the project and the actual size of the project (normally
in KLOC or FP).
• Size variance = (Actual size – Estimated size) / Estimated size x 100.

Requirement Stability Index:
• Provides visibility into the magnitude and impact of requirements changes.
• RSI = (1 – ((Number of changed + Number of deleted + Number of added) / Total number of initial
requirements)) x 100.
(A worked sketch of these variance formulas follows.)
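
The sketch below renders the four variance formulas in Python (an illustration added here, not part of the original text); all sample numbers are hypothetical.

def schedule_variance(actual_days, planned_days, start_variance=0):
    # ((Actual calendar days - Planned calendar days) + Start variance) / Planned x 100
    return ((actual_days - planned_days) + start_variance) / planned_days * 100

def effort_variance(actual_effort, planned_effort):
    # (Actual effort - Planned effort) / Planned effort x 100
    return (actual_effort - planned_effort) / planned_effort * 100

def size_variance(actual_size, estimated_size):
    # (Actual size - Estimated size) / Estimated size x 100
    return (actual_size - estimated_size) / estimated_size * 100

def requirement_stability_index(changed, deleted, added, initial_total):
    # (1 - (changed + deleted + added) / total initial requirements) x 100
    return (1 - (changed + deleted + added) / initial_total) * 100

# Worked example: 60 actual days against 50 planned, 120 person-days of
# effort against 100 planned, 11 KLOC against 10 estimated, and 5 of 40
# initial requirements churned (3 changed, 1 deleted, 1 added).
print(schedule_variance(60, 50))                  # 20.0 -> 20% behind schedule
print(effort_variance(120, 100))                  # 20.0 -> 20% effort overrun
print(size_variance(11, 10))                      # 10.0 -> 10% size growth
print(requirement_stability_index(3, 1, 1, 40))   # 87.5 -> fairly stable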

Productivity (Project):
• Productivity is a measure of output from a related process for a unit of input.
• Project productivity = Actual project size / Actual effort expended in the project.
• Productivity (test case execution) = Actual number of test cases / Actual effort expended in testing.
• Productivity (defect detection) = Actual number of defects (review + testing) / Actual effort spent on
(review + testing).
• Productivity (defect fixation) = Actual number of defects fixed / Actual effort spent on defect fixation.

Schedule variance for a phase:
• The deviation between planned and actual schedules for the phases within a project.
• Schedule variance for a phase = (Actual calendar days for a phase – Planned calendar days for a
phase + Start variance for a phase) / (Planned calendar days for a phase) x 100.
Effort variance for a phase:
• The deviation between the planned and actual effort for the various phases within the project.
• Effort variance for a phase = (Actual effort for a phase – Planned effort for a phase) /
(Planned effort for a phase) x 100.
Cost of quality:
• It is a measure of the performance of quality initiatives in an organization, expressed in monetary
terms.
• Cost of quality = (review + testing + verification review + verification testing + QA + configuration
management + measurement + training + rework review + rework testing) / total effort x 100.
Cost of poor quality:
• It is the cost of implementing imperfect processes and products.
• Cost of poor quality = rework effort / total effort x 100.
Defect density:
• It is the number of defects detected in the software during development divided by the size of the
software (typically in KLOC or FP).
• Defect density for a project = Total number of defects / Project size in KLOC or FP.
Review efficiency:
• Defined as the efficiency in detecting defects during reviews in the verification stage.
• Review efficiency = (Number of defects caught in review / Total number of defects caught) x 100.
Testing efficiency:
• Testing efficiency = (1 – (Defects found in acceptance / Total number of testing defects)) x 100.
Defect removal efficiency:
• Quantifies the efficiency with which defects were detected and prevented from reaching the customer.
• Defect removal efficiency = (1 – (Total defects caught by customer / Total number of defects)) x 100.
Residual defect density:
• Residual defect density = (Total number of defects found by the customer) / (Total number of defects,
including customer-found defects) x 100.
(A short sketch of these defect metrics follows.)
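
Below is a minimal Python rendering of these defect metrics (an illustration added here, not part of the original text); the sample counts are hypothetical.

def defect_density(total_defects, size_kloc):
    # Defects per KLOC (use FP as the size unit for FP-based projects).
    return total_defects / size_kloc

def review_efficiency(review_defects, total_defects):
    return review_defects / total_defects * 100

def testing_efficiency(acceptance_defects, total_testing_defects):
    return (1 - acceptance_defects / total_testing_defects) * 100

def defect_removal_efficiency(customer_defects, total_defects):
    return (1 - customer_defects / total_defects) * 100

# Worked example: 200 total defects in a 25 KLOC project, 80 caught in
# review, 10 found in acceptance out of 120 testing defects, and 8
# defects reaching the customer.
print(defect_density(200, 25))              # 8.0 defects per KLOC
print(review_efficiency(80, 200))           # 40.0
print(testing_efficiency(10, 120))          # 91.66...
print(defect_removal_efficiency(8, 200))    # 96.0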
Object-Oriented Concepts
Object-oriented concepts are used in design methods. They include:
• classes and objects,
• polymorphism,
• encapsulation,
• inheritance,
• dynamic binding,
• information hiding,
• interfaces,
• constructors and destructors.
(A small sketch of these concepts follows the lists below.)
Advantages of object-oriented design:
• It improves software development and maintainability.
• It enables faster and lower-cost development.
• It helps create high-quality software.
Disadvantages of object-oriented design:
• It is only suitable for larger program sizes.
• It is not suitable for all types of programs.
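The short Python sketch below (an illustration added here, not part of the original text) shows several of these concepts together; the Shape, Circle, and Square classes are hypothetical examples.

class Shape:                               # a class: the blueprint for objects
    def __init__(self, name):
        self._name = name                  # encapsulation/information hiding: internal data
    def area(self):
        raise NotImplementedError          # an interface each subclass must fulfill

class Circle(Shape):                       # inheritance: Circle reuses Shape
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius
    def area(self):                        # polymorphism: same message, different behavior
        return 3.14159 * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self._side = side
    def area(self):
        return self._side ** 2

# Dynamic binding: which area() runs is decided at run time by the
# object's class, not by the type of the variable holding it.
for shape in [Circle(1.0), Square(2.0)]:   # objects: instances of classes
    print(shape.area())

(Construction happens in __init__; Python handles most cleanup automatically, so explicit destructors are rarely written.)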

Design classes

Five different types of design classes represent the layers of the design architecture:
1. User interface classes: These classes are designed for human-computer interaction (HCI).
They define all the abstractions that are required for HCI.
2. Business domain classes: These classes are commonly refinements of the analysis classes.
They identify the attributes and methods that are required to implement the elements of the
business domain.
3. Process classes: These implement the lower-level business abstractions that are needed to fully
manage the business domain classes.
4. Persistence classes: These represent data stores that will persist beyond the execution of the software.
5. System classes: These implement software management and control functions that allow the system
to operate and communicate within its computing environment and with the outside world.

Design class characteristics

1. Complete and sufficient
The class is a complete encapsulation of all the attributes and methods it needs.
2. Primitiveness
Each method of the class fulfills one service for the class.
3. High cohesion
The class has a small and focused set of responsibilities, and its methods and attributes are applied
single-mindedly to that set of responsibilities.
4. Low coupling
• Within the design model, each design class should collaborate with only a minimum acceptable
number of other classes.
• If a design model is highly coupled, the system is difficult to implement, to test, and to maintain over
time (a small sketch contrasting cohesion and coupling follows).
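
As a small illustrative sketch (not from the original text), the two hypothetical Python classes below each have one focused responsibility (high cohesion) and collaborate only through a simple value (low coupling), so either can be changed or tested independently.

class InvoiceCalculator:
    """High cohesion: its only responsibility is computing totals."""
    def total(self, line_items):
        return sum(price * quantity for price, quantity in line_items)

class InvoicePrinter:
    """High cohesion: its only responsibility is formatting output."""
    def render(self, total):
        return "Amount due: {:.2f}".format(total)

# Low coupling: the classes share nothing but a plain number, so the
# printer never needs to know how the total was computed.
items = [(9.99, 2), (4.50, 1)]
print(InvoicePrinter().render(InvoiceCalculator().total(items)))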
Principles & Methodologies.

Principles that Guide Process

• Principle #1. Be agile.
• Principle #2. Focus on quality at every step.
• Principle #3. Be ready to adapt.
• Principle #4. Build an effective team.
• Principle #5. Establish mechanisms for communication and coordination.
• Principle #6. Manage change.
• Principle #7. Assess risk.
• Principle #8. Create work products that provide value for others.

• Principle #1. Be agile.


• Whether the process model you choose is prescriptive or agile, the basic tenets of agile
development should govern your approach.
• Principle #2. Focus on quality at every step.
• The exit condition for every process activity, action, and task should focus on the quality of the
work product that has been produced.
• Principle #3. Be ready to adapt.
• Process is not a religious experience and dogma has no place in it. When necessary, adapt
your approach to constraints imposed by the problem, the people, and the project itself.
• Principle #4. Build an effective team.
• Software engineering process and practice are important, but the bottom line is people. Build
a self-organizing team that has mutual trust and respect.

• Principle #5. Establish mechanisms for communication and coordination.
• Projects fail because important information falls into the cracks and/or stakeholders fail to
coordinate their efforts to create a successful end product.
• Principle #6. Manage change.
• The approach may be either formal or informal, but mechanisms must be established to
manage the way changes are requested, assessed, approved, and implemented.
• Principle #7. Assess risk.
• Lots of things can go wrong as software is being developed. It's essential that you
establish contingency plans.
• Principle #8. Create work products that provide value for others.
• Create only those work products that provide value for other process activities, actions, or
tasks.

Principles that Guide Practice


• Principle #1. Divide and conquer.
• Stated in a more technical manner, analysis and design should always emphasize
separation of concerns (SoC).
• Principle #2. Understand the use of abstraction.
• At its core, an abstraction is a simplification of some complex element of a system used
to communicate meaning in a single phrase.
• Principle #3. Strive for consistency.
• A familiar context makes software easier to use.
• Principle #4. Focus on the transfer of information.
• Pay special attention to the analysis, design, construction, and testing of interfaces.
• Principle #5. Build software that exhibits effective modularity.
• Separation of concerns (Principle #1) establishes a philosophy for software. Modularity
provides a mechanism for realizing the philosophy.
• Principle #6. Look for patterns.
• Brad Appleton [App00] suggests that: "The goal of patterns within the software community is
to create a body of literature to help software developers resolve recurring problems
encountered throughout all of software development."

• Principle #7. When possible, represent the problem and its solution from a number of
different perspectives.
• Principle #8. Remember that someone will maintain the software.

Communication Principles

• Principle #1. Listen.
• Try to focus on the speaker's words, rather than formulating your response to those words.
• Principle # 2. Prepare before you communicate.
• Spend the time to understand the problem before you meet with others.
• Principle #3. Someone should facilitate the activity.
• Every communication meeting should have a leader (a facilitator) (1) to keep the conversation
moving in a productive direction, (2) to mediate any conflict that does occur, and (3) to
ensure that the other principles are followed.
Principle #4. Face-to-face communication is best.
• But it usually works better when some other representation of the relevant information is
present.
Principle # 5. Take notes and document decisions.
• Someone participating in the communication should serve as a "recorder" and write
down all important points and decisions.
Principle # 6. Strive for collaboration.
• Collaboration and consensus occur when the collective knowledge of members of the
team is combined …
Principle # 7. Stay focused, modularize your discussion.
• The more people involved in any communication, the more likely that discussion will
bounce from one topic to the next.
• Principle # 8. If something is unclear, draw a picture.
• Principle # 9. (a) Once you agree to something, move on; (b) If you can’t agree to something,
move on; (c) If a feature or function is unclear and cannot be clarified at the moment, move on.
• Principle # 10. Negotiation is not a contest or a game. It works best when both parties win.

Planning Principles

• Principle #1. Understand the scope of the project.
• It's impossible to use a roadmap if you don't know where you're going.
• Scope provides the software team with a destination.
• Principle #2. Involve the customer in the planning activity.
• The customer defines priorities and establishes project constraints.
• Principle #3. Recognize that planning is iterative.
• A project plan is never engraved in stone. As work begins, it is very likely that things will
change.
• Principle #4. Estimate based on what you know.
• The intent of estimation is to provide an indication of effort, cost, and task duration, based on
the team's current understanding of the work to be done.
• Principle #5. Consider risk as you define the plan.
• If you have identified risks that have high impact and high probability, contingency
planning is necessary.
• Principle #6. Be realistic.
• People don't work 100 percent of every day.
• Principle #7. Adjust granularity as you define the plan.
• Granularity refers to the level of detail that is introduced as a project plan is developed.
• Principle #8. Define how you intend to ensure quality.
• The plan should identify how the software team intends to ensure quality.
• Principle #9. Describe how you intend to accommodate change.
• Even the best planning can be obviated by uncontrolled change.
• Principle #10. Track the plan frequently and make adjustments as required.
• Software projects fall behind schedule one day at a time.

Modeling Principles

• In software engineering work, two classes of models can be created:


• Requirements models (also called analysis models) represent the customer
requirements by depicting the software in three different domains: the information
domain, the functional domain, and the behavioral domain.
• Design models represent characteristics of the software that help practitioners to construct it
effectively: the architecture, the user interface, and component-level detail.
Requirements Modeling Principles

• Principle #1. The information domain of a problem must be represented and understood.
• Principle #2. The functions that the software performs must be defined.
• Principle #3. The behavior of the software (as a consequence of external events) must be
represented.
• Principle #4. The models that depict information, function, and behavior must be partitioned in a
manner that uncovers detail in a layered (or hierarchical) fashion.
• Principle #5. The analysis task should move from essential information toward implementation detail.
Design Modeling Principles

• Principle #1. Design should be traceable to the requirements model.
• Principle #2. Always consider the architecture of the system to be built.
• Principle #3. Design of data is as important as design of processing functions.
• Principle #4. Interfaces (both internal and external) must be designed with care.
• Principle #5. User interface design should be tuned to the needs of the end user. However, in
every case, it should stress ease of use.
• Principle #6. Component-level design should be functionally independent.

• Principle #7. Components should be loosely coupled to one another and to the external
environment.
• Principle #8. Design representations (models) should be easily understandable.

• Principle #9. The design should be developed iteratively. With each iteration, the designer should
strive for greater simplicity.

Agile Modeling Principles

• Principle #1. The primary goal of the software team is to build software, not create
models.
• Principle #2. Travel light—don’t create more models than you need.
• Principle #3. Strive to produce the simplest model that will describe the problem or the
software.
• Principle #4. Build models in a way that makes them amenable to change.
• Principle #5. Be able to state an explicit purpose for each model that is created.
• Principle #6. Adapt the models you develop to the system at hand.
• Principle #7. Try to build useful models, but forget about building perfect models.
• Principle #8. Don’t become dogmatic about the syntax of the model.
• If it communicates content successfully, representation is secondary.
• Principle #9. If your instincts tell you a model isn’t right even though it seems okay
on paper, you probably have reason to be concerned.
• Principle #10. Get feedback as soon as you can.

Preparation Principles
• Understand the problem you're trying to solve.
• Understand basic design principles and concepts.
• Pick a programming language that meets the needs of the software to be built and the
environment in which it will operate.
• Select a programming environment that provides tools that will make your work easier.
• Create a set of unit tests that will be applied once the component you code is completed.
Coding Principles
o Constrain your algorithms by following structured programming [Boh00] practice.
o Consider the use of pair programming.
o Select data structures that will meet the needs of the design.
o Understand the software architecture and create interfaces that are consistent with it.
o Keep conditional logic as simple as possible.
o Create nested loops in a way that makes them easily testable.
o Select meaningful variable names and follow other local coding standards.
o Write code that is self-documenting.
o Create a visual layout (e.g., indentation and blank lines) that aids understanding.
(A short sketch applying several of these principles follows.)
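
A brief sketch (not from the original text) applying several of these principles: meaningful names, simple conditional logic, a loop that is easy to test, and a visual layout that aids understanding. The invoice records and cutoff date are hypothetical.

def count_overdue_invoices(invoices, today):
    """Return how many invoices are past their due date."""
    overdue_count = 0

    for invoice in invoices:
        if invoice["due_date"] < today:    # conditional logic kept simple
            overdue_count += 1

    return overdue_count

# Small, self-documenting units like this are easy to cover with the
# unit tests prepared during the preparation step.
print(count_overdue_invoices([{"due_date": 5}, {"due_date": 12}], today=10))   # 1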
Validation Principles

• Conduct a code walkthrough when appropriate.


• Perform unit tests and correct errors you’ve uncovered.
• Refactor the code.
Testing Principles
• Al Davis [Dav95] suggests the following:
• Principle #1. All tests should be traceable to customer requirements.
• Principle #2. Tests should be planned long before testing begins.
• Principle #3. The Pareto principle applies to software testing (e.g., 80 percent of all errors uncovered
during testing will likely be traceable to 20 percent of all program components).

• Principle #4. Testing should begin “in the small” and progress toward testing “in the
large.”
• Principle #5. Exhaustive testing is not possible.
Deployment Principles

• Principle #1. Customer expectations for the software must be managed.
• Too often, the customer expects more than the team has promised to deliver, and
disappointment occurs immediately.
• Principle #2. A complete delivery package should be assembled and tested.
• Principle #3. A support regime must be established before the software is delivered.
• An end user expects responsiveness and accurate information when a question or
problem arises.
• Principle #4. Appropriate instructional materials must be provided to end users.
• Principle #5. Buggy software should be fixed first, delivered later.

UNIT-1 Planning & Scheduling


Software Requirements Specification, Software prototyping - Software project planning - Scope - Resources -
Software Estimation - Empirical Estimation Models – Planning - Risk Management - Software Project
Scheduling - Object Oriented Estimation & Scheduling.

Software Requirements Specification


Software Requirements:
IEEE defines a requirement as “(1) A condition or capability needed by a user to solve a
problem or achieve an objective; (2) A condition or a capability that must be met or possessed by a
system to satisfy a contract, standard, specification, or other formally imposed document”.

1.2 Requirements Elicitation:


It involves asking the customer, the users, and others about the objectives of the system, what is
to be accomplished, how the system (or product) fits into the needs of the business, and how the
system (or product) is to be used on a day-to-day basis.
Christel and Kang identify a number of problems that help us understand why requirements
elicitation is difficult:
Problems of scope:
The boundary of the system is ill-defined or the customers/ users specify unnecessary
technical detail that may confuse, rather than clarify, overall system objectives.
Problems of understanding:
The customers/users have a poor understanding of the capabilities and limitations of their
computing environment and don't have a full understanding of the problem domain. They have trouble
communicating needs to the system engineer, or specify requirements that are ambiguous or
untestable.
Problems of volatility:
The requirements change over time. To help overcome these problems, system engineers must
approach the requirements gathering activity in an organized manner.

1.3 Need for SRS


The origin of most software systems is in the needs of some clients. The software system
itself is created by some developers. Finally, the completed system will be used by the end users.
Thus, there are three major parties interested in a new system: the client, the developer, and the
users. Somehow the requirements for the system that will satisfy the needs of the clients and the
concerns of the users have to be communicated to the developer. The problem is that the client
usually does not understand software or the software development process, and the developer often
does not understand the client's problem and application area. This causes a communication gap
between the parties involved in the development project. A basic purpose of software requirements
specification is to bridge this communication gap.

SRS is the medium through which the client and user needs are accurately specified to the developer.
Hence one of the main advantages is:

An SRS establishes the basis for agreement between the client and the supplier on what the
software product will do.
This basis for agreement is frequently formalized into a legal contract between the client (or
the customer) and the developer (the supplier). So, through SRS, the client clearly describes what it
expects, and the developer clearly understands what capabilities to build in the software.

An SRS provides a reference for validation of the final product.


The SRS helps the client determine if the software meets the requirements. Without a proper
SRS, there is no way a client can determine if the software being delivered is what was ordered, and
there is no way the developer can convince the client that the requirements have been fulfilled.
Providing the basis of agreement and validation should be strong enough reasons for both the client
and the developer to do a thorough and rigorous job of requirement understanding and specification,
but there are other very practical and pressing reasons for having a good SRS.

It is clear that many errors are made during the requirements phase, and an error in the SRS will
most likely manifest itself as an error in the final system implementing the SRS; after all, if the SRS
document specifies a wrong system (i.e., one that will not satisfy the client's objectives), then even a
correct implementation of the SRS will lead to a system that will not satisfy the client. Clearly, if we
want a high-quality end product that has few errors, we must begin with a high-quality SRS.

A high-quality SRS is a prerequisite to high-quality software.


The quality of SRS has an impact on cost (and schedule) of the project. We have already seen that
errors can exist in the SRS. We saw earlier that the cost of fixing an error increases almost
exponentially as time progresses. That is, a requirement error, if detected and removed after the
system has been developed, can cost up to 100 times more than removing it during the requirements
phase itself.
A high quality SRS reduces the development cost.
The quality of the SRS impacts customer and developer satisfaction, system validation,
quality of the final software and the software development cost. The critical role the SRS plays in a
software development project should be evident from these.

Users of Requirement Document


Requirement Process
The requirements process is the sequence of activities that need to be performed in the
requirements phase and that culminate in producing a high-quality document containing the
software requirements specification (SRS). The requirements process typically consists of three
basic tasks: problem or requirements analysis, requirements specification, and requirements
validation.

Problem analysis often starts with a high-level "problem statement." During analysis the
problem domain and the environment are modeled in an effort to understand the system behavior,
constraints on the system, its inputs and outputs, etc. The basic purpose of this activity is to obtain a
thorough understanding of what the software needs to provide. The understanding obtained by
problem analysis forms the basis of requirements specification, in which the focus is on clearly
specifying the requirements in a document.

Issues such as representation, specification languages, and tools are addressed during this activity.
Since analysis produces large amounts of information and knowledge with possible redundancies,
properly organizing and describing the requirements is an important goal of this activity.
1.4.1 Requirements validation:
Requirements validation focuses on ensuring that what has been specified in the SRS are indeed
all the requirements of the software, and on making sure that the SRS is of good quality. The
requirements process terminates with the production of the validated SRS.

Though it seems that the requirements process is a linear sequence of these three activities, in reality
it is not so for anything other than trivial systems. In most real systems, there is considerable overlap
and feedback between these activities. So, some parts of the system are analyzed and then specified
while the analysis of the other parts is going on. Furthermore, if the validation activities reveal
problems in the SRS, it is likely to lead to further analysis and specification. However, in general, for
a part of the system, analysis precedes specification and specification precedes validation.

Validation Process
Software prototyping:
Prototyping takes a different approach to problem analysis as compared to modeling-based
approaches. In prototyping, a partial system is constructed, which is then used by the client, users,
and developers to gain a better understanding of the problem and the needs. Hence, actual experience
with a prototype that implements part of the eventual software system is used to analyze the problem
and understand the requirements for the eventual software system. A software prototype can be
defined as a partial implementation of a system whose purpose is to learn something about the
problem being solved or the solution approach. As stated in this definition, prototyping can also be
used to evaluate or check a design alternative (such a prototype is called a design prototype).

There are two approaches to prototyping:


1. Throwaway
2. Evolutionary.
Approaches of prototyping
In the throwaway approach the prototype is constructed with the idea that it will be discarded after
the analysis is complete and the final system will be built from scratch. In the evolutionary approach,
the prototype is built with the idea that it will eventually be converted into the final system. From the
point of view of problem analysis and understanding, the throwaway prototypes are more suited.

The requirements of a system can be divided into three sets: those that are well understood, those
that are poorly understood, and those that are not known. In a throwaway prototype, the poorly
understood requirements are the ones that should be incorporated. Based on the experience with the
prototype, these requirements then become well understood.

Prototyping
Development of a throwaway prototype is fundamentally different from developing final
production-quality software. The basic focus during prototyping is to keep costs low and minimize
the prototype production time. Due to this, many of the bookkeeping, documenting, and quality
control activities that are usually performed during software product development are kept to a
minimum during prototyping. Efficiency concerns also take a back seat, and often very high-level
interpretive languages are used for prototyping.
For these reasons, the temptation to convert the prototype into the final system should be resisted.
Experience is gained by putting the system to use by the actual client and users. Constant interaction
is needed with the client/users during this activity to understand their responses. Questionnaires and
interviews might be used to gather user response.

The final SRS is developed in much the same way as any SRS is developed. The difference here is
that the client and users will be able to answer questions and explain their needs much better because
of their experience with the prototype. Some initial analysis is also available. For prototyping for
requirements analysis to be feasible, its cost must be kept low. Consequently, only those features that
will have a valuable return from the user experience are included in the prototype. Exception
handling, recovery, conformance to some standards and formats are typically not included in
prototypes. Because the prototype is to be thrown away, only minimal development documents need
to be produced during prototyping; for example, design documents, a test plan, and a test case
specification are not needed during the development of the prototype. Another important cost-cutting
measure is reduced testing. Testing consumes a major part of development expenditure during
regular software development. By using cost-cutting methods, it is possible to keep the cost of the
prototype to less than a few percent of the total development cost.
The first step in developing a prototype is to prepare an SRS for the prototype. The SRS need not be
formal but should identify the different system utilities to be included in the prototype. As mentioned
earlier, these are typically the features that are most unclear or where the risk is high. In the example
of a restaurant automation system, it was decided that the prototype would demonstrate the following
features:
1. Customer order processing and billing.
2. Supply ordering and processing.

The first was included, as that is where the maximum risk exists for the restaurant (after all,
customer satisfaction is the basic objective of the restaurant, and if customers are unhappy the
restaurant will lose business). The second was included, as maximum potential benefit can be
derived from this feature. Accounting and statistics generation were not to be included in the
prototype.
The prototype was developed using a database system, in which good facilities for data entry and
form (bill) generation exist. The user interface for the waiters and the restaurant manager was
included in the prototype. The system was used, in parallel with the existing system, for a few weeks,
and informal surveys with the customers were conducted. Customers were generally pleased with the
accuracy of the bills and the details they provided. Some gave suggestions about the bill layout.
Based on the experience of the waiters, the codes for the different menu items were modified to an
alphanumeric code. They found that the numeric codes used in the prototype were hard to remember.
The experience of the restaurant manager and feedback from the suppliers were used to determine
the final details about supply processing and handling.

Software Project Planning:


Objectives of project planning are listed below
 It defines the roles and responsibilities of the project management team members.
 It ensures that the project management team works according to the business objectives.
 It checks feasibility of the schedule and user requirements.
 It determines project constraints.

Task set for project planning:


 Establish project scope.
 Determine feasibility.
 Analyze risk.
 Define resources:
 Determine human resources.
 Identify environment resources.
 Estimate cost and effort.
 Develop project schedule.
Software project estimation:
 Software project estimation can be transformed from a black art to a series of systematic
steps that provide estimates with acceptable risk.
 To achieve reliable cost and effort estimates, a number of options arise:
 Delay estimation until late in the project.
 Base estimates on similar projects that have already been completed.
 Use relatively simple decomposition techniques to generate project cost and effort
estimates.
 Use one or more empirical models for software cost and effort estimation, e.g., the
COCOMO model.
 Unfortunately, the first option, however attractive, is not practical: cost estimates must be
provided “up front”.
 The second option can work reasonably well if the current project is quite similar to past
efforts.
 Unfortunately, past experience has not always been a good indicator of future results.
 Decomposition techniques take a divide-and-conquer approach to software project
estimation by decomposing a project into major functions and related software engineering
activities.
 Empirical estimation models can be used to complement decomposition techniques and
offer a potentially valuable estimation approach in their own right; this leads to LOC-based
and FP-based estimation.
 Lines of code and function points were described earlier as measures from which
productivity metrics can be computed.
 LOC and FP estimation are distinct estimation techniques, but both have characteristics in
common.
 The project planner begins with a bounded statement of software scope. From this
statement, the planner attempts to decompose the software into problem functions that can
be estimated individually.
 The baseline productivity metrics are then applied to the appropriate estimation variable,
and the cost or effort of each function is derived.
 Function estimates are combined to produce an overall estimate for the entire project.
 When a new project is estimated, it should first be allocated to a domain, and then the
appropriate domain average for productivity should be used in generating the estimate.
 LOC and FP estimation techniques differ in the level of detail required for decomposition
and the target of the partitioning.
 A three-point (expected value) estimate can be used to derive a LOC or FP value that can
be tied to past data and used to generate an estimate:

S = (Sopt + 4Sm + Spess) / 6
where
S - estimated (expected) value,
Sopt - optimistic (lowest) estimate,
Sm - most likely estimate,
Spess - pessimistic (highest) estimate.

1.17 LOC-based Estimation:


 When LOC is used as the estimation variable, decomposition is absolutely essential.
 The greater the degree of partitioning, the more likely that reasonably accurate estimates of
LOC can be developed.
 An example of LOC-based estimation:
 Let us consider a software package to be developed for a computer-aided design (CAD)
application for mechanical components.
 The software is to be executed on an engineering workstation and must interface with
various peripherals, including a mouse, digitizer, high-resolution color display, and laser
printer.

Table 1.11 LOC Estimation Table


 For example, the range of LOC estimates for the 3D geometric analysis function is:
optimistic - 4,600 LOC; most likely - 6,900 LOC; pessimistic - 8,600 LOC.
 Applying the expected-value equation, S = (4,600 + 4 × 6,900 + 8,600) / 6 = 6,800 LOC for
this function (see the sketch below).
 Estimates of this kind can then be used to calculate labor rate, project cost, and estimated
effort.
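The arithmetic above can be captured in a few lines. The following Python sketch (an illustrative
aid, not part of the original worked example beyond the three LOC figures quoted above) computes
the expected LOC for the 3D geometric analysis function:

    # Three-point (expected value) estimate: S = (Sopt + 4*Sm + Spess) / 6
    def expected_size(s_opt, s_m, s_pess):
        # Returns the expected size from the optimistic, most-likely,
        # and pessimistic estimates.
        return (s_opt + 4 * s_m + s_pess) / 6

    # Figures quoted above for the 3D geometric analysis function
    print(expected_size(4600, 6900, 8600))  # -> 6800.0 LOC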

1.18 FP BASED ESTIMATION:


Decomposition for FP-based estimation focuses on information domain values rather than software
functions. Referring to the function point calculation the project planner estimates inputs, outputs,
inquiries, files, and external interfaces for the CAD software. For the purposes of this estimate, the
complexity weighting factor is assumed to be average.

Table: 1.12 FP Based Estimation Table


Each of the complexity weighting factors is estimated and the complexity
adjustment factor is computed.

Table: 1.13 Factor Values


Finally, the estimated number of FP is derived:
FPestimated = count-total × [0.65 + 0.01 × Σ(Fi)]
FPestimated = 375
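As an illustrative aid, a minimal Python sketch of the FP computation; the counts below are
hypothetical (the notes quote the result, 375, but not the underlying inputs), so they yield only a
similar value:

    def fp_estimate(count_total, adjustment_factors):
        # FP = count_total * [0.65 + 0.01 * sum(Fi)], where the Fi are the
        # fourteen complexity adjustment factors, each rated 0..5.
        return count_total * (0.65 + 0.01 * sum(adjustment_factors))

    # Hypothetical ratings summing to 52, with an assumed count total of 320
    fi = [3, 4, 4, 5, 3, 4, 4, 4, 3, 3, 4, 4, 4, 3]
    print(fp_estimate(320, fi))  # -> 374.4 FP under these assumed inputs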

1.19 Object-Oriented Estimation


 It is worthwhile to supplement conventional software cost estimation methods with an
approach that has been designed explicitly for object-oriented software.
 Develop estimates using effort decomposition, FP analysis, and any other method that is
applicable for conventional applications.
 Using object-oriented analysis modeling, develop use cases and determine a count;
recognize that the number of use cases may change as the project progresses.
 From the analysis model, determine the number of key classes.
 Categorize the type of interface for the application and develop multipliers for support
classes:

Interface Type Multiplier

No GUI 2.0
Text-based user interface 2.25
GUI 2.5
Complex GUI 3.0
 Multiply the number of key classes (step 3) by the multiplier to obtain an estimate for the
number of support classes.
 Multiply the total number of classes (key + support) by the average number of work units
per class.
 Cross-check the class-based estimate by multiplying the average number of work units per
use case by the number of use cases.
In summary:
 Develop estimates using decomposition, FP, and LOC.
 Use the OO analysis model to develop use cases and determine a count.
 Build the analysis model and determine the number of key classes.
 Categorize the type of interface for the application and develop a multiplier for support
classes.
 Multiply the number of key classes by the multiplier to obtain an estimate for the number of
support classes.
 Multiply the total number of classes (key + support) by the average number of work units
per class.
 Cross-check: multiply the average number of work units per use case by the number of use
cases. A computational sketch follows.
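A minimal Python sketch of the class-based steps above, with hypothetical inputs; the 20
person-days-per-class figure is an assumption (published OO estimation guidelines suggest
averages of roughly 15 to 20 person-days per class):

    # Interface-type multipliers from the table above
    INTERFACE_MULTIPLIER = {
        "no GUI": 2.0,
        "text-based user interface": 2.25,
        "GUI": 2.5,
        "complex GUI": 3.0,
    }

    def oo_estimate(key_classes, interface_type, work_units_per_class):
        # Support classes = key classes * interface multiplier;
        # effort = total classes * average work units per class.
        support = key_classes * INTERFACE_MULTIPLIER[interface_type]
        total_classes = key_classes + support
        return total_classes * work_units_per_class

    # Hypothetical: 16 key classes, a GUI application, 20 person-days/class
    print(oo_estimate(16, "GUI", 20))  # -> 1120.0 person-days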
Planning
Before starting a software project, it is essential to determine the tasks to be performed and properly
manage allocation of tasks among individuals involved in the software development. Hence,
planning is important as it results in effective software development.
Project planning is an organized and integrated management process, which focuses on activities
required for successful completion of the project. It prevents obstacles that arise in the project such
as changes in projects or organization’s objectives, non-availability of resources, and so on. Project
planning also helps in better utilization of resources and optimal usage of the allotted time for a
project. The other objectives of project planning are listed below.

 It defines the roles and responsibilities of the project management team members.
 It ensures that the project management team works according to the business objectives.
 It checks feasibility of the schedule and user requirements.
 It determines project constraints.
Several individuals help in planning the project. These include senior management and project
management team. Senior management is responsible for employing team members and providing
resources required for the project. The project management team, which generally includes project
managers and developers, is responsible for planning, determining, and tracking the activities of the
project. Table lists the tasks performed by individuals involved in the software project.

 Planning is necessary: Planning should be done before a project begins. For effective
planning, objectives and schedules should be clear and understandable.
 Risk analysis: Before starting the project, senior management and the project management
team should consider the risks that may affect the project. For example, the user may desire
changes in requirements while the project is in progress. In such a case, the estimation of time
and cost should be done according to those requirements (new requirements).
 Tracking of project plan: Once the project plan is prepared, it should be tracked and
modified accordingly.
 Meet quality standards and produce quality deliverables: The project plan should identify
processes by which the project management team can ensure quality in software. Based on the
process selected for ensuring quality, the time and cost for the project is estimated.
 Description of flexibility to accommodate changes: The result of project planning is
recorded in the form of a project plan, which should allow new changes to be accommodated
when the project is in progress.

Project planning comprises project purpose, project scope, project planning process, and project
plan. This information is essential for effective project planning and to assist project management
team in accomplishing user requirements.

Empirical Estimation Models


• Estimation models for computer software use empirically derived formulas to predict effort
as a function of LOC (lines of code) or FP (function points).

• Resultant values computed for LOC or FP are entered into an estimation model

• The empirical data for these models are derived from a limited sample of projects.
Consequently, the models should be calibrated to reflect local software development
conditions.
Empirical Estimation Models
 The structure of empirical estimation models is a formula, derived from data collected from
past software projects, that uses software size to estimate effort.

 Size, itself, is an estimate, described as either lines of code (LOC) or function points (FP).

 The typical formula of estimation models is:

E = a + b × S^c

where E represents effort, in person-months,
S is the size of the software development, in LOC or FP,
a, b, and c are values derived from data.
The necessary steps in this model are:
1. Get an initial estimate of the development effort from an evaluation of the thousands of
delivered lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate by all the multiplying
factors, i.e., multiply the values in step 1 and step 2.
The initial estimate (also called the nominal estimate) is determined by an equation of the form used
in the static single-variable models, using KDLOC as the measure of size.
To determine the initial effort Ei in person-months, the equation is

Ei = a × (KDLOC)^b
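A minimal Python sketch of the three steps just described, with illustrative constants (the values of
a, b, and the multiplying factors below are assumptions, not calibrated data):

    def initial_effort(kdloc, a, b):
        # Step 1: nominal estimate Ei = a * (KDLOC)**b
        return a * kdloc ** b

    def adjusted_effort(nominal, multipliers):
        # Steps 2-3: scale the nominal estimate by the cost-driver
        # multiplying factors (each typically close to 1.0).
        effort = nominal
        for m in multipliers:
            effort *= m
        return effort

    # Hypothetical: a = 3.2, b = 1.05, 32 KDLOC, three non-unit drivers
    nominal = initial_effort(32, 3.2, 1.05)
    print(adjusted_effort(nominal, [1.15, 0.88, 1.10]))  # person-months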

COCOMO:
 Barry Boehm introduced an empirical effort estimation model called COCOMO
(COnstructive COst MOdel) in 1981.

 COCOMO predicts the effort and schedule of a software product based on the size of the
software.

 Stands for Constructive Cost Model

 Became one of the well-known and widely-used estimation models in the industry

 It has evolved into a more comprehensive estimation model called COCOMO II

 COCOMO II is actually a hierarchy of three estimation models

 It requires sizing information and accepts it in three forms: object points, function points,
and lines of source code

 In COCOMO, projects are categorized into three types:

1.Organic:
 A development project can be considered of the organic type if the project deals with
developing a well-understood application program, the size of the development team is
reasonably small, and the team members are experienced in developing similar types of
projects.

 Examples of this type of projects are simple business systems, simple inventory
management systems, and data processing systems.

2. Semidetached:
 A development project can be considered of the semidetached type if the development team
consists of a mixture of experienced and inexperienced staff.

 Team members may have limited experience with related systems but may be unfamiliar
with some aspects of the system being developed.

 Example of Semidetached system includes developing a new operating system (OS), a


Database Management System (DBMS), and complex inventory management system.

3.Embedded:
 A development project is considered to be of the embedded type if the software being
developed is strongly coupled to complex hardware, or if stringent regulations on the
operational procedures exist.

 For Example: ATM, Air Traffic control

 According to Boehm, software cost estimation should be done through three stages:

1. Basic Model

2. Intermediate Model

3. Detailed Model

• Types of Models: COCOMO consists of a hierarchy of three increasingly detailed and


accurate forms.

• Any of the three forms can be adopted according to our requirements. These are types of
COCOMO model:

1. Basic COCOMO Model

2. Intermediate COCOMO Model

3. Detailed COCOMO Model

• Basic COCOMO can be used for quick and slightly rough calculations of Software Costs.

• Its accuracy is somewhat restricted due to the absence of sufficient factor considerations.
Basic Model

• The effort is measured in Person-Months and as evident from the formula is dependent on
Kilo-Lines of code.

• The development time is measured in Months.

• These formulas are used as such in the Basic Model calculations; because not much
consideration of different factors such as reliability and expertise is taken into account, the
estimate is rough.

1. Basic COCOMO Model:

The basic COCOMO model gives an approximate estimate of the project parameters. Expressions
for the basic COCOMO estimation model:
Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 Months
Where
 KLOC is the estimated size of the software product, expressed in Kilo Lines of Code,

 a1,a2,b1,b2 are constants for each group of software products,

 Tdev is the estimated time to develop the software, expressed in months,

 Effort is the total effort required to develop the software product, expressed in person
months (PMs).

 Estimation of development effort


For the three classes of software products, the formulas for estimating the effort based on the
code size are shown below:
Organic: Effort = 2.4 × (KLOC)^1.05 PM
Semi-detached: Effort = 3.0 × (KLOC)^1.12 PM
Embedded: Effort = 3.6 × (KLOC)^1.20 PM

Estimation of development time


For the three classes of software products, the formulas for estimating the development time
based on the effort are given below:
Organic: Tdev = 2.5 × (Effort)^0.38 Months
Semi-detached: Tdev = 2.5 × (Effort)^0.35 Months
Embedded: Tdev = 2.5 × (Effort)^0.32 Months
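These formulas translate directly into code. A minimal Python sketch of a basic COCOMO
calculator using the coefficients above (the 32 KLOC input is an arbitrary example):

    # (a1, a2, b1, b2) for each project class, from the formulas above
    COCOMO_BASIC = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode):
        a1, a2, b1, b2 = COCOMO_BASIC[mode]
        effort = a1 * kloc ** a2   # person-months
        tdev = b1 * effort ** b2   # months
        return effort, tdev

    # e.g., a 32 KLOC organic project
    effort, tdev = basic_cocomo(32, "organic")
    print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")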

2. Intermediate Model:
 The basic COCOMO model assumes that effort is only a function of the number of lines of
code and some constants chosen according to the type of software system. However, many
other project attributes (such as reliability requirements and staff capability) also affect
effort.
 The intermediate COCOMO model recognizes these facts and refines the initial estimate
obtained through the basic COCOMO model by using a set of 15 cost drivers based on
various attributes of software engineering.

 The cost drivers are grouped into four categories: software product attributes, computer
(hardware) attributes, personnel attributes, and project attributes.

(i) Product attributes


• Required software reliability extent

• Size of the application database

• The complexity of the product

(ii) Hardware attributes


• Run-time performance constraints

• Memory constraints

• The volatility of the virtual machine environment

• Required turnaround time

(iii) Personnel attributes


 Analyst capability

 Software engineering capability

 Applications experience

 Virtual machine experience

 Programming language experience

(iv) Project attributes


 Use of software tools

 Application of software engineering methods

 Required development schedule

The six phases of detailed COCOMO are:

 Planning and requirements

 System design

 Detailed design

 Module code and test

 Integration and test

 Cost constructive model

Planning
 Planning is perhaps the most important activity of management.

 The basic goal of planning is to identify the activities that are to be performed for
completing the project.

 A good plan is one that can handle the uncertain events which can occur during the
development of the project.

 Good planning helps in good decision making.

Lack of planning is a primary cause of schedule slippage, cost overruns, poor quality, and high
maintenance costs for software.

Project Planning
 Project planning is an organized and integrated management process, which focuses on
activities required for successful completion of the project.
 It prevents obstacles that arise in the project such as changes in projects or organization's
objectives, non-availability of resources, and so on.

 Project planning also helps in better utilization of resources and optimal usage of the allotted
time for a project.

Objectives of project planning are listed below


 It defines the roles and responsibilities of the project management team members.

 It ensures that the project management team works according to the business objectives.

 It checks feasibility of the schedule and user requirements.

 It determines project constraints.

Principles of project planning.


 Planning is necessary

 Planning should be done before a project begins.

 For effective planning, objectives and schedules should be clear and understandable.

 Risk analysis

 Before starting the project, senior management and the project management team should
consider the risks that may affect the project.

 For example, the user may desire changes in requirements while the project is in progress.

 In such a case, the estimation of time and cost should be done according to those
requirements (new requirements).

 Tracking of project plan

 Once the project plan is prepared, it should be tracked and modified accordingly.

 Meet quality standards and produce quality deliverables

 The project plan should identify processes by which the project management team can
ensure quality in software.

 Based on the process selected for ensuring quality, the time and cost for the project is
estimated.

 Description of flexibility to accommodate changes

 The result of project planning is recorded in the form of a project plan, which should allow
new changes to be accommodated when the project is in progress.
Project plan structure
1. Introduction
 This briefly describes the objectives of the project and sets out the constraints (e.g.,
budget, time, etc.) that affect the management of the project.

2. Project organization
 This describes the way in which the development team is organized, the people
involved, and their roles in the team.

3. Risk analysis
 This describes possible project risks, the likelihood of these risks arising, and the risk
reduction strategies that are proposed.

4. Hardware and software resource requirements


 This specifies the hardware and support software required to carry out the
development.

 If hardware has to be bought, estimates of the prices and the delivery schedule may be
included.

5. Work breakdown
 This sets out the breakdown of the project into activities and identifies the milestones
and deliverables associated with each activity.

 Milestones are key stages in the project where progress can be assessed; deliverables
are work products that are delivered to the customer.

6. Project schedule
 This shows the dependencies between activities, the estimated time required to reach
each milestone, and the allocation of people to activities.

7. Monitoring and reporting mechanisms


 This defines the management reports that should be produced, when these should be
produced, and the project monitoring mechanisms to be used.

A software project is carried out to accomplish a specific purpose:


• Meet user requirements: Develop the project according to the user requirements after
understanding them.

• Meet schedule deadlines: Complete the project milestones as described in the project plan
on time in order to complete the project according to the schedule.

• Be within budget: Manage the overall project cost so that the project is within the allocated
budget.
• Produce quality deliverables: Ensure that quality is considered for accuracy and overall
performance of the project.

Project plan
• It helps a project manager to understand, monitor, and control the development of software
project.

• This plan is used as a means of communication between the users and project management
team.

Advantages of project plan


• It ensures that software is developed according to the user requirements, objectives,
and scope of the project.

• It identifies the role of each project management team member involved in the
project.

• It monitors the progress of the project according to the project plan.

Risk Management

• Creates a safe and secure work environment for all staff and customers.

• Risk assessments save the business money

• Risk assessments reduce the chance of injury in the workplace

• A risk management plan protects a company’s resources

• A risk management plan improves a company’s brand image


Risk Management
1. Risk identification: You should identify possible project, product, and business risks.

2. Risk analysis: You should assess the likelihood and consequences of these risks.

3. Risk planning: You should make plans to address the risk, either by avoiding it or
minimizing its effects on the project.

4. Risk monitoring: You should regularly assess the risk and your plans for risk mitigation and
revise these when you learn more about the risk.

 Risk identification

 Technology risks

Risks that derive from the software or hardware technologies that are used to develop the
system.
 People risks
Risks that are associated with the people in the development team.
 Organizational risks

Risks that derive from the organizational environment where the software is being developed.
 Tools risks

Risks that derive from the software tools and other support software used to develop the
system.
 Requirements risks

Risks that derive from changes to the customer requirements and the process of managing the
requirements change.
 Estimation risks

Risks that derive from the management estimates of the resources required to build the
system
Risk planning
 The risk planning process considers each of the key risks that have been identified and
develops strategies to manage these risks.

 For each of the risks, you have to think of the actions that you may take to minimize the
disruption to the plan if the problem identified in the risk occurs.

 You also should think about the information that you might need to collect while monitoring
the project so that problems can be anticipated.

 Again, there is no easy process that can be followed for contingency planning. It relies on
the judgment and experience of the project manager.

Risks Planning strategies


• Avoidance strategies

• Minimization strategies

• Contingency plans

Software Project Scheduling:

It involves deciding which task would be taken up when. Estimate the calendar time needed to
complete each task, the effort required, and who will work on the tasks that have been identified.
Estimate the resources needed to complete each task, such as the disk space required on a server, the
time required on specialized hardware such as a simulator, and what the travel budget will be. At the
project level, scheduling is done by software project managers using information solicited from
software engineers; at an individual level, software engineers schedule their own work. From this,
the project schedule and related information are produced and tracked.
 General Practices

 On large projects, hundreds of small tasks must occur to accomplish a larger goal
 Project manager's objectives

 Define all project tasks

 Build an activity network that depicts their interdependencies

 Identify the tasks that are critical within the activity network

 Build a timeline depicting the planned and actual progress of each task

 Track task progress to ensure that delay is recognized "one day at a time"

 To do this, the schedule should allow progress to be monitored and the project to be
controlled

 Software project scheduling distributes estimated effort across the planned project duration
by allocating the effort to specific tasks.

 Scheduling for projects can be viewed from two different perspectives

1. An end-date for release of a computer based system has already been established and
fixed
 The software organization is constrained to distribute effort within the prescribed time
frame

2. Assume that rough chronological bounds have been discussed but that the end-date is
set by the software engineering organization
 Effort is distributed to make best use of resources and an end-date is defined after
careful analysis of the software

 Basic Principles of Project Scheduling

Compartmentalization
 The project must be compartmentalized into a number of manageable activities, actions, and
tasks; both the product and the process are decomposed

Interdependency
 The interdependency of each compartmentalized activity, action, or task must be determined

 Some tasks must occur in sequence while others can occur in parallel

 Some actions or activities cannot commence until the work product produced by another is
available

Time allocation
 Each task to be scheduled must be allocated some number of work units
 In addition, each task must be assigned a start date and a completion date that are a function
of the interdependencies

 Start and stop dates are also established based on whether work will be conducted on a full-
time or part-time basis

Effort validation
 Every project has a defined number of people on the team

 As time allocation occurs, the project manager must ensure that no more than the allocated
number of people have been scheduled at any given time

Defined responsibilities
 Every task that is scheduled should be assigned to a specific team member

Defined outcomes
 Every task that is scheduled should have a defined outcome for software projects such as a
work product or part of a work product

 Work products are often combined in deliverables

Defined milestones
 Every task or group of tasks should be associated with a project milestone

 A milestone is accomplished when one or more work products has been reviewed for quality
and has been approved

 The Relationship Between People and Effort

 People work on the software project doing various activities.

 The Putnam-Norden-Rayleigh (PNR) curve provides an indication of the relationship
between effort applied and delivery time for a software project.
 The number of delivered lines of code (source statements), L, is related to effort and
development time by the equation:

L = P × E^(1/3) × t^(4/3)

 where E is development effort in person-months,
 P is a productivity parameter that reflects a variety of factors that lead to high-quality
software engineering work (typical values for P range between 2,000 and 12,000),
 t is the project duration in calendar months.

 Rearranging the last equation, we can arrive at an expression for development effort E:

E = L^3 / (P^3 × t^4)

 where E is the effort expended over the entire life cycle for software development and
maintenance,
 t is the development period in years.
 Substituting representative values for L, P, and t (see the sketch after the bullets below), this
equation yields, for example,

E = L^3 / (P^3 × t^4) ≈ 3.8 person-years.

 Effort distribution should be used as a guideline only.

 The characteristics of each project dictate the distribution of effort
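As an illustrative aid, a minimal Python sketch of the rearranged PNR equation; the input values are
assumptions chosen to reproduce the approximately 3.8 person-year figure quoted above:

    def pnr_effort(loc, p, t_years):
        # E = L**3 / (P**3 * t**4), effort in person-years
        return loc ** 3 / (p ** 3 * t_years ** 4)

    # Assumed: 33,000 delivered LOC, productivity parameter P = 10,000,
    # and a development period of 1.75 years
    print(round(pnr_effort(33_000, 10_000, 1.75), 1))  # -> 3.8 person-years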

 Task Network

 Defining a Task

 A task set is the work breakdown structure for the project


 No single task set is appropriate for all projects and process models

 It varies depending on the project type and the degree of rigor (based on influential
factors) with which the team plans to work

 The task set should provide enough discipline to achieve high software quality

 But it must not burden the project team with unnecessary work

 The task set will vary depending upon the project type and the degree of rigor with which the
software team decides to do its work

 Concept development -projects that are initiated to explore some new business concept or
application of some new technology.

 New application development - projects that are undertaken as a consequence of a specific
customer request.

 Application enhancement -projects that occur when existing software undergoes major
modifications to function, performance, or interfaces that are observable by the end user.

Factors That Influence Project Schedule


• Size of the project

• Number of potential users

• Application longevity

• Stability of requirements

• Ease of customer/developer communication

• Maturity of applicable technology

• Performance constraints

• Embedded and non-embedded characteristics

• Project staff

• Reengineering factors

 Scheduling

 Program evaluation and review technique (PERT) and the critical path method (CPM) are
two project scheduling methods that can be applied to software development. Inter
dependencies among tasks may be defined using a task network. Tasks, sometimes called the
project work breakdown structure (WBS), are defined for the product as a whole or for
individual functions.

 Both techniques are driven by information already developed in earlier project planning
activities: estimates of effort, a decomposition of the product function, the selection of the
appropriate process model and task set, and decomposition of the tasks that are selected

 Both PERT and CPM provide quantitative tools that allow you to

(1) determine the critical path—the chain of tasks that determines the duration of the project,
(2) establish “most likely” time estimates for individual tasks by applying statistical models,
and
(3) calculate “boundary times” that define a time “window” for a particular task
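A minimal Python sketch of the critical-path idea in (1): given a small, hypothetical task network of
durations and predecessors, the project duration is the longest chain of dependent tasks:

    from functools import lru_cache

    # Hypothetical task network: name -> (duration in days, predecessors)
    TASKS = {
        "A": (3, []),
        "B": (5, ["A"]),
        "C": (2, ["A"]),
        "D": (4, ["B", "C"]),
    }

    @lru_cache(maxsize=None)
    def earliest_finish(name):
        duration, preds = TASKS[name]
        return duration + max((earliest_finish(p) for p in preds), default=0)

    # Project duration = length of the critical path (here A-B-D)
    print(max(earliest_finish(t) for t in TASKS))  # -> 12 days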
Mechanics of Timeline Chart
 Also called a Gantt chart.

 All project tasks are listed in the far left column.

 The next few columns may list the following for each task: projected start date, projected
stop date, projected duration, actual start date, actual stop date, actual duration, task
interdependencies (i.e., predecessors).

 To the far right are columns representing dates on a calendar.

 The length of a horizontal bar on the calendar indicates the duration of the task

 Tracking the Schedule

The project schedule becomes a road map that defines the tasks and milestones to be tracked
and controlled as the project proceeds. Tracking can be accomplished in a number of
different ways:
 Conducting periodic project status meetings in which each team member reports
progress and problems
 Evaluating the results of all reviews conducted throughout the software engineering process

 Determining whether formal project milestones have been accomplished by the scheduled
date

 Comparing the actual start date to the planned start date for each project task listed in the
resource table

 Meeting informally with practitioners to obtain their subjective assessment of progress to
date and problems on the horizon

 Using earned value analysis to assess progress quantitatively

Earned value analysis (EVA).


• A technique for performing quantitative analysis of progress is called earned value analysis
(EVA).

• The earned value system provides a common value scale for every task of a software project.

• To determine the earned value, the following steps are performed:

1. The budgeted cost of work scheduled (BCWS) is determined for each work task
represented in the schedule.
2. The BCWS values for all work tasks are summed to derive the budget at completion
(BAC).
3. The value for budgeted cost of work performed (BCWP) is computed.
Using these values and the actual cost of work performed (ACWP), the following indicators
are derived:
Schedule performance index, SPI = BCWP / BCWS
Schedule variance, SV = BCWP - BCWS
Percent scheduled for completion = BCWS / BAC
Percent complete = BCWP / BAC
Cost performance index, CPI = BCWP / ACWP
Cost variance, CV = BCWP - ACWP
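A minimal Python sketch of these indicators; the input figures are hypothetical:

    def earned_value_metrics(bcws, bcwp, acwp, bac):
        # Schedule and cost indicators from the EVA quantities above
        return {
            "SPI": bcwp / bcws,              # schedule performance index
            "SV": bcwp - bcws,               # schedule variance
            "percent scheduled": bcws / bac,
            "percent complete": bcwp / bac,
            "CPI": bcwp / acwp,              # cost performance index
            "CV": bcwp - acwp,               # cost variance
        }

    # Hypothetical figures, all in person-days of budgeted/actual cost
    print(earned_value_metrics(bcws=48.0, bcwp=44.0, acwp=46.0, bac=120.0))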
SCHOOL OF COMPUTING
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
VTU R15 B.TECH – CSE

UNIT-III ANALYSIS


UML: Analysis Modeling - Data Modeling - Functional Modeling & Information Flow - Behavioral
Modeling-Structured Analysis - Object Oriented Analysis - Domain Analysis-Object oriented
Analysis process - Object Relationship Model - Object Behavior Model. Design modelling with
UML

 UML: ANALYSIS MODELING


The Unified Modeling Language (UML) is a graphical language for OOAD that gives a standard way to write
a software system’s blueprint. It helps to visualize, specify, construct, and document the artifacts of an object-
oriented system. It is used to depict the structures and the relationships in a complex system.
Systems and Models in UML
 System − A set of elements organized to achieve certain objectives form a system. Systems are often
divided into subsystems and described by a set of models.
 Model − Model is a simplified, complete, and consistent abstraction of a system, created for better
understanding of the system.
 View − A view is a projection of a system’s model from a specific perspective.
Conceptual Model of UML
The Conceptual Model of UML encompasses three major elements −
 Basic building blocks
 Rules
 Common mechanisms
Basic Building Blocks
The three building blocks of UML are −
 Things
 Relationships
 Diagrams
Things
There are four kinds of things in UML, namely −
 Structural Things − These are the nouns of the UML models representing the static elements that
may be either physical or conceptual. The structural things are class, interface, collaboration, use case,
active class, components, and nodes.
 Behavioral Things − These are the verbs of the UML models representing the dynamic behavior over
time and space. The two types of behavioral things are interaction and state machine.
 Grouping Things − They comprise the organizational parts of the UML models. There is only one
kind of grouping thing, i.e., package.
 Annotational Things − These are the explanations in the UML models representing the comments
applied to describe elements.
Relationships
Relationships are the connection between things. The four types of relationships that can be represented in
UML are −
 Dependency − This is a semantic relationship between two things such that a change in one thing
brings a change in the other. The former is the independent thing, while the latter is the dependent
thing.
 Association − This is a structural relationship that represents a group of links having common
structure and common behavior.
 Generalization − This represents a generalization/specialization relationship in which subclasses
inherit structure and behavior from super-classes.
 Realization − This is a semantic relationship between two or more classifiers such that one classifier
lays down a contract that the other classifiers ensure to abide by.
Diagrams
A diagram is a graphical representation of a system. It comprises a group of elements, generally in
the form of a graph. UML includes nine diagrams in all, namely −
 Class Diagram
 Object Diagram
 Use Case Diagram
 Sequence Diagram
 Collaboration Diagram
 State Chart Diagram
 Activity Diagram
 Component Diagram
 Deployment Diagram
Analysis modeling is an extremely robust subject. This set of resources has been organized into the following
topic areas:

 Requirements Analysis - General


 UML-Based Modeling
 Scenario-Based Modeling
 Data Modeling
 Flow-Oriented Modeling (Structured Analysis)
 Object-Oriented Modeling
 Behavioral Modeling
 DATA MODELING

 A data model is a collection of conceptual tools for describing data, data relationships, data
semantics, and consistency constraints.
 A data model is a conceptual representation of data structures required for data base and is very
powerful in expressing and communicating the business requirements.
 A data model visually represents the nature of data, business rules governing the data, and how it will
be organized in the database
Types of Data Models
1.Entity-Relationship (E-R) Models
2. UML (unified modeling language)

1.Entity-Relationship (E-R) Models


The entity-relationship (ER) model is a high-level conceptual data model diagram. ER modeling
helps you to analyze data requirements systematically to produce a well-designed database. The
entity-relationship model represents real-world entities and the relationships between them. The
components of an ER diagram are:
 Entities
 Attributes
 Relationships

Entity–relationship elements
 Entity:
 something capable of an independent existence that can be uniquely identified
 thing of significance about which the organization wishes to hold information
 physical, tangible object: a house or a car, or a concept (intangible thing) such as a
transaction, order or role.

 Relationship:
 captures how entities are related to one another
 Types
i. One-to-One Relationships
ii. One-to-Many Relationships
iii. Many-to-One Relationships
iv. Many-to-Many Relationships

 Attribute:
 describes an entity or a relationship, defines one piece of information
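As an illustrative aid (a hypothetical schema, not from the notes), the ER concepts above can be
mirrored in code: entities as classes, attributes as fields, and a many-to-many relationship realized
through an association object:

    from dataclasses import dataclass

    @dataclass
    class Student:          # entity
        roll_no: str        # attribute acting as the unique identifier
        name: str           # attribute

    @dataclass
    class Course:           # entity
        code: str
        title: str

    @dataclass
    class Enrollment:       # many-to-many relationship between Student and Course
        student: Student
        course: Course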

 FUNCTIONAL MODELING & INFORMATION FLOW

Information is transformed as it flows through a computer-based system. The system accepts input in a
variety of forms; applies hardware, software, and human elements to transform it; and produces output in a
variety of forms. Input may be a control signal transmitted by a transducer, a series of numbers typed by a
human operator, a packet of information transmitted on a network link, or a voluminous data file retrieved
from secondary storage. The transform(s) may comprise a single logical comparison, a complex numerical
algorithm, or a rule-inference approach of an expert system. Output may light a single LED or produce a 200-
page report. In effect, we can create a flow model for any computer-based system, regardless of size and
complexity.
Structured analysis began as an information flow modeling technique. A computer-based system is
represented as an information transform. A rectangle is used to represent an external entity; that is, a system
element (e.g., hardware, a person, another program) or another system that produces information for
transformation by the software or receives information produced by the software. A circle (sometimes called a
bubble) represents a process or transform that is applied to data (or control) and changes it in some way. An
arrow represents one or more data items (data objects). All arrows on a data flow diagram should be labeled.
The double line represents a data store—stored information that is used by the software. The simplicity of
DFD notation is one reason why structured analysis techniques are widely used.

It is important to note that no explicit indication of the sequence of processing or conditional logic is supplied
by the diagram. Procedure or sequence may be implicit in the diagram, but explicit logical details are
generally delayed until software design. It is important not to confuse a DFD with the flowchart.

Data Flow Diagrams

As information moves through software, it is modified by a series of transformations. A data flow diagram is a
graphical representation that depicts information flow and the transforms that are applied as data move from
input to output. The basic form of a data flow diagram, also known as a data flow graph or a bubble chart, is
illustrated in the figure below.
The data flow diagram may be used to represent a system or software at any level of abstraction. In fact,
DFDs may be partitioned into levels that represent increasing information flow and functional detail.
Therefore, the DFD provides a mechanism for functional modeling as well as information flow modeling. In
so doing, it satisfies the second operational analysis principle (i.e., creating a functional model)

A level 0 DFD, also called a fundamental system model or a context model, represents the entire software
element as a single bubble with input and output data indicated by incoming and outgoing arrows,
respectively. Additional processes (bubbles) and information flow paths are represented as the level 0 DFD is
partitioned to revealmore detail. For example, a level 1 DFD might contain five or six bubbles with
interconnecting arrows. Each of the processes represented at level 1 is a subfunction of the overall system
depicted in the context model.

As we noted earlier, each of the bubbles may be refined or layered to depict more detail. A fundamental
model for system F indicates the primary input is A and ultimate output is B. We refine the F model into
transforms f1 to f7. Note that information flow continuity must be maintained; that is, input and output to each
refinement must remain the same. This concept, sometimes called balancing, is essential for the development
of consistent models. Further refinement of f4 depicts detail in the form of transforms f41 to f45. Again, the
input (X, Y) and output (Z) remain unchanged.

The basic notation used to develop a DFD is not in itself sufficient to describe requirements for software. For
example, an arrow shown in a DFD represents a data object that is input to or output from a process. A data
store represents some organized collection of data. But what is the content of the data implied by the arrow or
depicted by the store? If the arrow (or the store) represents a collection of objects, what are they? These
questions are answered by applying another component of the basic notation for structured analysis—the data
dictionary.

DFD graphical notation must be augmented with descriptive text. A process specification (PSPEC) can be
used to specify the processing details implied by a bubble within a DFD. The process specification describes
the input to a function, the algorithm that is applied to transform the input, and the output that is produced. In
addition, the PSPEC indicates restrictions and limitations imposed on the process (function), performance
characteristics that are relevant to the process, and design constraints that may influence the way in which the
process will be implemented.
Data Flow Diagram (DFD) is a graphical representation of data flow in any system. It is capable of
illustrating incoming data flow, outgoing data flow and store data. Data flow diagram describes anything
about how data flows through the system.

Components of Data Flow Diagram:


Following are the components of the data flow diagram that are used to represent source, destination,
storage and flow of data.

 Entities:
Entities include source and destination of the data. Entities are represented by rectangle with their
corresponding names.
 Process:
The tasks performed on the data are known as processes. A process is represented by a circle.
Sometimes rounded-edge rectangles are also used to represent processes.
 Data Storage:
Data storage includes the database of the system. It is represented by rectangle with both smaller sides
missing or in other words within two parallel lines.
 DataFlow:
The movement of data in the system is known as data flow. It is represented with the help of arrow.
The tail of the arrow is source and the head of the arrow is destination.
Advantages of DFD
 A graphical technique that is relatively easy to understand for stakeholders and other users.
 Provides a detailed view of the system components and boundaries.
 Provide clear and detailed information about the processes within a system.
 Shows the logic of the data flow.
 Presents a functional breakdown of the system.
 Used as a part of the system documentation.

 BEHAVIORAL MODELING

UML behavioral diagrams visualize, specify, construct, and document the dynamic aspects of a system. The
behavioral diagrams are categorized as follows:
 Use case diagrams,
 Interaction diagrams,
 State–chart diagrams,
 Activity diagrams.

Use Case Diagram
A use case diagram at its simplest is a representation of a user's interaction with the system that shows the
relationship between the user and the different use cases in which the user is involved.
a)Use case
A use case describes the sequence of actions a system performs yielding visible results. It
shows the interaction of things outside the system with the system itself. Use cases may be applied to
the whole system as well as a part of the system.
b)Actor
An actor represents the roles that the users of the use cases play. An actor may be a person
(e.g. student, customer), a device (e.g. workstation), or another system (e.g. bank, institution).

Purposes of use case diagrams


 Used to gather the requirements of a system.
 Used to get an outside view of a system.
 Identify the external and internal factors influencing the system.
 Show the interaction among the requirements and actors.
Interaction Diagrams

Interaction diagrams depict interactions of objects and their relationships. They also include the messages
passed between them. There are two types of interaction diagrams −

 Sequence Diagrams
 Collaboration Diagrams
Interaction diagrams are used for modeling
 the control flow by time ordering using sequence diagrams.
 the control flow of organization using collaboration diagrams.

Sequence Diagrams

Sequence diagrams are interaction diagrams that illustrate the ordering of messages according to time.

Purpose of a Sequence Diagram

 To model high-level interaction among active objects within a system.
 To model interaction among objects inside a collaboration realizing a use case.
 It either models generic interactions or certain instances of interaction.

Notations:
These diagrams are in the form of two-dimensional charts. The objects that initiate the interaction are placed
on the x–axis. The messages that these objects send and receive are placed along the y–axis, in the order of
increasing time from top to bottom.
Example − A sequence diagram for the Automated Trading House System is shown in the following figure.
Collaboration Diagrams

Collaboration diagrams are interaction diagrams that illustrate the structure of the objects that send and
receive messages.

When to use a Collaboration Diagram?

Collaboration diagrams are used when it is essential to depict the relationship between objects. Both
the sequence and collaboration diagrams represent the same information, but the way of portraying it
is quite different. The collaboration diagrams are best suited for analyzing use cases.

Steps for creating a Collaboration Diagram

1. Determine the behavior for which the realization and implementation are specified.
2. Identify the structural elements (class roles, objects, and subsystems) that perform the
functionality of the collaboration.
o Choose the context of the interaction: system, subsystem, use case, or operation.
3. Think through alternative situations that may be involved.
o If needed, implement the collaboration diagram at the instance level.
o A specification-level diagram may be produced to summarize the alternative situations.

Notations:
In these diagrams, the objects that participate in the interaction are shown using vertices. The links that
connect the objects are used to send and receive messages. The message is shown as a labeled arrow.

Example − Collaboration diagram for the Automated Trading House System is illustrated in the figure
below.
State–Chart Diagrams

A state–chart diagram shows a state machine that depicts the control flow of an object from one state to
another. A state machine portrays the sequence of states that an object undergoes due to events, and its
responses to those events.

Notations:

State–chart diagrams comprise −

 States: simple or composite
 Transitions between states
 Events causing transitions
 Actions due to the events

Following are the steps to be followed while drawing a state machine diagram:

1. A unique and understandable name should be assigned to each state transition that describes the
behaviour of the system.
2. Out of the many objects in a system, only the essential objects should be modelled.
3. Proper names should be given to the events and the transitions.

A state machine diagram is used:

1. For modelling the object states of a system.
2. For modelling a reactive system, as it consists of reactive objects.
3. For pinpointing the events responsible for state transitions.
4. For implementing forward and reverse engineering.

Example

In the Automated Trading House System, let us model Order as an object and trace its sequence. The
following figure shows the corresponding state–chart diagram

Activity Diagram

The activity diagram helps in envisioning the workflow from one activity to another. It puts emphasis on the
conditions of flow and the order in which activities occur.

The flow can be sequential, branched, or concurrent, and to deal with such kinds of flows the activity diagram
provides constructs such as fork and join.

It is also termed an object-oriented flowchart. It encompasses activities composed of a set of actions or
operations that are applied to model the behaviour of the system.

Notations

Activity diagram symbols can be drawn using the following notations:

 Initial state: The starting stage before an activity takes place is depicted as the initial state.
 Final state: The state which the system reaches when a specific process ends is known as the final
state.
 State or activity box: Represents an activity or state of the system; it is drawn as a rounded rectangle.
 Decision box: A diamond-shaped box which represents a decision with alternate paths. It represents
the flow of control.

Example

 STRUCTURED ANALYSIS
UML structural diagrams are categorized as follows: class diagram, object diagram, component diagram, and
deployment diagram.

 Class Diagram
A class diagram defines the types of objects in the system and the different types of relationships that
exist among them. It gives a high-level view of an application. This modeling method can be used with almost
all object-oriented methods. A class can refer to another class. A class can have its own objects or may inherit
from other classes.

Benefits of Class Diagrams


 It can represent the object model for complex systems.
 It reduces the maintenance time by providing an overview of how an application is structured before
coding.
 It provides a general schematic of an application for better understanding.
 It represents a detailed chart by highlighting the desired code, which is to be programmed.
 It is helpful for the stakeholders and the developers.

Essential elements of UML class diagram are:

 Class Name
 Attributes
 Operations

Example
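Since the referenced figure is not reproduced here, the sketch below shows how the three essential elements map onto a class in Java (the Account class and its members are hypothetical, chosen only for illustration):

// Class name: Account. Attributes: owner, balance. Operations: deposit, withdraw.
public class Account {
    // Attributes: the data the class encapsulates
    private String owner;
    private double balance;

    public Account(String owner, double openingBalance) {
        this.owner = owner;
        this.balance = openingBalance;
    }

    // Operations: the behaviour the class offers to other classes
    public void deposit(double amount) {
        balance += amount;
    }

    public boolean withdraw(double amount) {
        if (amount > balance) {
            return false; // insufficient funds
        }
        balance -= amount;
        return true;
    }
}

In a class diagram this would be drawn as one rectangle with three compartments: the class name Account on top, the attributes in the middle, and the operations at the bottom.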
 Deployment Diagram
It is a type of diagram that specifies the physical hardware on which the software system will execute. It also
determines how the software is deployed on the underlying hardware. It maps the software pieces of a system
to the devices that are going to execute them.

The purposes of a deployment diagram are listed below:

1. To envision the hardware topology of the system.
2. To represent the hardware components on which the software components are installed.

A deployment diagram consists of the following notations:

1. A node
2. A component
3. An artifact
4. An interface
Deployment diagrams can be used for the following:

 To model the network and hardware topology of a system.
 To model distributed networks and systems.
 To implement forward and reverse engineering processes.
 To model the hardware details for a client/server system.
 To model embedded systems.

Component Diagram

A component is a replaceable and executable piece of a system whose implementation details are
hidden. A component provides a set of interfaces that it realizes or implements. Components also
require interfaces to carry out a function.

Purpose of the component diagram

 It envisions each component of a system.
 It constructs the executable by incorporating forward and reverse engineering.
 It depicts the relationships and organization of components.

A component diagram is used:

 To divide a single system into multiple components according to functionality.
 To represent the component organization of the system.

Component diagram Notations


 OBJECT ORIENTED ANALYSIS(OOA), OBJECT ORIENTED ANALYSIS PROCESS
Object–Oriented Analysis (OOA) is the procedure of identifying software engineering requirements
and developing software specifications in terms of a software system’s object model, which comprises
interacting objects.
A typical OOA phase consists of five stages:

 Find and define the objects.
 Organize the objects.
 Describe how the objects interact with one another.
 Define the external behavior of the objects.
 Define the internal behavior of the objects.

 DOMAIN ANALYSIS
Domain analysis is the process of identifying, collecting, organizing, analyzing and representing a
domain model from the study of existing systems, underlying theory, emerging technology and development
histories within the domain of interest.
 OBJECT RELATIONSHIP MODEL
The CRC modeling approach establishes the first elements of class and object relationships. The first step in
establishing relationships is to understand the responsibilities for each class. The CRC model index card
contains a list of responsibilities. The next step is to define those collaborator classes that help in achieving
each responsibility. This establishes the “connection” between classes.

A relationship exists between any two classes that are connected. Therefore, collaborators are always related
in some way. The most common type of relationship is binary—a connection exists between two classes.
When considered within the context of an OO system, a binary relationship has a specific direction that is
defined based on which class plays the role of the client and which acts as a server.

Rumbaugh and his colleagues suggest that relationships can be derived by examining the stative verbs or verb
phrases in the statement of scope or use-cases for the system. Using a grammatical parse, the analyst isolates
verbs that indicate physical location or placement (next to, part of, contained in), communications (transmits
to, acquires from), ownership (incorporated by, is composed of), and satisfaction of a condition (manages,
coordinates, controls). These provide an indication of a relationship.

The Unified Modeling Language notation for the object-relationship model makes use of a symbology that
has been adapted from entity-relationship modeling techniques. In essence, objects are connected to other
objects using named relationships. The cardinality of the connection is specified and an overall network of
relationships is established.

The object relationship model (like the entity relationship model) can be derived in three steps:

1. Using the CRC index cards, a network of collaborator objects can be drawn. Figure represents the
class connections for SafeHome objects. First the objects are drawn, connected by unlabeled lines (not shown
in the figure) that indicate some relationship exists between the connected objects.
2. Reviewing the CRC model index card, responsibilities and collaborators are evaluated and each
unlabeled connected line is named. To avoid ambiguity, an arrow head indicates the “direction” of the
relationship.

3. Once the named relationships have been established, each end is evaluated to determine cardinality.
Four options exist: 0 to 1, 1 to 1, 0 to many, or 1 to many. For example, the SafeHome system contains a
single control panel (the 1:1 cardinality notation indicates this). At least one sensor must be present for polling
by the control panel. However, there may be many sensors present (the 1:m notation indicates this). One
sensor can recognize from 0 to many sensor events (e.g., smoke is detected or a break-in has occurred).

The steps just noted continue until a complete object-relationship model has been produced.

 OBJECT BEHAVIOR MODEL.


The CRC model and the object-relationship model represent static elements of the OO analysis model. It is
now time to make a transition to the dynamic behavior of the OO system or product. To accomplish this, we
must represent the behavior of the system as a function of specific events and time.

The object-behavior model indicates how an OO system will respond to external events or stimuli. To create
the model, the analyst must perform the following steps:

1. Evaluate all use-cases to fully understand the sequence of interaction within the system.
2. Identify events that drive the interaction sequence and understand how these events relate to specific
objects.
3. Create an event trace for each use-case.
4. Build a state transition diagram for the system.
5. Review the object-behavior model to verify accuracy and consistency.
Each of these steps is discussed in the sections that follow.

Event Identification with Use-Cases

The use-case represents a sequence of activities that involves actors and the system. In general, an event
occurs whenever an OO system and an actor (recall that an actor can be a person, a device, or even an external
system) exchange information. It is important to note that an event is Boolean. That is, an event is not the
information that has been exchanged but rather the fact that information has been exchanged.

A use-case is examined for points of information exchange. To illustrate, reconsider the use-case for
SafeHome :

1. The homeowner observes the SafeHome control panel to determine if the system is ready for input. If the
system is not ready, the homeowner must physically close windows/doors so that the ready indicator is
present. [A not-ready indicator implies that a sensor is open, i.e., that a door or window is open.]

2. The homeowner uses the keypad to key in a four-digit password. The password is compared with the valid
password stored in the system. If the password is incorrect, the control panel will beep once and reset itself for
additional input. If the password is correct, the control panel awaits further action.

3. The homeowner selects and keys in stay or away to activate the system. Stay activates only perimeter
sensors (inside motion detecting sensors are deactivated). Away activates all sensors.

4. When activation occurs, a red alarm light can be observed by the homeowner.

The underlined portions of the use-case scenario indicate events. An actor should be identified for each event;
the information that is exchanged should be noted; and any conditions or constraints should be listed.

As an example of a typical event, consider the underlined use-case phrase “homeowner uses the keypad to key
in a four-digit password.” In the context of the OO analysis model, the object, homeowner, transmits an event
to the object control panel. The event might be called password entered. The information transferred is the
four digits that constitute the password, but this is not an essential part of the behavioral model. It is important
to note that some events have an explicit impact on the flow of control of the use-case, while others have no
direct impact on the flow of control. For example, the event password entered does not explicitly change the
flow of control of the use-case, but the results of the event compare password (derived from the interaction
“password is compared with the valid password stored in the system”) will have an explicit impact on the
information and control flow of the Safe- Home software.

Once all events have been identified, they are allocated to the objects involved. Objects can be responsible for
generating events (e.g., homeowner generates the password entered event) or recognizing events that have
occurred elsewhere (e.g., control panel recognizes the binary result of the compare password event).

State Representations

In the context of OO systems, two different characterizations of states must be considered: (1) the state of
each object as the system performs its function and (2) the state of the system as observed from the outside as
the system performs its function.

The state of an object takes on both passive and active characteristics. A passive state is simply the current
status of all of an object’s attributes. For example, the passive state of the aggregate object player would
include the current position and orientation attributes of player as well as other features of player that are
relevant to the game (e.g., an attribute that indicates magic wishes remaining). The active state of an object
indicates the current status of the object as it undergoes a continuing transformation or processing. The object
player might have the following active states: moving, at rest, injured, being cured; trapped, lost, and so forth.
An event (sometimes called a trigger) must occur to force an object to make a transition from one active state
to another. One component of an object-behavior model is a simple representation of the active states for each
object and the events (triggers) that cause changes between these active states. The figure illustrates a simple
representation of active states for the control panel object in the SafeHome system.
Each arrow shown in the figure represents a transition from one active state of an object to another. The labels
shown for each arrow represent the event that triggers the transition. Although the active state model provides
useful insight into the “life history” of an object, it is possible to specify additional information to provide
more depth in understanding the behavior of an object. In addition to specifying the event that causes the
transition to occur, the analyst can specify a guard and an action. A guard is a Boolean condition that must be
satisfied in order for the transition to occur. For example, the guard for the transition from the “at rest” state to
the “comparing state” in figure can be determined by examining the use-case:

if (password input = 4 digits) then make transition to comparing state;

In general, the guard for a transition usually depends upon the value of one or more attributes of an object. In
other words, the guard depends on the passive state of the object.

An action occurs concurrently with the state transition or as a consequence of it and generally involves one or
more operations (responsibilities) of the object. For example, the action connected to the password entered
event is an operation that accesses a password object and performs a digit-by-digit comparison to validate the
entered password.
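A minimal Java sketch of these ideas follows; the state names mirror the SafeHome discussion above, but the class and method names are invented for illustration:

public class ControlPanel {
    enum State { AT_REST, COMPARING } // two active states of the control panel

    private State state = State.AT_REST;
    private final String validPassword = "1234"; // illustrative stored password

    // Event: password entered. The guard (input is 4 digits) must be
    // satisfied before the transition from AT_REST to COMPARING occurs.
    public void passwordEntered(String input) {
        if (state == State.AT_REST && input.length() == 4) { // guard
            state = State.COMPARING;                         // transition
            comparePassword(input);                          // action
        }
    }

    // The action invokes an operation (responsibility) of the object.
    private void comparePassword(String input) {
        boolean valid = validPassword.equals(input);
        System.out.println(valid ? "password valid" : "beep");
        state = State.AT_REST; // return to the at-rest state
    }
}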

The second type of behavioral representation for OOA considers a state representation for the overall product
or system. This representation encompasses a simple event trace model that indicates how events cause
transitions from object to object and a state transition diagram that depicts the processing behavior of each
object.

Once events have been identified for a use-case, the analyst creates a representation of how events cause flow
from one object to another. Called an event trace, this representation is a shorthand version of the use-case. It
represents key objects and the events that cause behavior to flow from object to object.

Figure illustrates a partial event trace for the SafeHome system. Each of the arrows represents an event
(derived from a use-case) and indicates how the event channels behavior between SafeHome objects. The first
event, system ready, is derived from the external environment and channels behavior to the homeowner
object. The homeowner enters a password. The events initiates beep and “beep sounded” indicate how
behavior is channeled if the password is invalid. A valid password results in flow back to the homeowner. The
remaining events and traces follow the behavior as the system is activated or deactivated.
Once a complete event trace has been developed, all of the events that cause transitions between system
objects can be collated into a set of input events and output events (from an object). This can be represented
using an event flow diagram. All events that flow into and out of an object are noted, as shown in the figure
below. A state transition diagram can then be developed to represent the behaviour associated with
responsibilities for each class.

UML uses a combination of state diagrams, sequence diagrams, collaboration diagrams, and activity diagrams
to represent the dynamic behavior of the objects and classes that have been identified as part of the analysis
model.

 DESIGN MODELLING WITH UML.


Unified Modeling Language (UML)
The Unified Modeling Language has become the standard modeling language for object-oriented modeling. It
has many diagrams; however, the diagrams that are most commonly used are:
 Use case diagram: It shows the interaction between a system and its environment (users or other systems)
within a particular situation.
 Class diagram: It shows the different objects, their relationships, their behaviors, and attributes.
 Sequence diagram: It shows the interactions between the different objects in the system, and between
actors and the objects in a system.
 State machine diagram: It shows how the system responds to external and internal events.
 Activity diagram: It shows the flow of data between the processes in the system.
SCHOOL OF COMPUTING
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
VTU R15 B.TECH – CSE

UNIT -IV Design


Design Concepts & Principles - Design Process - Design Concepts - Modular Design - Design
Effective Modularity - Introduction to Software Architecture - Data Design - Transform Mapping -
Transaction Mapping - Object Oriented Design - System design process- Object design process -
Design Patterns.

Design Concepts & Principles

Design Process

 The main aim of design engineering is to generate a model which shows firmness, delight
and commodity.
 Software design is an iterative process through which requirements are translated into the
blueprint for building the software.

Software quality guidelines

 A design should be generated using recognizable architectural styles, should be composed of
components that exhibit good design characteristics, and should be implementable in an evolutionary
manner to facilitate testing.
 A design of the software must be modular, i.e., the software must be logically partitioned into
elements.
 In a design, the representations of data, architecture, interface, and components should be distinct.
 A design must contain appropriate data structures and recognizable data patterns.
 Design components must exhibit independent functional characteristics.
 A design should create interfaces that reduce the complexity of connections between the
components.
 A design must be derived using a repeatable method.
 The notations used in the design should communicate its meaning effectively.

Quality attributes

The design quality attributes, known as 'FURPS', are as follows:

Functionality:
It evaluates the feature set and capabilities of the program.

Usability:
It is assessed by considering factors such as human factors, overall aesthetics, consistency, and
documentation.

Reliability:
It is evaluated by measuring parameters like frequency and severity of failure, output result accuracy,
the mean-time-to-failure (MTTF), recovery from failure, and program predictability.

Performance:
It is measured by considering processing speed, response time, resource consumption, throughput
and efficiency.

Supportability:
 It combines extensibility, adaptability, and serviceability. These three terms define
maintainability.
 Testability, compatibility, and configurability determine how easily a system can be
installed and how easily problems can be found.
 Supportability also consists of further attributes such as compatibility, extensibility, fault
tolerance, modularity, reusability, robustness, security, portability, and scalability.

Design concepts

The set of fundamental software design concepts are as follows:

1. Abstraction
 At the highest level of abstraction, a solution is stated in broad terms using the language of the
problem environment.
 The lower levels of abstraction provide a more detailed description of the solution.
 A procedural abstraction refers to a sequence of instructions that has a specific and limited
function.
 A data abstraction is a collection of data that describes a data object.
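The distinction can be made concrete with a short Java sketch (the Door example is illustrative only):

// Data abstraction: a collection of data that describes a data object.
class Door {
    String type;
    double weight;
    boolean isOpen;
}

public class AbstractionDemo {
    // Procedural abstraction: the name "open" implies a specific, limited
    // sequence of instructions; callers need not know the details.
    static void open(Door door) {
        // lower-level detail: disengage latch, swing on hinges, etc.
        door.isOpen = true;
    }

    public static void main(String[] args) {
        Door front = new Door();
        open(front); // the solution stated at the highest level of abstraction
    }
}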
2. Architecture
 The complete structure of the software is known as software architecture.
 Structure provides conceptual integrity for a system in a number of ways.
 The architecture is the structure of program modules where they interact with each other in a
specialized way.
 The components use the structure of data.
 The aim of the software design is to obtain an architectural framework of a system.
 The more detailed design activities are conducted from the framework.
3.Patterns
A design pattern describes a design structure and that structure solves a particular design problem in
a specified content.

4. Modularity
 Software is divided into separately named and addressable components, sometimes called
modules, which are integrated to satisfy the problem requirements.
 Modularity is the single attribute of software that permits a program to be managed easily.
5. Information hiding
Modules should be specified and designed so that information (such as algorithms and data) contained
within a module is inaccessible to other modules that do not require that information.
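A minimal sketch of information hiding in Java (the SymbolTable module is invented for illustration): other modules can call add and contains, but can neither see the array nor depend on the linear-search algorithm, so both can be changed without affecting the callers.

public class SymbolTable {
    private String[] names = new String[100]; // hidden data structure
    private int count = 0;

    public void add(String name) {
        names[count++] = name;
    }

    public boolean contains(String name) {
        for (int i = 0; i < count; i++) { // hidden algorithm (linear scan)
            if (names[i].equals(name)) return true;
        }
        return false;
    }
}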

6. Functional independence
 Functional independence is the concept of separation, related to the concepts of
modularity, abstraction, and information hiding.
 Functional independence is assessed using two criteria: cohesion and coupling.
Cohesion
 Cohesion is an extension of the information hiding concept.
 A cohesive module performs a single task and it requires a small interaction with the other
components in other parts of the program.
Coupling
Coupling is an indication of interconnection between modules in a structure of software.

7. Refinement
 Refinement is a top-down design approach.
 It is a process of elaboration.
 A program is developed by successively refining levels of procedural detail.
 A hierarchy is established by decomposing a statement of function in a stepwise manner until
programming language statements are reached.
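A small Java sketch of stepwise refinement (the word-count function is a hypothetical example): the top-level method states the function in one line, and the private method elaborates it toward programming language statements.

public class WordCounter {
    // Highest level of abstraction: a single statement of function.
    public static int countWords(String text) {
        return splitIntoTokens(text).length;
    }

    // Refinement: the statement is decomposed stepwise into detail.
    private static String[] splitIntoTokens(String text) {
        String trimmed = text.trim();
        if (trimmed.isEmpty()) return new String[0];
        return trimmed.split("\\s+"); // split on runs of whitespace
    }
}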
8. Refactoring
 It is a reorganization technique which simplifies the design of components without changing
their function or behaviour.
 Refactoring is the process of changing a software system in such a way that it does not alter
the external behaviour of the code yet improves its internal structure.
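The before/after sketch below (an invented OrderService, using the common extract-method refactoring) shows how internal structure improves while external behaviour stays the same: callers still invoke place exactly as before.

public class OrderService {
    // Before refactoring, place() mixed validation and persistence in
    // one long method. After extracting methods, each piece is simpler.
    public void place(String item, int qty) {
        validate(item, qty); // extracted method
        save(item, qty);     // extracted method
    }

    private void validate(String item, int qty) {
        if (item == null || qty <= 0) throw new IllegalArgumentException("bad order");
    }

    private void save(String item, int qty) {
        System.out.println("saved " + qty + " x " + item);
    }
}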
9. Design classes
 The design model of the software is defined as a set of design classes.
 Each class describes the elements of the problem domain, focusing on features of the problem
that are user-visible.
 OO design concept in Software Engineering

 Software design model elements

Software design
Software design is a process to transform user requirements into some suitable form, which helps
the programmer in software coding and implementation.
For assessing user requirements, an SRS (Software Requirement Specification) document is created,
whereas for coding and implementation there is a need for more specific and detailed requirements
in software terms. The output of this process can be used directly in implementation in
programming languages.
Software design is the first step in the SDLC (Software Development Life Cycle) that moves the
concentration from the problem domain to the solution domain. It tries to specify how to fulfill the
requirements mentioned in the SRS.
Software Design Levels
Software design yields three levels of results:

 Architectural Design - The architectural design is the highest abstract version of the system.
It identifies the software as a system with many components interacting with each other. At
this level, the designers get the idea of proposed solution domain.
 High-level Design- The high-level design breaks the ‘single entity-multiple component’
concept of architectural design into less-abstracted view of sub-systems and modules and
depicts their interaction with each other. High-level design focuses on how the system along
with all of its components can be implemented in forms of modules. It recognizes modular
structure of each sub-system and their relation and interaction among each other.
 Detailed Design- Detailed design deals with the implementation part of what is seen as a
system and its sub-systems in the previous two designs. It is more detailed towards modules
and their implementations. It defines logical structure of each module and their interfaces to
communicate with other modules.

Modularization
Modularization is a technique to divide a software system into multiple discrete and independent
modules, which are expected to be capable of carrying out task(s) independently. These modules
may work as basic constructs for the entire software. Designers tend to design modules such that
they can be executed and/or compiled separately and independently.
Modular design naturally follows the ‘divide and conquer’ problem-solving strategy, and it brings
many other benefits as well.
Modularity: Trade-offs

Advantage of modularization:

 Smaller components are easier to maintain


 Program can be divided based on functional aspects
 Desired level of abstraction can be brought in the program
 Components with high cohesion can be re-used again
 Concurrent execution can be made possible
 Desired from security aspect

Concurrency
In the past, all software was meant to be executed sequentially. By sequential execution we mean
that the coded instructions are executed one after another, implying that only one portion of the
program is active at any given time. If a software system has multiple modules, then only one of
those modules is active at any time of execution.
In software design, concurrency is implemented by splitting the software into multiple independent
units of execution, such as modules, and executing them in parallel. In other words, concurrency
provides the capability for software to execute more than one part of the code in parallel.
It is necessary for programmers and designers to recognize those modules which can be executed
in parallel.
Example
The spell check feature in word processor is a module of software, which runs along side the word
processor itself.
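A minimal Java sketch of this idea (the editor/spell-check split is illustrative; real word processors are far more involved):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Editor {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // The spell-check module runs in parallel with the editing module.
        pool.submit(() -> System.out.println("spell check scanning document..."));

        System.out.println("user keeps typing..."); // main module stays active

        pool.shutdown(); // no new tasks; the spell-check task finishes normally
    }
}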
Coupling and Cohesion
When a software program is modularized, its tasks are divided into several modules based on some
characteristics. As we know, modules are sets of instructions put together in order to achieve some
task. Though each module is considered a single entity, modules may refer to each other in order to
work together. There are measures by which the quality of modules and of the interaction among
them can be judged. These measures are called coupling and cohesion.

Cohesion
Cohesion is a measure that defines the degree of intra-dependability within elements of a module.
The greater the cohesion, the better is the program design.
There are seven types of cohesion, namely –
 Co-incidental cohesion - It is unplanned and random cohesion, which might be the result of
breaking the program into smaller modules for the sake of modularization. Because it is
unplanned, it may cause confusion for programmers and is generally not accepted.
 Logical cohesion - When logically categorized elements are put together into a module, it is
called logical cohesion.
 Temporal Cohesion - When elements of module are organized such that they are processed
at a similar point in time, it is called temporal cohesion.
 Procedural cohesion - When elements of module are grouped together, which are executed
sequentially in order to perform a task, it is called procedural cohesion.
 Communicational cohesion - When elements of module are grouped together, which are
executed sequentially and work on same data (information), it is called communicational
cohesion.
 Sequential cohesion - When elements of module are grouped because the output of one
element serves as input to another and so on, it is called sequential cohesion.
 Functional cohesion - It is considered to be the highest degree of cohesion, and it is highly
expected. Elements of module in functional cohesion are grouped because they all contribute
to a single well-defined function. It can also be reused.
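The contrast between the highest and lowest degrees of cohesion can be sketched in Java (both classes are invented for illustration):

// Functional cohesion: every element contributes to one well-defined
// task, so the module is easy to understand, test, and reuse.
class SquareRoot {
    static double of(double x) {
        if (x < 0) throw new IllegalArgumentException("negative input");
        return Math.sqrt(x);
    }
}

// Coincidental cohesion: unrelated elements grouped together only for
// the sake of modularization; confusing and generally not accepted.
class MiscUtils {
    static void printBanner() { System.out.println("=== app ==="); }
    static int parsePort(String s) { return Integer.parseInt(s); }
    static void beep() { System.out.print((char) 7); }
}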

Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a program. It
tells at what level the modules interfere and interact with each other. The lower the coupling, the
better the program.
There are five levels of coupling, namely -

 Content coupling - When a module can directly access or modify or refer to the content of
another module, it is called content level coupling.
 Common coupling- When multiple modules have read and write access to some global data,
it is called common or global coupling.
 Control coupling- Two modules are called control-coupled if one of them decides the
function of the other module or changes its flow of execution.
 Stamp coupling- When multiple modules share common data structure and work on
different part of it, it is called stamp coupling.
 Data coupling- Data coupling is when two modules interact with each other by means of
passing data (as parameters). If a module passes a data structure as a parameter, the receiving
module should use all of its components.
Ideally, no coupling is considered to be the best.
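A short Java sketch contrasting a desirable and an undesirable level of coupling (the classes are invented for illustration):

// Data coupling (low, desirable): modules interact only by passing data.
class TaxCalculator {
    static double tax(double amount, double rate) { return amount * rate; }
}

// Common coupling (high, undesirable): modules share global data, so a
// change made by one module can silently affect the other.
class Globals { static double rate = 0.18; }

class Invoice {
    double total(double amount) {
        return amount + amount * Globals.rate; // hidden dependency on Globals
    }
}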

Design Verification
The output of the software design process is design documentation, pseudo code, detailed logic
diagrams, process diagrams, and a detailed description of all functional and non-functional
requirements.
The next phase, which is the implementation of the software, depends on all of the outputs mentioned
above. It then becomes necessary to verify the output before proceeding to the next phase: the earlier
a mistake is detected the better, since otherwise it might not be detected until testing of the product.
If the outputs of the design phase are in formal notation form, their associated verification tools
should be used; otherwise, a thorough design review can be used for verification and validation.
With a structured verification approach, reviewers can detect defects that might be caused by
overlooking some conditions. A good design review is important for good software design,
accuracy, and quality.
Introduction to Software Architecture
The architecture of a system describes its major components, their relationships (structures), and
how they interact with each other. Software architecture and design include several contributory
factors such as business strategy, quality attributes, human dynamics, design, and the IT environment.

We can segregate software architecture and design into two distinct phases: Software Architecture
and Software Design. In architecture, non-functional decisions are cast and separated from the
functional requirements. In design, the functional requirements are accomplished.

Software Architecture

Architecture serves as a blueprint for a system. It provides an abstraction to manage the system
complexity and establish a communication and coordination mechanism among components. It
defines a structured solution to meet all the technical and operational requirements, while
optimizing the common quality attributes like performance and security.
Further, it involves a set of significant decisions about the organization related to software
development and each of these decisions can have a considerable impact on quality, maintainability,
performance, and the overall success of the final product. These decisions comprise −
 Selection of structural elements and their interfaces by which the system is composed.
 Behavior as specified in collaborations among those elements.
 Composition of these structural and behavioral elements into large subsystem.
 Architectural decisions align with business objectives.
 Architectural styles guide the organization.

Software Design

Software design provides a design plan that describes the elements of a system, how they fit, and
work together to fulfill the requirement of the system. The objectives of having a design plan are as
follows −
 To negotiate system requirements, and to set expectations with customers, marketing, and
management personnel.
 Act as a blueprint during the development process.
 Guide the implementation tasks, including detailed design, coding, integration, and testing.
Domain analysis, requirements analysis, and risk analysis come before the architecture design phase,
whereas the detailed design, coding, integration, and testing phases come after it.

Goals of Architecture

The primary goal of the architecture is to identify requirements that affect the structure of the
application. A well-laid architecture reduces the business risks associated with building a technical
solution and builds a bridge between business and technical requirements.
Some of the other goals are as follows −
 Expose the structure of the system, but hide its implementation details.
 Realize all the use-cases and scenarios.
 Try to address the requirements of various stakeholders.
 Handle both functional and quality requirements.
 Reduce the cost of ownership and improve the organization’s market position.
 Improve quality and functionality offered by the system.
 Improve external confidence in either the organization or system.
Limitations
Software architecture is still an emerging discipline within software engineering. It has the
following limitations −
 Lack of tools and standardized ways to represent architecture.
 Lack of analysis methods to predict whether architecture will result in an implementation
that meets the requirements.
 Lack of awareness of the importance of architectural design to software development.
 Lack of understanding of the role of software architect and poor communication among
stakeholders.
 Lack of understanding of the design process, design experience and evaluation of design.
Architectural Styles
Each style describes a system category that encompasses: (1) a set of components (e.g., a database,
computational modules) that perform a function required by a system, (2) a set of connectors that
enable “communication, coordination and cooperation” among components, (3) constraints that
define how components can be integrated to form the system, and (4) semantic models that enable a
designer to understand the overall properties of a system by analyzing the known properties of its
constituent parts.
 Data-centered architectures
 Data flow architectures
 Call and return architectures
 Object-oriented architectures
 Layered architectures

Data-Centered Architecture
Data centered - data store (e.g. file or database) lies at the center of this architecture and is accessed
frequently by other components that modify data
Data flow - input data is transformed by a series of computational or manipulative components into output data

Call and return - program structure decomposes function into a control hierarchy in which a main program
invokes several subprograms

Object-oriented - components of system encapsulate data and operations, communication between components is by
message passing

Layered - several layers are defined, each accomplishing operations that progressively become closer to the machine
instruction set
Role of Software Architect

A Software Architect provides a solution that the technical team can create and design for the entire
application. A software architect should have expertise in the following areas −
Design Expertise
 Expert in software design, including diverse methods and approaches such as object-oriented
design, event-driven design, etc.
 Lead the development team and coordinate the development efforts for the integrity of the
design.
 Should be able to review design proposals and make trade-offs among them.
Domain Expertise
 Expert on the system being developed and plan for software evolution.

 Assist in the requirement investigation process, assuring completeness and consistency.


 Coordinate the definition of domain model for the system being developed.
Technology Expertise
 Expert on available technologies that helps in the implementation of the system.

 Coordinate the selection of programming language, framework, platforms, databases, etc.


Methodological Expertise
 Expert on software development methodologies that may be adopted during SDLC (Software
Development Life Cycle).
 Choose the appropriate approaches for development that helps the entire team.
Deliverable of the Architect
An architect is expected to deliver a clear, complete, consistent, and achievable set of functional goals
to the organization. Besides, the architect is also responsible for providing −
 A simplified concept of the system
 A design in the form of the system, with at least two layers of decomposition.
 A functional description of the system, with at least two layers of decomposition.
 A notion of the timing, operator attributes, and the implementation and operation plans
 A document or process which ensures functional decomposition is followed, and the form of
interfaces is controlled
Hidden Role of Software Architect
Besides facilitating the technical work among team members, the architect also has some subtle roles,
such as reinforcing the trust relationship among team members and protecting team members from
the external forces that could distract them and bring less value to the project.

Quality Attributes

Quality is a measure of excellence or the state of being free from deficiencies or defects. Quality
attributes are the system properties that are separate from the functionality of the system.
Implementing quality attributes makes it easier to differentiate a good system from a bad one.
Attributes are overall factors that affect runtime behavior, system design, and user experience.
They can be classified as −
 Static Quality Attributes − Reflect the structure of a system and organization, directly
related to architecture, design, and source code. They are invisible to end-user, but affect the
development and maintenance cost, e.g.: modularity, testability, maintainability, etc.
 Dynamic Quality Attributes − Reflect the behavior of the system during its execution.
They are directly related to system’s architecture, design, source code, configuration,
deployment parameters, environment, and platform. They are visible to the end-user and
exist at runtime, e.g. throughput, robustness, scalability, etc.

Quality Scenarios

Quality scenarios specify how to prevent a fault from becoming a failure. They can be divided into
six parts based on their attribute specifications −
 Source − An internal or external entity such as people, hardware, software, or physical
infrastructure that generate the stimulus.
 Stimulus − A condition that needs to be considered when it arrives on a system.
 Environment − The stimulus occurs within certain conditions.
 Artifact − A whole system or some part of it such as processors, communication channels,
persistent storage, processes etc.
 Response − An activity undertaken after the arrival of stimulus such as detect faults, recover
from fault, disable event source etc.
 Response measure − Should measure the occurred responses so that the requirements can be
tested.
Common Quality Attributes
The following table lists the common quality attributes a software architecture must have −
Design Qualities
 Conceptual Integrity − Defines the consistency and coherence of the overall design. This includes
the way components or modules are designed.
 Maintainability − Ability of the system to undergo changes with a degree of ease.
 Reusability − Defines the capability for components and subsystems to be suitable for use in other
applications.

Run-time Qualities
 Interoperability − Ability of a system, or of different systems, to operate successfully by
communicating and exchanging information with other external systems written and run by
external parties.
 Manageability − Defines how easy it is for system administrators to manage the application.
 Reliability − Ability of a system to remain operational over time.
 Scalability − Ability of a system either to handle a load increase without impacting the
performance of the system or to be readily enlarged.
 Security − Capability of a system to prevent malicious or accidental actions outside of the
designed usage.
 Performance − Indication of the responsiveness of a system to execute any action within a given
time interval.
 Availability − Defines the proportion of time that the system is functional and working. It can be
measured as a percentage of the total system downtime over a predefined period.

System Qualities
 Supportability − Ability of the system to provide information helpful for identifying and resolving
issues when it fails to work correctly.
 Testability − Measure of how easy it is to create test criteria for the system and its components.

User Qualities
 Usability − Defines how well the application meets the requirements of the user and consumer by
being intuitive.

Architecture Qualities
 Correctness − Accountability for satisfying all the requirements of the system.

Non-runtime Qualities
 Portability − Ability of the system to run under different computing environments.
 Integrality − Ability to make separately developed components of the system work correctly
together.
 Modifiability − Ease with which each software system can accommodate changes to its software.

Business Quality Attributes
 Cost and schedule − Cost of the system with respect to time to market, expected project lifetime,
and utilization of legacy systems.
 Marketability − Use of the system with respect to market competition.
Data Design
 Data design at the application level
 Data design at the business level
 Data modeling, data structure, database, and the data warehouse
A data warehouse exhibits the following characteristics:
 Subject orientation
 Integration
 Time variance
 Non-volatility

Data Design Principles


 The systematic analysis principles applied to function and behavior should also be applied to data.
 All data structures and the operations to be performed on each should be identified.
 A data dictionary should be established and used to define both data and program design.
 Low-level data design decisions should be deferred until late in the design process.
 The representation of a data structure should be known only to those modules that must make
direct use of the data contained within the structure.
 A library of useful data structures and the operations that may be applied to them should be
developed.
 A software design and its implementation language should support the specification and
realization of abstract data types.
Architectural Design
 The software must be placed into context: the design should define the external entities (other
systems, devices, people) that the software interacts with and the nature of the interaction.
 A set of architectural archetypes should be identified. An archetype is an abstraction (similar to a
class) that represents one element of system behavior.
 The designer specifies the structure of the system by defining and refining software components
that implement each archetype.

Transform Flow

The objective of structured design is to convert the outcome of structured analysis (e.g. DFDs) into a
structure chart. Structured design provides two methods to guide transformation of a DFD into a
structure chart. These two methods are the transform analysis and transaction analysis.

Structure Design

Structured design, considered as a tool that converts data flow diagrams (DFDs) into software
architecture, can be described as a data-flow-oriented design method.

There are two types of information flow in DFDs:

Transform flow: The mapping used in this case is the transform mapping.

Transaction flow: The mapping used in this case is the transaction mapping.

The main notations used to develop structure charts are as follows:

 Rectangular boxes: Represent modules.
 Module invocation arrows: Represent the flow of control from one module to another.
 Data flow arrows: The data name is attached to the arrow; the direction of the arrow signifies the
direction of the data flow, from one module to the other.
 Library modules: Represented by a double-edged rectangle.
 Selection: Represented by a diamond.
 Repetition: Represented by a loop.

Transform Mapping (Analysis)

A structure chart is produced by the conversion of a DFD diagram; this conversion is described as
‘transform mapping (analysis)’. It is applied through the ‘transforming’ of input data flow into output
data flow.

Transform analysis establishes the modules of the system, also known as the primary functional
components, as well as the inputs and outputs of the identified modules in the DFD. Transform
analysis is made up of a number of steps that need to be carried out.

Transform analysis is a set of design steps that allows a DFD with transform flow characteristics to
be mapped into specific architectural style. These steps are as follows:

Step 1: Review the fundamental system model.

Step 2: Review and refine the DFD for the software.

Step 3: Assess the DFD in order to decide on the usage of transform or transaction flow.

Step 4: Identify incoming and outgoing boundaries in order to establish the transform center.

Step 5: Perform "first-level factoring".

Step 6: Perform "second-level factoring".

Step 7: Refine the first-iteration architecture using design heuristics for improved software quality.

Transaction Mapping (Analysis)

Similar to transform mapping, transaction analysis makes use of DFD diagrams to establish the
various transactions involved, producing a structure chart as a result.

Transaction mapping Steps:

 Review the fundamental system model.
 Review and refine the DFD for the software.
 Assess the DFD in order to decide on the usage of transform or transaction flow.
 Identify the transaction center and the flow characteristics along each action path:
 Find the transaction center.
 Identify the incoming path and isolate the action paths.
 Evaluate each action path for transform vs. transaction characteristics.
 Map the DFD to a program structure amenable to transaction processing.
 Carry out the ‘factoring’ process.

Software Engineering-Transform Mapping


Transform mapping is a set of design steps that allows a DFD with transform flow characteristics to
be mapped into a specific architectural style. In this section transform mapping is described by
applying design steps to an example system—a portion of the SafeHome security software.

An Example

The SafeHome security system is representative of many computer-based products and systems in
use today. The product monitors the real world and reacts to changes that it encounters. It also
interacts with a user through a series of typed inputs and alphanumeric displays. The level 0 data
flow diagram for SafeHome is shown in the figure.

During requirements analysis, more detailed flow models would be created for SafeHome. In
addition, control and process specifications, a data dictionary, and various behavioral models would
also be created.

Design Steps
The preceding example will be used to illustrate each step in transform mapping. The steps begin
with a re-evaluation of work done during requirements analysis and then move to the design of the
software architecture.

Step 1. Review the fundamental system model. The fundamental system model encompasses the
level 0 DFD and supporting information. In actuality, the design step begins with an evaluation of
both the System Specification and the Software Requirements Specification. Both documents
describe information flow and structure at the software interface. Figures 1 and 2 depict the level 0
and level 1 data flow for the SafeHome software.

Step 2. Review and refine data flow diagrams for the software. Information obtained from
analysis models contained in the Software Requirements Specification is refined to produce greater
detail. For example, the level 2 DFD for monitor sensors is examined, and a level 3 data flow
diagram is derived . At level 3, each transform in the data flow diagram exhibits relatively high
cohesion. That is, the process implied by a transform performs a single, distinct function that can be
implemented as a module in the SafeHome software. Therefore, the DFD in the figure contains
sufficient detail for a "first cut" at the design of the architecture for the monitor sensors subsystem,
and we proceed without further refinement.
Step 3. Determine whether the DFD has transform or transaction flow characteristics. In
general, information flow within a system can always be represented as transform. However, when
an obvious transaction characteristic is encountered, a different design mapping is recommended. In
this step, the designer selects global (software-wide) flow characteristics based on the prevailing
nature of the DFD. In addition, local regions of transform or transaction flow are isolated. These
subflows can be used to refine program architecture derived from a global characteristic described
previously. For now, we focus our attention only on the monitor sensors subsystem data flow
depicted in figure.

Evaluating the DFD , we see data entering the software along one incoming path and exiting along
three outgoing paths. No distinct transaction center is implied (although the transform establishes
alarm conditions that could be perceived as such). Therefore, an overall transform characteristic will
be assumed for information flow.

Step 4. Isolate the transform center by specifying incoming and outgoing flow boundaries. In
the preceding section incoming flow was described as a path in which information is converted from
external to internal form; outgoing flow converts from internal to external form. Incoming and
outgoing flow boundaries are open to interpretation. That is, different designers may select slightly
different points in the flow as boundary locations. In fact, alternative design solutions can be derived
by varying the placement of flow boundaries. Although care should be taken when boundaries are
selected, a variance of one bubble along a flow path will generally have little impact on the final
program structure.

Flow boundaries for the example are illustrated as shaded curves running vertically through the flow
in the above figure. The transforms (bubbles) that constitute the transform center lie within the two
shaded boundaries that run from top to bottom in the figure. An argument can be made to readjust a
boundary (e.g., an incoming flow boundary separating read sensors and acquire response info could
be proposed). The emphasis in this design step should be on selecting reasonable boundaries, rather
than lengthy iteration on placement of divisions.
Step 5. Perform "first-level factoring." Program structure represents a top-down distribution
of control. Factoring results in a program structure in which top-level modules perform decision
making and low-level modules perform most input, computation, and output work. Middle-level
modules perform some control and do moderate amounts of work.

When transform flow is encountered, a DFD is mapped to a specific structure (a call and return
architecture) that provides control for incoming, transform, and outgoing information processing.
This first-level factoring for the monitor sensors subsystem is illustrated in figure below. A main
controller (called monitor sensors executive) resides at the top of the program structure and
coordinates the following subordinate control functions:

• An incoming information processing controller, called sensor input controller, coordinates receipt
of all incoming data.
• A transform flow controller, called alarm conditions controller, supervises all operations on data in
internalized form (e.g., a module that invokes various data transformation procedures).
• An outgoing information processing controller, called alarm output controller,
coordinates production of output information.

Although a three-pronged structure is implied by the figure, complex flows in large systems may dictate
two or more control modules for each of the generic control functions described previously. The
number of modules at the first level should be limited to the minimum that can accomplish control
functions and still maintain good coupling and cohesion characteristics.
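The resulting call and return structure can be sketched in Java (module names follow the factoring described above; the data values and method signatures are invented for illustration):

// The executive performs decision making; the three subordinate
// controllers handle incoming, transform, and outgoing processing.
public class MonitorSensorsExecutive {
    public static void main(String[] args) {
        String raw = sensorInputController();          // incoming flow
        String alarm = alarmConditionsController(raw); // transform center
        alarmOutputController(alarm);                  // outgoing flow
    }

    static String sensorInputController() {
        return "sensor-id:7 status:open"; // receipt of incoming data
    }

    static String alarmConditionsController(String data) {
        return data.contains("open") ? "ALARM" : "OK"; // operate on internal form
    }

    static void alarmOutputController(String alarm) {
        System.out.println("display: " + alarm); // production of output
    }
}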

Step 6. Perform "second-level factoring." Second-level factoring is accomplished by mapping
individual transforms (bubbles) of a DFD into appropriate modules within the architecture.
Beginning at the transform center boundary and moving outward along incoming and then outgoing
paths, transforms are mapped into subordinate levels of the software structure. The general approach
to second-level factoring for the SafeHome data flow is illustrated in figure.

Although the figure illustrates a one-to-one mapping between DFD transforms and software
modules, different mappings frequently occur. Two or even three bubbles can be combined and
represented as one module (recalling potential problems with cohesion) or a single bubble may be
expanded to two or more modules. Practical considerations and measures of design quality dictate
the outcome of second level factoring. Review and refinement may lead to changes in this structure,
but it can serve as a "first-iteration" design.

Second-level factoring for incoming flow follows in the same manner. Factoring is again
accomplished by moving outward from the transform center boundary on the incoming flow side.
The transform center of monitor sensors subsystem software is mapped somewhat differently. Each
of the data conversion or calculation transforms of the transform portion of the DFD is mapped into a
module subordinate to the transform controller. A completed first-iteration architecture is shown in
figure.
The modules mapped in the preceding manner and shown in the figure represent an initial design of
software architecture. Although modules are named in a manner that implies function, a brief
processing narrative (adapted from the PSPEC created during analysis modeling) should be written
for each. The narrative describes
• Information that passes into and out of the module (an interface description).
• Information that is retained by a module, such as data stored in a local data structure.
• A procedural narrative that indicates major decision points and tasks.
• A brief discussion of restrictions and special features (e.g., file I/O, hardware-dependent
characteristics, special timing requirements).
The narrative serves as a first-generation Design Specification. However, further refinement and
additions occur regularly during this period of design.

Step 7. Refine the first-iteration architecture using design heuristics for improved software
quality. A first-iteration architecture can always be refined by applying concepts of module
independence. Modules are exploded or imploded to produce sensible factoring, good cohesion,
minimal coupling, and, most important, a structure that can be implemented without difficulty, tested
without confusion, and maintained without grief.

Refinements are dictated by the analysis and assessment methods described briefly earlier, as well as
by practical considerations and common sense. There are times, for example, when the controller for
incoming data flow is totally unnecessary, when some input processing is required in a module that
is subordinate to the transform controller, when high coupling due to global data cannot be avoided,
or when optimal structural characteristics cannot be achieved. Software requirements coupled with
human judgment are the final arbiter. Many modifications can be made to the first-iteration
architecture developed for the SafeHome monitor sensors subsystem. Among many possibilities:

1. The incoming controller can be removed because it is unnecessary when a single incoming flow
path is to be managed.

2. The substructure generated from the transform flow can be imploded into the module establish
alarm conditions (which will now include the processing implied by select phone number). The
transform controller will not be needed, and the small decrease in cohesion is tolerable.
3. The modules format display and generate display can be imploded (we assume that display
formatting is quite simple) into a new module called produce display.

The refined software structure for the monitor sensors subsystem is shown in figure.

The objective of the preceding seven steps is to develop an architectural representation of the software.
That is, once the structure is defined, we can evaluate and refine the software architecture by viewing it
as a whole. Modifications made at this time require little additional work, yet can have a profound
impact on software quality.
Software Engineering-Transaction Mapping
In many software applications, a single data item triggers one or a number of information flows that
effect a function implied by the triggering data item. The data item is called a transaction, and its
flow has correspondingly distinctive characteristics. In this section we consider the design steps used
to treat transaction flow.

An Example

Transaction mapping will be illustrated by considering the user interaction subsystem of the
SafeHome software.
As shown in the figure, user commands flow into the system and result in additional information
flow along one of three action paths. A single data item, command type, causes the data flow to fan
outward from a hub. Therefore, the overall data flow characteristic is transaction oriented.

It should be noted that information flow along two of the three action paths accommodates additional
incoming flow (e.g., system parameters and data are input on the "configure" action path). Each
action path flows into a single transform, display messages and status.

Design Steps

The design steps for transaction mapping are similar and in some cases identical to steps for
transform mapping . A major difference lies in the mapping of DFD to software structure.
Step 1. Review the fundamental system model.
Step 2. Review and refine data flow diagrams for the software.
Step 3. Determine whether the DFD has transform or transaction flow characteristics. Steps 1,
2, and 3 are identical to the corresponding steps in transform mapping. The DFD shown in the figure
above has a classic transaction flow characteristic. However, flow along two of the action paths
emanating from the invoke command processing bubble appears to have transform flow
characteristics. Therefore, flow boundaries must be established for both flow types.

Step 4. Identify the transaction center and the flow characteristics along each of the action
paths. The location of the transaction center can be immediately discerned from the DFD. The
transaction center lies at the origin of a number of action paths that flow radially from it. For the
flow shown in the figure, the invoke command processing bubble is the transaction center.

The incoming path (i.e., the flow path along which a transaction is received) and all action paths
must also be isolated. Boundaries that define a reception path and action paths are also shown in the
figure. Each action path must be evaluated for its individual flow characteristic. For example, the
"password" path has transform characteristics. Incoming, transform, and outgoing flow are indicated
with boundaries.
Step 5. Map the DFD to a program structure amenable to transaction processing. Transaction
flow is mapped into an architecture that contains an incoming branch and a dispatch branch. The
structure of the incoming branch is developed in much the same way as in transform mapping. Starting
at the transaction center, bubbles along the incoming path are mapped into modules. The structure of
the dispatch branch contains a dispatcher module that controls all subordinate action modules. Each
action flow path of the DFD is mapped to a structure that corresponds to its specific flow
characteristics. This process is illustrated schematically in the figure below.

Considering the user interaction subsystem data flow, first-level factoring for Step 5 is shown in
the figure below.

The bubbles read user command and activate/deactivate system map directly into the architecture
without the need for intermediate control modules. The transaction center, invoke command
processing, maps directly into a dispatcher module of the same name. Controllers for system
configuration and password processing are created as illustrated in figure.
Step 6. Factor and refine the transaction structure and the structure of each action path. Each
action path of the data flow diagram has its own information flow characteristics. We have already
noted that transform or transaction flow may be encountered. The action path-related "substructure"
is developed using the design steps discussed in this and the preceding section.

As an example, consider the password processing information flow shown (inside the shaded area) in
the figure. The flow exhibits classic transform characteristics. A password is input (incoming flow) and
transmitted to a transform center where it is compared against stored passwords. An alarm and
warning message (outgoing flow) are produced (if a match is not obtained). The "configure" path is
mapped similarly using transform mapping. The resultant software architecture is shown in the
figure below.

Step 7. Refine the first-iteration architecture using design heuristics for improved software
quality. This step for transaction mapping is identical to the corresponding step for transform
mapping. In both design approaches, criteria such as module independence, practicality (efficacy of
implementation and test), and maintainability must be carefully considered as structural
modifications are proposed.

Design patterns
Design patterns represent the best practices used by experienced object-oriented software developers.
Design patterns are solutions to general problems that software developers faced during software
development. These solutions were obtained by trial and error by numerous software developers over
quite a substantial period of time.
In software engineering, a design pattern is a general repeatable solution to a commonly occurring
problem in software design.
- A design pattern is a description or template for how to solve a problem
- Not a finished design
- Patterns capture design expertise and allow that expertise to be transferred and reused
- Patterns provide common design vocabulary, improve communication, ease implementation
& documentation

What is Gang of Four (GOF)?

In 1994, four authors (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides) published a
book titled Design Patterns: Elements of Reusable Object-Oriented Software, which initiated the
concept of design patterns in software development.
These authors are collectively known as the Gang of Four (GOF). According to these authors, design
patterns are primarily based on the following principles of object-oriented design:
• Program to an interface, not an implementation
• Favor object composition over inheritance
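
As a minimal sketch of these two principles (all class names here, Shape, Circle, and Drawing, are hypothetical, chosen only for illustration): callers depend on the Shape interface rather than on a concrete class, and Drawing composes a Shape instead of inheriting from one.

// Program to an interface, not an implementation: callers depend on
// the Shape interface, never on a concrete class such as Circle.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

// Favor object composition over inheritance: Drawing HAS-A Shape
// (passed in at construction) instead of extending a concrete shape.
class Drawing {
    private final Shape shape;   // composed, not inherited
    Drawing(Shape shape) { this.shape = shape; }
    double coveredArea() { return shape.area(); }
}

class PrinciplesDemo {
    public static void main(String[] args) {
        Drawing d = new Drawing(new Circle(2.0));
        System.out.println(d.coveredArea());   // works for any Shape
    }
}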

Usage of Design Pattern

Design Patterns have two main usages in software development.


Common platform for developers
Design patterns provide a standard terminology and are specific to a particular scenario. For example,
the singleton design pattern signifies the use of a single object, so all developers familiar with the
singleton pattern will make use of a single object, and they can tell each other that the program
follows the singleton pattern.
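
As an illustration, a minimal thread-safe Singleton sketch in Java might look like the following; the AppConfig class name is a hypothetical example, not part of any specific library.

// Singleton: the class guarantees that only one instance exists
// and provides a single, global access point to it.
final class AppConfig {
    private static volatile AppConfig instance;   // the single object

    private AppConfig() { }                       // blocks outside construction

    public static AppConfig getInstance() {
        if (instance == null) {                   // first check (no lock)
            synchronized (AppConfig.class) {
                if (instance == null) {           // second check (locked)
                    instance = new AppConfig();
                }
            }
        }
        return instance;
    }
}

// Every call returns the same object:
// AppConfig a = AppConfig.getInstance();
// AppConfig b = AppConfig.getInstance();   // a == b is true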
Best Practices
Design patterns have evolved over a long period of time, and they provide the best solutions to
certain problems faced during software development. Learning these patterns helps inexperienced
developers learn software design in an easier and faster way.

Types of Design Patterns

As per the design pattern reference book Design Patterns: Elements of Reusable Object-Oriented
Software, there are 23 design patterns, which can be classified into three categories: Creational,
Structural, and Behavioral patterns. We'll also discuss another category of design patterns: J2EE
design patterns.
1. Creational
These design patterns provide a way to create objects while hiding the creation logic, rather
than instantiating objects directly using the new operator. This gives the program more flexibility in
deciding which objects need to be created for a given use case.
2. Structural
These design patterns concern class and object composition. The concept of inheritance is used to
compose interfaces and to define ways to compose objects to obtain new functionality.
3. Behavioral
These design patterns are specifically concerned with communication between objects.
4. J2EE
These design patterns are specifically concerned with the presentation tier. They are
identified by Sun Java Center.

Components of a pattern
- Pattern Name
- Intent
What problem does it solve?
- Participants
What classes participate?
These classes usually have very general names; the pattern is meant to be used in
many situations!
- Structure
How are the classes organized?
How do they collaborate?

Behavioral Patterns
Mediator
Observer
Visitor
Chain of Responsibility
Command
Interpreter
Iterator
Memento
State
Strategy
Template Method
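
To make the list concrete, here is a minimal sketch of one behavioral pattern, Observer, in Java. The Subject and Observer names follow common usage; the example is illustrative, not taken from any specific library.

import java.util.ArrayList;
import java.util.List;

// Observer: subscribers register with a subject and are notified
// automatically whenever the subject's state changes.
interface Observer {
    void update(int newValue);
}

class Subject {
    private final List<Observer> observers = new ArrayList<>();

    void attach(Observer o) { observers.add(o); }

    void setValue(int v) {
        for (Observer o : observers) {
            o.update(v);                  // push the change to everyone
        }
    }
}

class ObserverDemo {
    public static void main(String[] args) {
        Subject sensor = new Subject();
        sensor.attach(v -> System.out.println("display sees " + v));
        sensor.attach(v -> System.out.println("logger sees " + v));
        sensor.setValue(42);              // both observers are notified
    }
}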

Creational Patterns
Singleton
Factory Method
Abstract Factory
Builder
Prototype

Design problem
Build a maze for a computer game
A maze is a set of rooms
A room knows its neighbours: room, door, wall
Ignore players, movement, etc.
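
A bare-bones object model for this problem might look like the sketch below. The class names (MapSite, Room, Wall, Door, Maze) follow the classic GoF maze example; details such as the direction encoding are simplifying assumptions.

// A maze is a set of rooms; a room knows its four neighbours,
// each of which is another room, a door, or a wall.
abstract class MapSite {
    abstract void enter();
}

class Wall extends MapSite {
    void enter() { System.out.println("You bump into a wall."); }
}

class Door extends MapSite {
    private boolean open;                 // closed by default
    void enter() {
        System.out.println(open ? "You pass through the door."
                                : "The door is locked.");
    }
}

class Room extends MapSite {
    private final MapSite[] sides = new MapSite[4];   // N, S, E, W
    private final int roomNumber;
    Room(int roomNumber) { this.roomNumber = roomNumber; }
    void setSide(int direction, MapSite site) { sides[direction] = site; }
    void enter() { System.out.println("You are in room " + roomNumber); }
}

class Maze {
    private final java.util.Map<Integer, Room> rooms = new java.util.HashMap<>();
    void addRoom(int number, Room r) { rooms.put(number, r); }
    Room roomNo(int number) { return rooms.get(number); }
}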

Structural Patterns
Façade
Composite
Decorator
Adapter
Bridge
Flyweight
Proxy
SCHOOL OF COMPUTING
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
VTU R15 B.TECH – CSE

UNIT-V Implementation, Testing & Maintenance


Top - Down, Bottom-Up, object-oriented product Implementation & Integration. Software Testing Methods-
White Box, Basis Path-Control Structure - Black Box - Unit Testing - Integration testing - Validation &
System testing - Testing Tools – Software Maintenance & Reengineering.

Software Testing

Software testing is a widely used technology because every piece of software must be tested
before deployment.

Software testing covers methods such as Black Box Testing, White Box Testing, and Gray Box
Testing, and levels such as Unit Testing, Integration Testing, Regression Testing, Functional Testing,
System Testing, Acceptance Testing, Alpha Testing, Beta Testing, Non-Functional Testing, Security
Testing, and Portability Testing.

What is Software Testing

Software testing is a process of identifying the correctness of software by considering all its
attributes (reliability, scalability, portability, reusability, usability) and evaluating the execution
of software components to find software bugs, errors, or defects.
Software testing provides an independent view of the software and an objective assurance of its
fitness. It involves testing all components under the required services to confirm whether they
satisfy the specified requirements. The process also provides the client with information about the
quality of the software.

Testing is mandatory because untested software can fail at any time, which may create a dangerous
situation. So, without testing, software cannot be deployed to the end user.

What is Testing
Testing is a group of techniques to determine the correctness of the application under a predefined
script; however, testing cannot find all the defects of an application. The main intent of testing is to
detect failures of the application so that they can be discovered and corrected. Testing does not
demonstrate that a product functions properly under all conditions; it can only show that it fails
under some specific conditions.

Testing furnishes a comparison of the behavior and state of the software against mechanisms by
which a problem can be recognized. These mechanisms may include past versions of the same
product, comparable products, interfaces of expected purpose, relevant standards, or other criteria,
but they are not limited to these.

Testing includes the examination of code as well as the execution of that code in various
environments and conditions, examining all aspects of the code. In the current scenario of software
development, the testing team may be separate from the development team, so that information
derived from testing can be used to correct the software development process.

The success of software depends on acceptance by its targeted audience, an easy graphical user
interface, strong functionality, load handling, etc. For example, the audience of a banking application
is totally different from the audience of a video game. Therefore, when an organization develops a
software product, it can assess whether the product will be beneficial to its purchasers and other
audiences.

Types of Software Testing

We have various types of testing available in the market, which are used to test an application or
software.

With the help of the image below, we can easily understand the types of software testing:
Manual Testing

Manual testing is a software testing process in which test cases are executed manually, without any
automated tool. All test cases are executed manually by the tester, from the end user's perspective. It
checks whether the application works as mentioned in the requirement document. Test cases are
planned and implemented to cover almost 100 percent of the software application. Test case reports
are also generated manually.

Manual testing is one of the most fundamental testing processes, as it can find both visible and
hidden defects of the software. The difference between the expected output and the actual output
produced by the software is defined as a defect. The developer fixes the defects and hands the
software back to the tester for retesting.

Manual testing is mandatory for every newly developed software before automated testing. This
testing requires great effort and time, but it provides greater assurance of bug-free software. Manual
testing requires knowledge of manual testing techniques, but not of any automated testing tool.

Manual testing is essential because one of the software testing fundamentals is "100% automation is
not possible."
Why we need manual testing

Whenever an application comes into the market, it may be unstable, have bugs or issues, or create
problems while end users are using it.

If we don't want to face these kinds of problems, we need to perform a round of testing to make
the application bug-free and stable and to deliver a quality product to the client, because if the
application is bug-free, the end user will use it more conveniently.

If a test engineer does manual testing, he or she can test the application from an end user's
perspective and become more familiar with the product, which helps in writing correct test cases
for the application and giving quick feedback on it.

Types of Manual Testing

There are various methods used for manual testing. Each technique is used according to its testing
criteria. Types of manual testing are given below:

o White Box Testing


o Black Box Testing
o Gray Box Testing

White-box testing

White box testing is done by the developer, who checks every line of code before giving it to
the test engineer. Since the code is visible to the developer during the testing, it is known as white
box testing.
Black box testing

Black box testing is done by the test engineer, who checks the functionality of the application or the
software according to the customer's or client's needs. In this, the code is not visible while
performing the testing; that's why it is known as black box testing.

Gray Box testing

Gray box testing is a combination of white box and black box testing. It can be performed by a
person who knows both coding and testing. If a single person performs white box as well as
black box testing of the application, it is known as gray box testing.

How to perform Manual Testing

o First, the tester observes all documents related to the software to select the testing areas.
o The tester analyzes the requirement documents to cover all requirements stated by the customer.
o The tester develops test cases according to the requirement document.
o All test cases are executed manually using black box testing and white box testing.
o If bugs occur, the testing team informs the development team.
o The development team fixes the bugs and hands the software to the testing team for a retest.

Software Build Process

o Once the requirement is collected, it is provided to two different teams: the development team
and the testing team.
o After getting the requirement, the concerned developer starts writing the code.
o In the meantime, the test engineer understands the requirement and prepares the required
documents; by this point, the developer may have completed the code and stored it in the
version control tool.
o After that, the code changes in the UI, and these changes are handled by a separate team,
known as the build team.
o The build team takes the code, then compiles and compresses it with the help of a
build tool. The output goes into a zip file, which is known
as a Build (application or software). Each Build has a unique number, like B001,
B002.
o This Build is then installed on the test server. After that, the test engineer
accesses the test server with the help of the test URL and starts testing the application.
o If the test engineer finds any bug, it is reported to the concerned developer.
o The developer reproduces the bug on the test server, fixes it, and again stores
the code in the version control tool; the new updated file is installed and the
old file removed. This process continues until we get a stable Build.
o Once we get a stable Build, it is handed over to the customer.

Note1

o Once we collect the file from the version control tool, we use the build tool to compile
the code from a high-level language to machine-level language. After compilation, if the file
size has increased, we compress that file and dump it onto the test server.
o This process is done by the build team, by the developer (if there is no build team), or by
the test lead (if the build team directly hands over the zip, installs the application on the
test server, and informs the test engineer).
o Generally, we can't get a new Build for every bug; otherwise, most of the time would be wasted
just creating the builds.
Note2

Build team

The main job of the build team is to create the application or the Build and to convert the high-level
language code into low-level (machine) language.

Build

A Build is the code converted into application format. It consists of a set
of features and bug fixes that are handed over to the test engineer for testing purposes until it
becomes stable.

Version control tool

It is software or an application used for the following purposes:

o In this tool, we can save different types of files.
o It is secure, because we access the files from the tool using login
credentials.
o The primary objective of the tool is to track the changes made to existing files.

Example of Build process

Let us see one example to understand how the build process works in a real scenario:

As soon as the test engineer finds a bug, it is sent to the developers, who need some
time to analyze it; only then do they fix it (the test engineer can't simply hand over a collection of bugs).

The developer decides how many bugs to fix according to the available time, and the test engineer
decides which bug should be fixed first according to the testing needs, because the test engineers cannot
afford to stop testing.

When the test engineers receive the mail, they can know which bugs are fixed only from the list of
bug fixes.

Early Builds take more time because, for the first Build, the developers must still write the code for
the different features; toward the end, they only fix bugs, so the number of days per Build
decreases.
Note3

Test cycle

The test cycle is the time duration given to the test engineer to test every Build.

Differences between two Builds

Bugs found in one Build can be fixed in any future Build, depending on the test
engineer's requirements. Each new Build is a modified version of the old one, and these
modifications could be bug fixes or the addition of new features.

How frequently we were getting the new Build

In the beginning, we used to get weekly builds, but at the later stages of testing, when the application
was getting stable, we used to get a new Build once in three days, two days, or even daily.
How many builds we get

If we consider a project duration of one year, we typically get 22-26 builds.

When we get the bug fixes

Generally, we learn about the bug fixes only after the test cycle is completed, or a collection of
bugs is fixed in one Build and handed over in the next Build.

Advantages of Manual Testing

o It does not require programming knowledge when using the black box method.
o It is used to test dynamically changing GUI designs.
o The tester interacts with the software as a real user, so they are able to discover usability and
user interface issues.
o It gives greater assurance that the software is bug-free.
o It is cost-effective.
o It is easy for new testers to learn.

Disadvantages of Manual Testing

o It requires a large number of human resources.
o It is very time-consuming.
o Testers develop test cases based on their skills and experience; there is no evidence of
whether they have covered all functions.
o Test cases cannot be reused; separate test cases need to be developed for each new piece of
software.
o It does not provide testing of all aspects.
o Since two teams work together, it is sometimes difficult for them to understand each other's
motives, which can mislead the process.

Manual testing tools

For the different types of manual testing, such as unit, integration, security, and performance
testing, and for bug tracking, we have various tools such as Jira, Bugzilla, Mantis, Zap, NUnit, Tessy,
LoadRunner, Citrus, SonarQube, etc., available in the market. Some of the tools are open source, and
some are commercial.
Automation testing

Automation testing is the process of converting manual test cases into test scripts with the help
of automation tools or a programming language. With the help of automation testing, we can increase
the speed of test execution, because no human effort is required during execution; we only need to
write the test scripts and execute them.
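
As a minimal sketch, a manual test step such as "log in with valid credentials and verify success" can be converted into an automated JUnit 4 test script like the one below. The LoginService class and its login() method are hypothetical stand-ins for whatever the application under test exposes.

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical stand-in for the application's login logic.
class LoginService {
    boolean login(String user, String password) {
        return "user1".equals(user) && "secret".equals(password);
    }
}

// The manual test case, rewritten as a repeatable automated script.
public class LoginAutomationTest {

    @Test
    public void validCredentialsAccepted() {
        LoginService service = new LoginService();
        assertTrue(service.login("user1", "secret"));    // expected: success
    }

    @Test
    public void invalidPasswordRejected() {
        LoginService service = new LoginService();
        assertFalse(service.login("user1", "wrong"));    // expected: failure
    }
}

Once written, such scripts can be re-executed on every new Build with no human effort, which is exactly the speed gain described above.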

Automation Testing

When test case suites are executed using automated testing tools, this is known as automation
testing. The testing process is done using special automation tools that control the execution of test
cases and compare the actual result with the expected result. Automation testing requires a sizeable
investment of resources and money.

Generally, repetitive actions, such as regression tests, are automated. The testing tools
used in automation testing are used not only for regression testing but also for automated GUI
interaction, data setup generation, defect logging, and product installation.
The goal of automation testing is to reduce the number of manual test cases, not to eliminate manual
testing altogether. Test suites can be recorded using the automation tools, and the tester can replay
these suites as per the requirement. Automated test suites do not require any human intervention.

The life cycle of Automation Testing

The life cycle of automation testing is a systematic approach to organizing and executing testing
activities in a manner that provides maximum test coverage with limited resources. The structure of
the test involves a multi-step process that supports the required, detailed, and interrelated activities
needed to perform the task.
The life cycle of automation testing consists of the following components:

Decision to Automation Testing

It is the first phase of the Automation Test Life-cycle Methodology (ATLM). In this phase, the main
focus of the testing team is to manage expectations from the test and to find out the potential benefits
of applying automated testing correctly.

On adopting an automated testing suite, organizations have to face many issues, some of which are
listed below:

o Testing tool experts are required for automation testing, so the first issue is appointing a
testing tool specialist.
o The second issue is choosing the right tool for testing a particular function.
o The issue of design and development standards arises in the implementation of an automated
testing process.
o Various automated testing tools must be analyzed to choose the best tool for automation testing.
o The issue of money and time occurs, as the consumption of both is high at the
beginning of the testing.
Test Tool Selection

Test Tool Selection represents the second phase of the Automation Test Life-cycle Methodology
(ATLM). This phase guides the tester in the evaluation and selection of the testing tool.

Although a testing tool may support almost all testing requirements, the tester still needs to review the
system engineering environment and other organizational needs and then make a list of evaluation
parameters for the tools. Test engineers evaluate the tools against the defined sample criteria.

Scope Introduction

This phase represents the third phase of Automation Test Life-cycle Methodology (ATLM). The
scope of automation includes the testing area of the application. The determination of scope is based
on the following points:

o Common functionalities of the software application that are shared by every software
application.
o The reusable range of business components that automation testing sets.
o The extent of reusability of the business components, which automation testing decides.
o An application should have business-specific features and must be technically feasible.
o Automation testing provides repetition of test cases, for example in cross-browser testing.

This phase ensures that the overall testing strategy is well managed and modified if required.
To ensure the availability of skills, the testing skills of particular members and of the whole team are
analyzed against the specific skills required for the particular software application.

Test Planning and Development

Test planning and development is the fourth and most important phase of the Automation Test Life-
cycle Methodology (ATLM), because all the testing strategies are defined here. The planning of long-
lead test activities, the creation of standards and guidelines, the arrangement of the required
combination of hardware, software, and network to create a test environment, the defect tracking
procedure, and the guidelines to control test configuration and environment are all identified in this
phase. The tester determines the estimated effort and cost for the entire project. The test strategy and
effort estimation documents are the deliverables of this phase. Test case execution can start after the
successful completion of test planning.

Test Case Execution

Test case execution is the fifth phase of the Automation Test Life-cycle Methodology (ATLM). It
takes place after the successful completion of test planning. At this stage, the testing team defines
test design and development, and test cases can then be executed as part of product testing. In this
phase, the testing team starts test case development and execution using automated tools. The
prepared test cases are reviewed by peer members of the testing team or by quality assurance leaders.
During the execution of test procedures, the testing team is directed to comply with the execution
schedule. The execution phase implements the strategies, such as integration, acceptance, and unit
testing, that were defined previously in the test plan.

Review and Assessment

Review and assessment is the sixth and final stage of the automated testing life cycle, but the
activities of this phase are conducted throughout the whole life cycle to maintain continuous quality
improvement. The improvement process is carried out through the evaluation of metrics and the
review and assessment of the activities.

During the review, the examiner concentrates on whether the particular metric satisfies the acceptance
criteria; if yes, the software is ready to use in production.

The test team performs its own survey to inquire into the potential value of the process; if the
potential benefit is insufficient, the testing team can change the testing tool. The team also
provides a sample survey form to collect feedback from end users about the attributes and
management of the software product.

Advantages of Automation Testing

o Automation testing takes less time than manual testing.
o A tester can test the response of the software when the execution of the same operation is
repeated several times.
o Automation testing provides reusability of test cases for testing different versions of the
same software.
o Automation testing is reliable, as it eliminates hidden errors by executing test cases in
exactly the same way each time.
o Automation testing is comprehensive, as test cases can cover each and every feature of the
application.
o It does not require many human resources: instead of testers writing and executing test cases
manually, only an automation testing engineer is needed to run them.
o The cost of automation testing is lower than that of manual testing, because it requires fewer
human resources.

Disadvantages of Automation Testing

o Automation testing requires highly skilled testers.
o It requires high-quality testing tools.
o When it encounters an unsuccessful test case, the analysis of the whole event is complicated.
o Test maintenance is expensive, because testing tools with high license fees are necessary.
o Debugging is mandatory; if a minor error is not resolved, it can lead to fatal results.

White Box Testing

The box testing approach of software testing consists of black box testing and white box testing. Here
we discuss white box testing, which is also known as glass box testing, structural testing,
clear box testing, open box testing, and transparent box testing. It tests the internal coding and
infrastructure of the software, focusing on checking predefined inputs against expected and desired
outputs. It is based on the inner workings of an application and revolves around testing the internal
structure. In this type of testing, programming skills are required to design test cases. The primary
goal of white box testing is to focus on the flow of inputs and outputs through the software and to
strengthen the security of the software.

The term 'white box' is used because of the internal perspective of the system. The clear box, white
box, and transparent box names denote the ability to see through the software's outer shell into its inner
workings.

Developers do white box testing, testing every line of the code of the program. The developers
perform the white box testing and then send the application or the software to the testing team, which
performs the black box testing, verifies the application against the requirements, identifies the bugs,
and sends it back to the developers.

The developer fixes the bugs, does one more round of white box testing, and sends the application
back to the testing team. Here, fixing a bug means that the defect is removed and the particular
feature works correctly in the application.

Here, the test engineers are not involved in fixing the defects, for the following reasons:

o Fixing a bug might break other features. Therefore, the test engineer should always
find the bugs, and the developers should do the bug fixes.
o If the test engineers spend most of their time fixing defects, they may be unable to find
the other bugs in the application.

The white box testing contains various tests, which are as follows:

o Path testing
o Loop testing
o Condition testing
o Testing based on the memory perspective
o Test performance of the program
Path testing

In path testing, we draw the flow graphs and test all independent paths. Here, the flow graph
represents the flow of the program and shows how the program's units connect to one another, as we
can see in the image below:

Testing all the independent paths means that, for example, for a path from main() to function G, we
first set the parameters and test whether the program is correct along that particular path; in the same
way, we test all other paths and fix the bugs.
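
For instance, the hypothetical method below has two independent paths through its flow graph, so path testing needs at least one test input for each path.

// Two independent paths: (1) the if branch, (2) the else branch.
// Path testing chooses inputs so that each path executes at least once.
class Discount {
    static double price(double amount) {
        if (amount > 100.0) {
            return amount * 0.9;   // path 1: discount applied
        } else {
            return amount;         // path 2: no discount
        }
    }
}

// price(200.0) covers path 1 -> expected 180.0
// price(50.0)  covers path 2 -> expected 50.0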

Loop testing

In loop testing, we test the loops, such as while, for, and do-while loops, and check whether the
ending condition works correctly and whether the loop bounds are adequate.

For example, suppose we have a program in which a loop runs about 50,000 cycles:

int count = 0;
while (count < 50000) {
    // ... loop body ...
    count++;
}

We cannot test this program manually for all 50,000 loop cycles. So we write a small program
that exercises all 50,000 cycles. As shown below, the test program Test P is written in the same
language as the source code; this is known as a unit test, and it is written by the developers only.

// Test P: a small driver written in the same language as the source code
void testP() {
    // ... set up inputs, run the loop, check the ending condition ...
}

As we can see in the image below, we have various requirements, such as 1, 2, 3, and 4, and
the developer writes programs 1, 2, 3, and 4 for the corresponding conditions. Here, the
application contains hundreds of lines of code.

The developer does the white box testing, testing all the programs line by line to find bugs. If a bug
is found in any of the programs, it is corrected, and the whole system must then be tested again; this
process takes a lot of time and effort and slows down the product release.

Now, suppose the client wants to modify the requirements. The developer makes the required
changes and must test all four programs again, which again takes a lot of time and effort.
These issues can be resolved in the following ways:

In this approach, the developer writes test code in the same language as the source code and executes
it; such programs are known as unit test programs. These test programs are linked to the main
program and implemented as part of it.

Therefore, if a modification is required or a bug is found in the code, the developer makes
the adjustment both in the main program and in the test program and then executes the test program.
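
A minimal sketch of such a unit test program, in JUnit 4 style, is shown below. The Counter class and its sum() method are hypothetical units under test; the tests check the loop's ending condition at the full cycle count and at the zero boundary.

import org.junit.Test;
import static org.junit.Assert.*;

// Unit under test: a loop whose ending condition we want to verify
// for all 50,000 cycles without stepping through them manually.
class Counter {
    static int sum(int n) {
        int total = 0;
        int i = 0;
        while (i < n) {        // ending condition under test
            total++;
            i++;
        }
        return total;
    }
}

public class CounterTest {
    @Test
    public void loopRunsExactly50000Times() {
        assertEquals(50000, Counter.sum(50000));   // full cycle count
    }

    @Test
    public void loopHandlesZeroIterations() {
        assertEquals(0, Counter.sum(0));           // boundary: no cycles
    }
}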

Condition testing

In condition testing, we test all logical conditions for both true and false values; that is, we verify
both the if and the else branches.

For example:

if (condition) {    // true branch
    // ...
} else {            // false branch
    // ...
}

The program must work correctly for both conditions: when the condition is true, the if block
executes, and when it is false, the else block executes; the tests must cover both cases.

Testing based on the memory (size) perspective

The size of the code increases for the following reasons:

o Code is not reused: for example, suppose we have four programs in the same
application, and the first ten lines of each program are similar. We could write these ten
lines as a separate function accessible to all four programs; then, if a bug appears, we
modify the code in that one function rather than in every copy.
o Developers use logic that could be improved: if one programmer writes code whose
file size is 250 KB, another programmer could write similar code using different
logic in only 100 KB.
o Developers declare many functions and variables that may never be used in any
portion of the code; these increase the size of the program.

For example,

int a = 15;               // never used anywhere in the program
int b = 20;
String s = "Welcome";
// ...
int p = b;

void createUser() {       // never called anywhere in the code
    // ... 200 lines of code ...
}

In the code above, we can see that the variable a is never used anywhere in the program, and the
function createUser is never called anywhere in the code. Both therefore waste memory.

We cannot catch this type of mistake manually when verifying a large codebase. So we use a tool
that detects the needless variables and functions; one such tool is Rational Purify.

Suppose we have three programs, P, Q, and R, which provide the input to the tool S. S goes
through the programs, verifies the unused variables, and then gives the outcome. After that, the
developers click through the results and either use or remove the unnecessary functions and
variables.

This tool is used only for the C and C++ programming languages; for other languages,
other related tools are available in the market.

o The developer does not use the available built-in functions; instead, they write full
features using their own logic. This wastes time and postpones the product release.
Test the performance (Speed, response time) of the program

The application could be slow for the following reasons:

o Inefficient logic is used.
o In conditional cases, the or and and operators are not used appropriately.
o Nested if statements are used where a switch case would be more appropriate.

As we know, the developer performing white box testing may find that the code runs slowly or that
the performance of the program is deteriorating, but the developer cannot go through the program
manually to verify which line of code is slowing it down.

To deal with this situation, we have a tool called Rational Quantify, which resolves these kinds
of issues automatically. Once the entire code is ready, the Rational Quantify tool goes through the
code and executes it, and we can see the outcome in the result sheet in the form of thick and thin
lines.

Here, a thick line indicates a section of code that is time-consuming. When we double-click on the
thick line, the tool takes us to that line or piece of code automatically, displayed in a
different color. We can change that code and run the tool again. When all the lines are
thin, we know that the performance of the program has improved. Developers perform this kind of
white box testing with the tool because it saves time compared to doing it manually.

Test cases for white box testing are derived from the design phase of the software development
life cycle. Data flow testing, control flow testing, path testing, branch testing, and statement and
decision coverage are all techniques used in white box testing as guidelines for creating error-free
software.

White box testing follows some working steps to make testing manageable and to make it easy to
understand what to do next. The basic steps to perform white box testing are:

Generic steps of white box testing

o Design all test scenarios and test cases, and prioritize them by priority number.
o Study the code at runtime to examine resource utilization, sections of code that are never
accessed, the time taken by various methods and operations, and so on.
o Test the internal subroutines: check whether internal subroutines, such as non-public
methods and interfaces, are able to handle all types of data appropriately.
o Test control statements, such as loops and conditional statements, to
check their efficiency and accuracy for different data inputs.
o In the last step, perform security testing to check all possible security
loopholes by looking at how the code handles security.

Reasons for white box testing

o It identifies internal security holes.
o It checks the flow of inputs through the code.
o It checks the functionality of conditional loops.
o It tests functions, objects, and statements at an individual level.
Advantages of White box testing

o White box testing optimizes code, so hidden errors can be identified.
o Test cases of white box testing can be easily automated.
o This testing is more thorough than other testing approaches, as it covers all code paths.
o It can be started early in the SDLC, even before a GUI exists.

Disadvantages of White box testing

o White box testing is very time-consuming when it comes to large-scale programming
applications.
o White box testing is expensive and complex.
o Errors of omission can still reach production, since testing is based only on the code the
developers have written.
o White box testing requires professional programmers who have detailed knowledge and
understanding of the programming language and implementation.

Techniques Used in White Box Testing

Data Flow Testing: Data flow testing is a group of testing strategies that examines the control flow of
programs in order to explore the sequence of variables according to the sequence of events.

Control Flow Testing: Control flow testing determines the execution order of statements or
instructions of the program through a control structure. The control structure of the program is used
to develop test cases for the program. In this technique, a particular part of a large program is
selected by the tester to set the testing path. Test cases are represented by the control graph of the
program.

Branch Testing: The branch coverage technique is used to cover all branches of the control flow
graph. It covers all the possible outcomes (true and false) of each condition of a decision point at
least once.

Statement Testing: The statement coverage technique is used to design white box test cases. It
involves the execution of all statements of the source code at least once. It is used to calculate how
many of the statements present in the source code have been executed.

Decision Testing: This technique reports the true and false outcomes of Boolean expressions.
Whenever there is a possibility of two or more outcomes from control flow statements such as a
do-while statement, an if statement, or a case statement, it is considered a decision point, because
there are two outcomes: either true or false.
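
As a small illustration of branch and statement coverage (the Grader class is hypothetical), two inputs are enough to execute every statement and both outcomes of the single decision point.

// One decision point -> two branches (true and false).
// Branch coverage requires inputs that drive the condition both ways.
class Grader {
    static String grade(int marks) {
        if (marks >= 50) {
            return "PASS";     // true branch
        }
        return "FAIL";         // false branch
    }
}

// grade(75) exercises the true branch  -> "PASS"
// grade(30) exercises the false branch -> "FAIL"
// Together, the two calls achieve full statement and branch coverage here.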

Black box testing

Black box testing is a technique of software testing that examines the functionality of software
without peering into its internal structure or coding. The primary source of black box testing is the
specification of requirements stated by the customer.

In this method, the tester selects a function, gives input values to examine its functionality, and checks
whether the function produces the expected output. If the function produces correct output, it
passes the test; otherwise, it fails. The test team reports the result to the development team and
then tests the next function. After all functions have been tested, if severe problems remain,
the software is given back to the development team for correction.

Generic steps of black box testing

o The black box test is based on the specification of requirements, so the specification is
examined at the beginning.
o In the second step, the tester creates positive test scenarios and adverse test scenarios by
selecting valid and invalid input values to check whether the software processes them
correctly or incorrectly.
o In the third step, the tester develops various test cases using techniques such as decision
tables, all-pairs testing, equivalence partitioning, error guessing, and cause-effect graphs.
o The fourth step is the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual output.
o In the sixth and final step, any flaw found in the software is fixed, and the software is tested
again.

Test procedure

The test procedure of black box testing is a process in which the tester has specific
knowledge of what the software should do and develops test cases to check the accuracy of the
software's functionality.

It does not require programming knowledge of the software. All test cases are designed by
considering the input and output of a particular function. A tester knows the expected output for a
particular input, but not how the result is produced. Various techniques are used in black
box testing, such as the decision table technique, boundary value analysis, state
transition, all-pair testing, cause-effect graphing, equivalence partitioning, error
guessing, the use case technique, and the user story technique. These techniques are
summarized below.

Test cases

Test cases are created considering the specification of the requirements. These test cases are
generally created from working descriptions of the software, including the requirements, design
parameters, and other specifications. For testing, the test designer selects both positive test
scenarios, using valid input values, and adverse test scenarios, using invalid input values, to
determine whether correct output is produced. Test cases are mainly designed for functional testing
but can also be used for non-functional testing. Test cases are designed by the testing team; the
development team is not involved.

Techniques Used in Black Box Testing

Decision Table Technique: a systematic approach in which various input combinations and their
respective system behavior are captured in tabular form. It is appropriate for functions that
have a logical relationship between two or more inputs.

Boundary Value Technique: used to test boundary values, which are the values at the upper and
lower limits of a variable. It tests whether the software produces correct output when a boundary
value is entered.

State Transition Technique: used to capture the behavior of the software application when
different input values are given to the same function. It applies to those types of
applications that provide a specific number of attempts to access the application.

All-pair Testing Technique: used to test all the possible discrete combinations of values. This
combinational method is used for testing applications that use checkbox input, radio button
input, list boxes, text boxes, etc.

Cause-Effect Technique: underlines the relationship between a given result and all the
factors affecting that result. It is based on a collection of requirements.

Equivalence Partitioning Technique: a technique of software testing in which input data is divided
into partitions of valid and invalid values, where all values in a partition must exhibit the same
behavior.

Error Guessing Technique: a technique in which there is no specific method for identifying the
error. It is based on the experience of the test analyst, who uses that experience to guess the
problematic areas of the software.

Use Case Technique: used to identify test cases covering the system from beginning to end, as per
the usage of the system. Using this technique, the test team creates test scenarios that can
exercise the entire software based on the functionality of each function from start to end.
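
For example, if a field accepts values from 1 to 100, boundary value analysis picks inputs at and just outside the two limits. A minimal JUnit 4 sketch, with a hypothetical RangeValidator standing in for the function under test:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical validator: accepts only values in the range 1..100.
class RangeValidator {
    static boolean isValid(int value) {
        return value >= 1 && value <= 100;
    }
}

// Boundary value analysis: test at the limits and just outside them.
public class RangeValidatorTest {
    @Test public void lowerBoundaryAccepted() { assertTrue(RangeValidator.isValid(1)); }
    @Test public void upperBoundaryAccepted() { assertTrue(RangeValidator.isValid(100)); }
    @Test public void belowLowerRejected()    { assertFalse(RangeValidator.isValid(0)); }
    @Test public void aboveUpperRejected()    { assertFalse(RangeValidator.isValid(101)); }
}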

Difference between white-box testing and black-box testing

Following are the significant differences between white box testing and black box testing:

o Who performs it: white box testing is performed by the developers, while black box testing is
performed by the test engineers.
o Programming knowledge: to perform WBT, we should have an understanding of the
programming language; to perform BBT, there is no need to understand the programming
language.
o What is examined: in WBT, we look into the source code and test the logic of the code; in
BBT, we verify the functionality of the application against the requirement specification.
o Internal design: in WBT, the developer should know the internal design of the code; in BBT,
there is no need to know the internal design of the code.

GreyBox Testing

Grey box testing is a software testing method in which the software application is tested with partial
knowledge of its internal working structure. It is a combination of black box and white box testing,
because it involves access to the internal coding to design test cases (as in white box testing), while
the testing itself is done at the functionality level (as in black box testing).
Grey box testing commonly identifies context-specific errors, such as those found in web systems. For
example, if the tester encounters a defect while testing, he makes changes in the code to resolve the
defect and then tests it again in real time. It concentrates on all the layers of a complex software
system to increase testing coverage, giving the ability to test both the presentation layer and the
internal coding structure. It is primarily used in integration testing and penetration testing.

Why GreyBox testing?

The reasons for grey box testing are as follows:

o It provides the combined benefits of both black box testing and white box testing.
o It includes the input of both developers and testers at the same time, improving the
overall quality of the product.
o It reduces the time consumed by the long process of functional and non-functional testing.
o It gives the developer sufficient time to fix the product defects.
o It includes the user's point of view rather than the designer's or tester's point of view.
o It involves deep examination of requirements and determination of specifications from the
user's point of view.
GreyBox Testing Strategy

Grey box testing does not require the tester to design test cases from the source code. Test cases can
be designed based on knowledge of architectures, algorithms, internal states, or other high-level
descriptions of program behavior. It uses all the straightforward techniques of black box testing for
functional testing. Test case generation is based on the requirements, and all conditions are preset
before the program is tested, using the assertion method.

Generic Steps to perform Grey box Testing are:

1. First, select and identify inputs from the black box and white box testing inputs.
2. Second, identify the expected outputs for these selected inputs.
3. Third, identify all the major paths to traverse during the testing period.
4. Fourth, identify the sub-functions that are part of the main functions, in order to perform
deep-level testing.
5. Fifth, identify the inputs for the sub-functions.
6. Sixth, identify the expected outputs for the sub-functions.
7. Seventh, execute the test cases for the sub-functions.
8. Eighth, verify the correctness of the results.

The test cases designed for grey box testing include security-related, browser-related, GUI-related,
operating-system-related, and database-related testing.

Techniques of Grey box Testing


Matrix Testing

This testing technique comes under grey box testing. It defines all the variables used in a particular
program. In any program, variables are the elements through which values travel inside the
program. Variables should be used as per requirements; otherwise, they reduce the readability of the
program and the speed of the software. The matrix technique is a method for removing unused and
uninitialized variables by identifying the used variables in the program.

Regression Testing

Regression testing is used to verify that a modification in any part of the software has not caused any
adverse or unintended side effects in any other part of the software. During confirmation testing, a
defect is fixed and that part of the software starts working as intended, but there is a
possibility that the fix has introduced a different defect somewhere else in the software.
So, regression testing takes care of these types of defects using testing strategies such as retesting
risky use cases, retesting within a firewall, and retesting everything.

Orthogonal Array Testing or OAT

The purpose of this testing is to cover maximum code with a minimum number of test cases. Test
cases are designed so that they cover maximum code as well as GUI functions with a smaller number
of test cases.

Pattern Testing

Pattern testing applies to software that is developed by following the same pattern as previous
software, since the same types of defects are likely to occur in such software. Pattern testing
determines the reasons for failures so that they can be fixed in the next software.

Usually, automated software testing tools are used in the grey box methodology to conduct the test
process. Stubs and module drivers are provided to the tester to relieve them from generating code
manually.
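
For example, when a module that the code under test depends on is not yet built, a stub can stand in for it. The sketch below uses hypothetical PaymentService and CheckoutModule names; the stub returns canned answers so the module under test can still be exercised.

// The module under test depends on a payment service that is not
// built yet, so a stub supplies predictable, canned behavior.
interface PaymentService {
    boolean pay(double amount);
}

class PaymentServiceStub implements PaymentService {
    public boolean pay(double amount) {
        return amount > 0;     // canned answer, no real payment system
    }
}

class CheckoutModule {
    private final PaymentService payments;
    CheckoutModule(PaymentService payments) { this.payments = payments; }
    String checkout(double amount) {
        return payments.pay(amount) ? "ORDER PLACED" : "PAYMENT FAILED";
    }
}

// In a test, CheckoutModule is driven through the stub:
// new CheckoutModule(new PaymentServiceStub()).checkout(99.0)
//   -> "ORDER PLACED", without any real payment system.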

Unit Testing

Unit testing involves testing each unit, or individual component, of the software application.
It is the first level of functional testing. The aim of unit testing is to validate that each unit
component performs as expected.

A unit is a single testable part of a software system; it is tested during the development phase of the
application software.

The purpose of unit testing is to test the correctness of isolated code. A unit component is an
individual function or piece of code of the application. The white box testing approach is used for
unit testing, and it is usually done by the developers.
Whenever the application is ready and given to the test engineer, he or she starts checking every
component of the module, or every module of the application, independently, one by one; this
process is known as unit testing or component testing.

Why Unit Testing?

In the testing level hierarchy, unit testing is the first level of testing, done before integration testing
and the remaining levels. It tests individual modules, which reduces the dependency on waiting for
other modules to be completed. Unit testing frameworks, stubs, drivers, and mock objects are used to
assist in unit testing.

Generally, software goes through four levels of testing: unit testing, integration testing, system
testing, and acceptance testing. Sometimes, due to time constraints, testers do only minimal unit
testing, but skipping unit testing may lead to more defects during integration testing, system testing,
and acceptance testing, or even during beta testing, which takes place after the completion of the
software application.

Some crucial reasons are listed below:

o Unit testing helps testers and developers understand the code base, enabling them
to change defect-causing code quickly.
o Unit testing helps with documentation.
o Unit testing fixes defects very early in the development phase, so fewer defects are likely
to appear in the upcoming testing levels.
o It helps with code reusability by making it easy to migrate both code and test cases.
Example of Unit testing

Let us see one sample example for a better understanding of the concept of unit testing:

For the amount transfer, requirements are as follows:

1. Amount transfer

1.1 From account number (FAN)→ Text Box

1.1.1 FAN→ accept only 4 digit

1.2 To account no (TAN)→ Text Box

1.2.1 TAN→ Accept only 4 digit


1.3 Amount→ Text Box

1.3.1 Amount → Accept maximum 4 digit

1.4 Transfer→ Button

1.4.1 Transfer → Enabled

1.5 Cancel→ Button

1.5.1 Cancel→ Enabled

Values → Description

1234 → accept
4311 → Error message → account valid or not
blank → Error message → enter some values
5 digit / 3 digit → Error message → accept only 4 digits
Alphanumeric → Error message → accept only digits
Blocked account no → Error message
Copy and paste the value → Error message → type the value
Same as FAN and TAN → Error message

Below are the application access details, which are given by the customer:

o URL → login page
o Username/password/OK → home page
o To reach the Amount transfer module, follow: Loans → sales → Amount transfer

While performing unit testing, we should follow some rules, which are as follows:
o To start unit testing, we should have at least one module.
o Test for positive values.
o Test for negative values.
o No over-testing.
o No assumptions; test only against the stated requirements.

When we feel that the maximum test coverage has been achieved, we will stop the testing.

Now, we will start performing the unit testing on the different components such as

o From account number(FAN)


o To account number(TAN)
o Amount
o Transfer
o Cancel

For the From account number (FAN) component, we will enter the below values and verify the behavior:

Values Description

1234 accept

4311 Error message→ account valid or not

Blank Error message→ enter some values

5 digit/ 3 digit Error message→ accept only 4 digits

Alphanumeric Error message→ accept only digits

Blocked account no Error message

Copy and paste the value Error message→ type the value

Same as FAN and TAN Error message

For the TAN component

o Provide the values just like we did in the From account number (FAN) component.

For the Amount component

o Provide the values just like we did in the FAN and TAN components.

For the Transfer component

o Enter a valid FAN value.
o Enter a valid TAN value.
o Enter a correct value of Amount.
o Click on the Transfer button → amount transferred successfully (confirmation message).

For the Cancel component

o Enter the values of FAN, TAN, and Amount.
o Click on the Cancel button → all data should be cleared.

Unit Testing Tools

We have various types of unit testing tools available in the market, which are as follows:

o NUnit
o JUnit
o PHPUnit
o Parasoft Jtest
o EMMA
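
As a brief illustration of tool-based unit testing, below is a minimal sketch of how the FAN field
checks from the earlier amount transfer example could be written with JUnit 5 (one of the tools
listed above). The AccountValidator class and its validateFan method are hypothetical names
introduced here only for illustration; the assertions mirror the values table for the FAN component.

// A minimal JUnit 5 sketch of the FAN field checks described above.
// AccountValidator and validateFan(String) are hypothetical names.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class FanFieldTest {

    @Test
    void acceptsFourDigitAccountNumber() {
        // 1234 → accept
        assertEquals("accept", AccountValidator.validateFan("1234"));
    }

    @Test
    void rejectsBlankValue() {
        // blank → error message: enter some values
        assertEquals("enter some values", AccountValidator.validateFan(""));
    }

    @Test
    void rejectsThreeAndFiveDigitValues() {
        // 3-digit / 5-digit → error message: accept only 4 digits
        assertEquals("accept only 4 digits", AccountValidator.validateFan("123"));
        assertEquals("accept only 4 digits", AccountValidator.validateFan("12345"));
    }

    @Test
    void rejectsAlphanumericValue() {
        // alphanumeric → error message: accept only digits
        assertEquals("accept only digits", AccountValidator.validateFan("12a4"));
    }
}

Each test is independent and named after the behavior it checks, in line with the naming and
independence rules discussed later in this unit.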

Unit Testing Techniques:

Unit testing uses all white box testing techniques, as it works on the code of the software application:

o Data flow Testing


o Control Flow Testing
o Branch Coverage Testing
o Statement Coverage Testing
o Decision Coverage Testing
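
To make the difference between statement coverage and branch coverage concrete, consider the
following small sketch; the Account class and its withdraw method are hypothetical examples, not
taken from the application above.

// Hypothetical method used to illustrate statement vs. branch coverage.
class Account {
    private int balance = 1000;

    int withdraw(int amount) {
        if (amount <= balance) {   // decision with two branches
            balance -= amount;     // executed only on the true branch
        }
        return balance;
    }
}

// A single call such as withdraw(500) executes every statement (full
// statement coverage) but exercises only the true branch. Branch coverage
// additionally requires a call such as withdraw(5000), so that the false
// branch (no deduction) is also taken.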

How to achieve the best result via Unit testing?

Unit testing gives the best results, without confusion or increased complexity, when the steps
listed below are followed:

o Test cases must be independent, so that if there is any change or enhancement in a requirement,
the other test cases are not affected.
o Naming conventions for unit test cases must be clear and consistent.
o During unit testing, the identified bugs must be fixed before jumping to the next phase of the
SDLC.
o Only one unit of code should be tested at a time.
o Write test cases along with the code; if this is not done, the number of untested execution paths
increases.
o If there are changes in the code of any module, ensure that a corresponding unit test is available
for that module.

Advantages and disadvantages of unit testing

The pros and cons of unit testing are as follows:

Advantages
o Unit testing uses a modular approach, due to which any part can be tested without waiting for
the testing of other parts to be completed.
o The developing team focuses on the provided functionality of the unit and how the functionality
should look in unit test suites, in order to understand the unit API.
o Unit testing allows the developer to refactor code after a number of days and ensure that the
module still works without any defect.

Disadvantages
o It cannot identify integration or broader-level errors, as it works on units of the code.
o In unit testing, evaluation of all execution paths is not possible, so unit testing is not able
to catch each and every error in a program.
o It is best used in conjunction with other testing activities.

Integration testing

Integration testing is the second level of the software testing process and comes after unit
testing. In this testing, units or individual components of the software are tested in a group. The
focus of the integration testing level is to expose defects at the time of interaction between
integrated components or units.

Unit testing uses modules for testing purposes, and these modules are combined and tested in
integration testing. The software is developed with a number of software modules that are coded by
different coders or programmers. The goal of integration testing is to check the correctness of
communication among all the modules.

Once all the components or modules are working independently, checking the data flow between the
dependent modules is what is known as integration testing.

Let us see one sample example of a banking application's amount transfer feature.

o First, we will log in as user P, go to amount transfer, and send an amount of Rs200; the
confirmation message should be displayed on the screen as amount transferred successfully. Now log
out as P, log in as user Q, go to the amount balance page, and check the balance in that account:
present balance + received balance. If so, the integration test is successful.
o Also, we check whether the balance has reduced by Rs200 in user P's account.
o Click on the transaction; in both P and Q, a message should be displayed regarding the date and
time of the amount transfer.

Guidelines for Integration Testing

o We go for integration testing only after functional testing is completed on each module of the
application.
o We always do integration testing by picking modules one by one, so that a proper sequence is
followed and we don't miss out on any integration scenarios.
o First, determine the test case strategy through which executable test cases can be prepared
according to the test data.
o Examine the structure and architecture of the application, identify the crucial modules so that
they are tested first, and identify all possible scenarios.
o Design test cases to verify each interface in detail.
o Choose input data for test case execution; input data plays a significant role in testing.
o If we find any bugs, communicate the bug reports to the developers, fix the defects, and retest.
o Perform positive and negative integration testing.

Here, positive testing implies that if the total balance is Rs15,000 and we transfer Rs1,500, we
check whether the amount transfer works fine. If it does, the test is a pass.

Negative testing means that if the total balance is Rs15,000 and we transfer Rs20,000, we check
whether the amount transfer occurs or not. If it does not occur, the test is a pass; if it happens,
there is a bug in the code, and we will send it to the development team for fixing.
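
As a hedged sketch of how this positive/negative pair could be automated, the JUnit 5 tests below
assume a hypothetical TransferService whose transfer method returns true when a transfer succeeds;
the class name, constructor, and starting balance are illustrative assumptions, not part of the
original example.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.assertFalse;

class TransferIntegrationTest {

    @Test
    void positiveTransferWithinBalanceSucceeds() {
        // hypothetical service seeded with a balance of Rs15,000
        TransferService service = new TransferService(15000);
        assertTrue(service.transfer("1234", "4311", 1500));   // within balance → pass
    }

    @Test
    void negativeTransferBeyondBalanceIsRejected() {
        TransferService service = new TransferService(15000);
        assertFalse(service.transfer("1234", "4311", 20000)); // exceeds balance → must be rejected
    }
}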

For example: In the Gmail application, the Source could be Compose, Data could be Email and
the Destination could be the Inbox.

Example of integration testing

Let us assume that we have a Gmail application where we perform the integration testing.

First, we will do functional testing on the login page, which includes the various components such
as username, password, submit, and cancel button. Then only we can perform integration testing.

The different integration scenarios are as follows:


Scenario 1:

o First, we log in as user P, click on Compose mail, and perform functional testing for the
specific components.
o Now we click on Send and also check Save Drafts.
o After that, we send a mail to Q and verify the Sent Items folder of P to check whether the sent
mail is there.
o Now, we log out as P, log in as Q, move to the Inbox, and verify whether the mail has arrived.

Scenario 2: We also perform integration testing on the Spam folder. If a particular contact has
been marked as spam, then any mail sent by that user should go to the spam folder and not to the
inbox.

We perform functional testing for all the text fields and every feature, and then perform
integration testing for the related functions. For example, in a user management feature we first
test add user, then list of users, delete user, edit user, and then search user.
Note:

o For some features, we may perform only functional testing, and for some features we perform both
functional and integration testing, based on the feature's requirements.
o Prioritizing is essential, and we should do it at all phases: open the application and select
which feature needs to be tested first, then go to that feature and choose which component must be
tested first, then go to those components and determine which values are to be entered first.
And don't apply the same rule everywhere, because testing logic varies from feature to feature.
o While performing testing, we should test one feature entirely and only then proceed to another
feature.
o Between two features, we may perform only positive integration testing, or both positive and
negative integration testing; this also depends on the feature's needs.

Reason Behind Integration Testing

Although all modules of a software application are already tested in unit testing, errors still
exist due to the following reasons:
1. Each module is designed by an individual software developer whose programming logic may differ
from that of the developers of other modules, so integration testing becomes essential to determine
that the software modules work together.
2. To check the interaction of software modules with the database, whether it is erroneous or not.
3. Requirements can be changed or enhanced at the time of module development. These new
requirements may not be tested at the level of unit testing, hence integration testing becomes
mandatory.
4. Incompatibility between modules of the software could create errors.
5. To test the hardware's compatibility with the software.
6. If exception handling between modules is inadequate, it can create bugs.

Integration Testing Techniques

Any testing technique (Blackbox, Whitebox, and Greybox) can be used for Integration Testing; some
are listed below:

Black Box Testing


o State Transition technique
o Decision Table Technique
o Boundary Value Analysis
o All-pairs Testing
o Cause and Effect Graph
o Equivalence Partitioning
o Error Guessing

White Box Testing


o Data flow testing
o Control Flow Testing
o Branch Coverage Testing
o Decision Coverage Testing

Types of Integration Testing

Integration testing can be classified into two parts:

o Incremental integration testing


o Non-incremental integration testing

Incremental Approach

In the incremental approach, modules are added one by one in ascending order, or according to need.
The selected modules must be logically related. Generally, two or more modules are added and tested
to determine the correctness of their functions. The process continues until all of the modules are
tested successfully.

OR

In this type of testing, there is a strong relationship between the dependent modules. Suppose we
take two or more modules and verify that the data flow between them is working fine. If it is, then
we add more modules and test again.

For example: Suppose we have a Flipkart application; we will perform incremental integration
testing, and the flow of the application would look like this:

Flipkart→ Login→ Home → Search→ Add cart→ Payment → Logout

Incremental integration testing is carried out by the following methods:

o Top-Down approach
o Bottom-Up approach

Top-Down Approach

The top-down testing strategy deals with the process in which higher-level modules are tested with
lower-level modules until all the modules are tested successfully. Major design flaws can be
detected and fixed early because critical modules are tested first. In this method, we add the
modules incrementally, one by one, and check the data flow in the same order.

In the top-down approach, we ensure that the module we are adding is the child of the previous one,
like Child C is a child of Child B, and so on.
Advantages:

o An early prototype is possible.
o Critical modules are tested first, so there are fewer chances of defects.

Disadvantages:

o Identification of defects is difficult.
o Due to the high number of stubs, it gets quite complicated.
o Lower-level modules are tested inadequately.

Bottom-Up Method

The bottom-up testing strategy deals with the process in which lower-level modules are tested with
higher-level modules until all the modules are tested successfully. The top-level critical modules
are tested last, so a defect in them may be found late. In other words, we add the modules from
bottom to top and check the data flow in the same order.

In the bottom-up method, we ensure that the module we are adding is the parent of the previous one.
Advantages

o Identification of defects is easy.
o There is no need to wait for the development of all the modules, which saves time.

Disadvantages

o Critical modules are tested last, due to which defects in them can remain undetected for longer.
o There is no possibility of an early prototype.

In addition to these, there is one more approach, known as hybrid testing.

Hybrid Testing Method

In this approach, both the top-down and bottom-up approaches are combined. Top-level modules are
tested with lower-level modules, and lower-level modules are tested with higher-level modules,
simultaneously. There is less possibility of defects escaping because each module interface is
tested.

Advantages

o The hybrid method provides the features of both the bottom-up and top-down methods.
o It is the most time-saving method.
o It provides complete testing of all modules.

Disadvantages

o This method needs a higher level of concentration, as the process is carried out in both
directions simultaneously.
o It is a complicated method.

Non- incremental integration testing

We go for this method when the data flow is very complex and it is difficult to determine which
module is a parent and which is a child. In such a case, we create the data in one module and check
the data flow against all the other existing modules. Hence, it is also known as the big bang
method.

Big Bang Method

In this approach, testing is done by integrating all modules at once. It is convenient for small
software systems; if it is used for large software systems, identification of defects is difficult.

Since this testing can be done only after the completion of all modules, the testing team has less
time for executing this process, so internally linked interfaces and high-risk critical modules can
be missed easily.
Advantages:

o It is convenient for small size software systems.

Disadvantages:

o Identification of defects is difficult, because finding where an error came from is a problem; we
don't know the source of the bug.
o Small modules are missed easily.
o The time provided for testing is very limited.
o We may miss testing some of the interfaces.

Let us see some examples for a better understanding of non-incremental integration testing, or the
big bang method:

Example 1

In the below example, the development team develops the application and sends it to the CEO of the
testing team. The CEO then logs in to the application, generates a username and password, and sends
a mail to the manager, telling the team to start testing the application.

The manager then manages the usernames and passwords, produces a username and password, and sends
them to the test leads. The test leads in turn send them to the test engineers for further testing
purposes. This order, from the CEO down to the test engineers, is top-down incremental integration
testing.

In the same way, when the test engineers are done with testing, they send a report to the test
leads, who then submit a report to the manager, and the manager sends a report to the CEO. This
process, going from the bottom to the top, is known as bottom-up incremental integration testing.

Example 2

The below example considers the home page of Gmail's Inbox, where we click on the Inbox link and
are moved to the inbox page. Here we have to do non-incremental integration testing, because there
is no parent and child concept.
Note

Stub and driver

The stub is a dummy module that receives data and produces plausible results, performing like a
real module. When data is sent from module P to stub Q, the stub receives the data without
confirming and validating it, and produces the estimated outcome for the given data.

The driver is used to verify the data from P and send it to the stub, and also to check the
expected data from the stub and send it back to P.

The driver is the one that sets up the test environment, takes care of the communication, evaluates
the results, and sends the reports. Stubs and drivers are temporary scaffolding used only during
testing; they are not part of the delivered application.

In white box testing, bottom-up integration testing is ideal because writing drivers is easier.
In black box testing, no preference is given to either approach, as it depends on the application.
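
The following minimal Java sketch shows how a stub and a driver fit together; ModuleP, StubQ, and
DriverP are hypothetical names chosen to match the P and Q modules mentioned above.

// PaymentService is the interface that the real module Q will implement.
interface PaymentService {
    String transfer(String fromAccount, String toAccount, int amount);
}

// Stub: stands in for the unfinished module Q and returns a canned outcome
// without confirming or validating the data it receives.
class StubQ implements PaymentService {
    public String transfer(String from, String to, int amount) {
        return "amount transfer successful";
    }
}

// Module under test, wired to whatever PaymentService it is given.
class ModuleP {
    private final PaymentService service;
    ModuleP(PaymentService service) { this.service = service; }
    String sendAmount(String from, String to, int amount) {
        return service.transfer(from, to, amount);
    }
}

// Driver: sets up the test environment, calls module P, evaluates the
// result, and reports it.
public class DriverP {
    public static void main(String[] args) {
        ModuleP p = new ModuleP(new StubQ());
        String result = p.sendAmount("1234", "4311", 200);
        System.out.println("amount transfer successful".equals(result) ? "PASS" : "FAIL");
    }
}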

System Testing

System testing includes testing of a fully integrated software system. Generally, a computer system
is made with the integration of software (any one piece of software is only a single element of a
computer system). Software is developed in units and then interfaced with other software and
hardware to create a complete computer system. In other words, a computer system consists of a
group of software components that perform various tasks, but software alone cannot perform the
task; it must be interfaced with compatible hardware. System testing is a series of different types
of tests whose purpose is to exercise and examine the full working of an integrated software
computer system against the requirements.

Checking the end-to-end flow of an application, or of the software as a user would, is known as
system testing. In this, we navigate through all the necessary modules of an application and check
whether the end features or the end business flows work fine, testing the product as a whole
system.

It is end-to-end testing where the testing environment is similar to the production environment.

There are four levels of software testing: unit testing, integration testing, system testing, and
acceptance testing. Unit testing is used to test a single unit of software; integration testing is
used to test a group of units of the software; system testing is used to test the whole system; and
acceptance testing is used to test the acceptability of the business requirements. Here we are
discussing system testing, which is the third level of testing.

Hierarchy of Testing Levels

There are mainly two widely used methods for software testing: one is white box testing, which uses
internal coding to design test cases, and the other is black box testing, which uses the GUI or the
user's perspective to develop test cases.

o White box testing


o Black box testing

System testing falls under black box testing, as it includes testing of the external working of the
software. Testing follows the user's perspective to identify minor defects.

System testing includes the following steps.

o Verification of the input functions of the application, to test whether it produces the expected
output or not.
o Testing of the integrated software, including external peripherals, to check the interaction of
the various components with each other.
o Testing of the whole system, end to end.
o Behavior testing of the application from a user's perspective.

Example of System testing


Suppose we open an application, say www.rediff.com, and we see that an advertisement is displayed
at the top of the homepage; it remains there for a few seconds before it disappears. These types of
ads are handled by the Advertisement Management System (AMS). Now, we will perform system testing
for this type of feature.

The application works in the following manner:

o Let's say that Amazon wants to display a promotion ad on January 26 at precisely 10:00 AM on
Rediff's home page for the country India.
o The sales manager logs into the website and creates a request for an advertisement dated for the
above day.
o He/she attaches a file, likely an image file or video file of the ad, and applies.
o The next day, the AMS manager of Rediffmail logs into the application and verifies the awaiting
ad requests.
o The AMS manager sees that Amazon's ad request is pending, and then he/she checks whether space is
available for the particular date and time.
o If space is available, then he/she evaluates the cost of putting up the ad at $15 per second, so
the overall ad cost for 10 seconds is approximately $150.
o The AMS manager clicks on the payment request and sends the estimated value along with the
request for payment to the Amazon manager.
o Then the Amazon manager logs into the ad status, confirms the payment request, makes the payment
as per all the details, and clicks on Submit and Pay.
o As soon as Rediff's AMS manager gets the amount, he/she sets up the advertisement for the
specific date and time on Rediffmail's home page.

The various system test scenarios are as follows:

Scenario 1: The first test is the general scenario, as discussed above. The test engineer does
system testing for the basic situation where the Amazon manager creates a request for the ad and
the ad is displayed at the particular date and time.

Scenario 2: Suppose the Amazon manager feels that the ad space is too expensive and cancels the
request. At the same time, Flipkart requests the ad space for January 26 at 10:00 AM. Since
Amazon's request has been canceled, Flipkart's promotion ad is arranged for January 26 at 10 AM.

After the request and payment have been made, if Amazon changes its mind and is now ready to pay
for January 26 at 10 AM, the slot should not be given, because Flipkart has already taken that
space. Hence, another calendar must open up for Amazon to make its booking.

Scenario 3: In this, first we log in as the AMS manager, click on the Set Price page, and set the
price for ad space on the logout page to $10 per second.

Then we log in as the Amazon manager and select the date and time to put up an ad on the logout
page. The payment should be $100 for 10 seconds of an ad on the Rediffmail logout page.

Suppose we have three different modules, such as Loans, Sales, and Overdraft. These modules are
tested by their assigned test engineers only, because if data flows between these modules or
scenarios, we need to be clear about which module it is going to, and the corresponding test
engineer should check that.

Let us assume that here we are performing system testing on the interest estimation, where the
customer takes the Overdraft for the first time as well as for the second time.

In this particular example, we have the following scenarios:

Scenario 1
o First, we log in as a user, say P, apply for an Overdraft of Rs15,000, click on apply, and log
out.
o After that, we log in as a Manager, approve the Overdraft of P, and log out.
o Again we log in as P and check the Overdraft balance; Rs15,000 should be deposited; then log out.
o Modify the server date to 30 days later.
o Log in as P, check that the Overdraft balance is 15000 + 300 + 200 = 15500, then log out.
o Log in as a Manager, click on Deposit, deposit Rs500, and log out.
o Log in as P, repay the Overdraft amount, and check that the Overdraft balance is zero.
o Apply for an Overdraft in advance of a two-month salary.
o The Manager approves it; the amount is credited, and the interest along with the processing fee
is charged, as for the first time.
o Login user → Homepage [Loan, Sales, Overdraft] → Overdraft page [Amount Overdraft,
Apply Overdraft, Repay Overdraft] →Application
o Login manager → Homepage [Loan, Sales, Overdraft] → Overdraft page [Amount
Overdraft, Apply Overdraft, Repay Overdraft, Approve Overdraft]→ Approve Page
→Approve application.
o Login as user P → Homepage [Loan, Sales, Overdraft] → Overdraft page [Amount
Overdraft, Apply Overdraft, Repay Overdraft] →Approved Overdraft →Amount Overdraft
o Login as user P→Homepage [Loan, Sales, Overdraft] → Overdraft page [Amount Overdraft,
Apply Overdraft, Repay Overdraft] →Repay Overdraft → with process fee + interest
amount.

Scenario 2
Now we test an alternative scenario where the bank provides an offer which says that a customer who
takes Rs45,000 as an Overdraft for the first time will not be charged the processing fee. The
processing fee will, however, not be refunded when the customer takes another overdraft for the
third time.

We have to test the scenario where the customer takes an Overdraft of Rs45,000 for the first time,
and also verify the Overdraft repayment balance after applying for another overdraft for the third
time.

Scenario 3

In this, consider that the application is being used normally by all the clients when, all of a
sudden, the bank decides to reduce the processing fee to Rs100 for new customers. We have to test
the Overdraft for new clients and check whether only Rs100 is being charged.

But then we get a conflict in the requirement: assume a client has applied for an Overdraft of
Rs15,000 with the current processing fee of Rs200, and before the Manager approves it, the bank
decreases the processing fee to Rs100.

Now, we have to test which processing fee is charged for the Overdraft of the pending customer. The
testing team cannot assume anything; they need to communicate with the Business Analyst or the
client and find out what is wanted in such cases.

Therefore, when the customer provides the first set of requirements, we must come up with the
maximum possible scenarios.

Types of System Testing

System testing is divided into more than 50 types, but software testing companies typically use
only some of them. These are listed below:
Regression Testing

Regression testing is performed under system testing to identify whether there is any defect in the
system due to a modification in any other part of the system. It makes sure that any changes done
during the development process have not introduced a new defect, and it also gives assurance that
old defects will not reappear on the addition of new software over time.

Load Testing

Load testing is performed under system testing to clarify whether the system can work under
real-time loads or not.

Functional Testing

Functional testing of a system is performed to find whether there is any missing function in the
system. The tester makes a list of vital functions that should be in the system; missing ones can
be added during functional testing, which should improve the quality of the system.
Recovery Testing

Recovery testing of a system is performed under system testing to confirm the reliability,
trustworthiness, and accountability of the system, all of which rest on the system's ability to
recover. The system should be able to recover from all possible crashes successfully.

In this testing, we test the application to check how well it recovers from crashes or disasters.

Recovery testing contains the following steps:

o Whenever the software crashes, it should not simply vanish but should write a crash log message
or an error log message in which the reason for the crash is mentioned. For
example: C://Program Files/QTP/Crash.log
o It should kill its own process before it vanishes; for instance, in Windows, the Task Manager
shows which processes are running.
o We will introduce a bug and crash the application, which means that someone will tell us how and
when the application will crash; or, from experience, after a few months of working on the product,
we can get to know how and when the application will crash.
o Re-open the application; the application must reopen with its earlier settings.

For example: Suppose we are using the Google Chrome browser. If the power goes off, then when we
switch on the system and re-open Google Chrome, we get a message asking whether we want to start a
new session or restore the previous session. For any developed product, the developer writes a
recovery program that describes why the software or application is crashing, whether the crash log
messages are written, and so on.

Migration Testing

Migration testing is performed to ensure that, if the system needs to be moved to new
infrastructure, it can be moved without any issue.

Usability Testing

The purpose of this testing is to make sure that the system is comfortable for the user and meets
the objectives for what it is supposed to do.

Software and Hardware Testing

This testing of the system intends to check the compatibility of hardware and software. The
hardware configuration must be compatible with the software for it to run without any issue.
Compatibility provides flexibility by enabling interactions between hardware and software.
Why is System Testing Important?

o System testing gives strong assurance of system performance, as it covers the end-to-end
functioning of the system.
o It includes testing of the system's software architecture and business requirements.
o It helps in mitigating live issues and bugs even after production.
o System testing can feed the same data into both an existing system and a new system and then
compare the differences in functionality between added and existing functions, so the user can
understand the benefits of the newly added functions of the system.

Testing Any Application

Here, we are going to test the Gmail application to understand how functional, integration, and
System testing works.

Suppose, we have to test the various modules such as Login, Compose, Draft, Inbox, Sent Item,
Spam, Chat, Help, Logout of Gmail application.

We do functional testing on all modules first, and only then can we perform integration testing and
system testing.

In functional testing, we need at least one module on which to perform functional testing. So here
we take the Compose module and perform functional testing on it.
Compose

The different components of the Compose module are To, CC, BCC, Subject, Attachment, Body,
Sent, Save to Draft, Close.

o First, we will do functional testing on the To component.
o For the CC & BCC components, we will take the same input as for the To component.

o Maximum characters
o Minimum characters
o Flash files (GIF)
o Smileys
o Format
o Blank
o Copy & paste
o Hyperlink
o Signature

o For the Attachment component, we will take the help of the below scenarios and test the
component.
o Maximum file size
o Different file formats
o Total no. of files
o Attach multiple files at the same time
o Drag & drop
o No attachment
o Delete attachment
o Cancel uploading
o View attachment
o Browse from different locations
o Attach opened files
o For the Sent component, we will fill in all the fields and click on the Send button, and the
confirmation message Message sent successfully must be displayed.
o For the Save to Drafts component, we will fill in all the fields and click on Save to Drafts,
and the confirmation message must be displayed.
o For the Cancel component, we will fill in all the fields and click on the Cancel button, and
the window will be closed, or moved to Save to Draft, or all fields must be reset.
Once we are done performing functional testing on the Compose module, we will do integration
testing on the Gmail application's various modules:

Login

o First, we will enter the username and password to log in to the application, and check the
username on the homepage.

Compose

o Compose a mail, send it, and check the mail in Sent Items [sender].
o Compose a mail, send it, and check the mail in the receiver's Inbox.
o Compose a mail, send it to self, and check the mail in the Inbox.
o Compose a mail, click on Save as Draft, and check in the sender's Drafts.
o Compose a mail, send it to an invalid id (in a valid format), and check for an undelivered
message.
o Compose a mail, close it, and check in Drafts.

Inbox

o Select the mail, reply, and check in sent items or receiver Inbox.
o Select the mail in Inbox for reply, Save as Draft and check in the Draft.
o Select the mail then delete it, and check in Trash.

Sent Item

o Select the mail, Sent Item, Reply or Forward, and check in Sent item or receiver inbox.
o Select mail, Sent Item, Reply or Forward, Save as Draft, and verify in the Draft.
o Select mail, delete it, and check in the Trash.

Draft

o Select the email draft, forward and check Sent item or Inbox.
o Select the email draft, delete and verify in Trash.

Chat

o Chat with an offline user and verify that the chat is saved in the receiver's inbox.
o Chat with a user and verify the chat in the chat window.
o Chat with a user and check the chat in the chat history.
Testing Documentation

Testing documentation is the documentation of artifacts that are created before or during the
testing of a software application. Documentation reflects the importance of the processes to the
customer, the individual, and the organization.

Projects that contain all documents have a high level of maturity. Careful documentation can save
the time, effort, and money of the organization.

There are necessary reference documents, which are prepared by every test engineer before starting
the test execution process. Generally, we write the test documents while the developers are busy
writing the code.

Once the test document is ready, the entire test execution process depends on the test document.
The primary objective of writing a test document is to reduce or eliminate any doubts related to
the testing activities.

Types of test document

In software testing, we have various types of test document, which are as follows:

o Test scenarios
o Test case
o Test plan
o Requirement traceability matrix(RTM)
o Test strategy
o Test data
o Bug report
o Test execution report
Test Scenarios

It is a document that defines the multiple ways or combinations of testing the application.
Generally, it is prepared to understand the flow of an application. It does not consist of any
inputs or navigation steps.

For more information about test scenarios, refer to the link below:

https://www.javatpoint.com/test-scenario

Test case

It is a detailed document that describes a step-by-step procedure to test an application. It
consists of the complete navigation steps and inputs and all the scenarios that need to be tested
for the application. We write the test case to maintain consistency, so that every tester follows
the same approach for organizing the test document.

For more information about test cases, refer to the link below:

https://www.javatpoint.com/test-case
Test plan

It is a document that is prepared by the managers or test lead. It consists of all information about the
testing activities. The test plan consists of multiple components such as Objectives, Scope,
Approach, Test Environments, Test methodology, Template, Role & Responsibility, Effort
estimation, Entry and Exit criteria, Schedule, Tools, Defect tracking, Test Deliverable,
Assumption, Risk, and Mitigation Plan or Contingency Plan.

Requirement Traceability Matrix (RTM)

The Requirement Traceability Matrix (RTM) is a document which ensures that all the test cases have
been covered. This document is created before the test execution process to verify that we did not
miss writing any test case for a particular requirement.
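
For illustration, a minimal RTM might look like the following; the requirement and test case IDs
are hypothetical:

Requirement ID    Requirement Description                Test Case IDs
RQ-1.1            FAN accepts only 4 digits              TC-01, TC-02, TC-03
RQ-1.3            Amount accepts a maximum of 4 digits   TC-07, TC-08
RQ-1.4            Transfer button is enabled             TC-09

A requirement row with no test case IDs immediately shows a coverage gap.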

Test strategy

The test strategy is a high-level document used to define the test types (levels) to be executed
for the product, and also to describe what kind of techniques have to be used and which modules are
going to be tested. The Project Manager can approve it. It includes multiple components such as
documentation formats, objectives, test processes, scope, and customer communication strategy. We
cannot modify the test strategy.

Test data

Test data is the data that is prepared before the tests are executed. It is mainly used when we are
implementing the test cases. Mostly, we keep the test data in an Excel sheet and enter it manually
while executing the test cases.

The test data can be used to check the expected result, which means that when the test data is
entered, the expected outcome should match the actual result; it is also used to check the
application's behavior when incorrect input data is entered.
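
For the amount transfer example discussed earlier, a test data sheet might hold rows such as the
following; all values are hypothetical:

Username   Password    FAN     TAN     Amount
userP      pass@123    1234    4311    1500
userQ      pass@456    4311    1234    200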

Bug report

The bug report is a document in which we maintain a summary of all the bugs that occurred during
the testing process. This is a crucial document for both the developers and the test engineers
because, with the help of bug reports, they can easily track defects, report bugs, change the
status of bugs that are fixed successfully, and avoid their repetition in later cycles.
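
For illustration, a typical bug report entry records fields such as: Bug ID, Summary, Steps to
reproduce, Expected result, Actual result, Severity, Priority, Status (e.g., New, Open, Fixed,
Closed), Assigned to, and Build/Release. The exact fields vary with the tool and the organization.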

Test execution report

It is the document prepared by the test leads after the entire test execution process is completed.
The test summary report describes the consistency of the product, and it contains information such
as the modules, the number of test cases written, executed, passed, and failed, and their
percentages. Each module has a separate spreadsheet for its respective results.
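
For illustration, the summary sheet of a test execution report might look like this; the module
names and numbers are hypothetical:

Module     Written   Executed   Pass   Fail   Pass %
Login      20        20         18     2      90%
Compose    35        30         27     3      90%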

Why documentation is needed

If the testing or development team is given software that is not working correctly and was
developed by someone else, then to find the error the team will first need a document. If the
documents are available, the team can quickly find the cause of the error by examining the
documentation. But if the documents are not available, the tester needs to do black box and white
box testing again, which wastes the time and money of the organization. More than that, lack of
documentation becomes a problem for acceptance.

Example

Let's take the real-world example of Microsoft: Microsoft launches every product with proper user
guidelines and documents, which are very explanatory, logically consistent, and easy to understand
for any user. These are among the reasons behind their successful products.

Benefits of using Documentation

o Documentation clarifies the quality of methods and objectives.
o It ensures internal coordination when a customer uses the software application.
o It ensures clarity about the stability of tasks and performance.
o It provides feedback on preventive tasks.
o It provides feedback for the planning cycle.
o It creates objective evidence for the performance of the quality management system.
o If we write the test document, we won't forget the values that we used in the first phase.
o It is also a time-saving process, because we can easily refer to the test document.
o It is also consistent, because we will test with the same values.

The drawback of the test document

o It is a bit tedious, because we have to maintain the modifications requested by the customer and
make parallel changes in the document.
o If the test documentation is not proper, it will reflect on the quality of the application.
o Sometimes it is written by a person who does not have product knowledge.
o Sometimes the cost of the document exceeds its value.

Test Scenario
The test scenario is a detailed document of test cases that covers the end-to-end functionality of
a software application in one-line statements. Each one-line statement is considered a scenario.
The test scenario is a high-level classification of testable requirements. These requirements are
grouped on the basis of the functionality of a module and are obtained from the use cases.

A test scenario implies a detailed testing process due to its many associated test cases. Before
executing the test scenario, the tester has to consider the test cases for each scenario.

In a test scenario, testers need to put themselves in the place of the user, because they test the
software application from the user's point of view. Preparation of scenarios is the most critical
part, and it is necessary to seek advice or help from customers, stakeholders, or developers to
prepare the scenarios.

How to write Test Scenarios

As a tester, follow these steps to create test scenarios:

o Read the requirement documents, such as the BRS (Business Requirement Specification), SRS
(System Requirement Specification), and FRS (Functional Requirement Specification), of the software
which is under test.
o Determine all technical aspects and objectives for each requirement.
o Find all the possible ways in which the user can operate the software.
o Ascertain all the possible scenarios in which the system can be misused, and also consider users
who could be hackers.
o After reading the requirement documents and completing the scheduled analysis, make a list of the
various test scenarios needed to verify each function of the software.
o Once you have listed all the possible test scenarios, create a traceability matrix to find out
whether each and every requirement has a corresponding test scenario or not.
o The supervisor of the project reviews all scenarios. Later, they are evaluated by other
stakeholders of the project.

Features of Test Scenario

o The test scenario is a one-line statement that guides testers through the testing sequence.
o Test scenarios reduce the complexity and repetition of tests for the product.
o A test scenario means talking and thinking through tests in detail, but writing them as one-line
statements.
o It is a thread of operations.
o Test scenarios become more important when the tester does not have enough time to write test
cases, and the team members agree with a detailed one-line scenario.
o The test scenario is a time-saving activity.
o It provides easy maintenance, because the addition and modification of test scenarios are easy
and independent.

Example of Test scenarios

Here we are taking the Gmail application and writing test scenarios for the most commonly used
modules, such as Login, Compose, Inbox, and Trash.

Test scenarios on the Login module

o Enter valid login details (username, password), and check that the home page is displayed.
o Enter an invalid username and password, and check that the home page is not displayed.
o Leave the username and password blank, and check that an error message is displayed.
o Enter valid login details, click on Cancel, and check that the fields are reset.
o Enter an invalid login more than three times, and check that the account is blocked.
o Enter a valid login, and check that the username is displayed on the home screen.

Test scenarios on Compose module

o Check that the user can enter email ids in To, Cc, and Bcc.
o Check that the user can enter multiple email ids in To, Cc, and Bcc.
o Compose a mail, send it, and check for the confirmation message.
o Compose a mail, send it, and check in the sent items of the sender and the inbox of the receiver.
o Compose a mail, send it to an invalid email id (in a valid format), and check for the
undelivered-mail notification in the sender's inbox.
o Compose a mail, discard it, then check for the confirmation message and check in Drafts.
o Compose a mail, click on Save as Draft, and check for the confirmation message.
o Compose a mail, click on Close, and check that it is saved to Drafts.

Test scenarios on Inbox module

o Click on the inbox, and verify that all received mails are displayed and highlighted in the
inbox.
o Check that the latest received mail displays the sender's email id correctly.
o Select a mail, reply and forward it, and check in the sent items of the sender and the inbox of
the receiver.
o Check whether attachments to a mail can be downloaded or not.
o Check that attachments are scanned correctly for viruses before download.
o Select a mail, reply or forward it, save as draft, check for the confirmation message, and check
in the Drafts section.
o Check that all emails marked as read are not highlighted.
o Check that all mail recipients in Cc are visible to all users.
o Check that all email recipients in Bcc are not visible to the other users.
o Select a mail, delete it, and then check in the Trash section.

Test scenario on Trash module

o Open Trash and check that all deleted mails are present.
o Restore a mail from Trash and check in the corresponding module.
o Select a mail from Trash, delete it, and check that the mail is permanently deleted.

Test Case

A test case is defined as a group of conditions under which a tester determines whether a software
application is working as per the customer's requirements or not. Test case design includes
preconditions, a case name, input conditions, and the expected result. A test case is a first-level
action and is derived from test scenarios.
It is a detailed document that contains all possible inputs (positive as well as negative) and the
navigation steps, which are used in the test execution process. Writing test cases is a one-time
effort that can be reused later, at the time of regression testing.

A test case gives detailed information about the testing strategy, testing process, preconditions,
and expected output. Test cases are executed during the testing process to check whether the
software application performs the task it was developed for.

A test case helps the tester in defect reporting by linking the defect to a test case ID. Detailed
test case documentation works as a foolproof guard for the testing team, because if the developer
missed something, it can be caught during execution of these foolproof test cases.

To write a test case, we must have the requirements to derive the inputs, and the test scenarios
must be written first so that we do not miss any features during testing. Then we should have a
test case template to maintain uniformity, so that every test engineer follows the same approach
for preparing the test document.

Generally, we write the test cases while the developers are busy writing the code.

When do we write a test case?

We write the test cases in the following situation:

o When the customer gives the business needs, the developers start developing and say that they
need, for example, 3.5 months to build the product.
o In the meantime, the testing team starts writing the test cases.
o Once written, they are sent to the Test Lead for review.
o When the developers finish developing the product, it is handed over to the testing team.
o While testing the product, the test engineers work from the test case document rather than from
memory, because this keeps testing consistent and independent of the mood or the individual skill
of the test engineer.

Why we write the test cases?

We write the test cases for the following reasons:

o To ensure consistency in test case execution
o To ensure better test coverage
o So that testing depends on the process rather than on a person
o To avoid giving training to every new test engineer on the product

To ensure consistency in test case execution: we look at the test case and test the application in
the same way each time.

To ensure better test coverage: for this, we should cover all possible scenarios and document them,
so that we need not remember all the scenarios again and again.

Testing depends on the process rather than on a person: a test engineer may have tested an
application during the first and second releases and left the company at the time of the third
release. Since that test engineer understood the module and tested the application thoroughly by
deriving many values, it would be difficult for a new person if those values were not recorded.
Hence all the derived values are documented, so that they can be used in the future.

To avoid giving training to every new test engineer on the product: when a test engineer leaves,
he/she leaves with a lot of knowledge and scenarios. Those scenarios should be documented, so that
a new test engineer can test with the given scenarios and can also write new scenarios.

Test case template

The primary purpose of writing a test case is to achieve the efficiency of the application.
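
Since the sample template itself is not reproduced here, a representative layout, reconstructed for
illustration from the fields discussed below, is:

Header: Test Case Name | Release | Pre-condition | Test Data | Severity | Test Case Type

Step No.   Description                 Input   Expected Result         Actual Result   Comments
1          Enter a valid 4-digit FAN   1234    Value accepted          -               -
2          Click on Transfer           -       Confirmation message    -               -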
As we know, the actual result is written after the test case execution, and most of the time it
will be the same as the expected result. But if a test step fails, it will be different. So the
Actual Result field can be skipped, and in the Comments section we can write about the bugs.

Also, the Input field can be removed, and this information can be added to the Description field.

The template discussed above is not a standard one, because it can differ for each company and for
each application, depending on the test engineer and the test lead. But for testing one
application, all the test engineers should follow a single, agreed template.

The test case should be written in simple language, so that a new test engineer can also understand
and execute it.

In the above sample template, the header contains the following:

Step number

It is also essential, because if step number 20 fails, we can document it in the bug report,
prioritize the work accordingly, and decide whether it is a critical bug.

Test case type

It can be a functional, integration, or system test case, and a positive, negative, or combined
positive-and-negative test case.

Release

One release can contain many versions of the release.

Pre-condition

These are the necessary conditions that need to be satisfied by every test engineer before starting
the test execution process. Alternatively, it is the data configuration or the data setup that
needs to be created for the testing.

For example: In an application, we are writing test cases to add users, edit users, and delete
users. The pre-condition is that user A must be added before we can edit or remove that user.

Test data

These are the values or the inputs we need to create as per the pre-condition.

For example: the username, password, and account number of the users.

The test lead may give the test data, like a username or password, to test the application, or the
test engineers may generate the username and password themselves.

Severity

The severity can be major, minor, or critical; the severity in a test case indicates the importance
of that particular test case. The test execution process always depends on the severity of the test
cases.

We can choose the severity based on the module. A module may include many features; even if only
one element is critical, we mark that test case as critical. It depends on the functions for which
we are writing the test case.

The process to write test cases

The process of writing a test case can be divided into the following steps:

System study

In this, we understand the application by looking at the requirements or the SRS given by the
customer.

Identify all scenarios:

o When the product is launched, what are the possible ways the end-user may use the software?
Identify all the possible ways.
o Document all possible scenarios in a document, which is called the test design or high-level
design.
o The test design is a record of all the possible scenarios.
Write test cases

Convert all the identified scenarios to test cases, group the scenarios related to each feature,
prioritize the modules, and write the test cases by applying test case design techniques and using
the standard test case template, meaning the one that has been decided for the project.

Review the test cases

Review the test cases by giving them to the head of the team and, after that, fix the review
feedback given by the reviewer.

Test case approval

After fixing the test case based on the feedback, send it again for the approval.

Store in the test case repository

After the approval of the particular test case, store it in a common place known as the test case
repository.

Software Testing Tools

Software testing tools are required for the betterment of the application or software.

Many tools are available in the market, some open-source and some paid.

A significant difference between open-source and paid tools is that open-source tools often have
limited features, whereas paid or commercial tools have fewer limitations on features. The
selection of a tool depends on the user's requirements, whether it is paid or free.

Software testing tools can be categorized by licensing (paid or commercial, open-source),
technology used, type of testing, and so on.

With the help of testing tools, we can improve software performance, deliver a high-quality
product, and reduce the testing time that would otherwise be spent on manual effort.

The software testing tools can be divided into the following:

o Test management tool


o Bug tracking tool
o Automated testing tool
o Performance testing tool
o Cross-browser testing tool
o Integration testing tool
o Unit testing tool
o Mobile/android testing tool
o GUI testing tool
o Security testing tool

Test management tool

Test management tools are used to keep track of all testing activity, provide fast data analysis,
manage manual and automation test cases and various environments, and plan and maintain manual
testing as well.

Bug tracking tool

The defect tracking tool is used to keep track of bug fixes and ensure the delivery of a quality
product. This tool helps us find bugs in the testing stage so that defect-free software reaches the
production server. With the help of these tools, end-users can also report bugs and issues directly
from their applications.

Automation testing tool

This type of tool is used to enhance the productivity of the product and improve accuracy. We can
reduce the time and cost of testing the application by writing test scripts in a programming
language.

Performance testing tool

Performance or load testing tools are used to check the load, stability, and scalability of the
application. When n number of users use the application at the same time and the application
crashes because of the immense load, we need load testing tools to get through this type of issue.

Cross-browser testing tool

This type of tool is used when we need to compare a web application across various web browser
platforms. It is an important part of developing a project. With the help of these tools, we ensure
consistent behavior of the application across multiple devices, browsers, and platforms.

Integration testing tool

This type of tool is used to test the interfaces between modules and find the critical bugs that
happen because of the interaction of different modules, ensuring that all the modules work as per
the client's requirements.
Unit testing tool

This testing tool helps programmers improve their code quality, and with the help of these tools,
they can reduce coding time and the overall cost of the software.

Mobile/android testing tool

We use this type of tool when testing a mobile application. Some of the tools are open-source, and
some are licensed. Each tool has its own functionality and features.

GUI testing tool

A GUI testing tool is used to test the user interface of the application, because a proper GUI
(graphical user interface) is always useful for grabbing the user's attention. These types of tools
help find the loopholes in the application's design and make it better.

Security testing tool

The security testing tool is used to ensure the security of the software and check for security
leaks. If any security loophole is found, it can be fixed at an early stage of the product. We need
this type of tool to verify that security-sensitive parts of the software are not accessible to
unauthorized users.

Software Maintenance

Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to
modify and update the software application after delivery, to correct errors and improve
performance. Software is a model of the real world; when the real world changes, the software
requires alteration wherever possible.

Software maintenance is an inclusive activity that includes error corrections, enhancement of
capabilities, deletion of obsolete capabilities, and optimization.

Need for Maintenance

Software maintenance is needed:

o To correct errors
o To adapt to changes in user requirements over time
o To adapt to changing hardware/software requirements
o To improve system efficiency
o To optimize the code to run faster
o To modify components
o To reduce any unwanted side effects

Thus the maintenance is required to ensure that the system continues to satisfy user requirements.

Types of Software Maintenance

1. Corrective Maintenance

Corrective maintenance aims to correct any remaining errors, regardless of where they occur: in the
specifications, design, coding, testing, documentation, etc.

2. Adaptive Maintenance

It involves modifying the software to match changes in an ever-changing environment.

3. Preventive Maintenance

It is the process by which we prevent our system from being obsolete. It involves the concept of
reengineering & reverse engineering in which an old system with old technology is re-engineered
using new technology. This maintenance prevents the system from dying out.

4. Perfective Maintenance

It means improving processing efficiency or performance, or restructuring the software to enhance
changeability. This may include enhancement of existing system functionality, improvement in
computational efficiency, etc.

Causes of Software Maintenance Problems

Lack of Traceability

o Code is rarely traceable to the requirements and design specifications.
o This makes it very difficult for a programmer to detect and correct a critical defect affecting
customer operations.
o Like a detective, the programmer pores over the program looking for clues.
o Life cycle documents are not always produced, even as part of a development project.

Lack of Code Comments

o Most software system code lacks adequate comments, and sparse comments are of little help
when trying to understand and modify the code.

Obsolete Legacy Systems


o In most countries worldwide, the legacy systems that provide the backbone of the
nation's critical industries, e.g., telecommunications, medical, transportation, and utility
services, were not designed with maintenance in mind.
o They were not expected to last for a quarter of a century or more!
o As a consequence, the code supporting these systems is devoid of traceability to the
requirements and of compliance with design and programming standards, and often includes
dead, extra, and uncommented code, all of which make the maintenance task next to impossible.

Software Maintenance Process

Program Understanding

The first step consists of analyzing the program in order to understand it.


Generating a Particular Maintenance Proposal

The second step consists of generating a particular maintenance proposal to accomplish the
implementation of the maintenance goals.

Ripple Effect

The third step consists of accounting for all of the ripple effects as a consequence of program
modifications.

Modified Program Testing

The fourth step consists of testing the modified program to ensure that the revised application has at
least the same reliability level as before.

Maintainability

Each of these four steps and its associated software quality attributes is critical to the maintenance
process. All of these steps must be combined so that the system remains maintainable.

Software Maintenance Cost Factors

There are two types of cost factors involved in software maintenance. These are

o Non-Technical Factors
o Technical Factors

Non-Technical Factors
1. Application Domain

o If the application domain of the program is well defined and well understood, the system
requirements may be definitive, and maintenance due to changing needs is minimized.
o If the application is entirely new, it is likely that the initial requirements will be modified
frequently as users gain experience with the system.

2. Staff Stability

o It is easier for the original writer of a program to understand and change it than for some
other person, who must understand the program by studying its reports and code listings.
o If the implementers of a system also maintain that system, maintenance costs will be reduced.
o In practice, the nature of the programming profession is such that people change jobs
regularly; it is unusual for one person to develop and maintain an application throughout its
useful life.

3. Program Lifetime

o Programs become obsolete when their function becomes obsolete, or when their original
hardware is replaced and conversion costs exceed rewriting costs.

4. Dependence on External Environment

o If an application is dependent on its external environment, it must be modified as that
environment changes.
o For example:
o Changes in a taxation system might require payroll, accounting, and stock control programs
to be modified.
o Taxation changes are fairly frequent, and maintenance costs for these programs are
associated with the frequency of these changes.
o A program used in mathematical applications, by contrast, does not typically depend on
humans changing the assumptions on which the program is based.

5. Hardware Stability

o If an application is designed to operate on a specific hardware configuration and that
configuration does not change during the program's lifetime, no maintenance costs due to
hardware changes will be incurred.
o Hardware development is so rapid, however, that this situation is rare.
o The application must usually be changed to use new hardware that replaces obsolete equipment.

Technical Factors

Technical Factors include the following:

Module Independence

It should be possible to change one program unit of a system without affecting any other unit.
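The hypothetical sketch below shows what module independence looks like in code: client code
depends only on a small public interface, so the module's internal representation can be changed
without affecting any other unit.

# Module independence sketch: callers use only the public interface,
# so the internal representation can change without rippling outward.
class InventoryModule:
    """Hypothetical module with a narrow public interface."""

    def __init__(self):
        self._items = {}          # internal detail, free to change

    def add(self, name, qty):     # public interface
        self._items[name] = self._items.get(name, 0) + qty

    def quantity(self, name):     # public interface
        return self._items.get(name, 0)

# Client code touches only add() and quantity(); swapping the dict
# for a database later would not require changing this caller.
inv = InventoryModule()
inv.add("bolt", 5)
print(inv.quantity("bolt"))  # 5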

Programming Language

Programs written in a high-level programming language are generally easier to understand than
programs written in a low-level language.

Programming Style

The style in which a program is written contributes to its understandability and hence to the ease
with which it can be modified.
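A small hypothetical illustration: both functions below compute the same payroll total, but the
second one's naming, layout, and named constants make it far easier to understand and modify.

# Two equivalent functions: style alone decides maintainability.

# Poor style: cryptic names, no structure, magic numbers.
def f(a):
    r=0
    for x in a:
        if x>40: r+=(x-40)*1.5+40
        else: r+=x
    return r

# Good style: descriptive names, comments, named constants.
OVERTIME_THRESHOLD = 40
OVERTIME_RATE = 1.5

def total_paid_hours(weekly_hours):
    """Sum hours per week, crediting overtime at 1.5x."""
    total = 0
    for hours in weekly_hours:
        if hours > OVERTIME_THRESHOLD:
            overtime = (hours - OVERTIME_THRESHOLD) * OVERTIME_RATE
            total += OVERTIME_THRESHOLD + overtime
        else:
            total += hours
    return total

# Both versions produce the same result.
assert f([45, 30]) == total_paid_hours([45, 30])
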
Program Validation and Testing

o Generally, the more time and effort spent on design validation and program testing, the
fewer the bugs in the program and, consequently, the lower the maintenance costs resulting
from bug correction.
o Maintenance costs due to bug correction are governed by the type of fault to be repaired.
o Coding errors are generally relatively cheap to correct; design errors are more expensive, as
they may involve the rewriting of one or more program units.
o Bugs in the software requirements are usually the most expensive to correct, because of the
drastic redesign that is generally involved.

Documentation

o If a program is supported by clear, complete yet concise documentation, the task of
understanding the application can be relatively straightforward.
o Program maintenance costs tend to be lower for well-documented systems than for systems
supplied with inadequate or incomplete documentation.

Configuration Management Techniques

o One of the essential costs of maintenance is keeping track of all system documents and
ensuring that these are kept consistent.
o Effective configuration management can help control these costs.

Software Re-engineering

When we need to update software to keep it current with the market, without impacting its
functionality, it is called software re-engineering. It is a thorough process in which the design of the
software is changed and programs are re-written.
Legacy software cannot keep pace with the latest technology available in the market. As the
hardware becomes obsolete, updating the software becomes a headache. Even as software grows old
with time, its functionality does not.
For example, Unix was initially developed in assembly language. When the C language came into
existence, Unix was re-engineered in C, because working in assembly language was difficult.
Apart from this, programmers sometimes notice that a few parts of the software need more
maintenance than others, and that these parts too need re-engineering.
Re-Engineering Process

• Decide what to re-engineer. Is it the whole software or only a part of it?
• Perform Reverse Engineering, in order to obtain specifications of the existing software.
• Restructure the Program if required, for example by changing function-oriented programs into
object-oriented programs (see the sketch after this list).
• Re-structure data as required.
• Apply Forward Engineering concepts in order to obtain the re-engineered software.
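As referenced in the list above, the following hypothetical sketch restructures a function-oriented
fragment into an object-oriented one; the behavior is preserved while data and operations are
brought together.

# Restructuring sketch: function-oriented code (data passed around
# separately) rewritten as an object-oriented class.

# Function-oriented original: state lives in a plain dict.
def make_account(owner):
    return {"owner": owner, "balance": 0}

def deposit(account, amount):
    account["balance"] += amount

# Object-oriented restructuring: data and operations together.
class Account:
    def __init__(self, owner):
        self.owner = owner
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

# Same behavior either way.
a = make_account("asha"); deposit(a, 100)
b = Account("asha"); b.deposit(100)
assert a["balance"] == b.balance == 100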
There are a few important terms used in software re-engineering:

Reverse Engineering

It is a process to recover the system specification by thoroughly analyzing and understanding the
existing system. This process can be seen as a reverse SDLC model, i.e., we try to reach a higher
abstraction level by analyzing lower abstraction levels.
An existing system is a previously implemented design about which we may know nothing. Designers
then do reverse engineering by looking at the code and trying to recover the design. With the design
in hand, they try to conclude the specifications. Thus, reverse engineering goes from code to system
specification.
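As a toy illustration of moving from code back toward a specification, the sketch below uses Python's
standard inspect module to recover a crude interface description (signatures and docstrings) from
existing code; real reverse-engineering tools recover far richer design information.

# Toy reverse-engineering aid: recover a crude interface description
# (signatures and docstrings) from existing code using the standard
# library's inspect module.
import inspect

def transfer(source, target, amount):
    """Move amount from the source account to the target account."""
    ...

# Walk the module's names and report every function's interface.
for name, obj in list(globals().items()):
    if inspect.isfunction(obj):
        print(f"{name}{inspect.signature(obj)} -- {inspect.getdoc(obj)}")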

Program Restructuring
It is a process to re-structure and re-construct the existing software. It is all about re-arranging the
source code, either in the same programming language or from one programming language to a
different one. Restructuring can involve source-code restructuring, data restructuring, or both.
Re-structuring does not impact the functionality of the software, but it enhances reliability and
maintainability. Program components that cause errors very frequently can be changed or updated
through re-structuring.
The dependency of the software on an obsolete hardware platform can also be removed via
re-structuring.
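A hypothetical sketch of source-code restructuring: duplicated validation logic is extracted into a
single helper, so a future fix is made in exactly one place, while the program's observable behavior
is unchanged.

# Restructuring sketch: duplicated validation logic is extracted into
# one helper; functionality is unchanged, maintainability improves.

# Before: the same check is copy-pasted in two places.
def register_before(email):
    if "@" not in email or "." not in email:
        raise ValueError("bad email")
    print("registered", email)

def invite_before(email):
    if "@" not in email or "." not in email:
        raise ValueError("bad email")
    print("invited", email)

# After: one shared helper; a future fix is made exactly once.
def _validate_email(email):
    if "@" not in email or "." not in email:
        raise ValueError("bad email")

def register_after(email):
    _validate_email(email)
    print("registered", email)

def invite_after(email):
    _validate_email(email)
    print("invited", email)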

Forward Engineering

Forward engineering is the process of obtaining the desired software from the specifications in hand,
which were derived by means of reverse engineering. It assumes that some software engineering has
already been done in the past.
Forward engineering is the same as the software engineering process, with only one difference: it is
always carried out after reverse engineering.

Component reusability

A component is a part of a software program's code that executes an independent task in the
system. It can be a small module or a sub-system in itself.

Example

The login procedure used on the web can be considered a component; the printing system in
software can likewise be seen as a component of the software.
Components have high cohesion of functionality and a low rate of coupling, i.e., they work
independently and can perform tasks without depending on other modules.
In OOP, objects are designed to be very specific to their concern and therefore have fewer chances of
being reused in other software.
In modular programming, the modules are coded to perform specific tasks and can be reused across a
number of other software programs.
There is an entire discipline based on the reuse of software components, known as
Component-Based Software Engineering (CBSE).
Re-use can be done at various levels:
• Application level - where an entire application is used as a sub-system of new software.
• Component level - where a sub-system of an application is used.
• Module level - where functional modules are re-used.
Software components provide interfaces, which can be used to establish communication
among different components.
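The hypothetical sketch below shows such a component in miniature: a self-contained login component
that exposes a single interface method and depends on nothing else in the system, so other programs
can reuse it as-is.

# Component sketch: a self-contained login component exposing a
# single interface; it has no dependency on the rest of the system,
# so other programs can reuse it unchanged.
class LoginComponent:
    """Hypothetical reusable component with one public interface."""

    def __init__(self, credentials):
        # credentials: mapping of username -> password
        self._credentials = dict(credentials)

    def authenticate(self, username, password):
        """Interface used by any host application."""
        return self._credentials.get(username) == password

# Reuse in one application...
login = LoginComponent({"asha": "secret"})
print(login.authenticate("asha", "secret"))       # True
# ...and, unchanged, in another with different data.
admin_login = LoginComponent({"root": "t0ps3cret"})
print(admin_login.authenticate("root", "wrong"))  # False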

Reuse Process

Two kinds of method can be adopted: either keep the requirements the same and adjust the
components, or keep the components the same and modify the requirements.

• Requirement Specification - The functional and non-functional requirements with which a
software product must comply are specified, with the help of an existing system, user input, or
both.
• Design - This is also a standard SDLC process step, where the requirements are defined in
terms of software parlance. The basic architecture of the system as a whole and of its
sub-systems is created.
• Specify Components - By studying the software design, the designers segregate the entire
system into smaller components or sub-systems. One complete software design turns into a
collection of a huge set of components working together.
• Search Suitable Components - Designers refer to the software component repository to
search for matching components, on the basis of functionality and the intended software
requirements.
• Incorporate Components - All matched components are packed together to shape the
complete software (see the sketch below).
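A closing hypothetical sketch of the incorporation step: independently built components are wired
together through their interfaces to shape the complete software.

# Incorporation sketch: independently built components are composed
# through their interfaces into one application.
class AuthComponent:
    def login(self, user):
        return user == "asha"          # stand-in credential check

class ReportComponent:
    def generate(self, user):
        return f"report for {user}"

class Application:
    """Complete software shaped from matched components."""
    def __init__(self, auth, reports):
        self.auth = auth               # components are injected,
        self.reports = reports         # not hard-wired

    def run(self, user):
        if self.auth.login(user):
            return self.reports.generate(user)
        return "access denied"

app = Application(AuthComponent(), ReportComponent())
print(app.run("asha"))   # report for asha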
