
SOFTWARE

ENGINEERING
(PC 501 CS)
AICTEM-OU
A.Y 2021-2022

Lecture Notes
by
Er Sandeep Ravikanti
Assistant Professor
Computer Science & Engineering
Methodist College of Engineering & Technology

Disclaimer: The contents in this document are reproduced based on information collected from software engineering textbooks, digital resources and other materials. You may make use of this material for theory exam preparation.

UNIT I
Introduction to Software Engineering: A Generic view of Process: Software Engineering, Process
Framework, CMM, Process Patterns, Process Assessment.
Process Models: Prescriptive Models, Waterfall Model, Incremental Process Models, Evolutionary Process
Models, Specialized Process Models, The Unified Process Model, Personal and Team Process Models, Process
Technology, Product and Process.
An Agile view of Process: Introduction to Agility and Agile Process, Agile Process Models.

Software: “It is defined as an organized set of instructions to deliver a desired output by considering
various processes and functions.”
Nowadays, software acts both as a product (Application Software) and as a vehicle for delivering a product
(System Software).
 As a product, software delivers the computing potential embodied by computer hardware or by a network of computers accessible by local hardware.
 As the vehicle used to deliver a product, software acts as the basis for the control of the computer (operating systems), the communication of information (networks) and the creation and control of other programs (software tools and environments).
Software Development: “It is a creative activity, where a software system is developed from initial concept
through a working system”.
Software Maintenance: “Process of changing a developed system once it is delivered”.
Software Evolution: “Evolutionary process where software is conceptually changed over its lifetime in
response to the changing requirements”.

Software Characteristics:
1. Software is developed or engineered; it is not manufactured.
2. Software doesn't "wear out". Sometimes, it may deteriorate with too many changes.
3. Most software continues to be custom built, although the industry is moving toward component-based assembly. [As software is both a product and a vehicle to carry a product.]

Types of Software Applications: Various types of software include:


1) System Software: “A collection of programs written to service other programs.”
2) Application Software: “It consists of stand-alone programs that solve a specific business need.”
3) Engineering/Scientific Software: "Characterized by numerical algorithms." Ex: Astronomy etc.
4) Embedded Software: “Resides within a product/system to implement and control features and functions
for end user.” (Ex: keypad control for oven.)
5) Web-Applications: "A set of linked hypertext files which provide e-commerce and B2B (Business to Business) applications."
6) AI Software: “Makes use of non-numerical algorithms to solve complex problems.” Ex: Robotics.
7) Product-line Software: "Capable of being used by many different customers." (Ex: DBMS in BSE.)
8) Open Source: “Available for all customers and can be modified.”
9) Net Sourcing: "The WWW is used as a content provider, so software engineers can develop both simple and sophisticated applications."
10) Ubiquitous Computing: “Allows small devices to communicate across vast networks.”


A GENERIC VIEW OF PROCESS


Software Engineering-A Layered Technology: According to IEEE, "Software Engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; i.e., the application of engineering to software."

Software engineering is a layered technology. The foundation for software engineering is the process
layer. The software engineering process is the glue that holds the technology layers together and enables
timely development of software. Process defines a framework that must be established for effective delivery
of software engineering technology. The software process forms the basis for management control of software
projects and establishes the context in which technical methods are applied, work products (models,
documents, data, reports, forms, etc.) are produced, milestones are established, quality is ensured, and change
is properly managed.
Software engineering methods provide the technical how-to’s for building software. Methods
encompass a broad array of tasks that include communication, requirements analysis, design modeling,
program construction, testing, and support.
Software engineering tools provide automated or semi-automated support for the process and the methods.
When tools are integrated so that information created by one tool can be used by another, a system for the
support of software development, called Computer Aided Software Engineering (CASE), is established.

Fig: Software Engineering Layers

A Process Framework: “It establishes a foundation for complete software process by identifying a small
number of framework activities and umbrella activities that are applicable to all software projects.”
 Each framework activity is populated by a set of software engineering actions (Ex: Design), each a collection of related tasks.
 Each action is populated with individual work tasks.
 Software is determinate if the order and timing of inputs, processing and outputs are predictable; otherwise it is referred to as indeterminate.
Framework Activities:
1. Communication: It involves heavy communication and collaboration with customer and other
stakeholders. It encompasses requirements gathering and related activities.
2. Planning: It plans for Software Engineering work that follows. It describes technical tasks to be
conducted, resources that will be required, likely risks, work products to be produced and a work schedule.
3. Modeling: This activity focuses on creation of models that allow stakeholders (customer, developer) to better understand software requirements and the design that will achieve those requirements.


4. Construction: It combines code generation and testing to uncover errors in the code. (Manual, automated
actions).
5. Deployment: The software (a completed or partial increment) is delivered to the customer for evaluation and feedback.

Fig: Process Framework

Umbrella Activities:
1. Software Project Tracking and Control – Assess progress against the project plan and maintain schedule.
2. Risk Management – Assesses risks that may affect the project outcome or the quality of the product.
3. Software Quality Assurance – Activities to ensure software quality.
4. Formal Technical Reviews - Assess work products to uncover and remove errors before next action.
5. Software Configuration Management – Manages effects of changes throughout the software process.
6. Measurement – Defines and collects process, project, product measures to meet customer requirements.
7. Reusability Management – Defines criteria for work product reuse and establishes mechanisms to
achieve reusable components.
8. Work Product Preparation and Production – Focuses on activities required to create work product such
as models, documents, logs, forms and lists.


Product and Process: If the process is weak, the end products will undoubtedly suffer. But an obsessive over-reliance on process is also dangerous.

Process Technology: "Process technology tools are used to help software organizations analyze their current process, organize work tasks, control and monitor progress, and manage technical quality".

Process Assessment: It is required to ensure that the process meets a set of basic criteria that are needed for successful software engineering.
Approaches:
 CMM-Based Appraisal for Internal Process Improvement (CBA IPI): Provides a diagnostic technique for assessing the relative maturity of a software organization; it uses the SEI CMM as the basis for the assessment.
 SPICE: Software Process Improvement and Capability dEtermination.
 SCAMPI: Standard CMMI Assessment Method for Process Improvement. It provides a five step process
assessment model that includes: Initiating, Diagnosing, Establishing, Acting, and Learning.
 ISO 9001:2000 for software: A generic standard to improve the overall quality of the products, systems and services of a software organization. It adopts a plan-do-check-act cycle for continuous process improvement.

CMMI
Capability Maturity Model Integration (CMMI) is a maturity model used to rank software development organizations. It was proposed by the Software Engineering Institute (SEI). It represents the process model in two ways:
1. Continuous Model
2. Staged Model

CMM: "It is a maturity framework that focuses on continuously improving the development and management of the organization's workforce."
Capability Levels: The Capability levels depend on Key Process Areas (KPA). These are given as:
1. Level 0 (Incomplete): The process area (Ex: Requirements Management) is not performed or does not achieve all goals and objectives defined by the CMMI for level 1 capability.
2. Level 1 (Initial/Performed): All specific goals of process area have been satisfied. Work tasks required
to produce defined work products are being conducted.
3. Level 2 (Repeatable/Managed): All level 1 criteria have been satisfied.
 All work related to the process area conforms to organization expectations.
 All people doing the work have access to adequate resources to get the job done; stakeholders are actively involved as required.
 Work task and work products are “monitored, controlled, reviewed and evaluated”.
4. Level 3 (Defined): All level 2 criteria are achieved. “The process is tailored from organization’s set of
standard processes and contributes work products, measures and other process-improvement information to the organizational process assets".
5. Level 4 (Quantitatively managed): All level 3 criteria are satisfied. Quantitative objectives for quality
and process performance are established and used as criteria in managing the process.
6. Level 5 (Optimized): All level 4 criteria are satisfied. The process area (PA) is adapted and optimized using quantitative (statistical) means to meet changing customer needs and to continually improve the efficacy of the process area.


SEI has associated key process areas (KPAs) with each of the maturity levels. KPAs describe those software
engineering functions (e.g., software project planning, requirements management) that must be present to
satisfy good practice at a particular level. Each KPA is described by identifying the following characteristics:
 Goals—the overall objectives that the KPA must achieve.
 Commitments—requirements (imposed on the organization) that must be met to achieve the goals or
provide proof of intent to comply with the goals.
 Abilities—those things that must be in place (organizationally and technically) to enable the organization
to meet the commitments.
 Activities—the specific tasks required to achieve the KPA function.
 Methods for monitoring implementation—the manner in which the activities are monitored as they are put
into place.
 Methods for verifying implementation—the manner in which proper practice for the KPA can be verified.

Eighteen KPAs are defined across the maturity model and mapped into different levels of process maturity.
The following KPAs should be achieved at each process maturity level:

Process maturity level 2


 Software configuration management
 Software quality assurance
 Software subcontract management
 Software project tracking and oversight
 Software project planning
 Requirements management
Process maturity level 3
 Peer reviews
 Intergroup coordination
 Software product engineering


 Integrated software management


 Training program
 Organization process definition
 Organization process focus
Process maturity level 4
 Software quality management
 Quantitative process management
Process maturity level 5
 Process change management
 Technology change management
 Defect prevention

Each of the KPAs is defined by a set of key practices that contribute to satisfying its goals. The key practices
are policies, procedures, and activities that must occur before a key process area has been fully instituted. The
SEI defines key indicators as “those key practices or components of key practices that offer the greatest
insight into whether the goals of a key process area have been achieved”.

CMMI defines each process area in terms of:


 Specific Goals: Essential characteristics which must exist in all the activities implied by a given Process
Area.
 Specific Practices: Set of tasks to be accomplished to achieve specific goals. Ex: SG and SP for “Project
Planning”
SG1: Establish Estimates
SP 1.1 – Establish Project Scope
SP 1.2 – Establish Work Product Estimates
SP 1.3 – Define Project Life Cycle
SP 1.4 – Determine Estimates of Effort and Cost
SG2: Develop a Project Plan
SG3: Obtain commitment to the plan
 Generic Goals: These are used to achieve a particular capability level.
 Generic Practices: Practices that must be achieved to attain the corresponding generic goal.

PROCESS PATTERNS: A software process can be described as a collection of patterns that define a set of activities, work tasks, work products and related behaviors. The template used to describe a process pattern contains:
 Pattern Name: It should describe function within software process. (Ex: Customer-Communication).
 Intent: Purpose of pattern. Can be explained with diagrams.
 Type: Three types of patterns are there:
1. Task Patterns: Define a software engineering action or work task that is part of the process and relevant to successful software engineering practice.
2. Stage Patterns: Define "a framework activity with multiple work tasks in it" for the process. Ex: Communication (contains requirements gathering).
3. Phase Patterns: Define "a sequence of framework activities that occur within the process"; may be iterative. Ex: Spiral Model.
 Initial Context: The conditions under which the pattern applies are described.
 Problem: The problem to be solved by the pattern is described.
 Resulting Context: Conditions that result after implementation.
 Related Patterns: List of process patterns that are related.
 Known Uses/Examples: Instances where patterns are applicable.


PERSONAL AND TEAM PROCESS MODELS: These models were proposed by Watts Humphrey. Each software engineer can create a process that best fits his or her needs. A team can also create a process that meets the narrower needs of individuals and the broader needs of the organization.
Personal Software Process (PSP): Every developer uses some process to build computer software. That process may be ad hoc, may change daily and may not be efficient. The PSP process model defines 5 framework activities:
1. Planning
2. High-Level Design
3. High-Level Design Review
4. Development
5. Post-mortem.
Team Software Process (TSP): An ideal software team consists of 3 to 20 software engineers (Integrated Product Teams). Its goal is to build a "self-directed" project team that produces high-quality software.
Objectives:
1. Build self-directed teams that plan and track their work, establish goals and own their processes and plans.
2. Show managers how to motivate their teams and sustain peak performance.
3. Accelerate software process improvement to achieve CMM level-5 targets.
4. Provide improvement guidance to high-maturity organizations.
Framework Activities:
1. Launch (Communication and Planning).
2. High-level Design.
3. Implementation.
4. Integration and Test.
5. Post-mortem.
Note: TSP uses scripts (sequence of Tasks), forms, and standards to guide team members.

PROCESS MODELS
PRESCRIPTIVE MODELS
Prescriptive models prescribe a set of process elements such as framework activities, software engineering actions, tasks, work products, quality assurance and change control mechanisms. Each process model also prescribes a workflow. Various prescriptive models include:

1) THE WATERFALL MODEL: This model was proposed by Winston Royce. It is also called the classic life cycle. It is the oldest paradigm (model) in Software Engineering.
It suggests a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modeling, construction and deployment.

Fig: Waterfall Model.


The framework activities of the Waterfall model include:


1. Communication: It involves heavy communication and collaboration with customer and other
stakeholders. It encompasses requirements gathering and related activities.
2. Planning: It plans for Software Engineering work that follows. It describes technical tasks to be
conducted, resources that will be required, likely risks, work products to be produced and a work schedule.
3. Modeling: This activity focuses on creation of models that allow stakeholders (customer, developer) to better understand software requirements and the design that will achieve those requirements.
4. Construction: It combines code generation and testing to uncover errors in the code. (Manual, automated
actions).
5. Deployment: The software (a completed or partial increment) is delivered to the customer for evaluation and feedback.

Problems:
 Real projects rarely follow the sequential flow.
 It's often difficult for the customer to state all requirements explicitly.
 A working version of the program(s) will not be available until late in the project time-span, so the customer must have patience.

A variation in the representation of the waterfall model is called the V-model. The V-model depicts the
relationship of quality assurance actions to the actions associated with communication, modeling, and early
construction activities. As the software team moves down the left side of the V, basic problem requirements are
refined into progressively more detailed and technical representations of the problem and its solution. Once
code has been generated, the team moves up the right side of the V, essentially performing a series of tests
(quality assurance actions) that validate each of the models created as the team moved down the left side. In
reality, there is no fundamental difference between the classic life cycle and the V-model. The V-model
provides a way of visualizing how verification and validation actions are applied to earlier engineering work.

Fig: The V – Model


2) INCREMENTAL PROCESS MODEL: "It divides the software development process into a certain number of increments, with each increment comprising the 5 phases of the waterfall model." Each linear sequence produces a "deliverable increment" of the software. Ex: WORD software (entering data, editing, spell check etc.).
The first increment is often a core product, i.e., basic requirements are addressed and supplementary features are not delivered. The core product is used and evaluated by the customer, and based on that evaluation a plan is developed for the next increment. This process is repeated until the complete product is produced.
The framework activities of the Incremental Process model include:
1. Communication: It involves heavy communication and collaboration with customer and other
stakeholders. It encompasses requirements gathering and related activities.
2. Planning: It plans for Software Engineering work that follows. It describes technical tasks to be
conducted, resources that will be required, likely risks, work products to be produced and a work schedule.
3. Modeling: This activity focuses on creation of models that allow stakeholders (customer, developer) to better understand software requirements and the design that will achieve those requirements.
4. Construction: It combines code generation and testing to uncover errors in the code.
5. Deployment: The software (a completed or partial increment) is delivered to the customer for evaluation and feedback.

Figure: Incremental Process model.


Advantages:
 Technical risks are reduced with each increment.
 When team size is small, it is the correct choice.
 Customer can expect a core product in short time-span.

3) RAD MODEL: RAD stands for Rapid Application Development. It is an incremental software process model that emphasizes a short development cycle; it is a high-speed adaptation of the waterfall model. If requirements are well understood and project scope is constrained, the RAD process enables a development team to create a "fully functional system" within a very short time period (60-90 days). Each major function can be addressed by a separate RAD team, and the results are then integrated to form the whole product.
The framework activities of RAD model include:
1. Communication: It involves heavy communication and collaboration with customer and other
stakeholders. It encompasses requirements gathering and related activities.
2. Planning: It plans for Software Engineering work that follows. It describes technical tasks to be
conducted, resources that will be required, likely risks, work products to be produced and a work schedule.


3. Modeling: This activity focuses on creation of models that allow stakeholders (customer, developer) to better understand software requirements and the design that will achieve those requirements.
4. Construction: It combines code generation and testing to uncover errors in the code.
5. Deployment: The software (a completed or partial increment) is delivered to the customer for evaluation and feedback.

Fig: RAD Process model.


Advantages:
 Projects with a 2-3 month deadline can opt for RAD.
 The task is divided among teams to speed up the development process.
Disadvantages:
 RAD may not work for large projects, which require many RAD teams and sufficient human resources.
 If high performance is an issue, RAD may not work.
 RAD may not be appropriate when technical risks are high.

EVOLUTIONARY PROCESS MODELS


These models are specially designed to accommodate a product that evolves over time. These are iterative
and enable software engineers to develop more complete software versions.
1) PROTOTYPING: The prototyping model is used when the customer defines a set of general objectives but does not identify detailed input, processing and output requirements, or when the developer is unsure of the efficiency of an algorithm, the adaptability of an operating system, etc.; in such cases a phased model is inappropriate.
Prototyping model can be used as a standalone process model. Prototyping paradigm assists the
software engineer and customer to better understand what is to be built when requirements are fuzzy.
Prototype helps to identify software requirements.
The prototyping paradigm begins with communication, followed by quick planning of the prototyping iteration, modeling (a quick design) and construction of the prototype; the prototype is then deployed and evaluated by the customer/user. The feedback is used to refine requirements for the software.
Prototype can serve as “the first system”, where users get a feel of actual system and developers get to
build something immediately.


Fig: Prototyping Model


Prototyping can be problematic for the following reasons:
 Developers may not consider overall software quality, as quick "fixes" are applied to satisfy the customer.
 The developer often makes implementation compromises to get a prototype working quickly. The key is to define the rules of the game at the beginning; i.e., the customer and developer must both agree that the prototype is built to serve as a mechanism for defining requirements.

2) THE SPIRAL MODEL: The Spiral model was proposed by Boehm. This model was developed to encompass the best features of the waterfall model and prototyping. It is a risk-driven process model, with risk analysis as a built-in feature.
Features:
1) A cyclic approach for increasing the system's degree of definition and implementation while decreasing its degree of risk; risk is considered as each revolution is made.
2) Anchor point milestones for ensuring stakeholder commitment to feasible and mutually satisfactory system solutions. (A milestone is a combination of work products and conditions.)

The Spiral model may be viewed as a meta-model, as it can accommodate any process development model. Software is developed as a series of evolutionary releases. The project manager adjusts the planned number of iterations required to complete the software. During early iterations a prototype is generated, and during later iterations the complete version is developed.

Fig: Spiral Model


In Spiral model, the first circuit around the spiral might result in the development of a product
specification; subsequent passes around the spiral model might be used to develop a prototype and then
progressively more sophisticated versions of the software. Unlike other process models that end when
software is delivered, the spiral model can be adapted to apply throughout the life of the software.
 The first circuit around the spiral might represent a "Concept Development Project" that starts at the core of the spiral and continues until concept development is complete.
 Then with the next spiral “New Product Development Project” commences. New product will evolve
through a number of iterations around spiral.
 Next circuit around the spiral might be used to represent a “Product Enhancement Project”. The spiral,
when characterized in this way, remains operative until software is retired.
Advantages:
 It is a realistic approach to the development of large scale systems and software. (Software evolves as the
process progresses).
 It enables the developer to apply the prototyping approach at any stage in the evolution of the product.
 Considers technical risks at all the stages of the project, and reduces risks before they become
problematic.
 Like other paradigms, the spiral model is not a panacea (cure-all). It demands considerable risk assessment expertise for success. If a major risk is not uncovered and managed, problems will occur.

3) CONCURRENT DEVELOPMENT MODEL: It is also called "Concurrent Engineering". This model is represented schematically as a series of framework activities, software engineering actions and tasks, and their associated states, all existing concurrently. It strives to have all software development activities implemented concurrently.
 Ex: The "Modeling" activity for the spiral model is accomplished by concurrently invoking prototyping, analysis modeling, specification and design.

Fig: One element of concurrent model development model.


All activities (communication, modeling, construction, etc.) exist concurrently but reside in different states. (A state is an externally observable mode of behavior.) For example, early in a project, the communication activity has completed its first iteration and exists in the awaiting changes state. The modeling activity, which was in the none state, now makes a transition into the under development state.
 This model defines a series of events that trigger transitions from state to state for each of the software engineering activities, actions and tasks.
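These event-triggered transitions can be pictured as a small state machine. A minimal illustrative sketch in Python (the states follow the text above; the event names are assumptions for illustration):

# Each activity moves between externally observable states in response to
# project events (states from the text above; event names are illustrative).
TRANSITIONS = {
    ("none", "analysis model initiated"): "under development",
    ("under development", "inconsistency uncovered"): "awaiting changes",
    ("awaiting changes", "changes resolved"): "under development",
}

def on_event(state, event):
    # Remain in the current state if the event does not apply to it.
    return TRANSITIONS.get((state, event), state)

state = "none"
for event in ("analysis model initiated", "inconsistency uncovered"):
    state = on_event(state, event)
print(state)  # -> awaiting changes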
Advantages:
 It is applicable to all types of software development and provides an accurate picture of the current state of a project.
 The software engineering activities, actions and tasks are defined as a network of activities, rather than as a sequence of events.
Evolutionary Models Drawbacks:
 Prototyping poses a problem to project planning because of the uncertain number of cycles required to construct the product.
 They do not establish the maximum speed of evolution.
 They may not give flexibility and extensibility to the software process.

SPECIALIZED PROCESS MODELS


These models are used when a narrowly defined Software Engineering approach is chosen.
1. COMPONENT-BASED DEVELOPMENT: Commercial Off-The-Shelf (COTS) software components are used when the software is to be built. These components provide targeted functionality with well-defined interfaces that enable them to be integrated into the software.
 It incorporates many characteristics of the spiral model. It is evolutionary in nature and composes applications from pre-packaged (COTS) software components.
Steps:
1. Available component-based products are researched and evaluated for the application domain in question.
2. Component integration issues are considered.
3. Software architecture is designed to accommodate the components.
4. Components are integrated into the architecture.
5. Comprehensive testing is conducted to ensure proper functionality.
Advantage: Software reuse, which is important for producing high-quality software.

2. FORMAL METHODS MODEL: A specialized software development approach that uses mathematically based techniques for specifying, developing and verifying computer software. The formal methods model helps software developers apply rigorous mathematical notation to eliminate issues of incompleteness, inconsistency and ambiguity in the software through mathematical analysis.
During the design phase, the formal methods model acts as a program verifier and helps software engineers detect and correct errors that are otherwise very difficult to detect. This model aims to assure defect-free software.
Drawbacks:
1. Time consuming and expensive.
2. Software Engineers need extensive training to apply this model.
3. Clients needed to be technically sound for proper communication.
 Because of these reasons, formal methods models are used only in the development of high-integrity software applications where safety and security are of utmost importance.
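As a rough, executable flavour of the idea, a function can be annotated with a precondition and a postcondition. The hypothetical Python sketch below checks them with runtime assertions; real formal methods state such properties in mathematical notations (e.g., Z, VDM) and prove them, rather than checking them at run time.

# Hypothetical sketch: a specification stated as pre- and postconditions.
def integer_sqrt(n):
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is the largest integer whose square does not exceed n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(10))  # -> 3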


3. ASPECT-ORIENTED SOFTWARE DEVELOPMENT (AOSD): As modern computer-based systems become more sophisticated and complex, some concerns (security, fault tolerance, memory management, etc.) span the entire architecture. When concerns cut across multiple system functions, features and information, they are referred to as crosscutting concerns. Aspectual requirements define those crosscutting concerns that have an impact across the software architecture.
AOSD, often referred to as Aspect-Oriented Programming (AOP), is a relatively new software
engineering paradigm which provides a process for defining, specifying, designing and constructing aspects
(crosscutting concerns).
Presently there is no distinct aspect-oriented process. If such an approach is developed, it must integrate the characteristics of both the spiral and concurrent models, because of their evolutionary and parallel natures respectively.

THE UNIFIED PROCESS MODEL: This model is also referred to as the RUP (Rational Unified Process). The Unified Process refers to a methodology of extracting the most essential activities of conventional software development phases (communication, planning, modeling, construction and deployment) and characterizing them so that they can be applied in agile software development.
History: Jacobson, Rumbaugh and Grady Booch developed the Unified Process, a framework for Object-Oriented Software Engineering using UML. Today, the Unified Process and UML are used on object-oriented projects of all kinds.
 The iterative, incremental model proposed by the Unified Process can and should be adapted to meet specific project needs.
 The iterative, incremental model proposed by the Unified Process can and should be adapted to meet
specific project needs.

Phases of the Unified Process:


1. Inception Phase: The Inception Phase encompasses both customer communication and planning activities. By collaborating with the customer and end-users, business requirements are identified and described through a set of use cases [a use case is a sequence of actions performed by an actor (Ex: a person, a machine, another system) as the actor interacts with the software]. Use cases help to define the scope of the project.
The Inception Phase must:
i. Produce a business case.
ii. Identify business requirements, business and process risks.
iii. Give an overall vision for the project; the outputs result in various documents/work products.
Work Products:
1) Vision Document
2) Initial use-case model
3) Initial Project Glossary
4) Initial Business Case
5) Initial Risk Assessment
6) Project Plan (phases and iterations)
7) Business Model (if necessary)
8) One or more Prototypes

2. Elaboration Phase: The Elaboration Phase encompasses the planning and modeling activities. This phase refines and expands the preliminary use cases that were developed in the inception phase. The Elaboration Phase expands the architectural representation to five views: 1) Use-case Model, 2) Analysis Model, 3) Design Model, 4) Implementation Model, 5) Deployment Model.
The Elaboration Phase creates an "executable architectural baseline" that represents a "first cut" executable system prototype. The architectural baseline demonstrates the viability of the architecture, but it does not provide all the features and functions required to use the system. Modifications to the plan may be made at this time.


Work Products:
1) Use-case Model
2) Supplementary Requirements
3) Analysis Model
4) Software Architecture Description
5) Executable Architectural Prototype
6) Preliminary Design Model
7) Revised Risk List
8) Project Plan, including:
a) Iteration Plan
b) Adapted workflow
c) Milestones
d) Technical work products
9) Preliminary User Manual

Fig: Unified Process Model.

3. Construction Phase: The Construction Phase corresponds to the construction activity, where the application is coded and tested. The Construction Phase develops suitable code for each component of the software. To do this, the analysis and design models started in the elaboration phase are completed to reflect the final version of the software increment.
 All necessary and required features and functions of software increment (release) are implemented
in source code.
 Unit tests are designed and executed for each software increment.
 Integration activities (Component Assembly and Integration Testing) are conducted.

Work Products:
1) Design Model.
2) Software Components
3) Integrated Software Increment
4) Test Plan and Procedure.
5) Test Cases
6) Support Document
a. User Manuals
b. Installation Manuals
c. Description of Current Increment


4. Transition Phase: The Transition Phase encompasses the latter stages of construction and the first part of deployment. The software is given to end-users for beta testing, and user feedback reports both defects and necessary changes. The software team creates the necessary support information (Ex: user manuals, installation procedures) required for the release.
Work Products:
1) Delivered software increment
2) General User Feedback
3) Beta Test Report
5. Production Phase: In the Production Phase, the on-going use of the software is monitored, support for the operating environment is provided, and defect reports and requests for changes are submitted and evaluated.

 The Construction, Transition and Production phases are sometimes conducted concurrently, so the five Unified Process phases do not occur in a strict sequence.

AN AGILE VIEW OF PROCESS


Q: What is Agility?
Ans: Agility is dynamic, context-specific, aggressively change-embracing and growth-oriented. Agile software is highly valued software. An agile team is a nimble team able to respond to changes appropriately.
The Agile Alliance defines 12 principles to achieve agility:
1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. (Give them the environment and support they need, and trust them to get the job done.)
6. The most efficient and effective method of conveying information within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. (Users, sponsors and developers should maintain a constant pace.)
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity is essential. (The art of maximizing the amount of work not done.)
11. Self-organizing teams produce the best architectures, requirements and designs.
12. At regular intervals, the team tunes and adjusts its behavior to become more effective.

Q: What is an Agile Process?


Ans: Any agile software process makes 3 key assumptions about software projects:
i. It is difficult to predict in advance which software requirements and customer priorities will change and which will persist.
ii. For many types of software, design and construction are interleaved (performed together). It is difficult to predict how much design is necessary before construction is used to prove the design.
iii. Analysis, design, construction and testing are not as predictable as we might like.


Q: How do we create a process that can manage unpredictability?


Ans: An agile process must be adaptable. It must adapt incrementally, using customer feedback to make the process effective. Software increments must be delivered in short time periods, so that adaptation keeps pace with change (unpredictability).
Human Factors: Agile development focuses on the talents and skills of individuals, molding the process to specific people and teams.
Traits that must exist among the people on an agile team:
1. Competence: It encompasses innate talent, specific software-related skills and overall knowledge of the process which the team applies. Skill and knowledge of the process should be taught to all agile team members.
2. Common Focus: Although agile team members perform different tasks and bring different skills to the project, all should be focused on one goal: to deliver a working software increment to the customer within the time promised.
3. Collaboration: Team members must collaborate with one another, with the customer and with business managers, as software engineering involves:
1) Assessing, analysing and using information that is communicated to the software team.
2) Creating information that will help the customer.
3) Building information (e.g., databases) that provides business value for the customer.
4. Decision-Making Ability: The agile team is given autonomy, i.e., decision-making authority for both technical and project issues.
5. Fuzzy Problem-Solving Ability: The agile team will continually have to deal with ambiguity and change. Lessons learned from any problem-solving activity benefit the team later in the project.
6. Mutual Trust and Respect: The agile team should be a "jelled" team. A jelled team exhibits the trust and respect required for the project.
7. Self-Organisation: It implies 3 things:
a) The agile team organises itself for the work to be done.
b) The agile team organises the process to best accommodate its environment.
c) The agile team organises the work schedule to achieve project delivery.

AGILE PROCESS MODELS: There are many similarities among these approaches.


1) EXTREME PROGRAMMING (XP): XP uses an Object-Oriented approach for development. The four framework activities are: Planning, Design, Coding and Testing.
1. Planning: Planning begins with the creation of a set of stories that describe required features and functionality for the software to be built. Each story is written by the customer and placed on an index card. The customer assigns a value (priority) to each story based on its business value. XP team members then assess each story and assign a cost, measured in development weeks, to it. If a story will require more than 3 development weeks, the customer is asked to split it into smaller stories, and the assignment of value and cost occurs again.
Once a basic commitment (agreement on the stories to be included, delivery date and other project matters) is made for a release, the XP team orders the stories that will be developed in one of three ways:
1. All stories will be implemented immediately.
2. Stories with highest value will be implemented first.
3. Riskiest stories will be moved up in schedule and implemented first.


After the first project release (software increment) has been delivered, the XP team computes project velocity. Project velocity is the number of customer stories implemented during the first release. It is used to:
a) Estimate delivery dates and schedules for subsequent releases.
b) Determine whether an over-commitment has been made across all stories in the development project. If so, the content of releases is modified or end-delivery dates are changed.
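A small worked example of use (a) may help. Taking velocity to be simply stories per release, as defined above, a hypothetical Python sketch:

import math

# Hypothetical sketch: velocity = customer stories delivered in release 1,
# used to estimate how many further releases the remaining stories need.
def releases_needed(remaining_stories, velocity):
    return math.ceil(remaining_stories / velocity)

velocity = 6                          # 6 stories shipped in the first release
print(releases_needed(18, velocity))  # -> 3 more releases

If each release takes, say, three weeks, the team can project delivery dates and detect over-commitment early.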

Figure: XP Model.

2. Design: XP design follows the KIS (Keep It Simple) principle. A simple design is always preferred over a more complex representation. XP encourages the use of CRC (Class-Responsibility-Collaborator) cards to identify and organise the object-oriented classes that are relevant to the current software increment. CRC cards are the only design work product produced as part of the XP process.
If a difficult design problem is encountered as part of the design of a story, XP recommends the immediate creation of an operational prototype of that portion of the design, called a spike solution.
XP encourages refactoring, a construction technique that can rapidly improve the design. "Refactoring is the process of changing a software system in such a way that it does not alter the external behaviour of the code yet improves the internal structure." With refactoring, design occurs continuously as the system is constructed.
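For instance, a hypothetical before/after sketch in Python: the external behaviour (the invoice total) is unchanged, while duplicated logic and a hard-coded constant are cleaned up.

# Before refactoring: tax logic buried inside the computation.
def invoice_total(items):
    total = 0
    for price, qty in items:
        total += price * qty
    return total + total * 0.18  # tax rate hard-coded

# After refactoring: same external behaviour, clearer internal structure.
TAX_RATE = 0.18

def subtotal(items):
    return sum(price * qty for price, qty in items)

def invoice_total_refactored(items):
    s = subtotal(items)
    return s + s * TAX_RATE

# The external behaviour is preserved.
assert invoice_total([(100, 2)]) == invoice_total_refactored([(100, 2)])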
3. Coding: According to XP, after stories are developed and preliminary design work is done, the team does not move directly to coding, but instead develops unit tests for the stories to be included in the current release. The developer can then focus on exactly what must be implemented to pass the unit tests.
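A minimal test-first sketch (the story, "apply a percentage discount", is hypothetical); the unit tests below are written before the function they exercise:

import unittest

def apply_discount(price, percent):
    # Written only after the tests below existed.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()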
XP recommends that two people work together at one workstation (system) to create the code for a story. This concept is known as pair programming. It helps in real-time problem solving and real-time quality assurance. For example, one person might think about coding details, while the other ensures that coding standards are being followed.


As pair programmers complete their work, their code is integrated with the work of others. This "continuous integration" helps to avoid compatibility and interfacing problems and provides a "smoke testing" environment that helps to uncover errors early.
4. Testing: The unit tests that are created should be automated so that they can be executed easily and repeatedly. This encourages a regression testing strategy whenever code is modified.
 Regression Testing is the re-execution of the same subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
 Integration and validation testing can occur on a daily basis.
 XP acceptance tests, also called customer tests are specified by the customer and focus on overall
system features and functionality.

2) ADAPTIVE SOFTWARE DEVELOPMENT (ASD): This model was proposed by Jim Highsmith as a technique for building complex software and systems. ASD focuses on human collaboration and team self-organization. The ASD life cycle has 3 phases:
1. Speculation: The project is initiated and adaptive cycle planning is conducted. Adaptive cycle planning uses the customer's mission statement, project constraints (delivery dates etc.) and basic requirements to define the set of release cycles in the project.
2. Collaboration: The motivation of people to work together in a way that multiplies their talent and creative output. Collaboration is not easy; it is not just communication, it is a matter of trust. People working together must trust one another to:
a. Criticize without animosity (strong dislike).
b. Assist without resentment (a feeling of displeasure).
c. Work as hard as they do.
d. Have the skill set to contribute to the work at hand.
e. Communicate problems or concerns in a way that leads to effective action.

Fig: Adaptive Software Development


3. Learning: Software developers often overestimate their own understanding, and learning helps them improve their level of real understanding. ASD teams learn in 3 ways:
a) Focus Groups: The customer and/or end users provide feedback on the software increments that are being delivered. This provides a direct indication of whether or not the product is satisfying business needs.
b) Formal Technical Reviews (FTRs): ASD team members review the software components that are developed, improving quality and learning as they proceed.
c) Post-mortems: The ASD team becomes introspective (self-examining), addressing its own performance and process (with the intent of learning and then improving its approach).

3) DYNAMIC SYSTEMS DEVELOPMENT METHOD (DSDM): This approach "provides a framework for building and maintaining systems which meet tight time constraints through the use of incremental prototyping in a controlled project environment".
Ex: 80% of an application can be delivered in 20% of the time it takes to deliver the complete application.
Like XP and ASD, DSDM suggests an iterative software process. The DSDM approach to each iteration follows the 80% rule: only enough work is done for each increment to enable movement to the next increment; the remaining detail can be completed later, when more business requirements are known or changes have been requested.
The DSDM Consortium is a worldwide group of member companies that uses the DSDM approach. The DSDM life cycle defines 3 different iterative cycles, preceded by 2 additional life cycle activities.
1. Feasibility Study: Establishes the basic business requirements and application constraints, and then assesses whether the application is a viable candidate for the DSDM process.
2. Business Study: Establishes the functional and information requirements that allow the application to provide business value. It also defines the basic application architecture and identifies the maintainability requirements for the application.
3. Functional Model Iteration: Produces a set of incremental prototypes that demonstrate functionality for the customer. It helps in gathering additional requirements from the feedback of users who exercise the prototype.
4. Design and Build Iteration: Revisits prototypes built during functional model iteration to ensure that
they provide business value for end-users. Often occurs concurrently with Functional Model Iteration.
5. Implementation: Places the latest software increment into the operational environment. It should be noted that:
a) The increment may not be 100% complete.
b) Changes may be requested as the increment is put in place.
In both cases, DSDM development work continues by returning to Functional Model Iteration activity.
DSDM can be combined with XP to provide a combination approach that defines a solid process model.

4) CRYSTAL: Alistair Cockburn and Jim Highsmith created the "crystal family of agile methods" to achieve a software development approach that focuses on "manoeuvrability": "a resource-limited, cooperative game of invention and communication, with a primary goal of delivering useful, working software and a secondary goal of setting up for the next game".
 Crystal family is a set of agile processes that are effective for different types of projects.
 The intent is to allow agile teams to select the member of the crystal family that is most appropriate for their software project and environment.


5) SCRUM: (The name is derived from an activity that occurs during a rugby match.) Scrum was developed by Jeff Sutherland and his team in the early 1990s.
Principles:
1. Small working teams are organized to “maximize communication, minimize overhead and maximize
sharing of tacit, informal knowledge”.
2. Process must be adaptable to both technical and business changes “to ensure best possible product is
produced.”
3. The process yields frequent software increments "that can be inspected, adjusted, tested, documented and built on".
4. Development work and the teams performing it are partitioned "into clean, low-coupling packets".
5. Constant testing and documentation is performed as the product is built.
6. Scrum process provides the “ability to declare a product ‘done’ whenever required”.
Scrum principles are used to guide development activities within a process that incorporates the framework activities: requirements, analysis, design, evolution and delivery. Scrum allows us to build "softer" (more change-tolerant) software.
 Within each framework activity, work tasks occur within a process pattern called a sprint.

Fig: Scrum process flow


Scrum emphasizes the use of a set of "software process patterns" that have proven effective for projects with tight timelines, changing requirements and business criticality. It includes the following development activities:
1) Backlog: "A prioritized list of project requirements or features that provide business value for the customer." The product manager assesses the backlog and updates priorities as required.
2) Sprints: "Consist of work units that are required to achieve a requirement defined in the backlog and that must fit into a predefined time-box (30 days)." As changes are not introduced during the sprint, team members can work in a short-term but stable environment.
3) Scrum Meeting: A short (15-minute) meeting held daily by the scrum team. A team leader, called the "scrum master", leads the meeting and assesses the responses from each person. The key questions asked and answered by all team members are:
 What did you do since last meeting?
 What obstacles are you encountering?
 What do you plan to accomplish by next meeting?
These daily meetings help to uncover problems within the team, lead to "knowledge socialization" and thereby promote a self-organizing team structure.
4) Demos: "Deliver the software increment to the customer so that the functionality that has been implemented can be demonstrated and evaluated by the customer." The demo may not contain all planned functionality.
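The backlog and sprint mechanics can be sketched in a few lines of Python (item names, values and numbers are assumptions for illustration):

from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    business_value: int  # priority assigned by the product manager
    effort_days: int

def plan_sprint(backlog, capacity_days=30):
    # Commit the highest-value items that fit the sprint time-box.
    committed, used = [], 0
    for item in sorted(backlog, key=lambda i: i.business_value, reverse=True):
        if used + item.effort_days <= capacity_days:
            committed.append(item)
            used += item.effort_days
    return committed

backlog = [BacklogItem("login", 8, 10), BacklogItem("reports", 5, 15),
           BacklogItem("search", 9, 12)]
print([i.name for i in plan_sprint(backlog)])  # -> ['search', 'login']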


6) FEATURE DRIVEN DEVELOPMENT (FDD): This model is also called Feature Design and Development. It is a process model for Object-Oriented Software Engineering (OOSE). It is an adaptive, agile process that can be applied to moderately sized and larger software projects.
 In FDD, a feature "is a client-valued function that can be implemented in two weeks or less".
Benefits:
1. As features are small blocks of deliverable functionality, users can describe them easily, understand how they relate to one another and review them for ambiguity or errors.
2. Features can be organized into a hierarchical, business-related grouping.
3. The team develops operational features every 2 weeks.
4. Design and code representations are easier to inspect effectively.
5. Project planning, scheduling and tracking are driven by the feature hierarchy.

Coad and his colleagues suggested a template for defining a feature:

<action> the <result> <by/for/of/to> a(n) <object>, where <object> is a person, place or thing.
Ex. of Features: Add the product to a shopping cart. Display the technical specifications of a product. Store the shipping information for a customer.

 A feature set groups related features into business-related categories:

<action><-ing> a(n) <object>
Ex: "Making a product sale" is a feature set for the above features.
 The FDD approach defines five "collaborating" framework activities. These are also called "processes" in FDD.

Fig: FDD process.

FDD provides greater emphasis on project management guidelines and techniques than many other agile methods. If deadline pressure is significant, it is critical to determine whether software increments (features) are properly scheduled. To accomplish this, FDD defines six milestones during the design and implementation of a feature: "Design walkthrough, Design, Design Inspection, Code, Code Inspection, and Promote to Build".


AGILE MODELING (AM): There are many situations in which software engineers must build large, business-critical systems. The scope and complexity of such systems must be modeled so that:
 All constituencies can better understand what must be accomplished.
 The problem can be effectively partitioned among software engineers.
 Quality can be assessed at every step of development.
Agile Modeling is a practice-based methodology for effective modeling and documentation of software-based systems. It is a collection of values, principles and practices for modeling effective software.
An agile team must have the courage to reject a requirement or a design and to refactor when needed, and the humility to recognize that it does not have all the answers and that business experts and other stakeholders should be respected and embraced.

Modeling principles that make Agile Modeling unique are:

1) Model with a Purpose: A software engineer who uses AM should have a specific goal in mind before creating the model. Once the goal of the model is identified, the type of notation and level of detail required will be more obvious.
2) Use Multiple Models: Agile Modeling suggests that each model should present a different aspect of the system, and only models that provide value to their intended audience should be used.
3) Travel Light: As software engineering work proceeds, keep only those models that will provide long-term value and discard the rest. Every work product that is kept must be maintained as changes occur, so the set of retained models should be as small as possible.
4) Know the models and the tools used to create them: Understand the strengths and weaknesses of each model and of the tools used to create it.
5) Adapt Locally: The modeling approach should be adapted to the needs of the agile team.
6) Content is more important than representation: A perfect model that imparts little useful content is not as valuable as a flawed model with valuable content. So, the focus should be on the content of the model.


UNIT II
Planning and Managing the Project: Tracking Progress, Project Personnel, Effort Estimation, Risk Management,
the Project Plan, Process Models and Project Management, Information Systems Example, Real-time Example.
Requirement Engineering: A Bridge to Design and Construction, Requirement Engineering Tasks, Initiating the Requirement Engineering Process, Eliciting Requirements, Developing Use Cases, Building the Analysis Model, Negotiating Requirements, Validating Requirements.

SOFTWARE ENGINEERING PRINCIPLES


Software engineering is guided by a collection of core principles that help in the application of a software
process and the execution of effective software engineering methods. At the process level, core principles establish
a philosophical foundation that guides a software team as it performs framework and umbrella activities, navigates
the process flow, and produces a set of software engineering work products. At the practice level, core principles
establish a collection of values and rules that serve as a guide as you analyze a problem, design a solution,
implement and test the solution, and ultimately deploy the software in the user community.

General principles that span software engineering process and practice: (1) provide value to end users, (2)
keep it simple, (3) maintain the vision (of the product and the project), (4) recognize that others consume (and must
understand) what you produce, (5) be open to the future, (6) plan ahead for reuse, and (7) think! Although these
general principles are important, they are characterized at such a high level of abstraction that they are sometimes
difficult to translate into day-to-day software engineering practice. In the subsections that follow, I take a more
detailed look at the core principles that guide process and practice.

Principles That Guide Process: The following set of core principles can be applied to the framework, and
by extension, to every software process.
1. Be agile: Whether the process model you choose is prescriptive or agile, the basic tenets of agile development
should govern your approach. Every aspect of the work you do should emphasize economy of action—keep
your technical approach as simple as possible, keep the work products you produce as concise as possible, and
make decisions locally whenever possible.
2. Focus on quality at every step: The exit condition for every process activity, action, and task should focus on
the quality of the work product that has been produced.
3. Be ready to adapt: Process is not a religious experience, and dogma has no place in it. When necessary, adapt
your approach to constraints imposed by the problem, the people, and the project itself.
4. Build an effective team: Software engineering process and practice are important, but the bottom line is
people. Build a self-organizing team that has mutual trust and respect.
5. Establish mechanisms for communication and coordination: Projects fail because important information falls
into the cracks and/or stakeholders fail to coordinate their efforts to create a successful end product. These are
management issues and they must be addressed.
6. Manage change: The approach may be either formal or informal, but mechanisms must be established to
manage the way changes are requested, assessed, approved, and implemented.
7. Assess risk: Lots of things can go wrong as software is being developed. It’s essential that you establish
contingency plans.


8. Create work products that provide value for others: Create only those work products that provide value for
other process activities, actions, or tasks. Every work product that is produced as part of software engineering
practice will be passed on to someone else. A list of required functions and features will be passed along to the
person (people) who will develop a design; the design will be passed along to those who generate code, and so
on. Be sure that the work product imparts the necessary information without ambiguity or omission.

Principles That Guide Practice: Software engineering practice has a single overriding goal—to deliver on-
time, high quality, operational software that contains functions and features that meet the needs of all stakeholders.
To achieve this goal, you should adopt a set of core principles that guide your technical work. The following sets
of core principles are fundamental to the practice of software engineering:

1. Divide and conquer: Analysis and design should always emphasize separation of concerns (SoC). A large
problem is easier to solve if it is subdivided into a collection of elements (or concerns). Each concern delivers
distinct functionality that can be developed, and in some cases validated, independently of other concerns.
2. Understand the use of abstraction: An abstraction is a simplification of some complex element of a system
used to communicate meaning in a single phrase. In software engineering practice, you use many different
levels of abstraction. In analysis and design work, a software team normally begins with models that represent
high levels of abstraction and slowly refines those models into lower levels of abstraction. The intent of an
abstraction is to eliminate the need to communicate details. There is a caveat, however: without an
understanding of the details, the cause of a problem cannot be easily diagnosed.
3. Strive for consistency: The principle of consistency suggests that a familiar context makes software easier to
use. As an example, consider the design of a user interface for a WebApp. Consistent placement of menu
options, the use of a consistent color scheme, and the consistent use of recognizable icons all help to make the
interface ergonomically sound.
4. Focus on the transfer of information: Software is about information transfer—from a database to an end user,
from an OS to an application etc. In every case, information flows across an interface, and as a consequence,
there are opportunities for error, or omission, or ambiguity. The implication of this principle is that you must
pay special attention to the analysis, design, construction, and testing of interfaces.
5. Build software that exhibits effective modularity: Any complex system can be divided into modules
(components), but good software engineering practice demands more. Modularity must be effective, i.e., each
module should focus exclusively on one aspect of the system (cohesion), and modules should be interconnected
in a relatively simple manner, with each module exhibiting low coupling to other modules (a small code sketch
follows this list).
6. Look for patterns: The goal of patterns within the software community is to help software developers resolve
recurring problems encountered throughout all of software development. Patterns help create a shared language
for communicating insight and experience about these problems and their solutions.
7. When possible, represent the problem and its solution from a number of different perspectives: When a
problem and its solution are examined from a number of different perspectives, it is more likely that greater
insight will be achieved and that errors and omissions will be uncovered.
8. Remember that someone will maintain the software: Over the long term, software will be corrected as defects
are uncovered, adapted as its environment changes, and enhanced as stakeholders request more capabilities.
These maintenance activities can be facilitated if solid software engineering practice is applied throughout the
software process.
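
As a concrete illustration of cohesion and low coupling (item 5 above), here is a minimal Python sketch; all
names are hypothetical:

class TaxCalculator:
    """Cohesive module: concerned only with tax calculation."""
    def __init__(self, rate: float):
        self.rate = rate

    def tax_for(self, amount: float) -> float:
        return amount * self.rate

class InvoiceGenerator:
    """Cohesive module: concerned only with invoicing. It depends on the
    calculator's single public method, not on its internals, so coupling
    between the two modules stays low."""
    def __init__(self, calculator: TaxCalculator):
        self.calculator = calculator

    def total(self, amount: float) -> float:
        return amount + self.calculator.tax_for(amount)

invoice = InvoiceGenerator(TaxCalculator(rate=0.18))
print(invoice.total(100.0))  # 118.0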


Communication Principles: Before customer requirements can be analyzed, modeled, or specified they must
be gathered through the communication activity. Communication activity helps you to define your overall goals
and objectives. Effective communication is among the most challenging activities that you will confront. The
communication principles include:
1. Listen: Try to focus on the speaker’s words, rather than formulating your response to those words. Ask for
clarification if something is unclear, but avoid constant interruptions. Never become contentious in your words
or actions (e.g., rolling your eyes or shaking your head) as a person is talking.
2. Prepare before you communicate: Spend the time to understand the problem before you meet with others. If
necessary, do some research to understand business domain. If you have responsibility for conducting a
meeting, prepare an agenda in advance of the meeting.
3. Someone should facilitate the activity: Every communication meeting should have a leader (a facilitator)
(1) to keep the conversation moving in a productive direction, (2) to mediate any conflict that does occur, and
(3) to ensure that other principles are followed.
4. Face-to-face communication is best: However, it usually works better when some other representation of the
relevant information (e.g., a sketch or document) is also present.
5. Take notes and document decisions: Someone participating in the communication should serve as a “recorder”
and write down all important points and decisions.
6. Strive for collaboration: Collaboration occurs when the collective knowledge of members of the team is used
to describe product or system functions or features. Each collaboration serves to build trust among team
members and creates a common goal for the team.
7. Stay focused; modularize your discussion: The more people involved in any communication, the more likely
that discussion will bounce from one topic to the next. The facilitator should keep the conversation modular,
leaving one topic only after it has been resolved.
8. If something is unclear, draw a picture: Verbal communication goes only so far. A sketch or drawing can
often provide clarity when words fail to do the job.
9. (a) Once you agree to something, move on. (b) If you can’t agree to something, move on. (c) If a feature or
function is unclear and cannot be clarified at the moment, move on. Communication takes time. Rather than
iterating endlessly, the people who participate should recognize that many topics require discussion and that
“moving on” is sometimes the best way to achieve communication agility.
10. Negotiation is not a contest or a game. It works best when both parties win: There are many instances in
which stakeholders must negotiate functions and features, priorities, and delivery dates. If team has
collaborated well, all parties have a common goal. Still, negotiation will demand compromise from all parties.

Planning Principles: The planning activity encompasses a set of management and technical practices that
enable the software team to define a road map as it travels toward its strategic goal and tactical objectives. There
are many different planning philosophies. Some people are “minimalists” arguing that change often obviates the
need for a detailed plan. Others are “traditionalists” arguing that the plan provides an effective road map and the
more detail it has, the less likely the team will become lost. Still others are “agilists” arguing that quick,
lightweight planning may be all that is necessary. Regardless of the rigor with which planning is conducted, the
following principles always apply:
1. Understand the scope of the project: Scope provides the software team with a destination.
2. Involve stakeholders in the planning activity: Stakeholders define priorities and establish project constraints.
To accommodate these realities, software engineers must often negotiate order of delivery, time lines, and other
project-related issues.

3. Recognize that planning is iterative: As work begins, it is very likely that things will change. As a
consequence, the plan must be adjusted to accommodate these changes. Iterative, incremental process models
dictate replanning after the delivery of each software increment based on feedback received from users.
4. Estimate based on what you know: The intent of estimation is to provide an indication of effort, cost, and task
duration, based on the team’s current understanding of the work to be done.
5. Consider risk as you define the plan: If you have identified risks that have high impact and high probability,
contingency planning is necessary. The project plan should be adjusted to accommodate the likelihood that one
or more of these risks will occur.
6. Be realistic: People don’t work 100 percent of every day. Change will occur. Even the best software engineers
make mistakes. These realities should be considered as a project plan is established.
7. Adjust granularity as you define the plan: Granularity refers to the level of detail that is introduced as a
project plan is developed. A “high-granularity” plan provides significant work task detail that is planned over
relatively short time increments. A “low-granularity” plan provides broader work tasks that are planned over
longer time periods. In general, granularity moves from high to low as the project time line moves away from
the current date.
8. Define how you intend to ensure quality: The plan should identify how the software team intends to ensure
quality. If technical reviews are to be conducted, they should be scheduled. If pair programming is to be used
during construction, it should be explicitly defined within the plan.
9. Describe how you intend to accommodate change: Even the best planning can be obviated by uncontrolled
change. You should identify how changes are to be accommodated as software engineering work proceeds. For
example, can the customer request a change at any time? If a change is requested, is the team obliged to
implement it immediately? How is the impact and cost of the change assessed?
10. Track the plan frequently and make adjustments as required: Software projects fall behind schedule one day
at a time. Therefore, it makes sense to track progress on a daily basis, looking for problem areas and situations
in which scheduled work does not conform to actual work conducted. When slippage is encountered, the plan is
adjusted accordingly.

Modeling Principles: We create models to gain a better understanding of the actual entity to be built. In
software engineering work, two classes of models can be created: requirements models and design models.
Requirements models (also called analysis models) represent customer requirements by depicting the software in
three different domains: the information domain, the functional domain, and the behavioral domain.
Design models represent characteristics of the software that help practitioners to construct it effectively: the
architecture, the user interface, and component-level detail.
The modeling principles include:
1. The primary goal of the software team is to build software, not create models: Agility means getting software
to the customer in the fastest possible time. Models that make this happen are worth creating, but models that
slow the process down or provide little new insight should be avoided.
2. Travel light—don’t create more models than you need: Every model that is created must be kept up-to-date as
changes occur. Create only those models that make it easier and faster to construct the software.
3. Strive to produce the simplest model that will describe the problem or the software: Don’t overbuild the
software. By keeping models simple, the resultant software will also be simple. The result is software that is
easier to integrate, easier to test, and easier to maintain (to change).
4. Build models in a way that makes them amenable to change: Assume that your models will change. Be wary,
though: without a reasonably complete requirements model, you’ll create a design that will invariably miss
important functions and features.

5. Be able to state an explicit purpose for each model that is created: Every time you create a model, ask
yourself why you’re doing so. If you can’t provide justification for existence of model, don’t spend time on it.
6. Adapt the models you develop to the system at hand: It may be necessary to adapt model rules to the
application.
7. Try to build useful models, but forget about building perfect models: Modeling should be conducted with an
eye toward the next software engineering steps. Iterating endlessly to make a model “perfect” does not serve the
need for agility.
8. Don’t become dogmatic about the syntax of the model. If it communicates content successfully,
representation is secondary: The most important characteristic of the model is to communicate information
that enables next software engineering task. If a model does this successfully, incorrect syntax can be forgiven.
9. If your instincts tell you a model isn’t right even though it seems okay on paper, you probably have reason to
be concerned: If you are an experienced software engineer, trust your instincts. If something tells you that a
model is doomed to fail, you have reason to spend additional time examining it or developing a different one.
10. Get feedback as soon as you can: Every model should be reviewed by members of the software team. The
intent of these reviews is to provide feedback that can be used to correct modeling mistakes, change
misinterpretations, and add features or functions that were inadvertently omitted.

Requirements Modeling Principles: All analysis methods are related by a set of operational principles:
1. The information domain of a problem must be represented and understood: The information domain
encompasses the data that flow into the system, the data that flow out of the system, and the data stores that
collect and organize persistent data objects.
2. The functions that the software performs must be defined: Software functions provide direct benefit to end
users and also provide internal support for those features that are user visible. Some functions transform data
that flow into the system. Functions can be described at many different levels of abstraction.
3. The behavior of the software must be represented: The behavior of software is driven by its interaction with
the external environment. Input provided by end users, control data provided by an external system, or
monitoring data collected over a network all cause the software to behave in a specific way.
4. The models that depict information, function, and behavior must be partitioned in a manner that uncovers
detail in a layered fashion: Complex problems are difficult to solve in their entirety. For this reason, you
should use a divide-and-conquer strategy. A large, complex problem is divided into sub problems until each
sub problem is relatively easy to understand. This concept is called partitioning or separation of concerns, and
it is a key strategy in requirements modeling.
5. The analysis task should move from essential information toward implementation detail: The “essence” of
the problem is described without any consideration of how a solution will be implemented. Implementation
detail indicates how the essence will be implemented.

Design Modeling Principles: The design model created for software provides a variety of different views of the
system. Set of design principles that can be applied are:
1. Design should be traceable to the requirements model: The design model translates the information from
requirements model into architecture, a set of subsystems that implement major functions, and a set of
components that are the realization of requirements classes. The elements of the design model should be
traceable to the requirements model.
2. Always consider the architecture of the system to be built: Software architecture is the skeleton of the system
to be built. It affects interfaces, data structures, program control flow and behavior, and much more. For all of
these reasons, design should start with architectural considerations; only after the architecture has been
established should component-level issues be considered.


3. Design of data is as important as design of processing functions: A well-structured data design helps to
simplify program flow, makes the design and implementation of software components easier, and makes
overall processing more efficient.
4. Interfaces must be designed with care: A well-designed interface makes integration easier and assists the
tester in validating component functions.
5. User interface design should be tuned to the needs of the end user. However, it should stress ease of use:
The user interface is the visible manifestation of the software. A poor interface design often leads to the
perception that the software is “bad.”
6. Component-level design should be functionally independent: Functional independence is a measure of
“single-mindedness” of a software component. The functionality that is delivered by a component should be
cohesive—that is, it should focus on one and only one function or sub-function.
7. Components should be loosely coupled to one another and to the external environment: Coupling is achieved
in many ways. As level of coupling increases, the likelihood of error propagation also increases and the overall
maintainability of software decreases. Therefore, component coupling should be kept as low as is reasonable.
8. Design representations (models) should be easily understandable: If the design is difficult to understand, it
will not serve as an effective communication medium.
9. The design should be developed iteratively. With each iteration, the designer should strive for greater
simplicity: Like almost all creative activities, design occurs iteratively. The first iterations work to refine the
design and correct errors, but later iterations should strive to make the design as simple as is possible.

Construction Principles: The construction activity encompasses a set of coding and testing tasks that lead to
operational software that is ready for delivery to the customer or end user. The following set of fundamental
principles and concepts are applicable to coding and testing:
Coding Principles: The principles that guide the coding task are closely aligned with programming style,
programming languages, and programming methods. However, there are a number of fundamental principles that
can be stated:
Preparation principles: Before you write one line of code, be sure you
 Understand the problem you’re trying to solve.
 Understand basic design principles and concepts.
 Pick a programming language that meets the needs of the software to be built and the environment in which it
will operate.
 Select a programming environment that provides tools that will make your work easier.
 Create a set of unit tests that will be applied once the component you code is completed.
Programming principles: As you begin writing code, be sure you
 Constrain your algorithms by following structured programming practice.
 Consider the use of pair programming.
 Select data structures that will meet the needs of the design.
 Understand the software architecture and create interfaces that are consistent with it.
 Keep conditional logic as simple as possible.
 Create nested loops in a way that makes them easily testable.
 Select meaningful variable names and follow other local coding standards. Write code that is self-
documenting.
 Create a visual layout that aids understanding.
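
A small, hypothetical Python sketch that shows several of these principles at once: structured control flow,
simple conditional logic, meaningful variable names, and a layout that aids understanding. The function and its
thresholds are illustrative only.

def discounted_price(order_total: float) -> float:
    """Apply a volume discount; thresholds are illustrative only."""
    if order_total > 100:
        discount_rate = 0.10
    elif order_total > 50:
        discount_rate = 0.05
    else:
        discount_rate = 0.0
    return order_total * (1 - discount_rate)

# The descriptive name and one simple if/elif chain are self-documenting;
# contrast a cryptic one-liner such as:
#   f = lambda x: x*0.9 if x > 100 else (x*0.95 if x > 50 else x)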
Validation Principles: After you’ve completed your first coding pass, be sure you

 Conduct a code walkthrough when appropriate.
 Perform unit tests and correct errors you’ve uncovered.
 Refactor the code.
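
Continuing the hypothetical discounted_price sketch above, a minimal unit test written with Python's standard
unittest module illustrates the unit-testing principle; the function is repeated here so the example runs on its own.

import unittest

def discounted_price(order_total: float) -> float:  # from the sketch above
    if order_total > 100:
        return order_total * 0.90
    if order_total > 50:
        return order_total * 0.95
    return order_total

class DiscountedPriceTests(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(discounted_price(40.0), 40.0)

    def test_ten_percent_discount_above_100(self):
        self.assertAlmostEqual(discounted_price(200.0), 180.0)

if __name__ == "__main__":
    unittest.main()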

Testing Principles: Testing is a process of executing a program with the intent of finding an error. If testing is
conducted successfully, it will uncover errors in the software. Set of testing principles include:
1. All tests should be traceable to customer requirements: The objective of software testing is to uncover errors.
It follows that the most severe defects are those that cause the program to fail to meet its requirements.
2. Tests should be planned long before testing begins: Test planning can begin as soon as the requirements
model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified.
Therefore, all tests can be planned and designed before any code has been generated.
3. The Pareto principle applies to software testing: In this context the Pareto principle implies that 80 percent of
all errors uncovered during testing will likely be traceable to 20 percent of all program components. The
problem is to isolate these suspect components and to thoroughly test them.
4. Testing should begin “in the small” and progress toward testing “in the large”: The first tests planned and
executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find
errors in integrated clusters of components and ultimately in the entire system.
5. Exhaustive testing is not possible: It is impossible to execute every combination of paths during testing. It is
possible to adequately cover program logic and to ensure that all conditions in the component-level design have
been exercised.

Deployment Principles: The deployment activity encompasses three actions: delivery, support, and feedback.
A number of key principles should be followed as the team prepares to deliver an increment:
1. Customer expectations for the software must be managed: Too often, the customer expects more than the
team has promised to deliver, and disappointment occurs immediately. This results in feedback that is not
productive and ruins team morale. So, a software engineer must be careful about sending the customer
conflicting messages (e.g., promising more than you can reasonably deliver in the time frame provided).
2. A complete delivery package should be assembled and tested: All relevant information should be assembled
and thoroughly beta-tested with actual users. All installation scripts and other operational features should be
thoroughly exercised in as many different computing configurations as possible.
3. A support regime must be established before the software is delivered: An end user expects responsiveness
and accurate information when a question or problem arises. If support is weak or nonexistent, the customer will
become dissatisfied immediately. Support should be planned, and support materials should be prepared.
4. Appropriate instructional materials must be provided to end users. The software team delivers more than the
software itself. Appropriate training aids should be developed; troubleshooting guidelines should be provided,
and when necessary, a “what’s different about this software increment” description should be published.
5. Buggy software should be fixed first, delivered later. Under time pressure, some software organizations
deliver low-quality increments with a warning to the customer that bugs “will be fixed in the next release.”
This is a mistake. There’s a saying in the software business: “Customers will forget you delivered a high-
quality product a few days late, but they will never forget the problems that a low-quality product caused them.
The software reminds them every day.”
The delivered software provides benefit for the end user, but it also provides useful feedback for the software team.
As the increment is put into use, end users should be encouraged to comment on features and functions, ease of
use, reliability, and any other characteristics that are appropriate.


SYSTEM ENGINEERING
Computer-Based Systems: All complex systems can be viewed as being composed of cooperating
subsystems. A computer-based system makes use of a variety of system elements.

1. Software: programs, data structures, and related work products.
2. Hardware: electronic devices that provide computing capabilities.
3. People: Users and operators of hardware and software.
4. Database: A large, organized collection of information that is accessed via S/w and persists over time.
5. Documentation: manuals, on-line help files.
6. Procedures: the steps that define the specific use of each system element.

One complicating characteristic of computer-based systems is that the elements constituting one system may also
represent one macro element of a still larger system. The macro element is a computer-based system that is one
part of a larger computer-based system.

The System Engineering Hierarchy: The key to system engineering is a clear understanding of context.
For software development this means creating a "world view" and progressively narrowing its focus until all
technical detail is known.

In software engineering there is rarely one right way of doing something. Instead designers must consider the
tradeoffs present in the feasible solutions and select one that seems advantageous for the current problem. This
section lists several factors that need to be examined by software engineers when evaluating alternative solutions
(assumptions, simplifications, limitations, constraints, and preferences).

Regardless of its domain of focus, system engineering encompasses a collection of top-down and bottom-up
methods to navigate the hierarchy described below.


The system eng. process usually begins with a “world view.” The entire business or product domain is
examined to ensure that the proper business or technology context can be established. The world view is refined to
focus more fully on a specific domain of interest. Within a specific domain, the need for targeted system elements
(data, S/W, H/W, and people) is analyzed. Finally, the analysis, design, and construction of a targeted system
element are initiated.

System Modeling: System modeling is an important element of the system engineering process. The engineer
creates models that:
1. Define the processes that serve the needs of the view under consideration.
2. Represent the behavior of the processes and the assumptions on which the behavior is based.
3. Explicitly define both exogenous and endogenous input to the model.
Exogenous inputs link one constituent of a given view with other constituents at the same level or at
other levels; endogenous inputs link individual components of a constituent at a particular view.
4. Represent all linkages (including output) that will enable the engineer to better understand the view.

To construct a system model, the engineers should consider a number of restraining factors:
1. Assumptions that reduce the number of possible permutations and variations, thus enabling a model to
reflect the problem in a reasonable manner.
2. Simplifications that enable the model to be created in a timely manner.
3. Limitations that help to bound the system.
4. Constraints that will guide the manner in which the model is created and the approach taken when the
model is implemented.
5. Preferences that indicate the preferred architecture for all data, functions, and technology.
Objective: An objective is a general statement of direction.
Goal: A goal defines a measurable objective, e.g., “reduce the manufactured cost of our product”.
 Objectives tend to be strategic while goals tend to be tactical.

Business Process Engineering: The goal of Business Process Engineering (BPE) is to define architectures
that will enable a business to use information effectively. BPE is one process for creating an overall plan for
implementing the computing architecture.

BPE uses an integrated set of procedures, methods, and tools to identify how information systems can best
meet the strategic goals of an enterprise. It focuses first on the enterprise and then on the business area. BPE
creates enterprise models, data models and process models. It also creates a framework for better information
management, distribution, and control.

Three different architectures must be analyzed and designed within the context of business objectives and goals:

1. Data architecture: The data architecture provides a framework for the information needs of a business. The
building blocks of the architecture are the data objects that are used by the business.
Once a set of data objects is defined, their relationships are identified. A relationship indicates how objects are
connected to one another.
2. Application architecture: The application architecture encompasses those elements of a system that transform
objects within the data architecture for some business purpose.
3. Technology infrastructure: The technology infrastructure provides the foundation for the data and application
architectures. The infrastructure encompasses h/w and s/w that are used to support the applications and data.

The BPE Hierarchy: The BPE hierarchy includes four elements. They are:
1. Information strategy planning (ISP): The strategic goals are defined, success factors/business rules identified,
enterprise model created.
2. Business area analysis (BAA): All processes/services modeled, interrelationships of processes and data.
3. Application Engineering (Software Engineering): It involves modeling applications/procedures that address
(BAA) and constraints of ISP.
4. Construction and delivery: Done by using CASE and 4GTs, testing.

Information Strategy Planning:


 Management Issues:
o define strategic business goals/objectives
o isolate critical success factors
o conduct analysis of technology impact
o perform analysis of strategic systems
 Technical Issues:
o create a top-level data model
o cluster by business/organizational area
o refine model and clustering

Business Area Analysis: It defines “naturally cohesive groupings of business functions and data”
 Perform many of the same activities as ISP, but narrow scope to individual business area
 Identify existing (old) information systems / determine compatibility with new ISP model
o define systems that are problematic
o defining systems that are incompatible with new information model
o begin to establish re-engineering priorities
The Business Area Analysis Process:


Product Engineering: The goal of product engineering is to translate customer’s desire into a working
product. It consists of four system components.
 Software
 Hardware
 Data
 People

Software engineers participate in all levels of the product engineering process, which begins with
requirements engineering. The analysis step maps requirements into representations of data, function, and
behavior. The design step maps the analysis model into data, architectural, interface, and software component
designs.

Product Architecture Template: Proposed by Hatley and Pirbhai, also known as Hatley-Pirbhai modeling.

System Modeling with UML: UML provides diagrams for modeling each element of a computer-based system
(hardware, software, data, and people):


1. Deployment diagrams (Modeling hardware): Each cube (3-D box) depicts a hardware element that is part of
the physical architecture of the system.

2. Activity diagrams (Modeling software): Represent procedural aspects of a system element

3. Class diagrams (Modeling data): Represent system level elements in terms of the data that describe the
element and the operations that manipulate the data

4. Use-case diagrams (Modeling people): Illustrate the manner in which an actor interacts with the system


REQUIREMENTS ENGINEERING
A BRIDGE TO DESIGN AND CONSTRUCTION:
Requirement engineering, like all other software engineering activities, must be adapted to the process, project,
product and the people doing the work. Requirement engineering begins during the communication activity and
continues into the modeling activity. It is essential that the software team make a real effort to understand the
requirements of a problem before the team attempts to solve the problem.
Requirement engineering builds a bridge to design and construction. It allows a software team to examine:
1) The context of the software work to be performed.
2) The specific needs that design and construction must address.
3) The priorities that guide the order in which work is to be completed.
4) The information, functions and behaviors that will have a profound impact on the resultant design.

REQUIREMENTS ENGINEERING TASKS


Requirement engineering provides an appropriate mechanism for understanding what the customer wants,
analyzing need, assessing feasibility, negotiating a reasonable solution, specifying the solution unambiguously,
validating the specification and managing requirements. Requirement engineering tasks are classified as:
1. Inception
2. Elicitation
3. Elaboration
4. Negotiation
5. Specification
6. Validation
7. Requirements management
1) Inception: In some cases, a casual conversation is all that is needed to precipitate a major software
engineering effort. At project inception, software engineers ask a set of questions, which establish:
 Basic understanding of the problem
 People who want a solution.
 Nature of solution that is desired.
 Effectiveness of preliminary communication and collaboration between customer and developer.

2) Elicitation: Elicitation is about asking the customers, users and others “what they want”, i.e., what the
objectives of the product are, what is to be accomplished, how the product/system fits into business needs, and
finally how the product is to be used on a day-to-day basis. But elicitation is difficult because:
a. Problems of Scope: The boundary of the system is ill-defined, or customers/users specify unnecessary
technical detail that may confuse, rather than clarify, overall system objectives.
b. Problems of Understanding: Customers or users have a poor understanding of their computing
environment or the problem domain, specify requirements that conflict with other users’ needs, or specify
requirements that are ambiguous or untestable.
c. Problems of Volatility: Requirements change over time.

 Software engineers overcome these three problems by gathering requirements in an organised manner.

3) Elaboration: Information obtained from the customer during Inception and elicitation is expanded and refined
during elaboration. It focuses on developing a refined technical model of software functions, features and
constraints.

Elaboration is driven by creation and reinforcement of user scenarios that describe how end-user will
interact with the system. Each user scenario is parsed to extract analysis classes (business entities) that are
visible to end user. The relationships and collaboration between classes are identified and UML diagrams are
produced. End-result of elaboration is an analysis model that defines informational, functional and behavioural
domain of the problem.

4) Negotiation: In this task, customers, users and other stakeholders are asked to rank requirements and then
discuss conflicts in priority. Risks associated with each requirement are identified and analysed. Rough
“guestimates” of development are made and used to assess impact of each requirement on project cost and
delivery time. Measures are taken in negotiations (discussions) so that each party achieves some satisfaction.

5) Specification: A specification can be written document, a set of graphical models, a formal mathematical
model (algorithm), a collection of usage scenario or a prototype or a combination of these. It is necessary to
remain flexible when specification is to be developed. For large systems, a written document is the best
approach, whereas for smaller products, usage scenarios are best.
1. The specification is the final work product produced by the requirements engineer.
2. It serves as the foundation for subsequent software engineering activities and describes the function,
performance and constraints of the product.

6) Validation: Work products produced as a result of requirements engineering are assessed for quality during
the validation step. It examines:
1. The specification, to ensure all software requirements are stated unambiguously.
2. That inconsistencies, omissions and errors have been detected and corrected.
3. That the work products conform to standards established for the process, project and product.
 The primary requirements validation mechanism is the Formal Technical Review. The review team consists of
software engineers, customers, users and other stakeholders. The review team examines the specification for
errors, missing information, inconsistencies, conflicting requirements or unrealistic requirements.

7) Requirements Management: “It is a set of activities that help the project team identify, control and track
requirements and changes to requirements at any time as the project proceeds.”
 It begins with identification; each requirement is assigned a unique identifier. Once requirements have been
identified, traceability tables are developed. Each traceability table relates requirements to one or more aspects
of the system or its environment.
Possible traceability tables are:
1. Features traceability table: shows how requirements relate to important customer observable system/ product
features.
2. Source traceability table: Identifies source of each requirement.
3. Dependency traceability table: Indicates how requirements are related to one another.
4. Subsystem traceability table: Categorises requirements by the subsystem(s) that they govern.
5. Interface traceability table: Shows how requirements relate to both internal and external system interfaces.
These traceability tables are maintained in Requirements Database to understand how a change in requirements
will affect different aspects of the system to be built.
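
The idea behind a traceability table can be sketched as a simple keyed mapping; the identifiers, features and
helper function below are hypothetical:

# Each requirement carries a unique identifier (the identification step);
# a features traceability table then relates requirements to observable
# product features.
features_traceability = {
    "REQ-001": ["cash withdrawal"],
    "REQ-002": ["balance enquiry", "mini statement"],
}

def features_affected_by(requirement_id: str) -> list[str]:
    """Assess which features a change to one requirement would touch."""
    return features_traceability.get(requirement_id, [])

print(features_affected_by("REQ-002"))  # ['balance enquiry', 'mini statement']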


INCEPTION (OR) INITIATING REQUIREMENT ENGINEERING PROCESS:


To get the project started and forward towards a successful solution, we need the following steps to initiate
Requirement Engineering:
1. Identifying the Stakeholders: A stakeholder is defined as “anyone who benefits in a direct or indirect way
from the system which is being developed.” Each stakeholder has a different view of the system, achieves
different benefits when the system is successfully developed, and is open to different risks.
At inception, the requirements engineer should create a list of people who will contribute input as requirements
are elicited. The initial list will grow as stakeholders are contacted further.
2. Recognizing Multiple Viewpoints: Because many different stakeholders exist, the requirements of the system
will be explored from many different points of view. Each of the various constituencies, such as marketing
groups, business managers, end users and software engineers, will contribute information to the requirements
engineering process. For example, a support engineer may focus on software maintainability.
The job of requirements engineer is to categorize all stakeholder information including inconsistent and
conflicting requirements in a way that will allow decision makers to choose a consistent set of requirements for the
system.
3. Working towards Collaboration: Customers should collaborate among themselves and with software
engineers to produce a successful system. The job of the requirements engineer is to identify areas of
commonality and areas of conflict or inconsistency.
In many cases, stakeholders collaborate by providing their view of requirements, but a strong “project
champion” (Ex: business manager) may make the final decision about which requirements make the cut.
4. Asking the first question: The questions asked at the inception of the project should be “context free”. The
first set of questions focus on the customer and other stakeholders, overall goals, and benefits.
 For example, the requirements engineer might ask:
 Who is behind the request for this work?
 Who will use the solution?
 What will be the economic benefit of a successful solution?
 Is there another source for the solution that you need?
These questions help to identify all stakeholders who will have an interest in the software to be built. They also
identify measurable benefits of a successful implementation and alternatives for software development.
 Next set of questions include:
 What problems will this solution address?
 How would you characterize “good output”?
 Can you show me the environment where solution will be used?
 Will special performance issues or constraints affect the way the solution is approached?
These questions enable the software team to gain a better understanding of the problem and allow the customer
to voice his/her perceptions about a solution.
 Final set of questions are:
 Are you the right person and are your answers “official”?
 Are my questions relevant to your problem?
 Am I asking too many questions?
 Can anyone else provide additional information?
 Should I be asking you anything else?
These questions focus on effectiveness of communication. These are also called as meta- questions. All these
questions will help to “break the ice” and initiate the communication that is essential for successful elicitation.


ELICITING REQUIREMENTS
The Q&A session should be used for the first encounter only and then replaced by a requirements
elicitation format.
Collaborative Requirements Gathering: Many different approaches to collaborative requirements gathering have
been proposed and each follows the basic guidelines:
1. Meetings are conducted and attended by both software engineers and customers, along with other stakeholders.
2. Rules for preparation and participation are established.
3. An agenda is suggested that is formal enough to cover all important points but informal enough to encourage
free flow of ideas.
4. A “facilitator” (customer/ developer/ outsider) controls the meeting.
5. A “definition mechanism” (worksheets etc) can be used.
6. The goal is to
a. Identify the problem.
b. Propose elements of the solution
c. Negotiate different approaches
d. Specify preliminary set of solution requirements.
During inception the stakeholders write a “one or two page product request”. A meeting place, time, date and a
facilitator are selected. The product request is then distributed to all attendees before the meeting date, and they
are asked to go through it and come prepared with suggestions for the meeting.

Quality Function Deployment: QFD is a technique that translates the needs of customer into the technical
requirements for software. QFD “concentrates on maximizing customer satisfaction from the Software Engineering
process”. QFD identifies three types of requirements:
1. Normal Requirements: These reflect objectives and goals stated for a product during meetings with the
customer. If these requirements are present then the customer is satisfied.
2. Expected Requirements: These are implicit to the product and customer does not explicitly state them. Their
absence will cause significance dissatisfaction.
3. Exciting Requirements: These reflect features that go beyond customer’s expectations and prove to be very
satisfying when present.

 In meetings with the customer, Function Deployment determines value of each function that is required for
the system.
 Information Deployment identifies both data objects and events that the system must consume and produce.
 Task Deployment examines behaviour of the system within context of its environment.
 Value Analysis is conducted to determine relative priority of the requirements determined during each of
three deployments.

 QFD uses customer interviews, observations, surveys and examination of historical data as raw data for the
requirements gathering activity. These data are then translated into a table of requirements, called the
customer voice table, that is reviewed with the customer.
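
A hypothetical Python sketch of how the three QFD requirement categories and a value-analysis priority might
be recorded in a customer voice table:

from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    category: str  # "normal", "expected" or "exciting"
    priority: int  # assigned during value analysis; 1 = highest

customer_voice_table = [
    Requirement("print a receipt after each withdrawal", "normal", 2),
    Requirement("never dispense an incorrect amount", "expected", 1),
    Requirement("voice-guided interface", "exciting", 3),
]

for req in sorted(customer_voice_table, key=lambda r: r.priority):
    print(req.priority, req.category, "-", req.text)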

User Scenarios: Developers and users create a set of scenarios that identify a thread of usage for the system to be
constructed. These scenarios, often called use cases, provide a description of how the system will be used.


Elicitation Work Products: These depend on the size of the system/product to be built. These include:
 A statement of need or feasibility.
 A bounded statement of scope for system or product.
 A list of customers, users and stakeholders.
 A description of system’s technical environment.
 A list of requirements and constraints that apply.
 A set of usage scenarios that provide insight into use of the system.
 Any prototypes developed to better define requirements.

ELABORATION
Developing Use Cases: “A use case is a set of sequences of actions performed by an actor to achieve a specific
result.” An actor refers to the various people (or devices) that use the system or product within the context of the
function.

Fig: Use Case Diagram for ATM


Use Case is a collection of user scenarios that describe the thread of usage of a system. Each scenario is described
from the point-of-view of an “actor”—a person or device that interacts with the software in some way. Each
scenario answers the following questions:
 Who is the primary actor, the secondary actor (s)?
 What are the actor’s goals?
 What preconditions should exist before the story begins?
 What main tasks or functions are performed by the actor?
 What extensions might be considered as the story is described?
 What variations in the actor’s interaction are possible?
 What system information will the actor acquire, produce, or change?
 Will the actor have to inform the system about changes in the external environment?
 What information does the actor desire from the system?
 Does the actor wish to be informed about unexpected changes?
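
A minimal sketch of how the answers to these questions can be recorded for one scenario; the ATM example
and all field contents are illustrative only:

withdraw_cash_use_case = {
    "primary_actor": "bank customer",
    "goal": "withdraw cash from a checking account",
    "preconditions": ["card is valid", "customer knows the PIN"],
    "main_tasks": ["insert card", "enter PIN", "choose amount", "take cash"],
    "extensions": ["print receipt", "show balance after withdrawal"],
    "variations": ["incorrect PIN entered", "insufficient funds"],
}

for field, value in withdraw_cash_use_case.items():
    print(f"{field}: {value}")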


Building the Analysis Model: The intent of the analysis model is to provide a description of the functional,
informational and behavioural requirements for a computer-based system.
 The analysis model is a snapshot of requirements at any given time, and we expect it to change. As the analysis
model evolves, certain elements will become relatively stable, while other elements may be more volatile,
indicating that the customer does not yet fully understand the requirements for the system.

Elements of Analysis Model: The specific elements of the analysis model are dictated by analysis modeling
method. These elements include:
1. Scenario-based elements: These are often the first part of analysis model that is developed. They serve as
input (or) informational requirements for creation of other modeling elements.
 A variation in scenario based modeling depicts activities (functions/operations) that have been defined as a part
of requirement elicitation task, i.e., sequence of activities is defined as part of analysis model. Activities can be
represented iteratively at different levels of abstraction by using activity diagrams (or) use case diagrams.
As an example, consider UML diagrams for eliciting requirements.

Fig: Activity Diagram


2. Class-based elements: Each usage scenario implies a set of “objects” that are manipulated as an actor
interacts with the system. These objects are categorised into “classes”: collections of things that have similar
attributes and common behaviour.
 A class diagram usually represents the functional requirements in the analysis model. The analysis model may
also depict the manner in which classes collaborate with one another and the relationships between them. An
example class is sketched below.
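
Since the diagram itself cannot be reproduced here, a hypothetical analysis class expressed in code conveys the
same information: attributes that describe the class, plus the operations that manipulate them.

class Account:
    """A hypothetical analysis class extracted from ATM usage scenarios."""

    def __init__(self, number: str, balance: float):
        self.number = number    # attributes describe the class
        self.balance = balance

    def deposit(self, amount: float) -> None:   # operations manipulate them
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount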


3. Behavioral Elements: The behaviour of a product can have a profound effect on the design and
implementation approach (i.e., static or dynamic).
 A state diagram is used to represent the behaviour of a system by depicting its states and the events that cause
the system to change state. A state is any observable mode of behaviour.
 A state diagram indicates what actions are taken as a consequence of a particular event. A state diagram is
given as,

Fig: State Diagram Notation
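
The information a state diagram conveys can also be sketched as a transition table; the ATM states and events
below are hypothetical:

# Maps (current_state, event) -> next_state. Each state is an observable
# mode of behaviour; each event causes the system to change state.
transitions = {
    ("idle", "card inserted"): "awaiting PIN",
    ("awaiting PIN", "valid PIN"): "selecting transaction",
    ("awaiting PIN", "invalid PIN"): "idle",
    ("selecting transaction", "cancel"): "idle",
}

state = "idle"
for event in ("card inserted", "valid PIN", "cancel"):
    state = transitions[(state, event)]
print(state)  # idle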


4. Flow oriented elements: Information/Data is transformed as it flows through a computer-based system.
System accepts input in a variety of forms, applies functions to transform it, and produces output in a variety of
forms. This data flow is depicted using DFDs (Data Flow Diagrams).

Fig: Data Flow Diagram
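
The essence of a DFD, input transformed by processes into output, can be sketched as a small pipeline; the
functions below are purely illustrative:

def read_input(raw: str) -> list[int]:     # incoming data flow
    return [int(x) for x in raw.split(",")]

def transform(values: list[int]) -> int:   # a process "bubble"
    return sum(values)

def format_output(total: int) -> str:      # outgoing data flow
    return f"total = {total}"

print(format_output(transform(read_input("3,4,5"))))  # total = 12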

Analysis Patterns: Analysis patterns represent the things (class, function or behaviour) that can be reused when
modeling many applications. Analysis patterns are integrated into analysis model by reference to the pattern name.
They are also stored in a repository so that Requirement Engineers can reuse them.
Analysis pattern template includes:
1. Pattern name: A descriptive name that captures the essence of the pattern.
2. Intent: Describes what pattern accomplishes or represents and/or what problem is addressed.
3. Motivation: A scenario that illustrates how pattern can be used to address the problem.
4. Forces and context: Description of external issues that can affect how pattern is used and how they will
be resolved.
5. Solution: Description of how pattern is applied to solve problem.


6. Consequences: Address what happens when pattern is applied.


7. Design: Discusses how analysis pattern can be achieved through use of known design patterns.
8. Known uses: Examples of uses within actual systems.
9. Related patterns: One or more analysis patterns that are related to named pattern because the analysis
pattern,
a. is commonly used with named pattern
b. is structurally similar to the named pattern
c. is a variation of named pattern.

NEGOTIATING REQUIREMENTS
Customer and developer enter into a process of negotiation, where they will have a discussion about balancing
functionality, performance, and other product or system characteristics against cost and time to market.
 The best negotiations strive for a “win-win” result. i.e., customer wins by getting system/ product that
satisfies the needs, and software team wins by working to realistic and achievable budget and deadlines.
Boehm defines a set of negotiation activities:
1. Identification of system/ subsystem key stakeholders.
2. Determination of stakeholders “Win conditions”.
3. Negotiate stakeholders win conditions to reconcile them into a set of win-win conditions for all concerned
(including software team).

VALIDATING REQUIREMENTS
Requirements are validated in this task by a review of customer requirements. A review of analysis model
addresses the following questions:
 Is each requirement consistent with overall objective for the system/ product?
 Have all requirements been specified at proper level of abstraction?
 Is the requirement really necessary (or) does it represent an add-on feature that may not be essential?
 Is each requirement bounded and unambiguous?
 Does each requirement have attribution? That is, is a source noted for each requirement?
 Do any requirements conflict with other requirements?
 Is each requirement testable?
 Is each requirement achievable in technical environment?
 Does requirements model properly reflect information, function and behaviour of system to be built?
 Are all patterns consistent with customer requirements?
 Have all patterns been properly validated?
 Have requirements patterns been used to simplify the requirements model?
 Has the requirements model been “partitioned” in a way that exposes progressively more detailed
information about the system?


UNIT – III
Building the Analysis Model: Requirements Analysis Modeling approaches, Data modeling concepts,
Object oriented analysis, Scenario based modeling, Flow oriented modeling, Class- based modeling,
Creating a Behavioral Modeling.
Design Engineering: Design within the context of SE, Design Process and Design quality, Design
concepts, The Design Model, Pattern-based Software Design.

BUILDING THE ANALYSIS MODEL


REQUIREMENTS ANALYSIS: Requirements analysis results in specification of software's
operational characteristics, indicates software's interface with other system elements and establishes
constraints that software must meet.
 It allows software engineers (sometimes called analysts/modelers) to elaborate on basic requirements
established during earlier requirements engineering tasks and build models that depict user scenarios,
functional activities, problem classes and their relationships, behavior and flow of data.
 It provides the designer with a representation of function, information and behavior that can be translated
to architectural, interface and component- level designs.
 Finally, analysis model and requirements specification provide developer and the customer with the
means to assess quality once software is built.
 Analyst should model what is known and use that model as the basis for design of software increment.

Figure: Analysis Model as a Bridge between System Description and Design Model

Overall Objectives and Philosophy:


The analysis model must achieve 3 primary objectives:
1. To describe what the customer requires.
2. To establish a basis for creation of a software design.
3. To define a set of requirements that can be validated once the software is built.
All elements of analysis model are directly traceable to parts of the design model. A clear division of design
and analysis tasks between these two important modeling activities is not always possible.

Analysis Rules Of Thumb:


1. "The model should focus on requirements that are visible within the problem or business domain. The
level of abstraction should be relatively high". (Don't show details that explain how system works.)
2. "Each element of analysis model should add to an overall understanding of software requirements and
provide insight into the information domain, function and behavior of system".
3. "Delay consideration of infrastructure and other non- functional models until design". (For ex. A DB
may be required, but classes, functions required to access it should be considered only after problem
domain analysis is completed.)
4. "Minimize coupling throughout the system". It is important to represent relationships between classes
and functions. Efforts should be made to reduce their "interconnectedness".
5. "Be certain that the analysis model provides value to all stakeholders". Each constituency has its own
use for model.
6. "Keep the model as simple as it can be". Don't add additional diagrams when they provide no new
information.

DOMAIN ANALYSIS: "Domain analysis is the identification, analysis and specification of common
requirements from a specific application domain, for reuse on multiple projects within that application
domain."

Figure: Input and output for Domain Analysis

 The “Specific Application Domain” can range from banking, multimedia video games to software
embedded within medical devices.
The goal of domain analysis is to find or create those analysis classes and/or common functions and features that are broadly applicable, so they may be reused. The role of the domain analyst is to discover and define reusable analysis patterns, analysis classes and related information that may be used by many people working on similar but not necessarily the same applications.

DATA MODELING CONCEPTS: Analysis modeling begins with data modeling. Its concepts are:
1. Data Objects: “A data object is a representation of any composite information that must be understood by software”. A data object can be an external entity (Ex: Anything that produces or consumes information), a thing (Ex: Report or display), an occurrence (Ex: Telephone call) or an event (Ex: an alarm), a role, an organizational unit, a place or a structure.
 A data object encapsulates data (attributes) only. There is no reference within a data object to operations
that act on the data.
 Data object description incorporates data object and all of its attributes.

2. Data Attributes: “These define properties of a data object and take on one of three different
characteristics”. They may be used to:
 Name an instance of data object.
 Describe the instance of data object.
 Make reference to another instance in another table.
One or more attributes must be defined as an identifier; it becomes a “key” when we want to find an instance
of the data object. Values for identifiers are generally unique.
3. Relationships: Data objects are connected to one another in different ways. We can define a set of object/relationship pairs that define the relevant relationships.
Types of Relationships:
1) Association Relationship:

2) Dependency Relationship:

4. Cardinality and Modality: Cardinality specifies the number of occurrences of one object that can be related to the number of occurrences of another object. For example, if object X relates to object Y, cardinality specifies how many occurrences of object X are related to how many occurrences of object Y. Cardinality also defines "the maximum number of objects that can participate in a relationship".
Modality provides an indication of whether or not a particular data object must participate in the relationship. Modality is "1" if an occurrence of the relationship is mandatory and "0" if it is optional.
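As a small illustration (a minimal sketch in Java, with hypothetical Customer and Order classes that are not part of the text), a 1:N cardinality with mandatory modality on the Order side can be expressed directly in code:

    import java.util.ArrayList;
    import java.util.List;

    // One Customer relates to zero or many Orders (cardinality 1:N).
    class Customer {
        private final List<Order> orders = new ArrayList<>(); // the 0..* side
        void placeOrder(Order o) { orders.add(o); }
    }

    // Every Order must belong to exactly one Customer (modality "1").
    class Order {
        private final Customer customer;
        Order(Customer customer) {
            if (customer == null)
                throw new IllegalArgumentException("Order requires a Customer");
            this.customer = customer;
        }
    }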

ANALYSIS MODELING APPROACHES:

1. Structured Analysis: Considers data and the processes that transform data as separate entities. Data objects are modeled in a manner that defines their attributes and relationships; processes are modeled in a manner that shows how they transform data as data objects flow through the system.
2. Object-Oriented Analysis: It focuses on the definition of classes and the manner in which they collaborate with one another to effect customer requirements. UML and the Unified Process are predominantly object oriented.

The intent of Object-Oriented Analysis is to define all classes and relationships and behavior associated with
them that are relevant to the problem to be solved. To accomplish this, a number of tasks must occur:
1. Basic user requirements must be communicated between user & Software Engineer.
2. Classes must be identified (i.e., attributes and methods are defined).
3. A class hierarchy is defined.

4. Object-to-object relationships should be represented.


5. Object behavior must be modeled.
6. Repeat tasks 1 to 5 until model is complete.
 Object-Oriented Analysis builds a class-oriented model that relies on understanding of Object- Oriented
concepts.

Analysis modeling leads to derivation of each of the modeling elements as shown below:

Figure: Elements of Analysis Model

 The specific content of each element (diagrams/models used to construct element) differs from project to
project.

SCENARIO-BASED MODELING: Analysis modeling with UML begins with creation of scenarios
in the form of use-cases, activity diagrams and swim lane diagrams.

1) WRITING USE CASES: “A Use Case is defined as a sequence of actions performed by an actor to obtain a specific output”.
"An actor may be a person that uses a system or product, or a system itself; anything that performs an action within the system".
 A use-case captures the interactions that occur between producers and consumers of information
and system itself.
 The concept of a use-case is relatively easy to understand: describe a specific usage scenario in straightforward language from the point of view of a defined actor.

What to write about? The first two Requirement Engineering tasks, inception and elicitation, provide the information we need to begin writing use cases.
 To begin developing a set of use-cases, activities performed by a specific actor are listed.
 As conversations with stakeholder progress, the Requirement Engineering team develops use-cases
for each of activities noted.
 A variation of a formal use-case presents the interaction as an ordered sequence of user actions.
Each action is represented as a declarative sentence.

It is important to note that sequential presentation does not consider any alternative interactions. Such use
cases are referred to as "primary scenarios".
A description of alternative interactions is essential for complete understanding of the function, by asking the
following questions: [Answers result in secondary use cases].
1. Can the actor take some other action at that point?
2. Is it possible that actor will encounter some error at this point? If so, what might it be?
3. Is it possible that actor will encounter some other behavior? If so, what might it be?

2) USE – CASE DIAGRAMS: “A use-case diagram provides a graphical overview of the actors of a system and the use cases in which they participate”.

Figure: Use-Case Diagram for ATM
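For illustration, a primary scenario for a hypothetical "Withdraw Cash" use-case of the ATM might be written as an ordered sequence of declarative sentences (the steps below are an assumed example, not taken from the text):
1. The customer inserts the ATM card.
2. The system prompts for the PIN, and the customer enters it.
3. The system validates the PIN.
4. The customer selects "Withdraw" and enters an amount.
5. The system dispenses the cash and prints a receipt.
A secondary scenario would describe an alternative interaction, e.g., if validation fails at step 3, the system re-prompts and retains the card after three failed attempts.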

3) ACTIVITY DIAGRAMS: “The UML activity diagram supplements (helps) the use-case by providing a
graphical representation of flow of interaction within a specific scenario”.

Figure: Activity Diagram for Bank



Similar to a flow chart, an activity diagram uses rounded rectangles to imply a specific system function, arrows to represent flow through the system, diamonds to depict a branching decision and solid horizontal lines to indicate parallel activities.

4) SWIMLANE DIAGRAMS: “It is a useful variation of activity diagram and allows the analyst to
represent flow of activities described by use-case and indicate which actor or analysis class has
responsibility for action described by an activity rectangle."
Activities are represented as parallel segments that divide the diagram vertically, like the lanes of a swimming pool.

Figure: Swimlane Diagram

FLOW - ORIENTED MODELING: Data Flow Diagrams (DFD) can be used to complement UML diagrams and provide additional insight into system requirements and data flow. The DFD takes an input-process-output view of a system: data objects flow into the software, are transformed by processing elements, and resultant data objects flow out of the software.

 Data objects are represented by labeled arrows, i.e., Data Flow.


 External entities (producers/consumers of data) are represented by squares.
 Processes/transformations are represented by bubbles/circles.
 Data stores are represented by parallel lines (or) cylinders.

1. CREATING A DATA FLOW MODEL: Data Flow Diagram enables the software engineer to develop
models of information and functional domains at the same time.
Guidelines for DFDs:
1) Level 0 DFD should depict software /system as a single bubble.
2) Primary input and output should be carefully noted.
3) Refinement should begin by isolating candidate processes, data objects and data stores to be
represented at next level.
4) All arrows & bubbles should be labeled with meaningful names.
5) Information flow continuity must be maintained between levels.
6) One bubble at a time should be refined.

There is a natural tendency to overcomplicate the DFD when the analyst attempts to show too much detail too early.

 The DFD is represented in a hierarchical fashion, i.e., the first data flow model (sometimes called a Level 0 DFD or Context Level DFD) represents the system as a whole. Subsequent DFDs refine the context diagram, providing increasing detail with each level. The Level 0 DFD will contain only one bubble (the main process).

Figure: Context-Level DFD/DFD-0 Diagram

The level-0 DFD is then expanded into a level-1 DFD. This is done by performing a "grammatical
parse" on the narrative that describes context level bubble.

A processing narrative is similar to a use-case in style but different in purpose. It provides an overall description of the function to be developed; it is not written from the point of view of a single actor.

By performing a grammatical parse on the processing narrative for a bubble at any DFD level, we can generate much useful information about how to proceed with refinement to the next level.
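As a small worked example of such a parse (the narrative sentence is hypothetical): given "The system reads the sensor and stores the reading in the log", the nouns (system, sensor, reading, log) suggest candidate external entities, data objects and data stores, while the verbs (reads, stores) suggest candidate processes (bubbles) at the next level of refinement.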

The processes represented in a level-1 DFD can be further refined into lower levels; the continuity of information flow must be maintained between all these levels. Refinement of DFDs continues until each bubble performs a simple function that can be easily implemented as a program component. This concept, called "cohesion", can be used to assess the simplicity of a given function.

Fig.: Level -1 DFD Diagram

2. CREATING A CONTROL FLOW MODEL: A large class of applications is driven by events rather than data, produces control information rather than reports or displays, and processes information with heavy concern for time and performance. Such applications require the use of control flow modeling in addition to data flow modeling. An event or control item is implemented as a Boolean value (T/F, on/off, yes/no, 1/0) or a discrete list of conditions.

Guidelines to select possible events:


1) List all sensors that are "read" by the software.
2) List all interrupt conditions.
3) List all data conditions.
4) List all "switches" actuated by an operator.
5) Describe behavior of a system by identifying its states, identify how each state is reached, and define
state transitions.
6) Focus on possible omissions, a common error in specifying control.

Control Specification: The control specification (CSPEC) represents the behavior of the system in two different ways:
1. It contains a state diagram that is a sequential specification of behavior.
2. It can also contain a program activation table, a combinational specification of behavior.
 By reviewing the state diagram, the software engineer can determine system behavior and, more importantly, can ascertain whether there are "holes" in the specified behavior.
 The CSPEC describes the behavior of the system, but it gives no information about the inner working of the processes that are activated as a result of this behavior; this is the disadvantage of the CSPEC.

Process Specification: The process specification (PSPEC) is used to describe all flow model processes that
appear at final level of refinement. The content of PSPEC includes narrative text, a Program Design
Language (PDL) description of process algorithm, mathematical equations, tables, diagrams or charts.
 By providing a PSPEC to accompany each bubble in the flow model, the software engineer creates a "mini-spec" that can serve as a guide for the design of the software component that will implement the process.

CLASS-BASED MODELING:
IDENTIFYING ANALYSIS CLASSES: Classes can be identified by examining the problem statement or by performing a "grammatical parse" on the processing narratives developed for the system to be built: underline each noun or noun clause and enter it into a simple table. Synonyms should be noted. Analysis classes manifest themselves in one of the following ways:
1) External Entities: External entities produce or consume information to be used by a computer-based
system.
2) Things: Things are part of information domain for the problem. (Ex: Reports, Designs, Signals)
3) Occurrences/Events: These occur within context of system operation. (Ex: Completion of a robot
movement.)
4) Roles: Roles are played by people who interact with system. (Ex: Manager)
5) Organizational Units: These are the units relevant to an application. (Ex: Team, Division)
6) Structures: Structures define a class of objects. (Ex: Sensor)
7) Places: Places establish context of the problem.(Ex: The Manufacturing Floor)

Coad and Yourdon suggest six selection characteristics that should be used to decide whether a potential class is included in the analysis model:
1) Retained Information: The potential class will be useful during analysis only if information about it
must be remembered so that system can function.
2) Needed Services: Class must have set of identifiable operations that can change value of its attributes
in some way.
3) Multiple Attributes: A class with multiple attributes is more useful during design than a class with a single attribute.
4) Common Attributes: A set of attributes can be defined for the potential class, and these attributes apply to all instances of the class.
5) Common Operations: A set of operations can be defined for the potential class, and these operations apply to all instances of the class.
6) Essential Requirements: External entities that produce or consume information essential to operation
of any solution for system will always be defined as classes in requirements model.

Specifying Attributes: Attributes define the class, i.e., what is meant by the class in the context of problem
space. To develop a meaningful set of attributes for an analysis class, a software engineer can again study a
use-case and select those “things” that reasonably “belong” to the class.
 The question "What data items fully define the class in the context of the problem at hand?" should be answered.

Defining Operations: Operations define behavior of an object. Operations can be classified as:
1) Operations that manipulate data in some way. (Ex: Adding, Deleting, Selecting)
2) Operations that perform computation.
3) Operations that inquire about state of an object.
4) Operations that monitor an object for the occurrence of a controlling event.
 An operation must have "knowledge" of the nature of the class attributes and associations.
 The analyst can again study a use-case and select those operations that reasonably belong to the class.

Fig.: Class Diagram
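A minimal code sketch of such a class, loosely based on the FloorPlan class used in the CRC example below (the attribute and operation names are assumptions for illustration):

    class FloorPlan {
        // attributes: define the class in the context of the problem space
        private String type;
        private String name;
        private double outsideDimensions;

        // operations: define the behavior of the object
        void addWall()   { /* manipulates data */ }
        void addCamera() { /* manipulates data */ }
        void draw()      { /* performs computation for display */ }
    }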

CLASS-RESPONSIBILITY-COLLABORATOR (CRC) MODELING: “CRC Modeling provides a


means for identifying and organizing the classes that are relevant to system or product requirements”. The
intent of CRC model is to develop an organized representation of classes, using actual or virtual index cards.
 Responsibilities are the attributes and operations that are relevant to the class.
 Collaborators are those classes that are required to provide a class with information needed to complete
a responsibility.
Types of Classes in CRC:
1. Entity Class: It contains information important to users. Entity classes, also called model or business classes, are extracted directly from the statement of the problem. These classes represent things that are to be stored in a database and persist throughout the duration of the application.
2. Boundary Class: Used to create interface that the user sees and interacts with as the software is used.
These classes are designed with responsibility of managing the way entity objects are represented to
users.
3. Controller Class: Manages a "unit of work" from start to finish. These can be designed to manage,
i. Creation or update of entity objects;
ii. Complex communication between sets of objects;
iii. Instantiation of boundary objects, as they obtain information from entity objects;
iv. Validation of data communicated between objects or between the user & the application.
 Controller classes are not considered until design has begun.

Class: FloorPlan
Description:
Responsibility:                              Collaborator:
defines floor plan name/type
manages floor plan positioning
scales floor plan for display
incorporates walls, doors and windows        Wall
shows position of video cameras              Camera
Fig: CRC Modeling Cards

Responsibilities:
Guidelines for Allocating Responsibilities to Classes:
1. System Intelligence should be distributed across classes to best address the needs of the problem:
Intelligence is what the system knows and what it can do. "Dumb" classes (those that have few responsibilities) can be modeled to act as servants to a few "smart" classes (those that have many responsibilities). The flow of control will be straightforward.
Disadvantages:
 Concentrates all intelligence within a few classes, making changes more difficult.
 Tends to require more classes, and hence more development effort.
 If system intelligence is more evenly distributed across the classes in an application, the maintainability of the software is enhanced and the impact of side effects due to change is reduced.
2. Each Responsibility should be stated as generally as possible: This implies that general responsibilities (both attributes and operations) should reside high in the class hierarchy (parent class).
3. Information and behavior related to it should reside within the same class: Data and processes that
manipulate data should be packaged as class, i.e., Encapsulation.
4. Information about one thing should be localized with a single class, not distributed across multiple
classes: A single class should take on the responsibility for storing and manipulating specific type of
information. If information is distributed, software becomes more difficult to test and maintain.
5. Responsibilities should be shared among related class when appropriate: A variety of related objects
must exhibit the same behavior at the same time.

Collaborators: Classes fulfill their responsibilities in one of two ways:


i. A class can use its own operations to manipulate its own attributes, thereby fulfilling a particular responsibility.
ii. A class can collaborate with other classes.


Collaborations identify relationships between classes. When a set of classes collaborate to achieve some
requirement, they can be organized into a sub-system. If a class cannot fulfill a responsibility, then it needs to
interact with another class, hence collaboration. To identify collaborations, analysts can examine three
different generic relationships between classes:
 The depends-upon relationship (dependency relationship).
 The has-knowledge-of relationship.
 The is-part-of relationship (Aggregation)

Index Card: An index card contains a list of responsibilities and the corresponding collaborations that enable those responsibilities to be fulfilled.
 When a complete CRC model has been developed, representatives from the customer & software organizations can review it.

Fig: Composite Aggregate Class

The CRC model review uses the following approach:


1. All participants are given a subset of the CRC model index cards. Cards that collaborate should be separated.
2. All use-case scenarios and diagrams should be organized into categories.
3. The review leader reads the use-case deliberately. As the review leader comes to a named class, he/she passes a token to the person holding the corresponding class index card.
4. When the token is passed, the holder of the class card is asked to describe the responsibilities noted on the card. The group determines whether one of the responsibilities satisfies the use-case requirement.
5. If responsibilities and collaboration noted on index cards cannot accommodate use-case, modifications
are made to the cards. This may include definition of new classes or specification of new or revised
responsibilities or collaborations on existing cards.
This modus operandi continues until the use-case is finished. When all use-cases have been reviewed, analysis modeling continues.

Associations and Dependencies:


 Association Relationship: In many instances, two analysis classes are related to one another in some
fashion, much like two data objects may be related to one another. In UML, these relationships are called
"associations".

Multiplicity: In an association relationship, multiplicity is used to represent the cardinality (mapping) between two or more analysis classes. Ex: 0..*, 0..1, 1..* etc.

Fig: Association with Multiplicity

 Dependency Relationship: In many cases, a client-server relationship exists between two analysis classes, where the client class depends on the server class in some way. This establishes a dependency relationship. In UML, dependencies are defined by a stereotype, which is an "extensibility mechanism" that allows a software engineer to define a special modeling element.

Fig: Dependency Relationship

ANALYSIS PACKAGES: Various elements of the analysis model (Ex: Use-cases, Analysis classes) are categorized in a manner that packages them as a grouping, called an "Analysis Package".
 The plus (+) sign preceding the analysis class name in each package indicates that the classes have
public visibility and are therefore accessible from other packages.
 Minus (-) sign indicates that an element is hidden from all other packages.
 A # sign indicates that an element is accessible only to classes contained within a given package.

Fig: Analysis Packages
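These visibility markers map naturally onto access modifiers in an object-oriented language; a minimal Java sketch (the package and class names are hypothetical):

    package sensors; // a hypothetical analysis package

    public class Detector {      // '+' public: accessible from other packages
        boolean armed;           // '#' roughly Java's package-private access
        private int sensitivity; // '-' hidden from all other packages
    }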



CREATING A BEHAVIORAL MODEL:


The behavioral model indicates how software will respond to external events. To create the model,
analyst must perform the following steps:
1. Evaluate all use-cases to fully understand the sequence of interaction within the system.
2. Identify events that drive interaction sequence and understand how these events relate to specific classes.
3. Create a sequence for each use-case.
4. Build a state diagram for the system.
5. Review behavioral model to verify accuracy and consistency.

Identifying events with the use-case: The use-case represents a sequence of activities that involves actor
and the system. In general, an event occurs whenever the system and an actor exchange information.
A use-case is examined for points of information exchange. An actor should be identified for each
event. Information that is exchanged should be noted and any constraints should be listed. Once all events
have been identified, they are allocated to the objects involved. Objects can be responsible for generating
events or recognizing events.

STATE REPRESENTATIONS: In the context of behavioral modeling, two different characterizations of states must be considered:
1. The state of each class as the system performs its function.
2. The state of the system as observed from the outside as the system performs its function.
The state of a class takes on both passive and active characteristics. A passive state is simply the current status of an object's attributes. An active state is the current status of an object as it undergoes a continuing transformation or processing.
 An event (sometimes called a trigger) must occur to force an object to make a transition from one state to another.

State Diagrams for Analysis Classes: One component of a behavioral model is a UML state diagram that represents active states for each class and the events that cause changes between these active states.

Fig: State Diagram


Each arrow shown in the figure represents a transition from one active state of a class to another. The labels shown for each arrow represent the event that triggers the transition. A guard is a Boolean condition that must be satisfied for a transition to occur.
Along with specifying the event, the analyst can also specify a guard and an action. An action occurs concurrently with the state transition or as a consequence of it.
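A minimal sketch of states, a triggering event and a guard in code (the states and event are hypothetical, not taken from the figure):

    enum State { IDLE, READING, VALIDATING }

    class ControlPanel {
        private State state = State.IDLE;
        private int attempts = 0;

        // The event "keyHit" triggers transitions between active states.
        void keyHit() {
            switch (state) {
                case IDLE:    state = State.READING; break;
                case READING: state = State.VALIDATING; break;
                case VALIDATING:
                    if (attempts < 3) {     // guard: must hold for the transition
                        attempts++;         // action: occurs with the transition
                        state = State.READING;
                    }
                    break;
            }
        }
    }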

Sequence Diagrams for Analysis Classes: The sequence diagram indicates how events cause transitions from object to object. Once events have been identified by examining a use-case, the modeler creates a sequence diagram.
 It is a shorthand version of the use-case, representing key classes and the events that cause behavior to flow from class to class.
 Once a complete sequence diagram has been developed, all of the events can be collated into a set of input and output events, useful for creating an effective design.

Fig: Sequence Diagram


DESIGN ENGINEERING
Design is a core engineering activity. Design creates a model of the software. Design engineering
encompasses the set of principles, concepts and practices that lead to the development of a high quality
system or product.
The goal of design engineering is to produce a model or representation that exhibits firmness,
commodity and delight. Design engineering for computer software changes continually as new methods,
better analysis and broader understanding evolve.
 A product should be designed in a flexible manner to develop quality software.

DESIGN WITHIN THE CONTEXT OF SOFTWARE ENGINEERING:


Software design is the last software engineering action within the modeling activity and sets the stage for development. The analysis model, manifested by scenario-based, class-based, flow-oriented and behavioral elements, feeds the design task.

The design model produces a data/class design, an architectural design, an interface design, a component-level design and a deployment-level design.
 The data/class design transforms the analysis class models into design class realizations and the data structures required to implement the software. Part of class design may occur as each software component is designed.
 The architectural design defines the relationship between major structural elements of the software, the architectural styles and design patterns that can be used to achieve the requirements defined for the system, and the constraints that affect the way in which the architecture can be implemented. The architectural design can be derived from the system specification, the analysis model and the interaction of subsystems defined within the analysis model.

Fig: Translating the Requirements model to Design model


 The interface design describes how the software communicates with systems that interoperate with it, and
with humans who use it. Usage scenarios and behavioral models provide much of information required
for the interface design.
 The component-level design transforms structural elements of the software architecture into a procedural
description of software components. Information from class-based models, flow models, behavioral
models serve as basis for component design.

DESIGN PROCESS AND DESIGN QUALITY:


The importance of software design can be stated with one word: QUALITY. Software design serves as the foundation for all software engineering and software support activities that follow.

Software design is an iterative process through which requirements are translated into a "blueprint" for
constructing the software. Initially, design is represented at a high level of abstraction. As iteration occurs,
subsequent refinement leads to design representations at lower levels of abstraction.

The following characteristics serve as a guide for evaluation of good design:


1. The design must implement all explicit requirements contained in analysis model, and accommodate all
implicit requirements desired by customer.
2. Design must be a readable, understandable guide for those who generate code, who test and support the
software.
3. Design should provide a complete picture of the software, addressing data, functional and behavioral
domains.
 Each of these characteristics is goal of the design process.

Quality Guidelines: Guidelines for quality design are:


1. A design should exhibit an architecture that
a) has been created using recognizable architectural styles/patterns.
b) composed of components that exhibit good design characteristics.
c) can be implemented in an evolutionary fashion.
2. A design should be modular; software should be logically partitioned into elements or sub-systems.
3. It should contain distinct representations of data, architecture, interfaces and components.
4. It should lead to data structures that are appropriate for the classes to be implemented.
5. It should lead to components that exhibit independent functional characteristics.
6. It should lead to interfaces that reduce complexity of connections between components and external
environment.
7. It should be represented using a repeatable (iterative) method.
8. It should be represented using a notation that effectively communicates its meaning.

 Design engineering encourages good design through the application of fundamental design principles,
systematic methodology and through review.


Quality Attributes: Hewlett-Packard (HP) developed a set of software quality attributes, given by the acronym FURPS:
1. Functionality: It is assessed by evaluating feature set and capabilities of the program, generality of
functions and security of overall system.
2. Usability: It is assessed by considering human factors, overall aesthetics, consistency and documentation.
3. Reliability: It is evaluated by measuring frequency & severity of failure, ability to recover, accuracy of
output results, Mean- Time-To-Failure (MTTF) and predictability of the program.
4. Performance: It is measured by processing speed, response time, resource consumption, throughput and efficiency.
5. Supportability: It combines the ability to extend the program (extensibility), adaptability and serviceability, which together represent the maintainability of the project.

These quality attributes must be considered as soon as design commences, but not after the design is
complete and construction has begun.

DESIGN CONCEPTS

Fundamental software design concepts provide the necessary framework for "getting it right".

1. Abstraction: At highest level of abstraction, a solution for design problem is stated in broad terms using
language of the problem environment. At lower levels of abstraction, a more detailed description of
solution is provided.
 A data abstraction is a named collection of data that describes a data object.
 A procedural abstraction refers to a sequence of instructions that have a specific and limited function. The name of the procedural abstraction implies the function, but specific details are suppressed.
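A small sketch of both forms (the door example is a classic illustration; the names are assumptions): the data abstraction collects the attributes that describe the object, while the procedural abstraction's name implies its function and suppresses the details:

    // Data abstraction: a named collection of data describing a door.
    class Door {
        String type;
        int weight;
        String openingMechanism;
    }

    class DoorOperations {
        // Procedural abstraction: "open" implies a limited, specific function;
        // the detailed sequence of steps is suppressed inside the method.
        void open(Door door) { /* details hidden here */ }
    }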

2. Architecture: It is the structure or organization of program components (modules), their interaction, and the structure of data that are used by the components. Components can be generalized to represent major system elements and their interactions.
A set of architectural patterns enables a software engineer to reuse design-level concepts. One goal of software design is to derive an architectural rendering of a system, which serves as a framework to conduct more detailed design activities.
Architectural design can be represented using one or more of a number of different models:
1. Structural models: Represent architecture as an organized collection of program components.
2. Framework models: Increase the level of design abstraction by attempting to identify repeatable
architectural design frameworks that are encountered in similar types of applications.
3. Dynamic models: Address the behavioral aspects of the program architecture.
4. Process models: Focus on design of business or technical process that the system must accommodate.
5. Functional models: Can be used to represent functional hierarchy of a system.


3. Patterns: "A design pattern describes a design structure that solves a particular design problem within a specific context amid "forces" (constraints) that may have an impact on the manner in which the pattern is applied and used."
Intent of each design pattern is to provide a description that enables a designer to determine:
i. Whether pattern is applicable to the current work.
ii. Whether pattern can be reused (hence, saving design time)
iii. Whether pattern can serve as a guide for developing a similar, but functionally or structurally
different pattern.

4. Modularity: Software architecture and design patterns embody modularity, i.e., software is divided into
separately named and addressable components, sometimes called modules that are integrated to satisfy
problem requirements.
Modularity is the single attribute of software that allows a program to be intellectually manageable (by breaking a big process into modules). Modularity leads to a "divide and conquer" strategy: it is easier to solve a complex problem when it is broken into manageable pieces.
We modularize a design, so that development can be more easily planned, software increments can be
defined and delivered, changes can be more easily accommodated, testing and debugging can be conducted
more efficiently and long- term maintenance can be conducted without serious side effects.

5. Information Hiding: It suggests that "modules should be specified and designed so that information
(algorithms, data) contained within a module is inaccessible to other modules that have no need for such
information."
Hiding defines and enforces access constraints to both procedural detail within a module and any local data structure used by the module. As most data and procedures are hidden from other parts of the software, errors introduced during modification are less likely to propagate to other locations within the software.
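A minimal sketch of information hiding in Java (the module and its contents are hypothetical): the algorithm and local data structure are private, and other modules see only a narrow operation:

    class RateTable {                          // a hypothetical module
        private double[] rates = {0.05, 0.07}; // hidden local data structure

        private int lookupIndex(String code) { // hidden algorithm
            return Math.abs(code.hashCode()) % rates.length; // placeholder logic
        }

        public double rateFor(String code) {   // the only visible operation
            return rates[lookupIndex(code)];
        }
    }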

6. Functional Independence: It is achieved by developing modules with "single-minded" function and an


"aversion" to excessive interaction with other modules.
Functional independence is a key to good design and design is the key to software quality. Independence
is assessed using two qualitative criteria:
 Cohesion: Cohesion is an indication of relative functional strength of a module. A cohesive module
performs a single task, requiring little interaction with other components.
 Coupling: Coupling is an indication of interconnection among modules in software architecture.
Coupling depends on the interface complexity between modules.

7. Refinement: Stepwise refinement is a top-down design strategy, which is actually a process of elaboration. A program is developed by successively refining levels of procedural detail.
Refinement begins with a statement of function that is defined at a high level of abstraction. It helps the designer to reveal low-level details as the design progresses, thus creating a complete design model.


8. Refactoring: “Refactoring is the process of changing a software system in such a way that it does not
alter external behavior of the code (design) yet improves its internal structure”.
When software is refactored, the existing design is examined for redundancy, unused design elements, poorly constructed data structures, unnecessary algorithms, etc., to obtain a better design.
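A tiny before/after sketch (with hypothetical methods): refactoring removes the redundancy while the externally visible behavior stays the same:

    class Geometry {
        // Before refactoring: redundant, near-identical methods.
        double areaOfFloor(double width, double height) { return width * height; }
        double areaOfWall(double width, double height)  { return width * height; }
    }

    class GeometryRefactored {
        // After refactoring: the same external behavior, one shared method.
        double areaOf(double width, double height) { return width * height; }
    }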

9. Design Classes: As the design model evolves, the software team must define a set of design classes that:
 Refine analysis classes by providing design detail that will enable the classes to be implemented.
 Create a new set of design classes that implement a software infrastructure to support the business
solution.
Design classes provide more technical detail as a guide for implementation. Five different types of
design classes, each representing a different layer of design architecture are suggested. They are:
1. User Interface Classes: These define all abstractions that are necessary for Human Computer Interaction
(HCI). HCI occurs within context of a metaphor (Ex: Order form, a checkbook) and design classes for
interface may be visual representations of elements of metaphor.
2. Business Domain Classes: These are often refinements of the analysis classes defined earlier. The classes
identify attributes and services (operations) that are required to implement some element of the business
domain.
3. Process Classes: These implement lower-level business abstractions required to fully manage business
domain classes.
4. Persistent Classes: These represent data stores (DBs) that will persist beyond execution of the software.
5. System Classes: These implement software management and control functions that enable system to
operate and communicate. These are also known as supporting classes.
As design model evolves, software team must develop a complete set of attributes and operations for each
design class.

Four characteristics of a well-formed design class:

1. Complete and Sufficient: A design class should be the complete encapsulation of all attributes and methods
that can reasonably be expected to exist for the class. Sufficiency ensures that the design class contains only those
methods that are sufficient to achieve the intent of the class.(No more and No less).
2. Primitiveness: Methods associated with a design class should be focused on accomplishing one service for the class. Once the service has been implemented with a method, the class should not provide another way to accomplish the same thing.
3. High Cohesion: A cohesive design class has a small, focused set of responsibilities and single-mindedly applies attributes and methods to implement those responsibilities.
4. Low Coupling: Collaboration between design classes should be kept to an acceptable minimum. If a design model is highly coupled, the system is difficult to implement, test & maintain. So, design classes in a subsystem should have only limited knowledge of classes in other subsystems. This principle, also called the "Law of Demeter", suggests that a method should only send messages to methods in neighboring classes.


THE DESIGN MODEL


The design model can be viewed in two different dimensions:
 The process dimension indicates evolution of design model as design tasks are executed as part of the
software process.
 The abstraction dimension represents level of detail as each element of analysis model is transformed
into a design equivalent and then refined iteratively.

Fig: Dimensions of the Design Model



The elements of the design model use many of the same UML diagrams that were used in the analysis model. The difference is that these diagrams are refined and elaborated as part of design; more implementation-specific detail is provided, and the emphasis is on architectural structure and style, components & interfaces.

Elements of design model:

1. Data Design Elements: Data design (also sometimes referred to as "data architecting") creates a model of data and/or information that is represented at a high level of abstraction.
In many software applications, architecture of data will have a profound influence on architecture of
software that must process it. Structure of data always plays important role in software design.
 At program component level, design of data structures and associated algorithms required to
manipulate them is essential to the creation of high-quality applications.
 At application level, translation of data model into a DB is important to achieve business objectives.
 At business level, collection of information stored in DBs and reorganized into a "data warehouse"
enables data mining or knowledge discovery.

2. Architectural Design Elements: These give us an overall view of the software. It is derived from 3
sources:
i. Information about application domain for software to be built.
ii. Specific analysis model elements such as DFDs or analysis classes, their relationships and
collaborations for the problem.
iii. Availability of architectural patterns and styles.

3. Interface Design Elements: These tell how information flows into and out of the system and how it is
communicated among components designed as part of the architecture. There are 3 important elements of
interface design:
i. User Interface (UI): Design of a UI incorporates aesthetic elements (Ex: color, layout, graphics),
ergonomic elements (information layout and placement, navigation), and technical elements (UI
patterns, reusable components). In general, UI is a unique subsystem within overall application architecture.
ii. External interfaces to other systems, devices, networks, other producers/consumers of
information: The design of external interfaces requires definitive information about the entity to
which information is sent or received. In every case, this information should be collected during
Requirement Engineering and verified. This design should incorporate error checking and appropriate security
features.
iii. Internal interfaces between various design components: It is closely aligned with component level
design. Design realizations of analysis classes represent all operations and messaging schemes
required to enable communication and collaboration between operations in various classes.

In some cases, an interface is modeled in same way as a class."An interface is a set of operations that
describes some part of the behavior of a class and provides access to those operations."
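A minimal sketch of this idea in Java (the names are illustrative):

    // The interface describes part of the behavior of a class and
    // provides access to those operations.
    interface Sortable {
        void sort();
    }

    class ReportList implements Sortable {
        public void sort() { /* the concrete realization is supplied here */ }
    }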


4. Component-Level Design Elements: The component-level design for software fully describes the internal detail of each software component. To accomplish this, component-level design defines detail for all processing that occurs within a component and an interface that allows access to all component operations.

Design details of a component can be modeled at many different levels of abstraction. An activity
diagram can be used to represent processing logic. Detailed procedural flow for a component can be
represented using either pseudo code or some diagrammatic form.

5. Deployment-Level Design Elements: These indicate how software functionality and subsystems will be
allocated within the physical computing environment that will support the software.
Deployment diagram shows the computing environment but does not explicitly indicate configuration
details. Each instance of deployment is identified.
During design, a UML deployment diagram is first developed, and then refined. In a deployment diagram,
each subsystem would be elaborated to indicate components that it implements.


PATTERN-BASED SOFTWARE DESIGN


Throughout the design process, a software engineer should look for every opportunity to reuse existing
design patterns (when they meet needs of the design) rather than creating new ones.
Describing a Design Pattern: Mature engineering disciplines make use of thousands of design patterns, for
things such as buildings, highways, electrical circuits, factories, weapons, computers etc. A description of
design pattern may also consider a set of design forces.
Design forces describe non-functional requirements (Ex: Portability) associated with the software for which the pattern is to be applied. They also define the constraints that may restrict the manner in which the design is to be implemented. In short, design forces describe the environment and constraints that must exist to make a design pattern applicable.
 Pattern characteristics (classes, responsibilities & collaborations) indicate the attributes of the design that
may be adjusted to enable the pattern to accommodate a variety of problems.
Design patterns should be given meaningful names, chosen with care.

Using Patterns in Design: Design patterns can be used throughout software design. The problem description
is examined at various levels of abstraction to determine if it is amenable to one or more following types of
patterns:
1. Architectural Patterns: These patterns,
 Define overall structure of the software,
 Indicate relationships among subsystems & software components,
 Define rules for specifying relationships among the elements (classes, components, packages, subsystems) of the architecture.
2. Design Patterns: These patterns address a specific element of the design, such as an aggregation of components to solve some design problem, relationships among components, or mechanisms for effecting component-to-component communication.
3. Coding Patterns: These are also called idioms; these language- specific patterns generally implement an
algorithmic element of a component, a specific interface protocol, or a mechanism for communication
among components.

Each of these pattern types differs in the level of abstraction and degree to which it provides direct guidance
for construction activity of software process.

Frameworks: “A framework is not an architectural pattern, but rather a skeleton with a collection of "plug
points" (also called hooks and slots) that enable it to be adapted to a specific problem-domain."
Plug points enable the designer to integrate problem-specific classes or functionality within the skeleton. In an O-O context, a framework is a collection of cooperating classes.
 To be most effective, frameworks are applied with no changes; additional design elements may be added, but only via plug points that allow the designer to flesh out the framework skeleton.
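A minimal sketch of a framework skeleton with plug points, using hypothetical names: the skeleton itself is applied unchanged, and the designer fleshes it out only through the hooks:

    abstract class ReportFramework {
        // The fixed skeleton: not modified by the designer.
        final void produceReport() {
            gatherData();     // plug point
            formatOutput();   // plug point
        }
        protected abstract void gatherData();    // hook ("slot")
        protected abstract void formatOutput();  // hook ("slot")
    }

    class SalesReport extends ReportFramework {
        protected void gatherData()   { /* problem-specific behavior */ }
        protected void formatOutput() { /* problem-specific behavior */ }
    }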


UNIT-IV
Creating Architectural Design: Software architecture, Data design, Architectural Styles and Patterns,
Architectural Design, Assessing alternative Architectural Designs, Mapping data flow into software
Architecture.
Modeling Component-Level Design: What is a Component, Designing Class-Based components, Conducting
Component–level Design, Object Constraint Language, Designing Conventional Components.
Performing User Interface Design: The Golden Rules, User Interface Analysis and Design, Interface Analysis,
Interface Design Steps, Design Evaluation.
CREATING AN ARCHITECTURAL DESIGN
SOFTWARE ARCHITECTURE
What is Architecture? “The architecture of a system is a comprehensive framework that describes its form and
structure - its components and how they fit together”.
The software architecture of a computing system is the structure of the system, which comprises software components, the externally visible properties of those components and the relationships among them.
Architecture is a representation that enables a software engineer to:
1. Analyze effectiveness of design in meeting its stated requirements.
2. Consider architectural alternatives at a stage when making design changes is still relatively easy.
3. Reduce the risks associated with the construction of software.
 In the context of architectural design, a software component can be as simple as a program module, a class
or a database.

Why is Architecture Important? The three key reasons that software architecture is important are:
1. Representations of software architecture are an enabler for communication between all parties (stakeholders) interested in the development of a computer-based system.
2. Architecture highlights early design decisions that will have a profound impact on all software engineering work that follows and on the ultimate success of the system.
3. Architecture "constitutes a relatively small, intellectually graspable model of how the system is structured and how its components work together".
 Architectural styles and patterns can be applied to the design of other systems and represent a set of abstractions that enable software engineers to describe architecture in predictable ways.

DATA DESIGN
Data design actions translate data objects defined as part of the analysis model into data structures at the software component level and, when necessary, into a database architecture at the application level.

Data Design at the Architectural Level: In today’s software business environment, where a lot of data and databases are used, the challenge is to extract useful information, particularly when the information desired is cross-functional.

 To solve this challenge, the business IT community has developed Data Mining techniques, also called Knowledge Discovery in Databases (KDD), that navigate through existing databases to extract appropriate business-level information.
 An alternative solution, called a Data Warehouse, is a large, independent database that has access to data stored in the databases that serve the set of applications required by a business.

Data Design at the Component Level: Data design at the component level focuses on representation of data
structures that are directly accessed by one or more software components.

 In actuality, data design begins during creation of analysis model.

Set of Principles for Data Specification are:

1. The systematic analysis principles applied to functions and behavior should also be applied to data:
Representations of data flow and content should also be developed and reviewed, data objects should
be identified, alternative data organizations should be considered, and impact of data modeling on
software design should be evaluated.
2. All data structures and the operations to be performed on each should be identified: The design of an
efficient data structure (DS) must take the operations to be performed into account. The attributes and
operations encapsulated within a class satisfy this principle.
3. A mechanism for defining content of each data object should be established and used to define both
data and operations applied to it: Class diagrams define the attributes contained within a class and
operations that are applied to attributes.
4. Low-level data design decisions should be deferred until late in the design process: Overall data
organization may be defined during requirements analysis, refined during data design work, and
specified in detail during component- level design.
5. The representation of a data structure should be known only to those modules that must make direct use of the data contained within the structure: The concepts of information hiding and coupling provide
important insight into the quality of a software design.
6. A library of useful DS and operations that may be applied to them should be developed: A class
library achieves this principle.
7. A software design and programming language should support the specification and realization of abstract data types: The implementation of a sophisticated data structure can be made exceedingly difficult if no means for direct specification of the structure exists in the programming language chosen for implementation.
These principles form a basis for a component-level data design approach that can be integrated into both analysis and design activities.
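Principle 7 above can be illustrated with Java generics, which allow an abstract data type to be specified directly, independent of its internal representation (a minimal sketch):

    import java.util.ArrayList;
    import java.util.List;

    class Stack<E> {                                     // an abstract data type
        private final List<E> items = new ArrayList<>(); // hidden representation

        void push(E item) { items.add(item); }
        E pop()           { return items.remove(items.size() - 1); }
        boolean isEmpty() { return items.isEmpty(); }
    }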


ARCHITECTURAL STYLES AND PATTERNS

An architectural style is a transformation that is imposed on the design of an entire system. The intent is to
establish a structure for all components of the system.

Each style describes a system category that encompasses:

1. A set of components (Ex: database, computational modules) that perform a function required by a system.
2. A set of connectors that enable “communication, coordination, and cooperation” among components.
3. Constraints that define how components can be integrated to form system.
4. Semantic models that enable a designer to understand the overall properties of a system by analyzing the known properties of its constituent parts.

An architectural pattern, like an architectural style, imposes a transformation on the design of an entire system. A pattern differs from a style in a number of fundamental ways:

 The scope of a pattern is less broad, focusing on one aspect of the architecture rather than the architecture entirely.
 A pattern describes how the software handles some aspect of its functionality at the infrastructure level.
 Architectural patterns tend to address specific behavioral issues within the context of the architecture (Ex: Interrupts).

ARCHITECTURAL STYLES:

1. Data-Centered Architecture: A data store (Ex: Database) resides at the center of this architecture and is
accessed frequently by other components that update, add, delete, or modify data within the store.

Fig: Data-Centered Architecture

Client software accesses a central repository. It accesses data independently of any changes to the data or the actions of other client software. A data-centered architecture promotes integrability, i.e., existing components can be changed and new client components can be added to the architecture without concern about other clients. Client components independently execute processes.

2. Data-Flow Architecture: This architecture is applied when input data are to be transformed through a series of computational (or manipulative) components into output data.
A pipe and filter structure has a set of components called filters, connected by pipes that transmit data
from one component to the next. Each filter works independently and produces data output of a specified
form.

Fig: Data-Flow Architecture

If the data flow degenerates into a single line of transforms, it is termed “batch sequential”. This structure
accepts a batch of data and then applies a series of sequential components to transform it.
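A minimal pipe-and-filter sketch (the filters are hypothetical): each filter works independently, and the "pipe" simply feeds the output of one filter into the next:

    import java.util.function.Function;

    public class PipeAndFilter {
        public static void main(String[] args) {
            Function<String, String> trimFilter  = String::trim;
            Function<String, String> upperFilter = String::toUpperCase;

            // The pipe: output of one filter flows into the next filter.
            Function<String, String> pipeline = trimFilter.andThen(upperFilter);
            System.out.println(pipeline.apply("  incoming data  ")); // INCOMING DATA
        }
    }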

3. Call and Return Architecture: It enables a software designer (system architect) to achieve a program
structure that is relatively easy to modify and scale.
Main Program/Subprogram Architecture: The classic program structure decomposes function into a control hierarchy, where a "main" program invokes a number of program components, which in turn may invoke still other components.

Fig: Main Program/Subprogram Architecture


Remote Procedure Call Architecture: The components of main program/subprogram architecture are
distributed across multiple computers on a network.

4. Object-Oriented Architecture: The components of a system encapsulate data and the operations that must be applied to manipulate the data. Communication and coordination between components is accomplished via message passing.

Fig: Object-Oriented Architecture

5. Layered Architecture: In this architecture, a number of different layers are defined, each accomplishing
operations that progressively become closer to the machine instruction set.

Fig: Layered Architecture

 At the outer layer, components service user interface (UI) operations.
 At the inner layer, components perform operating system interfacing.
 Intermediate layers provide utility services and application software functions.
 Once requirements engineering uncovers the characteristics and constraints of the system to be built,
the architectural style or combination of styles that best fits can be chosen.


Architectural Patterns: “Architectural patterns define a specific approach for handling some behavioral
characteristic of the system”. Some domains can be:
1. Concurrency: Many applications must handle multiple tasks in a manner that simulates parallelism. There
are a number of different approaches to handle concurrency.
i. Operating System Process Management Patterns: This pattern provides built-in operating system features that allow components to execute concurrently. The pattern also incorporates operating system functionality that manages communication between processes, scheduling and other capabilities required to achieve concurrency.
ii. Task Scheduler at the application level: This pattern contains a set of active objects, each of which contains a tick() operation that performs its function before returning control back to the scheduler.
2. Persistence: Persistent data are stored in a database or file and may be read or modified by other processes at a later time. Two architectural patterns are used to achieve persistence:
i. DBMS Pattern: Applies the storage and retrieval capability of a DBMS to the application architecture.
ii. Application Level Persistence Pattern: It builds persistence features into the application architecture.
3. Distribution: The distribution problem addresses the manner in which systems or components within systems communicate with one another in a distributed environment. The broker pattern addresses this problem; CORBA is an example of a broker architecture.
Broker Pattern: A broker acts as a “middle-man” between the client component and the server component. The client sends a message to the broker, and the broker completes the connection.
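A minimal sketch of the broker idea (the service names and classes are hypothetical, not CORBA itself): the client never talks to a server directly; the broker completes the connection:

    import java.util.HashMap;
    import java.util.Map;

    interface Server { String handle(String request); }

    class Broker {
        private final Map<String, Server> servers = new HashMap<>();

        void register(String service, Server s) { servers.put(service, s); }

        // The "middle-man": receives the client's message and completes
        // the connection to the appropriate server component.
        String forward(String service, String request) {
            Server s = servers.get(service);
            return (s == null) ? "no such service" : s.handle(request);
        }
    }

    class Client {
        String call(Broker broker) { return broker.forward("echo", "hello"); }
    }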

Organization and Refinement: The following questions provide insight into the architectural style that has been derived:
 Control: How is control managed within the architecture? Does a distinct control hierarchy exist? How do components transfer control within the system? How is control shared among components? What is the control topology? Is control synchronized, or do components operate asynchronously?
 Data: How are data communicated between components? Is the data flow continuous? What is the mode of data transfer? Do data components exist? If so, what is their role? How do functional components interact with data components? Are data components passive or active?

ARCHITECTURAL DESIGN
As architectural design begins, the software to be developed must be put into context, i.e., the design should define the external entities (other systems, people, and devices) that the software interacts with and the nature of the interaction.
Once context is modeled and all external software interfaces have been described, you can identify a
set of architectural archetypes. An archetype is an abstraction (similar to a class) that represents one element
of system behavior. The set of archetypes provides a collection of abstractions that must be modeled
architecturally if the system is to be constructed, but the archetypes themselves do not provide enough
implementation detail. Therefore, the designer specifies the structure of the system by defining and refining
software components that implement each archetype. This process continues iteratively until a complete
architectural structure has been derived.

Representing the System in Context: At the architectural design level, a software architect uses an Architectural Context Diagram (ACD) to model the manner in which the software interacts with entities external to its boundaries.

Fig: Architectural Context Diagram

Referring to the figure, systems that interoperate with the target system are represented as:

1. Superordinate Systems: Those systems that use the target system as part of some higher level processing
scheme.
2. Subordinate Systems: Systems that are used by target system and provide data or processing that are
necessary to complete target system functionality.
3. Peer-Level Systems: Systems that interact on a peer- to – peer basis (i.e. information is produced or
consumed by the peers and the target system)
4. Actors: Entities that interact with the target system by producing/consuming information that is necessary
for requisite processing.

 Each of these external entities communicates with the target system through an interface (small shaded
rectangles in fig). All data that flow into and out of the target system must be identified at this stage.


Defining Archetypes: “An archetype is a class or pattern that represents a core abstraction that is critical to the design
of architecture for target system”. Archetypes can be derived by examining the analysis classes.
 Target system architecture is composed of these archetypes, which represent stable elements of the
architecture.
Following archetypes can be defined for an application:
 Node: Represents a cohesive collection of input and output elements of the application.
 Detector: An abstraction that encompasses all sensing equipment that feeds information into target system.
 Indicator: An abstraction that represents all mechanisms for indicating an occurrence of a condition.
 Controller: An abstraction that depicts the mechanism that allows the arming/disarming of a node. If
controllers reside on a network, they have ability to communicate with one another.

Refining the Architecture into Components: As software architecture is refined into components, the
structure of the system begins to emerge. Analysis classes represent entities within the application domain that
must be addressed within the software architecture. So, application domain is one source for derivation and
refinement of components.
Another source is Infrastructure Domain. The architecture must accommodate many infrastructure
components that enable application components, but have no business connection to the application domain.
 Interfaces depicted in ACD imply one or more specialized components that process data that flow across
interface. In some cases, complete subsystem architecture with many components must be designed.

Fig: Refining the Architecture into Components


Describing Instantiations of the System: When further refinement is still necessary (all design is iterative), an actual instantiation of the system architecture is developed. That is, the architecture is applied to a specific problem with the intent of demonstrating that the structure and components are appropriate.

ASSESSING ALTERNATIVE ARCHITECTURAL DESIGNS


Design results in a number of architectural alternatives that are assessed to determine which is the most appropriate for the problem to be solved.
Architecture Trade-Off Analysis Method: The Software Engineering Institute (SEI) has developed an
Architecture Trade –off Analysis Method (ATAM) that establishes an iterative evaluation process for software
architectures. The design analysis activities that follow are performed iteratively:
1) Collect Scenarios: A set of use-cases is developed to represent the system from the user’s point of view.
2) Elicit Requirements, Constraints and Environment Description: This information is required as part of requirements engineering and is used to be certain that all stakeholder concerns have been addressed.
3) Describe the architectural styles/patterns that have been chosen to address the scenarios and requirements.
4) Evaluate quality attributes by considering each attribute in isolation: Quality attributes for architectural design assessment include reliability, performance, security, maintainability, flexibility, testability, portability, reusability, and interoperability.
5) Identify the sensitivity of Quality attributes to various architectural attributes for a specific architecture
style: This can be accomplished by making small changes in architecture. Attributes that are significantly
affected by variation in architecture are termed as sensitivity points.
6) Critique candidate architectures (developed in step 3) using the sensitivity analysis conducted in step 5: Once architecture sensitivity points have been determined, finding trade-off points is simply the identification of architecture elements to which multiple attributes are sensitive.
These 6 steps represent first ATAM iteration. Based on results of steps 5 and 6, some architecture
alternatives may be eliminated, one or more of the remaining architectures may be modified and represented in
more detail, and then ATAM steps are reapplied.

Architectural Complexity: A useful technique for assessing the overall complexity of a proposed architecture is to consider dependencies between components within the architecture. These dependencies are driven by information/control flow within the system.
Types of Dependencies:
 Sharing Dependencies: These represent dependence relationships among consumers, who use the same
resource or producers, who produce for the same consumers.
 Flow Dependencies: These represent dependence relations between producers and consumers of resources.
 Constrained Dependencies: These represent constraints on the relative flow of control among a set of
activities.
The sharing and flow dependencies are similar to coupling. Coupling is an important design
concept that is applicable at the architectural level and component level.

Architectural Description Languages (ADL): ADL provides a semantics and syntax for describing software
architecture. ADL should provide the designer with the ability to decompose architecture components,
compose individual components into larger architectural blocks, and represent interfaces between components.
Once descriptive, language – based techniques for architecture design have been established, it is more likely
that effective assessment methods for architectures will be established as the design evolves.


MAPPING DATA FLOW INTO A SOFTWARE ARCHITECTURE:


The call and return structure is the most commonly used architecture for many types of systems, and it is the target of this architectural mapping. The mapping technique enables a designer to derive reasonably complex call and return architectures from DFDs within the analysis model. The technique is sometimes called "Structured Design."
 Structured Design is often characterized as a data flow oriented design method, as it provides a convenient transition from a DFD to software architecture. The type of information flow is the driver for the mapping approach.

TRANSFORM FLOW: Information enters the system along paths that transform external data into an internal form. These paths are identified as incoming flow. At the kernel of the software, a transition occurs. Incoming data are passed through a "transform center" and begin to move along paths that now lead out of the software. Data moving along these paths are called outgoing flow.
 Overall data flow occurs in a sequential manner and follows one, or only a few “straight line” paths. When
a segment of a DFD exhibits these characteristics, Transform Flow is present.

TRANSACTION FLOW: Usually a transaction triggers data flow along one of many paths. Transaction flow is characterized by data moving along an incoming path that converts external world information into a transaction. The transaction is evaluated and, based on its value, flow along one of many action paths is initiated. The hub of information flow from which many action paths emanate is called the "transaction center".

TRANSFORM MAPPING: Transform Mapping is a set of design steps that allows a DFD with transform
flow characteristics to be mapped into a specific architecture style. To map these DFDs into architecture,
following design steps are initiated.
Step 1: Review the Fundamental System Model: The fundamental system model or context diagram (Level 0 DFD) depicts an operation (function) as a single transformation, representing the external producers and consumers of data that flow into and out of the function.


Step 2: Review and Refine DFDs for the Software: Information obtained from analysis models is refined to
produce greater detail. i.e., process implied by a transform performs a single, distinct function that can be
implemented as a component in software.

Step 3: Determine whether the DFD has Transform (or) Transaction Flow Characteristics: In general, information flow within a system can always be represented as transform. However, in this step, the designer selects global (software-wide) flow characteristics based on the prevailing nature of the DFD. Local regions of transform or transaction flow are isolated. These sub-flows can be used to refine the program architecture derived from the global characteristic described earlier. Hence, an overall transform characteristic will be assumed for information flow.
Step 4: Isolate the Transform Center by Specifying Incoming and Outgoing Flow Boundaries: Usually, incoming and outgoing flow boundaries are open to interpretation, i.e., different designers may select slightly different points in the flow as boundary locations. Proper care should be taken when boundaries are selected; a variance of one bubble along a flow path will generally have little impact on the final program structure.
Step 5: Perform “First-level Factoring”: Factoring results in a program structure in which top-level
components perform decision-making and low-level components perform input, computation and output work.
Middle-level components perform some control and do moderate amounts of work. This type of mapping
results in top-down distribution of control.

In the first-level factoring a main controller resides at the top of the program structure and coordinates the
following subordinate control functions. When transform flow is encountered, a DFD is mapped to a specific
structure (call & return architecture) that provides control for incoming, transform, outgoing information
processing.
 The number of modules at the first level should be limited to the minimum.
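As a rough sketch of the shape first-level factoring produces (module names are hypothetical), a main controller performs only decision making and delegates to incoming, transform and outgoing controllers:

```python
# Hypothetical first-level factoring: control at the top, work at the bottom.

def incoming_controller():                 # coordinates input-side modules
    return [3, 1, 2]                       # stands in for reading/validating input

def transform_controller(data):            # coordinates the transform center
    return sorted(data)

def outgoing_controller(result):           # coordinates output-side modules
    print("result:", result)

def main_controller():                     # top level: decision making only
    data = incoming_controller()
    outgoing_controller(transform_controller(data))

main_controller()                          # prints: result: [1, 2, 3]
```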

Step 6: Perform “Second-Level Factoring”: Second-level factoring is accomplished by mapping individual


transforms (bubbles) of a DFD into appropriate modules within the architecture. Beginning at transform center
boundary and moving outward along incoming and the outgoing paths, transforms are mapped into sub-
ordinate levels of the software structure.
Although second-level factoring suggests a one-to-one mapping between DFD transforms and software modules, different mappings frequently occur. Practical considerations and measures of design quality dictate the outcome of second-level factoring. Review and refinement may lead to changes in this structure, but it can serve as a "first-iteration" design.
 Factoring is accomplished by moving outward from the transform center boundary on incoming flow side.
 Components mapped in preceding manner represent an initial design of software architecture.

Step 7: Refine First-Iteration Architecture using Design Heuristics for Improved Software Quality: A
first iteration architecture can always be refined by applying concepts of functional independence. Components
are exploded/imploded to produce sensible factoring, good cohesion, minimal coupling and most importantly, a structure
that can be implemented without difficulty, tested without confusion and maintained without grief.
Refinements are dictated by analysis and assessment methods as well as practical considerations and
common sense. Software requirements coupled with human judgment is the final arbiter.

 The objective of preceding seven steps is to develop an architectural representation of software. i.e., once
the structure is defined, we can evaluate and refine software architecture by viewing it as a whole.
 Modifications at this time require little additional work, yet can have a profound impact on software quality.

TRANSACTION MAPPING: In many software applications, a single data item triggers one of a number of information flows. This data item is called a transaction.
Design steps for transaction mapping are similar and, in some cases, identical to the steps for transform mapping. A major difference lies in the mapping of the DFD to the software structure.
Step 1: Review the Fundamental System Model: This step is identical to corresponding step for transform
mapping.
Step 2: Review and Refine DFDs for Software: This step is identical to corresponding step for transform
mapping.
Step 3: Determine whether the DFD has Transform or Transaction Flow Characteristics: This step is
identical to corresponding step in transform mapping. (Refer step 3 in transform mapping). However, flow boundaries
must be established for both flow types.
Step 4: Identify the Transaction Center and the Flow Characteristics along Each of the Action Paths: The location of the transaction center can be immediately discerned from the DFD. The transaction center lies at the origin of a number of action paths that flow radially from it.
The incoming path and all action paths must also be isolated. Each action path must be evaluated for
its individual flow characteristic. Incoming, transform and outgoing flow are indicated with boundaries.


Step 5: Map DFD in a Program Structure Amenable to Transaction Processing: The transaction flow is
mapped into an architecture that contains an incoming branch and a dispatch branch. Structure of incoming
branch is developed in much the same way as transform mapping. Starting at transaction center, bubbles along
incoming path are mapped into modules.
Structure of dispatch branch contains a dispatcher module that controls all subordinate action modules.
Each action flow path of DFD is mapped to a structure that corresponds to its specific flow characteristics.
Step 6: Factor and Refine the Transaction Structure and Structure of Each Action Path: Each action
path of DFD has its own information flow characteristics, i.e., transform or transaction flow. The action path –
related “substructure” is developed using the design steps.
Step 7: Refine First – Iteration Architecture Using Design Heuristics for Improved Software Quality:
This step is identical to corresponding step for transform mapping.

In both design approaches, criteria such as module independence, practicality and maintainability must be
carefully considered as structural modifications are proposed.

Refining Architectural Design: Refinement of software architecture during early stages of design is to be
encouraged. The approach for optimization is one of the true benefits derived by developing a representation
of software architecture.
 Structural simplicity reflects both elegance and efficiency.


MODELING COMPONENT-LEVEL DESIGN


Component-level design occurs after the first iteration of architectural design has been completed. At
this stage, overall data and program structure of software has been established. The intent is to translate design
model into operational software.

What is a component? “A component is a modular, deployable and replaceable part of a system that
encapsulates implementation and exposes a set of interfaces.” The true meaning of the term “component” will
differ depending on the point of view of the software engineer who uses it.

An Object – Oriented View: In the context of OOSE, generally a component contains a set of collaborating
classes. Each class within a component has been fully elaborated to include all attributes and operations that
are relevant to its implementation. Class can be design or analysis class. This also involves defining the
interfaces that enable classes to communicate and collaborate.
This elaboration activity is applied to every component defined as part of the architectural design.
Once this is completed, following steps are performed:
1. Provide further elaboration of each attribute, operation and interface.
2. Specify the data structure appropriate for each attribute.
3. Design the algorithmic detail required to implement the processing logic associated with each operation.
4. Design mechanisms required to implement the interface to include messaging that occurs between objects.


Conventional View: This is the oldest of the ways of viewing the various components of an application.
A component is viewed as a functional element (i.e., a module) of an application that will incorporate:
 The processing logic
 The internal data structures required to implement the processing logic.
 An interface that enables the component to be invoked and data to be passed to it.

A component serves one of the following roles:


 A control component that coordinates invocation of all other problem domain components.
 A problem domain component that implements a complete or partial function required by the customer.
 An infrastructure component that is responsible for functions that support the processing requirements
in the problem domain.

Conventional software components are derived from DFDs in the analysis model.
 Each transform bubble (module) represented at lowest levels of DFD is mapped into a module
hierarchy.
 Control components reside near the top.
 Problem domain components and infrastructure components migrate toward the bottom.
 Functional Independence is strived for between the transforms.

Once this is completed, following steps are performed for each transform:
1. Define interface for transform (order, number of parameters)
2. Define data structures used by the transform.
3. Design algorithms used by the transform (use stepwise refinement)


Process-Related View: In this view, emphasis is placed on building systems from existing components
maintained in a library, rather than creating each component from scratch.
As software architecture is formulated, components are selected from library and used to populate the
architecture.

Because the components in library are created with reuse in mind, each contains:

1. A complete description of their interface.


2. The functions they perform.
3. The communication and collaboration they require.

DESIGNING CLASS-BASED COMPONENTS:

Basic Design principles:

1. Open-Closed Principle: A module/component should be open for extension, but closed for modification.
The designer should specify the component in a way that allows it to be extended without the need to make
internal code or design modifications to existing parts of the component.
2. Liskov Substitution Principle: All subclasses should be substitutable for their base classes. A component
that uses a base class should continue to function properly, if a subclass of base class is passed to the component
instead.
3. Dependency Inversion Principle: Always depend on abstractions (i.e., interfaces), not on concretions (i.e., concrete classes). The more a component depends on other concrete components, the more difficult it will be to extend.
4. Interface Segregation Principle: Many client specific interfaces are better than one general purpose
interface. For a server class, specialized interfaces should be created to serve major categories of clients.
Only those operations that are relevant to a particular category of clients should be specified in the
interface.
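The first three principles can be seen together in one small sketch (the classes are hypothetical): clients depend on an abstraction (dependency inversion), new behavior is added by extension rather than by modifying existing code (open-closed), and any subclass can stand in for the base class (Liskov substitution). Interface segregation would go further and split a large abstraction into small, client-specific ones.

```python
from abc import ABC, abstractmethod

class Shape(ABC):                          # the abstraction clients depend on (DIP)
    @abstractmethod
    def area(self):
        ...

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle(Shape):                       # new behavior added by extension, not
    def __init__(self, r):                 # by modifying existing code (OCP)
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def total_area(shapes):                    # any substitutable Shape works (LSP)
    return sum(s.area() for s in shapes)

print(total_area([Rectangle(2, 3), Circle(1)]))   # prints 9.14159
```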

Component Packaging Principles:

1. Release Reuse Equivalency Principle: “The granularity of reuse is the granularity of release”. Group the
reusable classes into packages that can be managed, upgraded and controlled as newer versions are created.
2. Common Closure Principle: "The classes that change together belong together". Classes should be packaged cohesively; they should address the same functional or behavioral area, on the assumption that if one class experiences a change, they all will change.
3. Common Reuse Principle: "Classes that aren't reused together should not be grouped together". Classes that are grouped together may go through unnecessary integration and testing when they themselves have experienced no changes but other classes in the package have been upgraded.

Component-Level Design Guidelines:
For notes hereafter, refer to the PPTs of Unit IV, Part II.


USER INTERFACE DESIGN


Interface design usually focuses on three areas of concern:
 Design of interfaces between software components.
 Design of interfaces between software and other nonhuman producers and consumers of information.
 Design of the interface between a human and a computer.

Graphical User Interfaces have helped to eliminate many of the most horrific interface problems. But some are still difficult to learn, hard to use, confusing, counterintuitive, unforgiving and frustrating.

 UI analysis and design has to do with study of people and how they relate to technology.

GOLDEN RULES OF USER INTERFACE DESIGN:


Three golden rules form the basis for a set of UI design principles. They are:
1) Place the user in control
2) Reduce the user’s memory load
3) Make the Interface Consistent

1) Place the User in Control:


1. Define interaction modes in a way that does not force a user into unnecessary/undesired actions. So the
user shall be able to enter and exit a mode with little or no effort.
2. Provide for flexible interaction: The user shall be able to perform the action via keyboard commands,
mouse movement or voice recognition.
3. Allow user interaction to be interruptible and “undo” able: The user shall be able to easily interrupt a
sequence of actions to do something else (without losing the work done so far). User can also “undo”
any action.
4. Streamline the interaction as skill levels advance and allow the interaction to be customized: The user can use a macro mechanism to perform a sequence of repeated interactions and to customize the interface.
5. Design for direct interaction with objects that appear on the screen: User can manipulate objects on the
screen in a manner similar to what would occur, if the object were a physical thing (Ex: Press a Button)
6. Hide technical internals from casual user: User shall not be required to directly use OS, file mgmt.,
networking etc., commands to perform any actions. Instead, these operations shall be hidden from user
and performed “behind–the–scenes”.

2) Reduce User’s Memory Load :


1. Reduce demand on short-term memory: The interface shall reduce the user's requirement to remember past actions and results by providing visual cues of such actions.
2. Establish Meaningful Defaults: The system shall provide the user with default values that make sense
to the average user but allow the user to change these defaults. The user will also be easily able to reset
any value to its original or default value.

3. Define Shortcuts that are intuitive (Natural): The user shall be provided with the mnemonics (i.e.,
control or alt combinations) that tie easily to the action in a way that is easy to remember such as the
first letter.
4. The visual layout of the interface should be based on a real world metaphor (Appearance): The screen
layout of UI shall contain well-understood visual cues that the user can relate to real – world actions.
5. Disclose information in a progressive fashion: When interacting with a task, an object or some behavior, the interface shall be organized hierarchically by moving the user progressively in a step-wise fashion from an abstract concept to a concrete action.

The more a user has to remember, the more error-prone the interaction with the system will be.

3) Make the Interface Consistent:


1. Allow the user to put current task into a meaningful context: The interface shall provide indicators
(like window titles) that enable user to know context of work at hand. The user can determine where he
has come from and what alternatives exist for transition to a new task.
2. Maintain consistency across a family of applications: A set of applications performing complementary functionality shall all implement the same design rules so that consistency is maintained for all interactions.
3. If past interactive models have created user expectations, do not make changes, unless there is a
compelling reason to do so: Once a particular interactive sequence has become a de facto standard
(actual standard), the application continues this expectation in every part of its functionality.

USER INTERFACE ANALYSIS AND DESIGN


Four different models come into play when a UI is analyzed and designed.
1. User model – established by a human or software engineer.
2. Design model – created by a software engineer.
3. User's mental model (system perception) – developed by the user when interacting with the application.
4. Implementation model – created by software engineers.
 The role of the interface designer is to reconcile these differences and derive a consistent representation of the interface.

1) User Model: It establishes the profile of the end-users of the system, based on age, gender, education,
physical abilities, motivation, goals etc. It considers semantic knowledge of the user, which include
understanding the functions that are performed, meaning of input and output, objectives of system.
The user model categorizes users as:
 Novices: No syntactic knowledge of system, little semantic knowledge of application, general computer
usage.
 Knowledgeable, Intermittent Users: Reasonable semantic knowledge of the system, low recall of the system's syntactic information needed to use the application.
 Knowledgeable, Frequent Users: Good semantic and syntactic knowledge (power users), look for shortcuts and abbreviated modes of operation.

2) Design Model: It is derived from analysis model of requirements. Design model incorporates data,
architectural, interface and procedural representations of the software. It is constrained by information in
requirements specification that helps define user of the system.
3) User’s Mental Model: It is often called as user’s system perception. It consists of image of system that
users carry in their heads. The accuracy of description depends upon user’s profile and overall familiarity
with software in application domain.
4) Implementation Model: It consists of the look and feel of the interface combined with all supporting information (books, help files) that describes the system's syntax and semantics. It strives to agree with the user's mental model, so that the user feels comfortable with the software. It serves as a translation of the design model, using information from the user model and the user's mental model.

User Interface Design Process: The analysis and design process for UIs is iterative and can be used /
represented with a Spiral Model. This process encompasses four distinct framework activities:
1. User, task and environment analysis and modeling
2. Interface Design
3. Interface construction (Implementation)
4. Interface Validation

Fig: The User Interface Design Process

INTERFACE ANALYSIS:
Interface Analysis means understanding:
 The people (end-users) who interact with the system through the interface.
 The tasks end users must perform to do their work.
 The content that is presented as part of interface.
 Environment in which these tasks will be conducted.

User Analysis: The analyst strives to get to end user’s mental model and design model to understand:
 The users themselves.
 How users use the system.

Information for user analysis is obtained from:


 User interviews with end users
 Sales input from sales people who regularly interact with users.
 Marketing input based on market analysis to understand how different population segments use the software.
 Support input from support staff who are aware of users likes and dislikes, what works and what doesn’t,
what features generate questions etc.

A set of question should be answered during user analysis:


1. Are users trained professionals, clerical or manufacturing workers?
2. What level of formal education does average user have?
3. Are the users capable of learning from written materials or classroom training?
4. Are the users expert typists or keyboard phobic?
5. What is age range of user community?
6. Will the users be represented predominately by one gender?
7. How are users compensated for work they perform?
8. Do users work normal office hours/till job is done?
9. Is software to be an integral part of users work or will it be used occasionally? (Usage frequency)
10. What is primary spoken language among users?
11. What are consequences, if user commits a mistake?
12. Are users expert in the application that they will use?
13. Do users want to know technology behind the interface?

Task Analysis and Modeling: Task analysis answers following questions:


1. What work will user perform in specific circumstances?
2. What tasks and subtasks will be performed as user does the work?
3. What is the sequence of work tasks – work flow?
4. What is the hierarchy of tasks?
5. What specific problem domain objects will user manipulate as work is performed?

 Use-cases define basic interaction.


 Task elaboration refines interactive tasks.
 Object Elaboration identifies interface objects (classes)
 Work flow Analysis defines how a work process is completed when several people and roles are involved.

Hierarchical Representation: As an interface is analyzed, a process of elaboration occurs. Once work flow
has been established, a task hierarchy can be established for each user type. The hierarchy is derived by a step
wise elaboration of each task identified for the user.

Analysis of Display Content: Display content may range from character based reports to graphical displays
to multimedia information. A set of questions to be answered during content analysis:
1. Are different types of data assigned to consistent geographic locations on the screen? [Ex: Photos on top-right corner]

2. Can user customize screen location for content?


3. Is proper on-screen identification assigned to all content?
4. If a large report is to be presented, how should it be partitioned for ease of understanding?
5. How will color be used to enhance understanding?
6. How will error messages and warnings be presented to the user?
7. Will graphical output fit within the bounds of the display device?
8. Will mechanisms be available for moving directly to summary information for large collections of
data?

Analysis of Work Environment: Software products need to be designed to fit into their work environments; otherwise they may be difficult or frustrating to use.
Factors to consider include: display size, type of lighting, keyboard size, height and ease of use, mouse type and ease of use, space limitations for the PC and user, weather/atmospheric conditions, temperature or pressure restrictions, and time restrictions (when, how fast, and for how long).

INTERFACE DESIGN STEPS


1. Using information developed during interface analysis, define interface objects and actions (operations).
2. Define events (user actions) that will cause a change in the state of the UI. Model this behavior.
3. Depict each interface state as it will actually look to the end-users.
4. Indicate how the user interprets the state of the system from information provided through the interface.
During all these steps designer must:
 Always follow Three Golden rules of UIs.
 Model how interface will be implemented.
 Consider computing environment that will be used.

Applying Interface Design Steps: Interface objects are categorized into three types: Source, target and
application.
 A source object is dragged and dropped onto a target object, such as to create a hard copy of a report.
 An application object represents application-specific data that are not directly manipulated as part of
screen interaction such as a list.
After identifying objects and their actions, an interface designer performs screen layout, which is an interactive process that involves graphical design and placement of icons, definition of descriptive screen text, specification and titling of windows, definition of major and minor menu items, and specification of a real-world metaphor to follow.

Interface Design Patterns: Patterns are available for the complete UI, page layout, forms and input, tables,
navigation/searching, e-commerce.

Design Issues:
1. Response Time
2. Help Facilities
3. Error Handling
4. Menu and command labeling
5. Application Accessibility
6. Internationalization

DESIGN EVALUATION:
Before prototyping occurs, a number of evaluation criteria can be applied during design reviews to the design
model itself.

1. The amount of learning required by the users: Derived from the length and complexity of the written specification and its interfaces.
2. The interaction time and overall efficiency: Derived from the number of user tasks specified and the average number of actions per task.
3. The memory load on users: Derived from the number of actions, tasks and system states.
4. The complexity of the interface and the degree to which it will be accepted by users: Derived from interface style, help facilities, and error handling procedures.

Fig: The Interface Design Evaluation Cycle


UNIT V
Testing Strategies: A Strategic approach to software testing, strategic issues, test strategies for O-O
software, validation testing, system testing, art of debugging.
Testing Tactics: Software Testing Fundamentals, Black-Box and white box Testing, basis path testing,
Control Structure Testing, O-O Testing methods, Testing Methods applicable on the class level, inter
class Test case design, Testing for Specialized environments, architectures and applications,
Testing Patterns.
Product Metrics: Software quality, A framework for product metrics, Metrics for the analysis model,
metrics for the Design model, metrics for source code, Metrics for Testing, Metrics for maintenance.

TESTING STRATEGIES
Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to
the end user. Testing shows errors, requirements conformance, performance & indication of quality.

A STRATEGIC APPROACH TO SOFTWARE TESTING:


A test strategy incorporates test planning, test case design, test execution, and resultant data collection and evaluation.

All test strategies provide the software developer with a template for testing, and all have the following generic characteristics:

• Perform Formal Technical Reviews (FTR) to uncover errors during software development.


• Begin testing at component level and move outward to integration of entire component based
system.
• Adopt testing techniques relevant to stages of testing
• Testing can be done by software developer and independent testing group
• Testing and debugging are different activities. Debugging follows testing

Verification & Validation

Verification refers to the set of activities that ensure that software correctly implements a specific
function.

Validation refers to a different set of activities that ensure that the software that has been built is
traceable to customer requirements.

Boehm [BOE81] states it this way:

Verification: "Are we building the product right?"

Validation: "Are we building the right product?"

The definition of V&V encompasses many of the activities that we have referred to as software quality
assurance (SQA). Verification and validation encompasses a wide array of SQA activities that include
formal technical reviews, quality and configuration audits, performance monitoring, simulation,
feasibility study, documentation review, database review, algorithm analysis, development testing,
qualification testing, and installation testing. Although testing plays an extremely important role in V&V,
many other activities are also necessary.


Organizing for Software Testing: The people who have built the software are now asked to test the
software. This seems harmless in itself; after all, who knows the program better than its developers do?
Unfortunately, these same developers have a vested interest in demonstrating that the program is error
free, that it works according to customer requirements.

From a psychological point of view, software analysis and design (along with coding) are constructive
tasks. From the point of view of the builder, testing can be considered to be (psychologically) destructive.

The software developer is always responsible for testing the individual units (components) of the
program, ensuring that each performs the function for which it was designed. In many cases, the
developer also conducts integration testing—a testing step that leads to the construction (and test) of the
complete program structure. Only after the software architecture is complete does an independent test
group become involved.

The role of an independent test group (ITG) is to remove the inherent problems associated with letting
the builder test the thing that has been built. Independent testing removes the conflict of interest that may
otherwise be present.

However, the software engineer does not turn the program over to the ITG and walk away. The developer and the ITG work closely throughout a software project to ensure that thorough tests will be conducted. While testing is conducted, the developer must be available to correct errors that are uncovered.

The ITG is part of the software development project team in the sense that it becomes involved during the
specification activity and stays involved (planning and specifying test procedures) throughout a large
project.

A Software Testing Strategy for Conventional Software Architecture:

The software engineering process may be viewed as the spiral illustrated in Figure 18.1. Initially, system
engineering defines the role of software and leads to software requirements analysis, where the
information domain, function, behavior, performance, constraints, and validation criteria for software are
established.

Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the
software as implemented in source code. Testing progresses by moving outward along the spiral to
integration testing, where the focus is on design and the construction of the software architecture.
Taking another turn outward on the spiral, we encounter validation testing, where requirements

established as part of software requirements analysis are validated against the software that has been
constructed. Finally, we arrive at system testing, where the software and other system elements are tested
as a whole.

Unit testing makes heavy use of white-box testing techniques. Black-box test case design techniques are
the most prevalent during integration, although a limited amount of white-box testing may be used to
ensure coverage of major control paths. Black-box testing techniques are used exclusively during
validation. Software, once validated, must be combined with other system elements (e.g., hardware,
people, and databases). System testing verifies that all elements mesh properly and that overall system
function/performance is achieved.

TEST STRATEGIES FOR CONVENTIONAL SOFTWARE

Unit Testing:

Fig: Unit Testing

Unit testing focuses verification effort on the smallest unit of software design—the software component
or module. Using the component-level design description as a guide, important control paths are tested to
uncover errors within the boundary of the module. The unit test is white-box oriented.

Unit Test Considerations

The tests that occur as part of unit tests are illustrated schematically in Figure 18.4. The module interface
is tested to ensure that information properly flows into and out of the program unit under test. The local

data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in
an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing. All independent paths (basis paths) through the
control structure are exercised to ensure that all statements in a module have been executed at least once.
And finally, all error handling paths are tested.

What errors are commonly found during Unit Testing?

(1) Misunderstood or incorrect arithmetic precedence, (2) mixed mode operations, (3) incorrect
initialization, (4) precision inaccuracy, (5) incorrect symbolic representation of an expression.
Comparison and control flow are closely coupled to one another (i.e., change of flow frequently
occurs after a comparison).

Test cases should uncover errors such as

(1) comparison of different data types, (2) incorrect logical operators or precedence,(3) expectation of
equality when precision error makes equality unlikely, (4) incorrect comparison of variables, (5)
improper or nonexistent loop termination, (6) failure to exit when divergent iteration is encountered, and
(7) improperly modified loop variables.

Unit Test Procedures: After source-level code has been developed, reviewed, and verified for correspondence to the component-level design, unit test case design begins.


Because a component is not a stand-alone program, driver and/or stub software must be developed for
each unit test. The unit test environment is illustrated in Figure 18.5. In most applications a driver is
nothing more than a "main program" that accepts test case data, passes such data to the component (to be
tested), and prints relevant results. Stubs serve to replace modules that are subordinate (called by) the
component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do
minimal data manipulation, prints verification of entry, and returns control to the module undergoing
testing. Unit testing is simplified when a component with high cohesion is designed.
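A minimal sketch of the driver/stub idea (the component and its subordinate are hypothetical): compute_discount is the unit under test, the stub replaces its subordinate price-lookup module, and the driver feeds test data and reports results:

```python
# Component under test: calls a subordinate module through a passed-in function.
def compute_discount(order_id, price_lookup):
    price = price_lookup(order_id)         # subordinate call (stubbed below)
    return price * 0.9 if price > 100 else price

# Stub: replaces the real subordinate; minimal data manipulation, prints entry.
def stub_price_lookup(order_id):
    print(f"stub called with {order_id}")
    return 200.0

# Driver: a "main program" that accepts test data and prints relevant results.
if __name__ == "__main__":
    result = compute_discount("A17", stub_price_lookup)
    assert result == 180.0, f"unexpected result: {result}"
    print("unit test passed:", result)
```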

Integration Testing: The objective is to take unit tested components and build a program structure
that has been dictated by design.
Approaches:
 "Big bang" approach: All components are combined in advance. Entire program is tested as a
whole.
 Incremental integration: The program is constructed and tested in small increments, where errors are
easier to isolate and corrected.
A number of different incremental integration strategies are discussed below:
1. Top-Down Integration

Modules are integrated by moving downward through the control hierarchy, beginning with the main
control module (main program). Modules subordinate to the main control module are incorporated into
the structure in either a depth-first or a breadth-first manner.

Referring to the figure, depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2, M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then, the central and right-hand control paths are built.

Breadth-first integration incorporates all components directly subordinate at each level, moving across
the structure horizontally. From the figure, components M2, M3, and M4 (a replacement for stub S4)
would be integrated first. The next control level, M5, M6, and so on, follows.
Steps for Top-Down Integration:
1. The main control module is used as a test driver and stubs are substituted for all components
directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are
replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.


5. Regression testing may be conducted to ensure that new errors have not been introduced.
The process continues from step 2 until the entire program structure is built.

What problems are encountered when top-down integration strategy is chosen?


The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.

2. Bottom-Up Integration

Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules
(i.e., components at the lowest levels in the program structure).
Steps for Bottom-Up Integration:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.

Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver
(shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are
removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior
to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and
so forth.

Regression Testing: Each time a new module is added as part of integration testing, the software
changes. New data flow paths are established, new I/O may occur, and new control logic is invoked.
Regression testing is the re-execution of some subset of tests that have already been conducted to ensure
that changes have not propagated unintended side effects.

The regression test suite (the subset of tests to be executed) contains three different classes of test cases:

• A representative sample of tests that will exercise all software functions.


• Additional tests that focus on software functions that are likely to be affected by the change.
• Tests that focus on the software components that have been changed.
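One common way to keep such a subset re-executable after every change is to tag the chosen tests and run only those tags; a sketch using pytest markers (the function under test is a trivial stand-in):

```python
# Sketch: tag the regression subset so it can be re-run after each change.
# Register the marker (e.g., "markers = regression" in pytest.ini), then run
# only the subset with:  pytest -m regression
import pytest

def add(a, b):                             # stands in for a real software function
    return a + b

@pytest.mark.regression                    # representative sample of all functions
def test_add_basic():
    assert add(2, 3) == 5

@pytest.mark.regression                    # focuses on a component that changed
def test_add_commutative():
    assert add(2, 3) == add(3, 2)
```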


Smoke Testing: Smoke testing is an integration testing approach that is commonly used when “shrink-
wrapped” software products are being developed. It is designed as a pacing mechanism for time-critical
projects. The smoke testing approach encompasses the following activities:

1. Software components that have been translated into code are integrated into a “build.”: A build
includes all data files, libraries, reusable modules, and engineered components that are required to
implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing
its function: The intent should be to uncover “show stopper” errors that have the highest
likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds and the entire product (in its current form) is smoke tested daily: The integration approach may be top down or bottom up.
Smoke testing provides a number of benefits when it is applied on complex, time-critical software
engineering projects:

• Integration risk is minimized.


• The quality of the end-product is improved.
• Progress is easier to assess.

Comments on Integration Testing: The major disadvantage of the top-down approach is the need for stubs and the attendant testing difficulties that can be associated with them. The major disadvantage of bottom-up integration is that "the program as an entity does not exist until the last module is added."

Selection of an integration strategy depends upon software characteristics and, sometimes, project
schedule. In general, a combined approach (sometimes called sandwich testing) that uses top-down tests
for upper levels of the program structure, coupled with bottom-up tests for subordinate levels may be the
best compromise.

Fig: Sandwich Testing

What is a critical module and why should we identify it?

 Address several software requirements


 Has a high level of control (resides high in the program structure)
 Is complex or error prone
 Has definite performance requirements


VALIDATION TESTING
Validation succeeds when software functions in a manner that can be reasonably expected by the
customer.
Validation Test Criteria: Software validation is achieved through a series of black-box tests that
demonstrate conformity with requirements. Both plan and procedure are designed to ensure that all
functional requirements are satisfied, all behavioral characteristics are achieved, all performance
requirements are attained, documentation is correct and other requirements are met.
After each validation test case has been conducted, one of two possible conditions exist:

1. The function or performance characteristics conform to specification and are accepted


2. A deviation from specification is uncovered and a deficiency list is created.
Configuration Review: An important element of the validation process is a configuration review. The
intent of the review is to ensure that all elements of the software configuration have been properly
developed, are cataloged, and have the necessary detail to bolster the support phase of the software life cycle. The configuration review is sometimes called an audit.

Alpha and Beta Testing: A customer conducts the alpha test at the developer’s site.
The beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not present.

SYSTEM TESTING
Software is incorporated with other system elements (e.g., hardware, people, information), and a
series of system integration and validation tests are conducted. These tests fall outside the scope of the
software process and are not conducted solely by software engineers. A classic system-testing problem is
"finger-pointing." This occurs when an error is uncovered, and each system element developer blames
the other for the problem.

Recovery Testing: Recovery testing is a system test that forces the software to fail in a variety of
ways and verifies that recovery is properly performed.

Security Testing: Security testing attempts to verify that the protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, the tester plays the role(s) of the individual who desires to penetrate the system.

Stress Testing: Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example, (1) special tests may be designed that generate ten interrupts per second, when one or two is the average rate, (2) input data rates may be increased by an order of magnitude to determine how input functions will respond, and (3) test cases that require maximum memory or other resources are executed. Essentially, the tester attempts to break the program.

A variation of stress testing is a technique called sensitivity testing. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing.

Performance Testing: Performance testing is designed to test the run-time performance of software
within context of an integrated system. Performance testing occurs throughout all steps in testing process.
Performance tests are often coupled with stress testing and usually require both hardware and software
instrumentation.


THE ART OF DEBUGGING


Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error,
debugging is the process that results in the removal of the error.
The Debugging Process: The debugging process begins with the execution of a test case. Results are
assessed and a lack of correspondence between expected and actual performance is encountered. In many
cases, the noncorresponding data are a symptom of an underlying cause as yet hidden. The debugging
process attempts to match symptom with cause, thereby leading to error correction.
The debugging process will always have one of two outcomes: (1) the cause will be found and corrected,
or (2) the cause will not be found.

Debugging Approaches: In general, three categories for debugging approaches may be proposed
(1) brute force, (2) backtracking, and (3) cause elimination.
The brute force category of debugging is probably the most common and least efficient method for isolating the cause of a software error. We apply brute force debugging methods when all else fails.

Backtracking is a common debugging approach that can be used successfully in small programs.
Beginning at the site where a symptom has been uncovered, the source code is traced backward
(manually) until the site of the cause is found.

The third approach to debugging — cause elimination — is manifested by induction or deduction and
introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate
potential causes. A "cause hypothesis" is devised and the aforementioned data are used to prove or
disprove the hypothesis.
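Binary partitioning is the same idea used by bisection tools such as git bisect; a minimal sketch that halves an ordered history of changes to isolate the first one at which the error appears (the is_bad predicate is hypothetical):

```python
# Sketch: binary partitioning over an ordered history of changes.  is_bad(i)
# is a hypothetical test reporting whether the error is present at change i.

def first_bad_change(n_changes, is_bad):
    lo, hi = 0, n_changes - 1              # the latest change is known to be bad
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid                       # defect introduced at or before mid
        else:
            lo = mid + 1                   # defect introduced after mid
    return lo

# Toy history: changes 0-6 are good, the bug first appears at change 7.
print(first_bad_change(10, lambda i: i >= 7))   # prints 7
```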

Once a bug has been found, it must be corrected. However, as we have already noted, the correction of a
bug can introduce other errors and therefore do more harm than good.

Van Vleck [VAN89] suggests three simple questions that every software engineer should ask before
making the "correction" that removes the cause of a bug:

 Is the cause of the bug reproduced in another part of the program?


 What "next bug" might be introduced by the fix I am about to make?
 What could we have done to prevent this bug in the first place?


TESTING TACTICS
WHITE-BOX TESTING
White-box testing is sometimes called glass-box testing. The goal is to ensure that all statements and conditions have been executed at least once.

Using white-box testing methods, the software engineer can derive test cases that:

1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops at their boundaries and within their operational bounds.
4. Exercise internal data structures to ensure their validity.

BASIS PATH TESTING: Basis path testing is a white-box testing technique first proposed by Tom
McCabe. Test cases derived to exercise the basis set are guaranteed to execute every statement in the
program at least one time during testing.

 Flow Graph Notation: Before the basis path method can be introduced, a simple notation for the
representation of control flow, called a flow graph (or program graph) must be introduced.

Each circle, called a flow graph node, represents one or more procedural statements. The arrows on the
flow graph, called edges or links, represent flow of control. Each node that contains a condition is called
a predicate node and is characterized by two or more edges emanating from it.

Cyclomatic Complexity: Cyclomatic complexity is a software metric that provides a quantitative measure
of the logical complexity of a program.

In the context of the basis path testing method, the value computed for Cyclomatic complexity defines the
number of independent paths in the basis set of a program and provides us with an upper bound for the
number of tests that must be conducted to ensure that all statements have been executed at least once.

An independent path is any path through the program that introduces at least one new set of processing
statements or a new condition. When stated in terms of a flow graph, an independent path must move
along at least one edge that has not been traversed before the path is defined.


In the above figure, the set of independent paths is as follows:


Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge. The path
1-2-3-4-5-10-1-2-3-6-8-9-10-1-11
is not considered an independent path because it is simply a combination of already specified paths and
does not traverse any new edges.
Paths 1, 2, 3, and 4 constitute a basis set for the above flow graph.
How is Cyclomatic complexity computed?
1. The number of regions of the flow graph corresponds to the Cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E - N + 2
Where E is the number of flow graph edges, N is the number of flow graph nodes.
3. Cyclomatic complexity, V (G), for a flow graph, G, is also defined as V(G) = P + 1
Where P is the number of predicate nodes contained in the flow graph G.
Referring once more to the above flow graph, the cyclomatic complexity can be computed using each of
the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
Hence the cyclomatic complexity of the above flow graph is 4.
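To make the computation concrete, here is a minimal sketch (in Python) that applies both formulas to a
small hypothetical flow graph; the edge list is illustrative, not the textbook graph:

# Cyclomatic complexity of a hypothetical 6-node flow graph.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {n for e in edges for n in e}

E, N = len(edges), len(nodes)
print(E - N + 2)                    # V(G) = E - N + 2  -> 3

out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
P = sum(1 for d in out_degree.values() if d >= 2)   # predicate nodes branch
print(P + 1)                        # V(G) = P + 1      -> 3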


Deriving Test Cases

Using the PDL of the procedure average given above, let us describe how to derive the test cases.

The following steps are used to derive the set of test cases

1. Using the design or code as a foundation, draw a corresponding flow graph

2. Determine the cyclomatic complexity of the resultant flow graph.


V(G) = 6 regions
V(G) = 17 edges -13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6


3. Determine a basis set of linearly independent paths.

The value of V(G) provides the number of linearly independent paths through the program control
structure. In the case of procedure average, we expect to specify six paths:

path 1: 1-2-10-11-13
path 2: 1-2-10-12-13
path 3: 1-2-3-10-11-13
path 4: 1-2-3-4-5-8-9-2-. . .
path 5: 1-2-3-4-5-6-8-9-2-. . .
path 6: 1-2-3-4-5-6-7-8-9-2-. . .

The ellipsis (. . .) following paths 4, 5, and 6 indicates that any path through the remainder of the control
structure is acceptable. It is often worthwhile to identify predicate nodes as an aid in the derivation of test
cases. In this case, nodes 2, 3, 5, 6, and 10 are predicate nodes.

4. Prepare test cases that will force execution of each path in the basis set.

Each test case is executed and compared to expected results. Once all test cases have been completed, the
tester can be sure that all statements in the program have been executed at least once.

 Graph Matrices: To develop a software tool that assists in basis path testing, a data structure called
a graph matrix can be quite useful. A graph matrix is a square matrix whose size (i.e., number of
rows and columns) is equal to the number of nodes in the flow graph. Each row and column corresponds
to an identified node, and matrix entries correspond to connections (edges) between nodes. A
simple example of a flow graph and its corresponding graph matrix is shown in the figure below.


What is a graph matrix and how do we extend its use for testing? The graph matrix is nothing more than
a tabular representation of a flow graph. By adding a link weight to each matrix entry, the graph matrix
can become a powerful tool for evaluating program control structure during testing.

In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection does not exist).
Represented in this form, the graph matrix is called a connection matrix. Referring to the above figure,
each row with two or more entries represents a predicate node.

Performing the arithmetic shown to the right of the connection matrix provides us with still another
method for determining cyclomatic complexity.
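As a minimal sketch (assuming the same hypothetical 6-node graph as earlier), the connection matrix can
be built and used to recover V(G); a row with two or more 1-entries marks a predicate node:

# Build the connection matrix (link weight 1 = connection exists).
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
n = 6
matrix = [[0] * n for _ in range(n)]
for src, dst in edges:
    matrix[src - 1][dst - 1] = 1

# For each row with more than one entry, add (connections - 1); the sum
# plus 1 gives the cyclomatic complexity, as described in the text.
row_sums = [sum(row) for row in matrix]
v_g = sum(c - 1 for c in row_sums if c > 1) + 1
print(v_g)                          # 3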

CONTROL STRUCTURE TESTING: Control structure testing broadens testing coverage and improves
the quality of white-box testing.

 Condition Testing: It is a test case design method that exercises the logical conditions contained in a program
program module. A simple condition is a Boolean variable or a relational expression, possibly preceded
with one NOT (¬) operator. A relational expression takes the form
E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following: <, ≤, =, ≠
(nonequality), >, or ≥.

A compound condition is composed of two or more simple conditions, Boolean operators, and
parentheses. A condition without relational expressions is referred to as a Boolean expression.

Types of errors in a condition include the following:

• Boolean operator error (incorrect/missing/extra Boolean operators)
• Boolean variable error
• Boolean parenthesis error
• Relational operator error
• Arithmetic expression error

The purpose of condition testing is to detect not only errors in the conditions of a program but also other
errors in the program

 Data Flow Testing: The data flow testing method selects test paths of a program according to the
locations of definitions and uses of variables in the program.

To illustrate the data flow testing approach, assume that each statement in a program is assigned a unique
statement number and that each function does not modify its parameters or global variables. For a
statement with S as its statement number,

DEF(S) = {X | statement S contains a definition of X}


USE(S) = {X | statement S contains a use of X}

If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the condition of
statement S. The definition of variable X at statement S is said to be live at statement S' if there exists a
path from statement S to statement S' that contains no other definition of X.

A definition-use (DU) chain of variable X is of the form [X, S, S'], where S and S' are statement
numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'. One
simple data flow testing strategy is to require that every DU chain be covered at least once. We refer to
this strategy as the DU testing strategy.
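A minimal sketch of DEF/USE sets and one pair of DU chains for a hypothetical three-statement fragment
(statement numbers and variable names are invented for illustration):

# S1: x = read()      -> DEF(1) = {x}
# S2: if x > 0:       -> USE(2) = {x}  (DEF set of an if statement is empty)
# S3:     y = x * 2   -> DEF(3) = {y}, USE(3) = {x}
DEF = {1: {"x"}, 2: set(), 3: {"y"}}
USE = {1: set(), 2: {"x"}, 3: {"x"}}

# DU chains [X, S, S']: X is in DEF(S) and USE(S'), and the definition of
# X at S is live at S'. The DU testing strategy covers each chain at least once.
du_chains = [("x", 1, 2), ("x", 1, 3)]
print(du_chains)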


 Loop Testing: It is a white-box testing technique that focuses exclusively on the validity of loop
constructs. Four different classes of loops can be defined: simple loops, concatenated loops, nested
loops, and unstructured loops (Figure 17.8).

1. Simple loops: The following set of tests can be applied to simple loops, where n is the maximum
number of allowable passes through the loop (a short sketch generating these pass counts follows this list).

1. Skip the loop entirely.


2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n - 1, n, n + 1 passes through the loop.

2. Nested loops: If we were to extend the test approach for simple loops to nested loops, the number of
possible tests would grow geometrically as the level of nesting increases. This would result in an
impractical number of tests. Beizer suggests an approach that will help to reduce the number of tests:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum
iteration parameter (e.g., loop counter) values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values
and other nested loops to "typical" values.
4. Continue until all loops have been tested.

3. Concatenated loops: Concatenated loops can be tested using the approach defined for simple loops,
if each of the loops is independent of the other. However, if two loops are concatenated and the loop
counter for loop 1 is used as the initial value for loop 2, then the loops are not independent. When the
loops are not independent, the approach applied to nested loops is recommended.
4. Unstructured loops: Whenever possible, this class of loops should be redesigned to reflect the use of
the structured programming constructs.
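As promised under simple loops above, here is a minimal sketch that generates the loop pass counts to
exercise; the function name and the values of n and m are hypothetical:

def simple_loop_test_counts(n, m):
    # n = maximum allowable passes; m = some value with 2 < m < n.
    assert 2 < m < n
    return [0,                # skip the loop entirely
            1,                # only one pass
            2,                # two passes
            m,                # m passes, where m < n
            n - 1, n, n + 1]  # passes at and around the loop maximum

print(simple_loop_test_counts(n=10, m=5))   # [0, 1, 2, 5, 9, 10, 11]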


BLACK-BOX TESTING
Black-box testing, also called behavioral testing, focuses on the functional requirements of the software.
Black-box testing enables the software engineer to derive sets of input conditions that will fully exercise
all functional requirements for a program.
Black-box testing attempts to find errors in the following categories: (1) incorrect or missing functions,
(2) interface errors, (3) errors in data structures or external data base access, (4) behavior or performance
errors, and (5) initialization and termination errors.
White-box testing is performed early in the testing process; Black-box testing tends to be applied
during later stages of testing.
GRAPH-BASED TESTING METHODS: Software testing begins by creating a graph of
important objects and their relationships and then devising a series of tests that will cover the graph so
that each object and relationship is exercised and errors are uncovered.

A graph is a collection of nodes that represent objects; links that represent the relationships between objects;
node weights that describe the properties of a node (e.g., a specific data value or state behavior); and link
weights that describe some characteristic of a link.

A directed link (represented by an arrow) indicates that a relationship moves in only one direction.
A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions.
Parallel links are used when a number of different relationships are established between graph nodes.

Referring to the figure, a menu select on new file generates a document window. The node weight of
document window provides a list of the window attributes that are to be expected when the window is
generated. The link weight indicates that the window must be generated in less than 1.0 second. An
undirected link establishes a symmetric relationship between the new file menu select and document
text, and parallel links indicate relationships between document window and document text.

The software engineer then derives test cases by traversing the graph and covering each of the
relationships shown. These test cases are designed in an attempt to find errors in any of the relationships.


EQUIVALENCE PARTITIONING: It is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived. Test case design for equivalence
partitioning is based on an evaluation of equivalence classes for an input condition.

If a set of objects can be linked by relationships that are symmetric, transitive, and reflexive, an
equivalence class is present. An equivalence class represents a set of valid or invalid states for input
conditions.

Typically, an input condition is a specific numeric value, a range of values, a set of related values, or a
Boolean condition. Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes
are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class
are defined.

Test cases are selected so that the largest number of attributes of an equivalence class is exercised at once.
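A minimal sketch, assuming a hypothetical input that must lie in the range 1..100 (guideline 1: one valid
and two invalid classes), with one representative value chosen per class:

equivalence_classes = {
    "valid: 1 <= x <= 100": 50,     # one representative per class
    "invalid: x < 1":       -7,
    "invalid: x > 100":     432,
}
for label, representative in equivalence_classes.items():
    print(f"{label}: test with x = {representative}")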

BOUNDARY VALUE ANALYSIS: A greater number of errors tends to occur at the boundaries of the
input domain than in the "center". Boundary value analysis (BVA) leads to a selection of test cases that
exercise bounding values.

BVA leads to the selection of test cases at the "edges" of the class. Rather than focusing solely on input
conditions, BVA derives test cases from the output domain as well.

How do I create BVA test cases?

1. If an input condition specifies a range bounded by values a and b, test cases should be designed with
values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that exercise the
minimum and maximum numbers. Values just above and below minimum and maximum are also
tested.
3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature vs. pressure
table is required as output from an engineering analysis program. Test cases should be designed to
create an output report that produces the maximum (and minimum) allowable number of table entries.
4. If internal program data structures have prescribed boundaries (e.g., an array has a defined limit of
100 entries), be certain to design a test case to exercise the data structure at its boundary.
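A minimal sketch generating BVA values for a range bounded by a and b (guideline 1); the bounds and
the step are hypothetical:

def bva_values(a, b, step=1):
    # Values at, just below, and just above each bound of the range [a, b].
    return [a - step, a, a + step, b - step, b, b + step]

print(bva_values(1, 100))   # [0, 1, 2, 99, 100, 101]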

ORTHOGONAL ARRAY TESTING: The orthogonal array testing method is particularly useful
in finding errors associated with region faults—an error category associated with faulty logic within a
software component.

To illustrate the difference between orthogonal array testing and more conventional “one input item at a
time” approaches, consider a system that has three input items, X, Y, and Z. Each of these input items has
three discrete values associated with it. There are 3³ = 27 possible test cases. Phadke suggests a
geometric view of the possible test cases associated with X, Y, and Z illustrated in Figure. Referring to


the figure, one input item at a time may be varied in sequence along each input axis. This results in
relatively limited coverage of the input domain (represented by the left-hand cube in the figure).

When orthogonal array testing occurs, an L9 orthogonal array of test cases is created. The L9 orthogonal
array has a balancing property; that is, test cases (represented by black dots in the figure) are "dispersed
uniformly throughout the test domain."

To illustrate the use of the L9 orthogonal array, consider the send function for a fax application. Four
parameters, P1, P2, P3, and P4, are passed to the send function. Each takes on three discrete values. For
example, P1 takes on values:

P1 = 1, send it now
P1 = 2, send it one hour later
P1 = 3, send it after midnight
P2, P3, and P4 would also take on values of 1, 2 and 3, signifying other send functions.

If a “one input item at a time” testing strategy were chosen, the following sequence of tests (P1, P2, P3,
P4) would be specified: (1, 1, 1, 1), (2, 1, 1, 1), (3, 1, 1, 1), (1, 2, 1,1), (1, 3, 1, 1), (1, 1, 2, 1), (1, 1, 3, 1),
(1, 1, 1, 2), and (1, 1, 1, 3).
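For contrast with the one-input-item-at-a-time sequence just listed, here is a minimal sketch of one
standard L9 orthogonal array for four parameters at three levels each (nine test cases rather than
3⁴ = 81 exhaustive combinations); other equivalent L9 arrays exist:

from itertools import combinations

L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Balancing property: every pair of columns contains each of the nine
# (level, level) combinations exactly once.
for c1, c2 in combinations(range(4), 2):
    assert len({(row[c1], row[c2]) for row in L9}) == 9
print("L9 is balanced across all column pairs")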

Phadke assesses these test cases in the following manner:


PRODUCT METRICS
SOFTWARE QUALITY

Software quality is defined as conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit characteristics that are expected
of all professionally developed software.

McCall’s Quality Factors: Factors that affect software quality can be categorized in two broad groups:

1. Factors that can be directly measured (e.g. defects uncovered during testing)
2. Factors that can be measured only indirectly (e.g. usability or maintainability)

As shown in Figure 19.1, the factors focus on three important aspects of a software product: its operational
characteristics, its ability to undergo change, and its adaptability to new environments.

McCall and his colleagues provide the following descriptions:

1. Correctness: The extent to which a program satisfies its specification and fulfills the customer's
mission objectives.
2. Reliability: The extent to which a program can be expected to perform its intended function with
required precision.
3. Efficiency: The amount of computing resources and code required by a program to perform its
function.
4. Integrity: Extent to which access to software or data by unauthorized persons can be controlled.
5. Usability: Effort required to learn, operate, prepare input, and interpret output of a program.
6. Maintainability: Effort required to locate and fix an error in a program.
7. Flexibility: Effort required to modify an operational program.
8. Testability: Effort required to test a program to ensure that it performs its intended function.
9. Portability: Effort required to transfer the program from one hardware and/or software system
environment to another.
10. Reusability: Extent to which a program [or parts of a program] can be reused in other
applications related to the packaging and scope of the functions that the program performs.
11. Interoperability: Effort required to couple one system to another.


ISO 9126 Quality Factors


The ISO 9126 standard was developed in an attempt to identify the key quality attributes for computer
software. The standard identifies six key quality attributes:

1. Functionality: The degree to which the software satisfies stated needs as indicated by the following
sub attributes: suitability, accuracy, interoperability, compliance, and security.
2. Reliability: The amount of time that the software is available for use as indicated by the following
sub attributes: maturity, fault tolerance, recoverability.
3. Usability: The degree to which the software is easy to use as indicated by the following sub
attributes: understandability, learnability, operability.
4. Efficiency: The degree to which the software makes optimal use of system resources as indicated by
the following sub attributes: time behavior, resource behavior.
5. Maintainability: The ease with which repair may be made to the software as indicated by the
following sub attributes: analyzability, changeability, stability, testability.
6. Portability: The ease with which the software can be transposed from one environment to another as
indicated by the following sub attributes: adaptability, installability, conformance, replaceability.
Measures, Metrics and Indicators
 A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size
of some attribute of a product or process
 The IEEE glossary defines a metric as “a quantitative measure of the degree to which a system,
component, or process possesses a given attribute.”
 An indicator is a metric or combination of metrics that provide insight into the software process,
a software project, or the product itself

METRICS FOR ANALYSIS MODEL


These metrics examine the analysis model with the intent of predicting the “size” of the resultant system.
Size is an indicator of design complexity and is almost always an indicator of increased coding,
integration and testing effort.
Function-Based Metrics: The function-point metric can be used effectively as a means for measuring
the functionality delivered by the system. Using historical data, the FP metric can be used to:
1) Estimate the cost or effort required to design, code, and test the software.
2) Predict the number of errors that will be encountered during testing.
3) Forecast the number of components and/or the number of projected source lines in the implemented system.
Function points are derived using an empirical relationship based on countable measures of software
information domain.
Number of External Inputs (EI): Each external input originates from a user or is transmitted from
another application. Inputs are often used to update internal logical files.
Number of External Outputs (EO): Each external output is derived data within the application that provides
information to the user. External outputs refer to reports, screens, error messages, etc.

Er Sandeep R, Assistant
Professor,CSE,MCET
Software Engineering PC 501 CS AICTEM-OU

Number of External Inquiries (EQ): An external inquiry is defined as an online input that results in the
generation of some immediate software response in the form of an online output.
Number of Internal Logical Files (ILF): Each internal logical file is a logical grouping of data that resides
within the application's boundary and is maintained via external inputs.
Number of External Interface Files (EIF): Each external interface file is a logical grouping of data that resides
external to the application but provides information that may be of use to the application.

Three user inputs—password, panic button, and activate/deactivate—are shown in the figure along with
two inquires—zone inquiry and sensor inquiry. One file (system configuration file) is shown. Two user
outputs (messages and sensor status) and four external interfaces (test sensor, zone setting,
activate/deactivate, and alarm alert) are also present. These data, along with the appropriate complexity,
are shown in Figure 19.4. The count total shown in Figure 19.4 must be adjusted using the equation:

FP = count total * [0.65 + 0.01* ∑(Fi)]

where count total is the sum of all FP entries obtained from Figure 19.3 and Fi (i = 1 to 14) are
"complexity adjustment values." For the purposes of this example, we assume that ∑(Fi) is 46 (a
moderately complex product). Therefore,

FP = 50 * [0.65 + 0.01 * 46] = 55.5 ≈ 56
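A minimal sketch of this adjustment, with the count total and ∑(Fi) taken from the example above:

count_total = 50
sum_fi = 46                         # moderately complex product

fp = count_total * (0.65 + 0.01 * sum_fi)
print(fp, round(fp))                # 55.5, rounded to 56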


Metrics For Specification Of Quality


List of characteristics that can be used to assess the quality of the analysis model and the corresponding
requirements specification: specificity (lack of ambiguity), completeness, correctness, understandability,
verifiability, internal and external consistency, achievability, concision, traceability, modifiability,
precision, and reusability.

We assume that there are nr requirements in a specification, such that

nr = nf + nnf

where nf is the number of functional requirements and nnf is the number of non-functional (e.g.,
performance) requirements.

To determine the specificity (lack of ambiguity) of requirements, we use

Q1 = nui / nr

where nui is the number of requirements for which all reviewers had identical interpretations. The closer the
value of Q1 to 1, the lower is the ambiguity of the specification.

The completeness of functional requirements can be determined by computing the ratio

Q2 = nu / (ni × ns)

where nu is the number of unique function requirements, ni is the number of inputs (stimuli) defined or
implied by the specification, and ns is the number of states specified. The Q2 ratio measures the
percentage of necessary functions that have been specified for a system.
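A minimal sketch computing both ratios; all counts below are hypothetical:

n_r, n_ui = 50, 45         # total requirements; identically interpreted ones
q1 = n_ui / n_r            # specificity: closer to 1 = less ambiguity
print(q1)                  # 0.9

n_u, n_i, n_s = 40, 10, 5  # unique functions, inputs (stimuli), states
q2 = n_u / (n_i * n_s)     # completeness of functional requirements
print(q2)                  # 0.8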

METRICS FOR DESIGN MODEL


Architectural Design Metrics: Architectural design metrics focus on characteristics of the program
architecture. These metrics are black box in the sense that they do not require any knowledge of the inner
workings of a particular software component.

Card and Glass define three software design complexity measures: Structural complexity, Data
complexity, and System complexity.

Structural complexity of a module i is defined in the following manner:

S(i) = [fout(i)]²

where fout(i) is the fan-out of module i (fan-out is the number of modules directly subordinate to
module i).


Data complexity provides an indication of the complexity in the internal interface for a module i and is
defined as

D(i) = v(i) / [fout(i) + 1]

where v(i) is the number of input and output variables that are passed to and from module i.

System complexity is defined as the sum of structural and data complexity, specified as

C(i) = S(i) + D(i)

As each of these complexity values increases, the overall architectural complexity of the system also
increases. This leads to a greater likelihood that integration and testing effort will also increase.
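A minimal sketch of the three Card and Glass measures for one hypothetical module with fan-out 3 and
8 interface variables:

f_out = 3                  # modules directly subordinate to module i
v = 8                      # input/output variables passed to and from i

S = f_out ** 2             # structural complexity S(i) = [fout(i)]^2
D = v / (f_out + 1)        # data complexity D(i) = v(i) / [fout(i) + 1]
C = S + D                  # system complexity C(i) = S(i) + D(i)
print(S, D, C)             # 9 2.0 11.0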

Morphology (shape) metrics: Size is a function of the number of modules and the number of interfaces
between modules:

size = n + a

where n is the number of nodes (modules) and a is the number of arcs (interfaces). For the architecture
shown in Figure 19.5:

size = 17 + 18 = 35
depth = the longest path from the root (top) node to a leaf node; for Figure 19.5, depth = 4.
width = the maximum number of nodes at any one level of the architecture; for Figure 19.5, width = 6.
arc-to-node ratio, r = a/n; for Figure 19.5, r = 18/17 = 1.06.
DSQI (Design Structure Quality Index): The US Air Force designed the DSQI. Values S1 to S7 are
computed from data and architectural design:

• S1: Total number of modules


• S2: Number of modules whose correct function depends on the data input
• S3: Number of modules whose function depends on prior processing
• S4: Number of data base items


• S5: Number of unique database items


• S6: Number of database segments
• S7: Number of modules with single entry and exit
Calculate D1 to D6 from S1 to S7 as follows:
• D1 = 1 if a standard design method is followed, otherwise D1 = 0
• D2 (module independence) = 1 - (S2/S1)
• D3 (modules not dependent on prior processing) = 1 - (S3/S1)
• D4 (database size) = 1 - (S5/S4)
• D5 (database compartmentalization) = 1 - (S6/S4)
• D6 (module entry/exit characteristics) = 1 - (S7/S1)

DSQI = ∑ wiDi

where i = 1 to 6, wi is the relative weighting of the importance of each of the intermediate values, and
∑wi= 1 (if all Di are weighted equally, then wi= 0.167).

The DSQI of the present design should be compared with past DSQI values. If the DSQI is significantly
lower than the average, further design work and review are indicated.
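A minimal sketch computing DSQI from hypothetical S1..S7 counts, with all six Di weighted equally
(wi = 0.167):

s1, s2, s3, s4, s5, s6, s7 = 100, 20, 10, 50, 45, 5, 90

d = [1.0,             # D1: a standard design method was followed
     1 - s2 / s1,     # D2: module independence
     1 - s3 / s1,     # D3: independence of prior processing
     1 - s5 / s4,     # D4: database size
     1 - s6 / s4,     # D5: database compartmentalization
     1 - s7 / s1]     # D6: module entry/exit characteristics

dsqi = sum(0.167 * di for di in d)
print(round(dsqi, 3))   # about 0.635 for these counts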

Metrics For Object-Oriented Design: Whitmire [WHI97] describes nine distinct and measurable
characteristics of an OO design:

Size: Size is defined in terms of four views: population, volume, length, and functionality
Complexity: How classes of an OO design are interrelated to one another
Coupling: The physical connections between elements of the OO design
Sufficiency: “the degree to which an abstraction possesses the features required of it, or the degree to
which a design component possesses features in its abstraction, from the point of view of the current
application.”
Completeness: An indirect implication about the degree to which the abstraction or design component
can be reused.
Cohesion: The degree to which all operations work together to achieve a single, well-defined purpose.
Primitiveness: Applied to operations and classes, the degree to which an operation is atomic.
Similarity: The degree to which two or more classes are similar in terms of their structure, function,
behavior, or purpose.
Volatility: Measures the likelihood that a change will occur.

Class-Oriented Metrics-The CK Metrics suite: Proposed by Chidamber and Kemerer


Weighted Methods per Class (WMC): Assume that n methods of complexity c1, c2, ..., cn are defined for a
class C. Then WMC = ∑ci, for i = 1 to n.

The number of methods and their complexity are reasonable indicators of the amount of effort required to
implement and test a class.
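A minimal sketch of WMC for a hypothetical class whose five methods have the (assumed) complexity
values below:

method_complexities = [1, 3, 2, 5, 1]   # e.g., cyclomatic complexity per method
wmc = sum(method_complexities)          # WMC = sum of ci, i = 1 to n
print(wmc)                              # 12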


Depth of the Inheritance Tree (DIT): The maximum length from a node to the root of the tree; referring
to the figure, DIT = 4. As DIT grows, low-level classes will inherit many methods, which leads to many
difficulties in predicting behavior and to greater design complexity.

Number of Children (NOC): The subclasses that are immediately subordinate to a class in the class
hierarchy are termed its children. As NOC grows, reuse increases, but the abstraction represented by the
parent class is diluted if some children are not appropriate members of the parent class.

Coupling Between Object classes (CBO): The number of collaborations for a class. As CBO increases,
reusability decreases.

Response For a Class (RFC): The response set of a class is the set of methods that can potentially be
executed in response to a message received by an object of that class. RFC is the number of methods in
the response set. As RFC increases, complexity increases.

Lack of Cohesion in Methods (LCOM): LCOM is the number of methods that access one or more of the
same attributes. If no methods access the same attribute, LCOM = 0.

Class-Oriented Metrics: The MOOD Metrics Suite

Method Inheritance Factor (MIF): The degree to which the class hierarchy makes use of inheritance,
defined as MIF = ∑Mi(Ci) / ∑Ma(Ci), where Mi(Ci) is the number of methods inherited by class Ci and
Ma(Ci) is the number of methods that can be invoked in association with Ci.


Coupling Factor (CF): CF = ∑i∑j is_client(Ci, Cj) / (TC² - TC), where is_client = 1 if a relationship
exists between the client class Ci and the server class Cj, and 0 otherwise; TC is the total number of
classes. If CF increases, the complexity of the OO software also increases.

Class-Oriented Metrics Proposed by Lorenz and Kidd

Lorenz and Kidd divide class-based metrics into four broad categories:
 Size-Oriented Metrics: focus on counts of attributes and operations for an individual class.
 Inheritance-Based Metrics: focus on the manner in which operations are reused through the class hierarchy.
 Metrics for Class Internals: focus on cohesion.
 Metrics for External Measures: focus on coupling.

 Component-Level design metrics

Cohesion metrics: a function of data objects and the focus of their definition
Coupling metrics: a function of input and output parameters, global variables, and modules.

Complexity metrics: hundreds have been proposed (e.g., Cyclomatic complexity)


 Operation-Oriented Metrics
 average operation size

 operation complexity

 average number of parameters per operation

 Interface Design Metrics


Layout appropriateness: a function of layout entities, the geographic position and the “cost” of making
transitions among entities. This is a worthwhile design metric for interface design.

METRICS FOR SOURCE CODE


• Halstead's primitive measures may be derived after the code is generated, or estimated once the design
is complete.

Length: N = n1 log2 n1 + n2 log2 n2

Program volume: V = N log2 (n1 + n2)

Volume ratio: L = (2/n1) × (n2/N2)

Where n1 = the number of distinct operators that appear in a program


n2 = the number of distinct operands that appear in a program
N1 = the total number of operator occurrences.
N2 = the total number of operand occurrences.

METRICS FOR TESTING


• Program Level and Effort

• PL = 1 / [(n1/2) × (N2/n2)]

• e = V / PL
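A minimal sketch applying the source-code and testing formulas above to hypothetical operator and
operand counts:

from math import log2

n1, n2 = 10, 15      # distinct operators, distinct operands
N1, N2 = 40, 60      # total occurrences (N1 is not used by these formulas)

N = n1 * log2(n1) + n2 * log2(n2)     # length
V = N * log2(n1 + n2)                 # program volume
L = (2 / n1) * (n2 / N2)              # volume ratio
PL = 1 / ((n1 / 2) * (N2 / n2))       # program level (algebraically equals L)
e = V / PL                            # effort
print(round(N, 1), round(V, 1), round(L, 4), round(PL, 4), round(e))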

METRICS FOR MAINTENANCE


The IEEE standard suggests a software maturity index that provides an indication of the stability of a
software product. The Software Maturity Index, SMI, is defined as:

SMI = [Mt - (Fc + Fa + Fd)] / Mt

Where Mt = the number of modules in the current release

Fc = the number of modules in the current release that have been changed

Fa = the number of modules in the current release that have been added.

Fd = the number of modules from the preceding release that were deleted in the current release
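A minimal sketch with hypothetical module counts; as SMI approaches 1.0, the product begins to
stabilize:

Mt, Fc, Fa, Fd = 120, 10, 6, 4    # current modules; changed, added, deleted

smi = (Mt - (Fc + Fa + Fd)) / Mt
print(round(smi, 3))              # 0.833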

All the best
Prepare well
