SOFTWARE ENGINEERING
Chapter I – Overview
Definition of Software:
Software Process
3. A role / action model: This model represents the roles of the people involved in
the software process and the activities for which they are responsible.
1. The Waterfall model: In this approach, each of the activities involved is
represented as a separate phase, such as requirements specification, software
design, implementation, testing and so on. After each stage is defined, it is frozen
and development moves on to the following stage.
Distribution of costs across the software process depends on the process used and
the type of software which is being developed.
Assuming that the total cost of developing software is 100 cost units, a general
distribution across the various phases can be shown as below:
It is clear from the above figure that system integration and testing is the most
expensive activity, at about 40% of the total development cost.
Here, the specification costs are reduced because only a high-level specification is
produced before development. Specification, design, implementation, integration and
testing are carried out in parallel within a development activity. But a separate testing
activity is needed once the initial implementation is complete.
Apart from development costs, costs are also incurred in changing the software
after it has gone into use. For many software systems which have a long lifetime, these
costs are likely to exceed the development costs as shown below:
The above cost distributions hold for customized software, which is specified
by a customer and developed by a contractor.
For software products which are mostly sold for PCs, the cost profile is likely to
be different. As these products are developed using an evolutionary approach, the
specification costs are relatively low. But, because they are used on a range of different
configurations, they must be extensively tested. Hence, the testing costs are high as
shown below:
3. Unified Modeling Language (UML): All the above different approaches were
integrated into a single unified approach called the UML.
All the above methods are based on the idea of developing models of a system which
may be represented graphically. These models can in turn be used as a system
specification or design.
1. System model descriptions: This includes the descriptions of the system models
which should be developed and the notation used to define these models.
2. Rules: This component involves the constraints that apply to the system models.
4. Process guidance: This includes the descriptions of the different activities and
the organization of these activities which are used to develop the system models.
There is no ideal method and different methods have different areas where they are
applicable.
Software products have a number of attributes which reflect the quality of that
software. These attributes are not directly concerned with what the software does, but
they reflect its behaviour while it is executing and the structure and organization of the
source program and associated documentation.
Some of the essential attributes are:
d) Usability: Software must be usable by the users for whom it is designed without
any extra effort on their part. This means that it should have an appropriate user
interface and adequate documentation.
Software engineering techniques are time consuming; in other words, achieving
good quality takes more time.
The work of software engineers is carried out within a legal and social
framework. Software engineering is bounded by local, national and international laws.
Software engineers must behave in an ethical and morally responsible way if they
are to be respected as professionals. Engineers should uphold normal standards of
honesty and integrity. They should not use their skills and abilities to behave in a
dishonest way or in a way that will bring disrepute to the software engineering
profession.
Some of the ethical principles which need to be followed:
1. Confidentiality: Engineers should normally respect the confidentiality of their
employers or clients, irrespective of whether or not a formal confidentiality
agreement has been signed.
4. Computer Misuse: Software engineers should not use their technical skills to
misuse other people’s computers to any extent.
Socio-Technical Systems
What is a System?
A system is a purposeful collection of interrelated components that work together to
achieve some objective. A system includes software, hardware, and the system's
interactions with its users and environment. The properties and behaviour of system
components influence each other. E.g. a traffic control system comprises hardware,
software, and human users who make decisions based on system information. The
successful functioning of each system component depends on the functioning of other
components.
Types of System
Technical computer-based systems
Include hardware and software components but not procedures and
processes.
Socio-technical systems
Include hardware and software components as well as procedures and
processes. Users are an inherent part of the system. Operational processes
are governed by organizational policies and government regulations.
Emergent Properties
These are properties of the system as a whole that arise not from individual
components but from the relationships among the components. They can be evaluated
only once the system is assembled.
Non-deterministic Property
Such systems may not produce the same output each time a given input is applied; the
system's behaviour depends on how the operators use the system.
Complex Relationship with Organizational Objective
Success of these systems in supporting the organizational objectives depends on the
stability of these objectives. E.g. new management may reinterpret the organizational
objective that a system is designed to support and a successful system may then become a
failure.
Emergent Properties
Properties of the system as a whole rather than properties that can be derived from the
properties of components of a system. Emergent properties are the consequences of the
relationships between system components. They can therefore only be assessed and
measured once the components have been integrated into a system.
• Volume
– Physical space occupied by the system
– Depends on the method of assembling the components
• Reliability
– This depends on the reliability of system components and the relationships
between the components.
• Security
– This is a complex property and cannot be easily evaluated. Built-in
safeguards may not guarantee the security of the system.
• Repairability
– If system defects can be fixed easily, the system is repairable; this depends
on the designer's ability to diagnose the problem and the faulty component.
• Usability
– This is a complex property which is not simply dependent on the system
hardware and software but also depends on the system operators'
knowledge and the environment where it is used.
System Reliability
Reliability is a complex concept that must always be considered at the system level rather
than at the individual component level. Because of component inter-dependencies, faults
can be propagated through the system. System failures often occur because of unforeseen
inter-relationships between components. Software reliability measures of individual
components may give a false picture of the system reliability.
Hardware reliability
What is the probability of a hardware component failing? How long does it take to repair
that component?
Software reliability
What is the probability of a software component producing an incorrect output?
Software failure is usually distinct from hardware failure in that software does not wear
out. What is the time required to make corrections?
Operator reliability
How likely is it that the operator of a system will make an error?
Reliability Relationships
The different kinds of reliability are interlinked: one component failure may lead to
failure of the entire system. Reliability also depends on the context in which the system
is used, e.g. a system designed to work in a particular temperature range (10–25°C)
may not work at higher temperatures. The environment in which a system is installed
can affect its reliability.
Scope of iteration
There is little scope for iteration between phases because hardware changes are very
expensive; software is more flexible and can incorporate small changes.
Interdisciplinary development
There is scope for misunderstanding here: different disciplines use different
vocabularies and much negotiation is required. Engineers may have personal agendas
to fulfill.
Characteristics that the system must not exhibit: unacceptable behaviour of the system
is specified, e.g. presenting the controller with too much information is not desirable.
This phase should also define the overall organizational objectives for the system. It
is sometimes difficult when wicked problems are considered, such as earthquake
planning: the full extent of the problem can be understood only after an earthquake
has taken place.
This process deals with creating a representation or model of the system. It contains
the following phases:
• Partition requirements
• Identify sub-systems
• Assign requirements to sub-systems
• Specify sub-system functionality
• Define sub-system interfaces
There is a lot of feedback and iteration from one stage to another in design process.
System Modeling
• Sensors
– Movement sensor, door sensor
– Detects movement in the rooms and door openings
• Actuator
– Siren emits an audible warning when an intruder is suspected
• Communication
– Telephone caller makes external calls to notify security, the police etc.
• Co-ordination
– Alarm controller controls the operation of the system
• Interface
Voice synthesizer synthesizes a voice message giving the location of the suspected
intruder
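The subsystems above can be sketched as cooperating objects. The class names and wiring below are an illustrative assumption, not a prescribed design:

```python
class Siren:
    """Actuator subsystem: emits an audible warning."""
    def __init__(self):
        self.sounding = False
    def activate(self):
        self.sounding = True

class AlarmController:
    """Co-ordination subsystem: reacts to sensor events by driving actuators
    and notifying security (communication/interface subsystems)."""
    def __init__(self, siren):
        self.siren = siren
    def on_sensor_event(self, location):
        self.siren.activate()
        self.notify(location)
    def notify(self, location):
        print(f"Calling security: intruder suspected at {location}")

class MovementSensor:
    """Sensor subsystem: reports detected movement to the controller."""
    def __init__(self, controller, location):
        self.controller = controller
        self.location = location
    def detect(self):
        self.controller.on_sensor_event(self.location)

siren = Siren()
controller = AlarmController(siren)
MovementSensor(controller, "room 101").detect()
print(siren.sounding)   # True — the sensor event propagated to the actuator
```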
Sub-System Development
A further system engineering process may be carried out for each individual subsystem;
if the subsystem is software, a software engineering process is carried out. Not all
subsystems need to be developed from scratch: subsystems can be COTS, which is
cheaper, but COTS components may not exactly match the given requirements, in which
case the design activity is carried out again. Different subsystems can be developed in
parallel. Modifying hardware is difficult and expensive, whereas required changes can
be incorporated easily in software.
System Integration
Integration is the process of putting hardware, software and people together to make a
system. Two approaches for system integration are as follows:
System Installation
System Evolution
Large systems have a long lifetime, thus the system must evolve to meet changing
requirements. Evolution is inherently costly because:
– Changes must be analysed from a technical and business perspective
– Sub-systems interact so unanticipated problems can arise
– System structure is corrupted as changes are made to it
Existing systems which must be maintained are sometimes called legacy systems.
Documentation must be created for changed functionality for further development.
System Decommissioning
Taking the system out of service after its useful lifetime. This may require the removal
of materials (e.g. dangerous chemicals) which pollute the environment, and should be
planned for in the system design. Decommissioning may also require data to be
restructured and converted for use in some other system.
Human, social and organizational factors play a critical role in understanding system
requirements. The development, procurement and use of these systems are greatly
influenced by the policies, procedures and work culture of the environment. The system
designer should include all relevant information about the organization in the system
specifications.
Organizational Processes
Besides the development process of the system, system engineering also deals with:
– the procurement process
– the operational process
Procurement process: the procurement process ensures that the organization acquires
the required system on the most competitive and favourable terms. The process of using
and operating the system is also defined.
Organizational processes:
System Procurement:
Acquiring a system for an organization to meet some need is called procurement. A large
system is a combination of a) COTS and b) specially built components.
System specifications and architectural design is necessary before procurement. A
specification is needed to let a contract for system development. The specification may
allow us to buy a commercial off-the-shelf (COTS) system, almost always cheaper than
developing a system from scratch. Software is often required to glue the hardware
components together (to create interfaces between subsystems). Requirements may have
to be modified to match the capabilities of off-the-shelf components. The requirements
specification may be part of the contract for the development of the system. After the
contractor to build the system has been selected, there is usually a contract negotiation
period to agree on changes to the schedule and cost.
The procurement of large hardware /software systems is usually based around some
principal contractor. Sub-contracts are issued to other suppliers to supply parts of the
system. Customer deals with the principal contractor and does not deal directly with sub-
contractors.
Legacy Systems
Legacy systems are socio-technical systems that were developed in the past using old or
obsolete technology. These systems are often crucial to the operation of a business; it is
too risky to discard them, so they are maintained.
– Bank customer accounting system
– Aircraft maintenance system.
Legacy systems undergo evolution throughout their life with changes to accommodate
new requirements and new operating platforms.
• Hardware – legacy systems have been written for mainframe hardware that is
expensive to maintain and may not be compatible with current organizational IT
purchasing policies.
• Support software – legacy systems may rely on support software (operating
system and utilities) from suppliers who are no longer in business.
Aparna K, Dept. of MCA, BMSIT 16
Software Engineering
The components of a legacy system form a series of layers. Each layer depends on the
layer immediately below it and interfaces with that layer. It appears that changes can be
made in any layer provided interfaces are maintained.
In practice, changes in one layer lead to changes in other layers for the following reasons:
– A change in one layer may introduce new functionality, and the adjacent
layer has to be changed to make use of it.
– Changes in software may make the system slow, so new hardware is
required to restore system performance.
– Maintaining hardware interfaces may be impossible if radical changes
are made to the hardware.
1. What is software?
11. What are the professional and ethical responsibilities of software engineer?
Explain.
13. What are the Emergent system properties? Explain the two different types.
14. What is System Engineering? List and explain the different steps with a neat
diagram.
16. With respect to Systems Engineering, explain all the steps involved in “System
Design”.
17. With respect to Systems Engineering, write a note on systems integration, system
evolution, and system decommissioning.
Chapter – II
Critical Systems, Software Processes
Critical Systems
• Safety-critical systems
– Failure results in loss of life, injury or damage to the environment
– Chemical plant protection system
• Mission-critical systems
– Failure results in failure of some goal-directed activity
– Spacecraft navigation system
• Business-critical systems
– Failure results in high economic losses
– Customer accounting system in a bank
Importance of Dependability
• Hardware failure
– Hardware fails because of design and manufacturing errors or because
components have reached the end of their natural life.
• Software failure
– Software fails due to errors in its specification, design or implementation.
• Operational failure
– Human operators make mistakes; this is now perhaps the largest single
cause of system failures.
These failures are interrelated.
(Figure: insulin pump hardware – insulin reservoir, needle assembly, pump, clock,
displays and power supply)
An insulin pump is used by diabetics to simulate the function of the pancreas, which
manufactures insulin, an essential hormone that metabolizes blood glucose. It measures
blood glucose (sugar) using a micro-sensor and computes the insulin dose required to
metabolize the glucose.
The system shall be available to deliver insulin when required to do so. The system shall
perform reliably and deliver the correct amount of insulin to counteract the current
level of blood sugar. The essential safety requirement is that excessive doses of insulin
should never be delivered, as this is potentially life threatening.
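The safety requirement can be expressed as a run-time guard in the dose computation. This is a minimal sketch; the threshold values are invented for illustration, not clinical figures:

```python
MAX_SINGLE_DOSE = 4      # illustrative hard cap, units of insulin (not a clinical value)
SAFE_MIN_GLUCOSE = 4.0   # illustrative: below this level, deliver no insulin at all

def compute_dose(glucose_level, suggested_dose):
    """Enforce the safety requirement: never deliver an excessive dose,
    by clamping the computed dose against a hard safety limit."""
    if glucose_level < SAFE_MIN_GLUCOSE:
        return 0                              # dosing here would be dangerous
    return min(suggested_dose, MAX_SINGLE_DOSE)

print(compute_dose(12.0, 3))   # 3 — within limits, delivered as computed
print(compute_dose(12.0, 9))   # 4 — clamped to the safety cap
print(compute_dose(3.0, 2))    # 0 — glucose too low to dose at all
```

The point is that the safety check is independent of the dose calculation itself, so even a faulty calculation cannot deliver a dangerous dose.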
Dependability:
Dimensions of Dependability
(Figure: the dimensions of dependability – availability: the ability of the system to
deliver services when requested; reliability: the ability of the system to deliver services
as specified; safety: the ability of the system to operate without catastrophic failure;
security: the ability of the system to protect itself against accidental or deliberate
intrusion)
• Availability
– It is the probability that the system will be operating and able to deliver
the required services at a given point of time.
• Reliability
– It is the probability that the system will operate to provide expected
services without failure under given conditions for a given time interval.
• Safety
– It is the ability of the system to operate without posing any risk to people
and its environment.
• Security
– It is the ability of the system to protect itself against accidental or
deliberate intrusion.
Availability and reliability are probabilities and hence can be measured quantitatively;
safety and security are based on judgment.
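Since availability and reliability are probabilities, they can be estimated from operational data. The sketch below uses the standard MTTF/MTTR formulation and an exponential failure model; these formulas are textbook definitions, not taken from these notes:

```python
import math

def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the system
    is able to deliver service (MTTF = mean time to failure,
    MTTR = mean time to repair)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def reliability(mission_hours, mttf_hours):
    """Probability of failure-free operation over `mission_hours`,
    assuming a constant failure rate (exponential model)."""
    return math.exp(-mission_hours / mttf_hours)

# A system that fails on average every 1000 hours and takes 10 hours to repair:
print(round(availability(1000, 10), 4))    # 0.9901
print(round(reliability(100, 1000), 4))    # 0.9048
```

Note how the two measures differ: availability takes repair time into account, while reliability depends on the length of failure-free operation required.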
• Reparability
– Reflects the extent to which the system can be repaired in the event of a
failure.
• Maintainability
– Reflects the extent to which the system can be adapted to new
requirements.
• Survivability
– Reflects the extent to which the system can deliver services during hostile
attack.
• Error tolerance
– Reflects the extent to which user input errors can be avoided and tolerated.
Untrustworthy systems may be rejected by their users. System failure costs may be very
high. It is very difficult to tune systems to make them more dependable. It may be
possible to compensate for poor performance. Untrustworthy systems may cause loss of
valuable information.
Dependability costs
Because of very high costs of dependability achievement, it may be more cost effective to
accept untrustworthy systems and pay for failure costs. However, this depends on social
and political factors: a reputation for products that cannot be trusted may lose future
business. It also depends on the system type; for business systems in particular, modest
levels of dependability may be adequate.
If system failures can be repaired quickly and do not damage data, low reliability may
not be a problem. Availability takes repair time into account.
Reliability Terminology
Failures are usually a result of system errors that are derived from faults in the system.
However, faults do not necessarily result in system errors. The faulty system state may
be transient and ‘corrected’ before an error arises. Errors do not necessarily lead to
system failures. The error can be corrected by built-in error detection and recovery
mechanisms, and failure can be prevented by built-in protection facilities.
Every system has an input-output mapping where only some inputs will result in
erroneous outputs. The reliability of the system is the probability that a particular input
will lie outside the set of inputs that cause erroneous outputs. Different people will use
the system in different ways, so this probability is not a static system attribute but
depends on the system's environment.
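This input-space view of reliability can be illustrated with a small simulation. The failure set and usage profiles below are invented for illustration:

```python
import random

# Hypothetical set of inputs that cause erroneous outputs (illustrative only).
FAILURE_INPUTS = {13, 42, 99}

def estimate_reliability(usage_profile, trials=10_000, seed=1):
    """Estimate the probability that an input drawn from a user's
    operational profile avoids the failure set."""
    rng = random.Random(seed)   # seeded for reproducibility
    ok = sum(1 for _ in range(trials)
             if rng.choice(usage_profile) not in FAILURE_INPUTS)
    return ok / trials

# Two users exercising different parts of the input space perceive
# different reliability for the same program:
user_a = list(range(0, 50))    # occasionally hits failure inputs 13 and 42
user_b = list(range(50, 90))   # never hits a failure input
print(estimate_reliability(user_a))   # < 1.0
print(estimate_reliability(user_b))   # 1.0
```

The same program is perfectly reliable for user B and measurably unreliable for user A, which is exactly why reliability is not a static system attribute.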
Reliability Perception
Reliability Improvement
Removing X% of the faults in a system will not necessarily improve the reliability by
X%. A study at IBM showed that removing 60% of product defects resulted in a 3%
improvement in reliability. Program defects may be in rarely executed sections of the
code and so may never be encountered by users; removing these does not affect the
perceived reliability. A program with known faults may therefore still be seen as
reliable by its users.
• Fault avoidance
– Development techniques are used that either minimise the possibility of
mistakes or trap mistakes before they result in the introduction of system
faults, e.g. avoiding error-prone constructs such as pointers.
• Fault detection and removal
– Verification and validation techniques that increase the probability of
detecting and correcting errors before the system goes into service are
used.
• Fault tolerance
– Run-time techniques are used to ensure that system faults do not result in
system errors and/or that system errors do not lead to system failures.
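A minimal illustration of the fault-tolerance idea: run-time checks catch an error before it becomes a system failure and fall back to a safe result. The function names and the fallback policy are invented for illustration:

```python
def fault_tolerant(primary, fallback):
    """Wrap a computation so that a fault at run time does not
    propagate into a system failure."""
    def wrapped(*args):
        try:
            result = primary(*args)
            if result is None:           # plausibility check on the output
                raise ValueError("implausible result")
            return result
        except Exception:
            return fallback(*args)       # degrade gracefully instead of failing
    return wrapped

def risky_divide(a, b):
    return a / b                          # faulty when b == 0

safe_divide = fault_tolerant(risky_divide, lambda a, b: 0.0)
print(safe_divide(10, 2))   # 5.0
print(safe_divide(10, 0))   # 0.0 — the fault did not become a failure
```

Real fault-tolerant systems use stronger mechanisms (redundancy, N-version programming, recovery blocks); the sketch only shows the principle of containing an error at run time.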
Safety
Safety is the ability of a system to operate without posing any threat to life or the
environment. A system where safety is considered the most critical requirement is a
safety-critical system.
Examples:
• Air traffic control system
• Process control systems in nuclear power system
Primary safety-critical systems: Embedded software systems whose failure can cause
the associated hardware to fail and directly threaten people’s life and environment. Ex:
Air traffic control system
Secondary safety-critical systems: Systems whose failure results in the introduction of
faults into other systems, which can then threaten people's lives and the environment.
For example, an error in a CAD system may introduce a design fault into the engineered
product, which may in turn lead to failure.
Safety and reliability are related but distinct. In general, reliability and availability are
necessary but not sufficient conditions for system safety. Reliability is concerned with
conformance to a given specification and delivery of service. Fault tolerance does not
ensure safety because the system may still behave in ways that cause an accident.
Safety is concerned with ensuring that the system cannot cause damage, irrespective of
whether or not it conforms to its specification.
Safety achievement
• Hazard avoidance
– The system is designed so that some classes of hazard simply cannot arise.
E.g. wood cutting machine
• Hazard detection and removal
– The system is designed so that hazards are detected and removed before
they result in an accident. e.g. high pressure in chemical plant
• Damage limitation
– The system includes protection features that minimise the damage that
may result from an accident. E.g. automatic fire extinguishers
Security
The security of a system is a system property that reflects the system’s ability to protect
itself from accidental or deliberate external attack. Security is becoming increasingly
important as systems are networked so that external access to the system through the
Internet is possible. Security is an essential pre-requisite for availability, reliability and
safety. If a system is a networked system and is insecure then statements about its
reliability and its safety are unreliable. Intrusion can change the executing system and/or
its data. Therefore, the reliability and safety assurance is no longer valid.
Security Assurance
• Vulnerability avoidance
– The system is designed so that vulnerabilities do not occur. For example,
if there is no external network connection then external attack is
impossible
• Attack detection and elimination
– The system is designed so that attacks on vulnerabilities are detected and
neutralised before they result in an exposure. For example, virus checkers
find and remove viruses before they infect a system
• Exposure limitation
– The system is designed so that the adverse consequences of a successful
attack are minimised. For example, a backup policy allows damaged
information to be restored
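Exposure limitation can be illustrated with a deliberately simplified backup-and-restore sketch (the data store and attack scenario are hypothetical):

```python
import copy

class Store:
    """Keeps a backup copy of its data so that damage from a
    successful attack on the live data can be undone
    (exposure limitation)."""
    def __init__(self, data):
        self.data = data
        self.backup = copy.deepcopy(data)
    def take_backup(self):
        self.backup = copy.deepcopy(self.data)
    def restore(self):
        self.data = copy.deepcopy(self.backup)

store = Store({"balance": 100})
store.data["balance"] = -9999   # simulate a successful attack on live data
store.restore()                 # damage limited: restore from the backup
print(store.data["balance"])    # 100
```

The backup does not prevent the attack; it only minimises its adverse consequences, which is the point of exposure limitation.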
Software Processes
• A framework for the tasks that are required to produce a high-quality software
product;
or
• The process followed by a software engineer to build, deliver, deploy and evolve
a software product from inception to delivery.
• Each process is governed by a set of principles, goals, and carried out under some
constraints like budget, time, tools etc.
• Evolutionary development
– Specification, development and validation are interleaved.
• Component-based software engineering
– The system is assembled from existing components
b) System and software design: The systems design process partitions the
requirements to either hardware or software systems. It establishes an overall
system architecture. Software design involves representing the software system
functions in a form that may be transformed into one or more executable
programs.
c) Implementation and unit testing: In this stage, the software design is realized
as a set of programs or program units. Unit testing involves verifying that each
unit meets its specification.
d) Integration and system testing: The individual program units or programs are
integrated and tested as a complete system to ensure that the software
requirements have been met. After testing, the software system is delivered to the
customer.
e) Operation and Maintenance: In this phase, the system is installed and put into
use practically. Maintenance involves correcting errors which were not
discovered in earlier stages of the life cycle, improving the implementation of
system units and enhancing the system’s services as new requirements are
discovered.
The result of each phase is one or more documents which are approved at the end
of the phase. The software process is not a simple linear model but involves a sequence
of iterations of the development activities.
Unfortunately, a model which involves frequent iterations increases the costs of
producing and approving documents. Therefore, after a small number of iterations, it is
normal to freeze parts of the development, such as specification and continue with the
later development stages. This premature freezing of requirements may mean that the
system won’t do what the user wants. It may also lead to unstructured systems.
• Easy to understand.
• Brings clarity to the software development process.
• Visibility of progress is high, since each phase has a defined start and finish with a document.
• The waterfall model assumes that the requirements of a system can be frozen
before the design begins. This is possible for systems designed to automate an
existing manual system. But for new systems, determining the requirements is
difficult as the user does not even know the requirements. Hence, having
unchanging requirements is unrealistic for such projects.
• Freezing the requirements usually requires choosing the hardware since it forms a
part of the requirements specification. A large project may take a few years to
complete. If the hardware is selected early, then due to the speed at which
hardware technology is changing, it is likely that the final software will use a
hardware technology which is on the verge of becoming obsolete.
• The customer must have patience. A working version of the program(s) will not
be available until the final phase.
• It is a document-driven process that requires formal documents at the end of each
phase. This approach is not suitable for many applications, particularly
interactive end-user applications.
Despite these limitations, the waterfall model is the most widely used process
model. It is well suited for routine types of projects where the requirements are well
understood. In other words, if the developing organization is quite familiar with the
problem domain and the requirements for the software are quite clear, the waterfall model
works well.
Evolutionary Development
They are of two types: exploratory development and throw-away prototyping.
Exploratory development
The objective is to work with customers and to evolve a final system from an initial
outline specification. Development starts with the well-understood requirements, and
new features are added as proposed by the customer. The user participates actively,
and the process continues until the final product is built.
Throw-away prototyping
The objective is to understand the system requirements. Development starts with the
poorly understood requirements in order to clarify what is really needed. The developed
part (the prototype) is thrown away once the requirements have been gathered in full,
and actual system development starts from scratch.
Evolutionary Development
Merits
Demerits
– Lack of process visibility
– Systems are often poorly structured
– Special skills (e.g. skills in languages for rapid prototyping) may be
required
Applicability
– For small or medium-size interactive systems
– For parts of large systems (e.g. the user interface)
– For short-lifetime systems
Based on systematic reuse where systems are integrated from existing components or
COTS (Commercial-off-the-shelf) systems. The different stages in this method are:
– Requirements specification
– Component analysis
– Requirements modification
– System design with reuse
– Development and integration
This approach is becoming increasingly used as component standards have emerged.
Component analysis:
– Selection of reusable components having similar functionality
Requirements modification:
– If exact match is not available, modify requirements
– If modifications not possible, component analysis is performed again
System validation:
– System as a whole is validated.
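When an off-the-shelf component does not exactly match the required interface, glue code (an adapter) bridges the gap. The component and interfaces below are hypothetical, sketched only to show the idea:

```python
class CotsTemperatureSensor:
    """Stands in for a reusable off-the-shelf component
    whose source we cannot change."""
    def read_fahrenheit(self):
        return 68.0

class CelsiusSensorAdapter:
    """Glue code: adapts the COTS interface to the interface
    our system design expects."""
    def __init__(self, cots):
        self.cots = cots
    def read_celsius(self):
        # convert the COTS component's output to the required units
        return (self.cots.read_fahrenheit() - 32) * 5 / 9

sensor = CelsiusSensorAdapter(CotsTemperatureSensor())
print(round(sensor.read_celsius(), 1))   # 20.0
```

Writing adapters like this is usually far cheaper than developing the component from scratch, which is the economic argument for reuse.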
Merits:
• Enhanced productivity, since less software is developed from scratch.
• Saving of time and development cost.
• It improves system interoperability due to uniform interfaces between
components.
• Performance and reliability of components is high due to extensive testing.
Demerits:
• It requires modification in requirements.
• Proprietary and copyright issues may affect the evolution of the system, as reused
components are the property of the parent organization.
Applicability:
• When requirements are general.
• Related components are available.
• Not applicable to very specific and customized product.
Process Iteration
The requirements of a large system always evolve during the development process, so
process iteration, in which earlier stages are reworked, is always part of development.
Iteration can be applied to any of the generic process models. Two approaches:
• Incremental delivery
• Spiral development
Incremental Delivery
Rather than delivering system as a single unit, the development and delivery is broken
down into increments. Each increment having part of the required functionality is
developed and delivered. User requirements are prioritized and the highest priority
requirements are included in early increments. Once the development of an increment is
started, the requirements are frozen though requirements for later increments can
continue to evolve.
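Partitioning prioritized requirements into increments can be sketched as follows; the requirement names, priorities and increment size are invented for illustration:

```python
def plan_increments(requirements, per_increment):
    """Sort (name, priority) pairs by priority, highest first,
    and slice them into fixed-size increments: the highest
    priority requirements land in the earliest increments."""
    ordered = sorted(requirements, key=lambda r: r[1], reverse=True)
    return [ordered[i:i + per_increment]
            for i in range(0, len(ordered), per_increment)]

reqs = [("login", 9), ("reports", 3), ("payments", 8), ("themes", 1)]
for n, inc in enumerate(plan_increments(reqs, 2), start=1):
    print(f"Increment {n}: {[name for name, _ in inc]}")
# Increment 1: ['login', 'payments']
# Increment 2: ['reports', 'themes']
```

Once increment 1 starts, its requirements would be frozen, while those of increment 2 could still evolve.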
Merits
• First increment is available within a short time. No need to wait for entire system
• Early increments act as a prototype to help elicit requirements for later
increments.
• Lower risk of overall project failure.
• The highest priority system services tend to receive most testing.
Demerits
• Increments being small, it may be difficult to map each set of
requirements onto an increment of the right size.
• As requirements are not defined in detail until an increment is to be implemented,
it is difficult to identify common requirements across the increments.
Spiral Model
Key Features
• Simplified form
– Waterfall model with Risk Analysis
• Each phase preceded by
– Alternatives
– Risk analysis
• Each phase followed by
– Evaluation
– Planning of next phase
Each loop in the spiral represents a phase of the software process, such as specification
or design. Loops in the spiral are chosen depending on what is required. Risks are
explicitly assessed and resolved throughout the process.
Objective setting: Specific objectives for the phase are identified. Constraints on the
process and the product are identified and a detailed management plan is drawn up.
Project risks are identified. Alternative strategies, depending on these risks, may be
planned.
Risk assessment and reduction: Risks are assessed and steps are taken to reduce them.
If the risks cannot be reduced to an acceptable level, the project may be terminated.
Development and validation: A development model for the system is chosen; this can
be any of the generic models, selected on the basis of the risk factors involved, e.g.
formal development for projects with high risks.
Planning: The project is reviewed and the next phase of the spiral is planned.
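The four spiral sectors described above can be sketched as a loop. Every function name below is a hypothetical placeholder for the corresponding activity, not part of any standard process framework.

```python
# Illustrative sketch of the four sectors of each spiral loop:
# objective setting, risk assessment and reduction, development and
# validation, and planning of the next loop.
def spiral(set_objectives, assess_risks, develop, plan_next, max_loops=4):
    for loop in range(1, max_loops + 1):
        objectives = set_objectives(loop)      # objective setting
        if not assess_risks(objectives):       # risk assessment & reduction
            return f"terminated at loop {loop}: unresolved risk"
        develop(objectives)                    # development and validation
        if not plan_next(loop):                # planning: review, continue?
            return f"delivered after loop {loop}"
    return "delivered after max loops"

result = spiral(
    set_objectives=lambda n: f"objectives for loop {n}",
    assess_risks=lambda obj: True,             # risks were resolved
    develop=lambda obj: None,                  # e.g. build a prototype
    plan_next=lambda n: n < 3,                 # stop after the third loop
)
print(result)
```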
Strengths
– Alternatives Evaluation
– Exhaustive Risk Analysis
– Planning for next phase
– Verification & validation after each phase
– Easy to judge how much to test
– No distinction between development, maintenance
Weaknesses
– For large-scale software only
– For internal (in-house) software usually
Process Activities
• Software specification
• Software design and implementation
• Software validation
• Software evolution
Software Specifications
It is the process of establishing what services are required and the constraints on the
system’s operation and development.
Feasibility study
A feasibility study decides whether or not the proposed system is worthwhile to develop.
It aims to confirm the following:
Organizational objectives
If the system contributes to organizational objectives
Technical feasibility
If the system can be engineered using current technology
Economic feasibility
If the system can be engineered within budget & time
Aparna K, Dept. of MCA, BMSIT 37
Software Engineering
Operational feasibility
If the system can be integrated with other systems that are used
Requirement Specifications
In this stage, the gathered requirements are transformed into a well-structured document.
It includes the following requirements:
• High level statement of the requirements needed by users
• Detailed specifications of the system needed by software developers
Requirements Validation
• Checking of realism, consistency and completeness of requirements
Software Design and Implementation
The design process involves the following activities:
• Architectural design
– Identifying the subsystems and relationships
• Abstract specification
– Description of services and constraints under which subsystem must
operate
• Interface design
– Interface between subsystems
• Component design
– Services and interfaces of components of subsystem
• Data structure design
– Data structure design in detail
• Algorithm design
– Algorithms for different modules
Structured Methods
Structured methods are systematic approaches to developing a software design, normally
supported by graphical models and producing extensive design documentation.
Programming and Debugging
Translating a design into a program and removing errors from that program is the
implementation activity.
Programming is a personal activity and there is no generic programming process.
Programmers carry out program testing to discover faults in the program and remove
these faults in the debugging process.
Software Validation
Verification and validation (V & V) is intended to show that a system conforms to its
specification and meets the requirements of the system customer. It involves checking
and review processes and system testing. System testing involves executing the system
with test cases that are derived from the specification of the real data to be processed by
the system.
Testing Stages
Testing Phases
Software Evolution
Rational Unified Process (RUP) is a hybrid process model bringing the elements from
different generic process models. A modern process model derived from the work on the
UML and associated process. Normally described from 3 perspectives:
• A dynamic perspective that shows phases over time
• A static perspective that shows process activities
• A practice perspective that suggests good practice
Inception: It establishes the business case for the system (baseline to meet business
requirements). It also identifies the external entities that will interact with the system.
Cost and schedule analysis is also made. If the project does not pass this milestone, it is
cancelled.
Elaboration: This phase develops an understanding of the problem domain, establishes
an architectural framework for the system, develops the project plan and identifies key
project risks.
Construction: This involves system design, programming and testing. Different parts of
the system are developed in parallel and integrated. On completion, working software
with documentation is ready.
Transition: This involves deploying the system in its operating environment. Beta
testing the system in real environment is done. Training is given to end users. A check is
done against quality goals set in Inception phase.
RUP supports iteration in two ways. Each phase may be performed in an iterative way
and all phases may also be performed iteratively. Within each iteration, the tasks are
categorized into nine disciplines: six "engineering disciplines"
– Business Modeling
– Requirements
– Analysis and Design
– Implementation
– Test
– Deployment
• Three supporting disciplines
– Configuration and Change Management
– Project Management
– Environment
Computer-Aided Software Engineering (CASE)
Merits
• CASE technology has led to significant improvements in the software process.
However, these are not the order of magnitude improvements that were once
predicted
Demerits
• Software engineering requires creative thought, which is not readily automated
• Software engineering is a team activity and, for large projects, much time is spent
in team interactions. CASE technology does not really support these.
CASE Classification
• Functional perspective
– Tools are classified according to their specific function.
• Process perspective
– Tools are classified according to process activities that are supported.
• Integration perspective
– Tools are classified according to their organisation into integrated units.
CASE Integration
• Tools
– Support individual process tasks such as design consistency checking, text
editing, etc.
• Workbenches
– Support process phases such as specification or design, normally by
integrating several tools.
• Environments
– Support all or a substantial part of an entire software process.
– Normally include several integrated workbenches.
1. What are critical systems? Explain the main types of critical systems with
examples.
7. Explain in detail the incremental development process with a neat block diagram.
Chapter – III
Requirements
Software Requirements
The process of establishing the services that the customer requires from a system and the
constraints under which it operates and is developed is called Requirements
Engineering. The requirements are the descriptions of the system services and
constraints that are generated during the requirements engineering process.
Types of Requirements
User requirements: Clients provide descriptions of the services expected from the
system and its operational constraints. These statements are written in natural language or
shown through diagrams.
System requirements: A structured document sets out detailed descriptions of the
system's functions, services and operational constraints.
Requirements Readers:
Characteristics of Requirements
• Correctness: free from error.
• Consistency: requirements do not conflict with one another.
• Completeness: all inputs, outputs and constraints are included.
• Realism: only requirements that can actually be implemented are stated.
• Verifiability: it is possible to write tests that check each requirement.
• Functional requirements
– Statements of services the system should provide, how the system should
react to particular inputs and how the system should behave in particular
situations.
• Non-functional requirements
– Constraints on the services or functions offered by the system such as
timing constraints, constraints on the development process, standards, etc.
• Domain requirements
– Requirements that come from the application domain of the system and
that reflect characteristics of that domain.
Functional Requirements
This describes the functionality or system services. It depends on the type of software,
expected users and the type of system where the software is used. Functional user
requirements may be high-level statements of what the system should do. Functional
system requirements should describe the system services in detail. Functional
Requirements describe the interaction between the system and environment, statement of
services, behaviors of system with particular input in particular situation, types of inputs,
outputs and their constraints, nature of computations, timing and synchronizations of
above.
Non-Functional Requirements
These are properties of and constraints on the system. Non-functional requirements are
sometimes more critical than functional requirements: if they are not met, the system may
be barely usable even though every functional requirement is satisfied. These
requirements also constrain the choice of language, platform, implementation techniques
and tools.
Figure: Types of non-functional requirement, including performance, space, privacy and
safety requirements. They fall into three broad classes:
• Product requirements
– Requirements that specify product behaviour, e.g. execution speed,
reliability or usability.
• Organisational requirements
– Requirements that are a consequence of organisational policies and
procedures, e.g. process standards or implementation requirements.
• External requirements
External requirements: These are derived from factors external to system. Examples
include interoperability (system interaction with systems in same and other organization),
ethical requirements (social acceptability), legislative requirements (system complies
with government laws).
Requirements Measures
Domain Requirements
These are derived from the application domain and describe system characteristics and
features that reflect the domain. Domain requirements may be new functional
requirements, constraints on existing requirements or definitions of specific
computations. If domain requirements are not satisfied, the system may be unworkable.
Two problems arise with domain requirements:
• Understandability
– Requirements are expressed in the language of the application domain
– This is often not understood by software engineers developing the system
• Implicitness
– Domain specialists understand the area so well that they do not think of
making the domain requirements explicit
User Requirements
These describe functional and non-functional requirements in such a way that they are
understandable by system users who don’t have detailed technical knowledge. User
requirements are defined using natural language, tables and diagrams as these can be
understood by all users.
Guidelines for writing user requirements:
Invent a standard format and use it for all requirements: standardizing the
format makes omissions less likely and requirements easier to check.
Use language in a consistent way: Always distinguish between mandatory and
desirable requirements. Use shall for mandatory requirements, should for
desirable requirements.
Use text highlighting to identify key parts of the requirement.
Avoid the use of computer jargon.
System Requirements
These include more detailed specifications of system functions, services and constraints
than user requirements. They are intended to be a basis for designing the system. They
may be incorporated into the system contract. System requirements may be defined or
illustrated using system models.
Structured natural language is a way of writing system requirements where the freedom
of the requirements writer is limited and all requirements are written in a standard way.
The advantage of this approach is that it maintains most of the expressiveness and
understandability of natural language but ensures that some degree of uniformity is
imposed on the specification. Specific forms were designed that include the following:
– Sources
– Output
– Destination
– Pre-condition
– Post-condition
– Side-effects of operations
By doing so, the limitation of natural language is reduced. There is uniformity in
description of system requirements. There are no ambiguities. It also retains salient
features of natural language such as: expressiveness and understandability.
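One way to make such a structured form concrete is sketched below. The field names are illustrative choices based on the list above, not a standard schema, and the sample requirement is invented.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a structured natural-language requirement captured
# as a record with the fields listed above. Field names are illustrative.
@dataclass
class StructuredRequirement:
    function: str                 # short name of the function or entity
    description: str              # natural-language description
    sources: list = field(default_factory=list)  # where the inputs come from
    output: str = ""              # data produced
    destination: str = ""         # where the output goes
    pre_condition: str = ""       # what must hold before the operation
    post_condition: str = ""      # what holds after the operation
    side_effects: str = "None"    # other effects of the operation

req = StructuredRequirement(
    function="Withdraw cash",
    description="Dispense the requested amount to the card holder.",
    sources=["ATM keypad"],
    output="Cash and a debit record",
    destination="Customer; account database",
    pre_condition="Card validated and balance >= amount",
    post_condition="Account balance reduced by amount",
)
print(req.function)
```

Because every requirement fills in the same fields, omissions (e.g. a missing pre-condition) are easy to spot while the text itself stays in natural language.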
Interface Specification
Most systems must operate together with other, already existing systems. The operating
interfaces between the new and existing systems must therefore be specified as part of
the requirements. There are three types of interfaces:
• Procedure interface
• Data structure interface
• Representation of data
Procedure interface: Existing sub-systems offer a range of services which are accessed
by calling procedures. These interfaces are also sometimes called Application
Programming Interfaces (APIs).
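A minimal sketch of such a procedure interface, assuming a hypothetical print-server sub-system: the class and method names are invented for illustration and do not correspond to any real library.

```python
# Hypothetical sketch of a procedure interface (API) that an existing
# print-server sub-system might offer to a new system.
class PrintServer:
    """Procedure interface offered by an existing print-server sub-system."""

    def __init__(self):
        self._queue = []

    def register(self, document):
        """Add a document to the print queue; return its job id."""
        self._queue.append(document)
        return len(self._queue) - 1

    def remove(self, job_id):
        """Cancel a queued print job."""
        self._queue[job_id] = None

    def length(self):
        """Number of slots (including cancelled jobs) in the queue."""
        return len(self._queue)

server = PrintServer()
job = server.register("report.pdf")
print(job)
```

A new system only needs to know these procedure signatures, not how the print server is implemented internally.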
Data structure interface: These are data structures that are passed from one sub-system
to another. Graphical data models are the best notation for this type of description.
Graphical models are most useful when you need to show how state changes or where
you need to describe a sequence of actions. Sequence diagrams show the sequence of
events that take place during some user interaction with a system. Sequence diagrams are
read from top to bottom to see the order of the actions that take place. For example, cash
withdrawal from an ATM involves three steps:
– Validate card;
– Handle request;
– Complete transaction.
Figure: Sequence diagram of cash withdrawal from an ATM. The ATM sends the card
number to the database to validate the card (an <<exception>> is raised for an invalid
card), requests the PIN and displays the option menu. A withdrawal request triggers a
balance request; the database returns the balance, the ATM requests the amount, and the
database debits the amount (an <<exception>> is raised for insufficient cash). The
transaction completes when the card is removed and the cash and receipt are dispensed.
The requirements document is the official statement of what is required of the system
developers. It should include both a definition of user requirements and a specification of
the system requirements. It is NOT a design document. As far as possible, it should
state WHAT the system should do rather than HOW it should do it.
Figure: Readers of the requirements document.
• System customers specify the requirements and read them to check that they meet
their needs; they also specify changes to the requirements.
• System engineers use the requirements to understand what system is to be
developed.
The IEEE/ANSI 830-1998 standard defines a generic structure for a requirements
document that must be instantiated for each specific system:
1. Introduction.
2. Overall description.
3. Specific requirements.
4. Appendices.
5. Index.
Introduction
1.1. Purpose
1.2 Scope
1.3. Definitions, acronyms and abbreviations
1.4. References
1.5. Overview.
2. Overall description
2.1. Product perspective
2.2. Product functions.
2.3. User characteristics
2.4. Constraints.
2.5. Assumptions and dependencies.
2.6. Requirements subsets
3. Specific requirements
3.1. External interface requirements
3.1.1. User interfaces
3.1.2. Hardware interfaces
3.1.3. Software interfaces
3.1.4. Communications interfaces
3.2. Functional requirements
3.2.1. User class 1
3.2.1.1. Functional requirements 1.1
3.2.1.2. Functional requirements 1.2
3.2.2. User class 2
.
3.3. Performance requirements
3.4. Design constraints
3.5. Software system attributes
3.6. Other requirements
4. Appendices
Index
A requirements document may also be organised into the following chapters:
• Preface
• Introduction
• Glossary
• User requirements definition
• System architecture
• System requirements specification
• System models
• System evolution
• Appendices
Index
Preface
– This section should define the intended readership. The history of the
present version and rationale for creating a new version should be
described.
– Summary of changes should be included.
Introduction
– This Chapter should describe the need for the system. The functions of the
system should be briefly described.
– It should describe how the new system fits into the overall business
objectives of the organization.
Glossary
– This should define all technical terms used in the document.
User requirements definition
– In this chapter the user functional and non-functional requirements should
be described using natural language and other notations. Product and
process standards must be specified.
System architecture
– This chapter provides a high-level view of the anticipated system
architecture. Distribution of functions across system modules should be
described and reused components must be highlighted.
System requirements and specification
– This chapter should contain detailed description of functional and non-
functional requirements.
System models
– This section should include system models used for representing
relationship between the system components and the system and its
environment. The system models may be object model, data flow model
etc.
System evolution
– This chapter describes the fundamental assumptions on which the system
is based. It should also include anticipated changes, changing user needs
etc.
Appendices
– This chapter provides detailed, specific information related to the
application being developed.
– Appendices may cover topics such as hardware description, database
description and requirements for system configurations.
Index
– It should include indexes of diagram, functions etc. in alphabetical order.
The requirements engineering process includes the following activities:
• Feasibility studies
• Requirements elicitation and analysis
• Requirements validation
• Requirements management
The processes used for Requirements Engineering vary widely depending on the
application domain, the people involved and the organisation developing the
requirements. There are a number of generic activities common to all processes
– Requirements elicitation
– Requirements analysis
– Requirements specification
– Requirements validation
– Requirements management
Feasibility Study
A feasibility study decides whether or not the proposed system is worthwhile to develop.
It aims to confirm the following:
• Organizational objectives
– If the system contributes to organisational objectives
• Technical feasibility
– If the system can be engineered using current technology
• Economic feasibility
– If the system can be engineered within budget & time
• Operational feasibility
– If the system can be integrated with other systems that are used
Requirements elicitation and analysis is difficult for several reasons:
• Lack of clarity
– Stakeholders may not express their expectations in words; they may find
it difficult to express their views.
• Commonalities and conflict
– Different customers define the same requirement in different ways.
• Unfamiliarity with domain
– Analyst may have less understanding of domain.
• Business and economic factors
– Requirements may change and new requirements may emerge
• Political factors
– Some managers may influence the system requirements for personal
benefits.
Requirements discovery
– Interacting with stakeholders to discover their requirements. Domain
requirements from stakeholders and documentation are also discovered at
this stage.
Requirements classification and organisation
– Takes the unstructured collection of requirements, groups related
requirements and organises them into coherent clusters.
Requirements Prioritisation and negotiation
– Prioritising requirements and resolving requirements conflicts through
negotiation.
Requirements documentation
– Requirements are documented and input into the next round of the spiral.
Formal or informal requirements document may be produced.
Requirements Discovery
This is the process of gathering information about the proposed and existing systems and
distilling the user and system requirements from this information. Sources of information
include documentation, system stakeholders and the specifications of similar systems.
Consider the stakeholders of ATM for example:
• Bank customers
• Representatives of other banks
• Bank managers
• Counter staff
• Database administrators
• Security managers
• Marketing department
• Hardware and software maintenance engineers
• Banking regulators
Viewpoints
Interactor viewpoints
– People or other systems that interact directly with the system. In an ATM,
the customer and the account database are interactor VPs.
Indirect viewpoints
– Stakeholders who do not use the system themselves but who influence the
requirements. In an ATM, management and security staff are indirect
viewpoints.
Domain viewpoints
- Domain characteristics and constraints that influence the requirements.
In an ATM, an example would be standards for inter-bank
communications
Figure: Viewpoint hierarchy for LIBSYS. All viewpoints are classified as indirect,
interactor or domain; examples include students, staff, external users, cataloguers and
system managers.
Viewpoint Identification
Interviewing
Interviews in Practice
General practice of interviews includes a mix of closed and open-ended interviewing.
Interviews are good for getting an overall understanding of what stakeholders do and how
they might interact with the system. Interviews are not good for understanding domain
requirements because requirements engineers cannot understand specific domain
terminology. Some domain knowledge is so obvious that people find it hard to articulate
or think that it isn’t worth articulating.
Scenarios
Scenarios are descriptions of how a system is used in practice. They are helpful in
requirements elicitation as people can relate to these more readily than abstract statement
of what they require from a system. Scenarios are particularly useful for adding details to
an outline requirements description. A scenario includes:
1. System state at the beginning of the scenario
2. Normal flow of events in the scenario
3. What can go wrong and how this is handled
4. Other concurrent activities
5. System state on completion of the scenario
LIBSYS Scenario-1
Figure: Use-case diagram showing the 'Lending services' and 'Article printing' use-cases.
The above figure shows the essentials of the use-case notation. Actors in the process are
represented as stick figures, and each class of interaction is represented as a named
ellipse. The set of use-cases represents all of the possible interactions to be represented
in the system requirements. Figure below shows the LIBSYS example and other use-
cases in that environment.
Figure: Use-cases in the LIBSYS environment: a Library User participates in 'Lending
services' and Library Staff participate in 'User administration'.
Use-cases identify the individual interactions with the system. They can be documented
with text or linked to UML models that develop the scenario in more detail. Sequence
diagrams are often used to add information to a use-case. These sequence diagrams show
the actors involved in the interaction, the objects they interact with and the operations
associated with these objects. As an illustration of this, figure below shows the
interactions involved in using LIBSYS for downloading and printing an article. Here
there are four objects of classes – Article, Form, Workspace and Printer involved in this
interaction. The sequence of actions is from top to bottom, and the labels on the arrows
between the actors and objects indicate the names of operations. Essentially, a user
request for an article triggers a request for a copyright form. Once the user has
completed the form, the article is downloaded and sent to the printer. Once printing is
complete, the article is deleted from the LIBSYS workspace.
Ethnography
As social and organizational factors play a crucial role in the success of a software
system, it may not be possible to gather all requirements through meetings and interviews
alone. In ethnography, an analyst spends a considerable time observing and analyzing
how people actually work. Users do not have to explain or articulate their work, which
matters because people often cannot describe accurately what they really do. Social and
organizational factors of importance may be observed.
Scope of Ethnography
– Requirements that can be derived only by observing actual work (not on
the basis of assumptions).
Limitations of Ethnography
• It focuses only on end users; it is not suitable for deriving domain requirements.
• New features of the system cannot be identified.
• Not a complete approach to requirement elicitation.
Focused Ethnography
Requirements Validation
Requirements Reviews
During a requirements review, the following checks are made:
• Verifiability:
– Testability of requirements
• Comprehensibility:
– Requirements properly understood by users.
• Traceability:
– Origin of the requirements should be traceable.
• Adaptability:
– Requirements can be changed without having large-scale effects on other
system requirements.
• Conflicts, contradictions, errors and omissions should be pointed out during the
review and formally recorded.
Requirements Management
Requirement Change
The priority of requirements from different viewpoints changes during the development
process. System customers may specify requirements from a business perspective that
conflict with end-user requirements. The business and technical environment of the
system changes during its development.
Enduring requirements: These are the stable requirements derived from the core
activity of the customer organisation. E.g. requirements related to a hospital will always
have details of doctors, nurses, patients etc.
Volatile requirements: These are the requirements which change during development or
when the system is in use. E.g. in a hospital, requirements derived from Government
health-care policy may change.
• Mutable requirements
– Requirements that change due to the system’s environment
• Emergent requirements
– Requirements that emerge as understanding of the system develops
• Consequential requirements
– Requirements that result from the introduction of the computer system
• Compatibility requirements
– Requirements that depend on other systems or organisational processes
A formal process for change management should apply to all proposed changes to the
requirements. The principal stages in this process are:
Problem analysis: During this stage, the problem or the change proposal is analyzed to
check that it is valid. The results of the analysis are fed back to the change requestor, and
sometimes a more specific requirements change proposal is then made.
Change analysis and costing: The effect of the proposed change is assessed using
traceability information and general knowledge of the system requirements. The cost of
making the change is estimated in terms of modifications to the requirements document
and, if appropriate, to the system design and implementation. Once this analysis is
completed, a decision is made whether to proceed with the requirements change.
Change implementation: Modify requirements document and other documents to reflect
change.
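The three stages above can be sketched as a simple pipeline. The validity check, cost estimate and budget threshold are invented for the example; a real process would involve human review at each stage.

```python
# Illustrative sketch of the change-management stages above as a pipeline.
def process_change_request(change, is_valid, estimate_cost, budget=100):
    # Problem analysis: check that the change proposal is valid.
    if not is_valid(change):
        return "rejected: invalid proposal (fed back to requestor)"
    # Change analysis and costing: estimate the cost of the change.
    cost = estimate_cost(change)
    if cost > budget:
        return f"rejected: cost {cost} exceeds budget {budget}"
    # Change implementation: modify the requirements document.
    return f"accepted: implement change at cost {cost}"

result = process_change_request(
    "add audit log",
    is_valid=lambda c: True,
    estimate_cost=lambda c: 40,
)
print(result)
```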
6. Give the IEEE format and explain the structure of software requirements
document.
9. List all the techniques involved in ‘Requirements Elicitation and Analysis’ with
appropriate examples. Explain viewpoint-oriented approach.
10. Explain ‘Interviews’, ‘Scenarios’ and ‘Use cases’ in detail with appropriate
examples.
13. Differentiate between Enduring and Volatile requirements. Also explain the
different types of volatile requirements.
Chapter – IV
System Models
• Context models
• Behavioural models
• Data models
• Object models
• Structured methods
System Modeling
System modeling helps the analyst to understand the functionality of the system and
models are used to communicate with customers. Different models present the system
from different perspectives: an external perspective showing the system's context or
environment, a behavioural perspective showing the behaviour of the system, and a
structural perspective showing the system or data architecture.
Model Types
• Data flow models show how the data is processed at different stages. (DFD)
• Composition models show how entities are composed of other entities. (ER
diagram)
• Architectural models show principal sub-systems. (Context diagram )
• Classification models show how entities have common characteristics. (Class
and inheritance diagrams)
• Stimulus/response models show the system's reaction to events. (State machine
models)
Context Models
Context models are used to illustrate the operational context of a system - they show what
lies outside the system boundaries. Social and organisational factors may affect the
decision for system boundaries. Architectural models show the system and its
relationship with other systems. Process models show the overall process and the
processes that are supported by the system. Data flow models may be used to show the
processes and the flow of information from one process to another.
Architectural models describe the environment of a system. However, they do not show
the relationships between the other systems in the environment and the system that is
being specified. External systems might produce data for or consume data from the
system. They might share data with the system, or they might be connected directly,
through a network or not at all.
Figure: The context of an ATM system. The auto-teller system is connected to a security
system, a branch accounting system, an account database, a branch counter system, a
usage database and a maintenance system.
Figure: Process model of equipment procurement. Activities include specifying the
equipment required, validating the specification, getting cost estimates, finding and
choosing a supplier (using the supplier database), placing the equipment order, accepting
and checking the delivered items, installing the equipment and recording equipment
details in the equipment database.
The above figure illustrates a process model for the process of procuring equipment in an
organization. This involves specifying the equipment required, finding and choosing
suppliers, ordering the equipment, taking delivery of the equipment and testing it after
delivery. When specifying computer support for this process, you have to decide which
of these activities will actually be supported. The other activities are outside the
boundary of the system. The dotted line encloses the activities that are within the system
boundary.
Behavioural Models
Behavioural models are used to describe the overall behaviour of a system. Two types of
behavioural model are:
Data processing models that show how data is processed as it moves through the system
State machine models that show the system's response to events
These models show different perspectives so both of them are required to describe the
system’s behaviour.
Data flow diagrams (DFDs) may be used to model the system’s data processing. These
show the processing steps as data flows through a system. DFDs are an intrinsic part of
many analysis methods.
Figure: Data flow diagram of order processing. Order details and a blank order form are
used to complete the order form; the completed form is signed, validated and recorded in
the orders file; the checked and signed order form plus the order notification are sent to
the supplier; and the order amount and account details are used to adjust the available
budget in the budget file.
DFDs model the system from a functional perspective. Tracking and documenting how
the data associated with a process flows through the system is helpful to develop an
overall understanding of the
system. Data flow diagrams may also be used in showing the data exchange between a
system and other systems in its environment.
Figure: Data flow diagram of the insulin pump, where the insulin requirement
computation produces pump control commands for the insulin delivery controller, which
drives the insulin pump to deliver insulin.
State Machine Models
These model the behaviour of the system in response to external and internal events.
They show the system's responses to stimuli and are thus often used for modelling
real-time systems.
State machine models show system states as nodes and events as arcs between these
nodes. When an event occurs, the system moves from one state to another.
State charts (introduced by Harel) are an integral part of the UML and are used to
represent state machine models. State charts allow the decomposition of a model into
sub-models.
A brief description of the actions is included following the ‘do’ in each state. They can
be complemented by tables describing the states and the stimuli.
Figure: State machine model of a simple microwave oven. States include Waiting (do:
display time), Full power (do: set power = 600), Half power (do: set power = 300), Set
time (do: get number, exit: set time), Enabled (do: display 'Ready'), Disabled (do: display
'Waiting') and Operation (do: operate oven); transitions are triggered by the Full power,
Half power, Timer, Number, Door open, Door closed, Start and Cancel events.
The different states of the above state machine model are as follows:
• Waiting: Oven waiting for input, displays current time
• Half power: Power is set to 300 watts, displays 'Half power'
• Full power: Power is set to 600 watts, displays 'Full power'
• Set time: Cooking time is set from the user's input
• Enabled: Door is closed, displays 'Ready'
• Disabled: Door is open, displays 'Waiting'
• Operation: Oven in operation
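The microwave-oven state machine described above can be sketched as a transition table. The table below covers only an illustrative subset of the transitions, and the event names are informal renderings of the labels in the figure.

```python
# Minimal sketch of the microwave-oven state machine: a partial
# (state, event) -> next-state transition table.
TRANSITIONS = {
    ("Waiting", "full_power"): "Full power",
    ("Waiting", "half_power"): "Half power",
    ("Full power", "timer"): "Set time",
    ("Half power", "timer"): "Set time",
    ("Set time", "door_open"): "Disabled",
    ("Set time", "door_closed"): "Enabled",
    ("Enabled", "start"): "Operation",
    ("Operation", "cancel"): "Waiting",
}

def run(events, state="Waiting"):
    """Apply a sequence of events; events with no transition are ignored."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["full_power", "timer", "door_closed", "start"]))
```

Representing the model as data makes the state-explosion problem discussed below concrete: every new state multiplies the number of table entries to consider.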
The problem with the above state machine model is that the number of possible states
increases rapidly. For large system models, therefore, some structuring of these state
models is necessary. One way to do this is by using the notion of a superstate that
encapsulates a number of separate states. This superstate looks like a single state on a
high-level model but is then expanded in more detail on a separate diagram. For
example, the "Operation" state in the above diagram is a superstate that can be expanded
as shown below; it includes a number of sub-states.
Figure: The "Operation" superstate expanded. Its sub-states are Checking (do: check
status), Cook (do: run generator), Done (do: buzzer on for 5 secs) and Alarm (do: display
event); cooking starts when the check is OK, Done is entered when the time expires,
opening the door leads to Disabled and Cancel returns to Waiting.
Data Models
These are used to describe the logical structure of data required by the system. An
entity-relation-attribute model sets out the entities in the system, the relationships
between these entities and the entity attributes. It is widely used in database design and
can readily be
implemented using relational databases. No specific notation is provided in the UML but
objects and associations can be used.
[Figure: Semantic data model for the LIBSYS system. Entities and attributes include Article (title, authors, pdf file, fee), Source (title, publisher, issue, date, pages), Order (order number, total payment, date, tax status), Copyright Agency (name, address), Country (copyright form, tax rate) and Buyer (name, address, e-mail, billing info); relationships include published-in, fee-payable-to, delivers, places, has-links and in.]
The above diagram is an example of a data model that is part of the library system
LIBSYS. It shows that an Article has attributes representing the title, the authors, the
name of the PDF file of the article and the fee payable. This is linked to the Source,
where the article was published, and to the Copyright Agency for the country of
publication. Both Copyright Agency and Source are linked to Country. The diagram
also shows that Buyers place Orders for Articles.
Data Dictionaries
Data dictionaries are lists of all of the names used in the system models. Descriptions of
the entities, relationships and attributes are also included.
• Advantages
– Support name management and avoid duplication
– Store of organisational knowledge linking analysis, design and
implementation
• Many CASE workbenches support data dictionaries.
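A data dictionary can be sketched as a simple name-to-entry table. The entry fields below (kind, description, recorded date) are assumed for illustration, not a fixed standard:

```python
from datetime import date

# Hypothetical data dictionary: maps each name used in the system
# models to a descriptive entry.
data_dictionary = {}

def define(name, kind, description):
    # Rejecting duplicate names is the name-management role of the dictionary.
    if name in data_dictionary:
        raise ValueError(f"duplicate name: {name}")
    data_dictionary[name] = {
        "kind": kind,                 # entity, relationship or attribute
        "description": description,
        "recorded": date.today(),     # assumed bookkeeping field
    }

define("Article", "entity", "A published article that may be ordered")
define("fee", "attribute", "Fee payable for a copy of an Article")
print(sorted(data_dictionary))  # -> ['Article', 'fee']
```

The duplicate check is what lets the dictionary support name management across analysis, design and implementation.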
Object Models
Object models describe the system in terms of object classes and their associations. An
object class is an abstraction over a set of objects with common attributes and the
services (operations) provided by each object. Various object models may be produced
– Object class models
– Inheritance models
– Aggregation models
– Behaviour models (interaction models)
An object class is represented by a rectangle with the class name at the top, attributes in
the middle section and operations in the bottom section. Object class identification is
recognized as a difficult process requiring a deep understanding of the application
domain. Object classes reflecting domain entities are reusable across systems.
[Figure: The Employee object class, with attributes name, address, dateOfBirth, employeeNo, socialSecurityNo, department, manager, salary, status {current, left, retired} and taxCode, and operations join(), leave(), retire() and changeDetails().]
Inheritance Models
In this model, the domain object classes are organized into an inheritance hierarchy.
Classes at the top of the hierarchy reflect the common features of all classes. Object
classes inherit their attributes and services from one or more super-classes; these may
further be specialized as necessary. Class hierarchy design can be a difficult process if
duplication in different branches is to be avoided.
[Figure: A class hierarchy of library users. A library user class (with operations Register() and De-register()) is specialized into Reader (Affiliation) and Borrower (Items on loan, Max. loans); Borrower is further specialized into Staff (Department, Department phone) and Student (Major subject, Home address).]
The above two figures show class inheritance hierarchies where every object class
inherits its attributes and operations from a single parent class. Multiple inheritance
models may also be constructed where a class has several parents. Its inherited attributes
and services are a conjunction of those inherited from each super-class. Figure below
shows an example of a multiple inheritance model that may also be part of the library
model.
[Figure: Multiple inheritance: a Talking book class (# Tapes) inherits from both Book (Author, Edition, Publication date, ISBN) and Voice recording (Speaker, Duration, Recording date).]
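A multiple-inheritance model like this maps directly onto a language that supports it. Here is a hedged Python sketch using the attribute names from the figure; the constructor signatures and sample values are assumptions:

```python
class Book:
    def __init__(self, author, edition, publication_date, isbn):
        self.author = author
        self.edition = edition
        self.publication_date = publication_date
        self.isbn = isbn

class VoiceRecording:
    def __init__(self, speaker, duration, recording_date):
        self.speaker = speaker
        self.duration = duration
        self.recording_date = recording_date

class TalkingBook(Book, VoiceRecording):
    """Inherits attributes and services from both parent classes."""
    def __init__(self, author, edition, publication_date, isbn,
                 speaker, duration, recording_date, num_tapes):
        Book.__init__(self, author, edition, publication_date, isbn)
        VoiceRecording.__init__(self, speaker, duration, recording_date)
        self.num_tapes = num_tapes  # the '# Tapes' attribute in the figure

tb = TalkingBook("A. Author", 1, "2004", "000-0",
                 "A. Speaker", "6h", "2005", 4)
print(tb.author, tb.speaker, tb.num_tapes)  # -> A. Author A. Speaker 4
```

The inherited attributes of TalkingBook are the conjunction of those of Book and VoiceRecording, exactly as the text describes.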
Object Aggregation
An aggregation model shows how classes (collections) are composed of other classes.
Aggregation models represent part-of relationships.
[Figure: An aggregation model of a Study pack class (Course title, Number, Year, Instructor) composed of other classes.]
Behavioural Models
A behavioural model shows the interactions between objects that produce some particular
system behaviour specified as a use-case. Sequence diagrams (or collaboration
diagrams) in the UML are used to model interaction between objects.
[Figure: A sequence diagram of electronic item issue. The objects Ecat: Catalog, Lib1: NetServer and :Library Item and the actor :Library User exchange the messages Lookup, Display, Issue, Issue licence, Accept licence, Compress and Deliver, in that order from top to bottom.]
In a sequence diagram, objects and actors are aligned along the top of the diagram.
Labeled arrows indicate operations; the sequence of operations is from top to bottom. In
this scenario, the library user accesses the catalogue to see whether the item required is
available electronically; if it is, the user requests the electronic issue of that item. For
copyright reasons, this must be licensed so there is a transaction between the item and the
user where the license is agreed. The item to be issued is then sent to a network server
object for compression before being sent to the library user.
Structured Methods
CASE Workbenches
A CASE workbench is a coherent set of tools designed to support related software process
activities such as analysis, design or testing. Analysis and design workbenches support
system modelling during both requirements engineering and system design. These
workbenches may support a specific design method or may provide support for creating
several different types of system model.
Project Management
• Management activities
• Project planning
• Project scheduling
• Risk management
Management Activities
Proposal writing: The proposal describes the objective of project and how it will be
carried out. It usually includes cost and schedule estimates, and justifies why the project
contract should be awarded to a particular organization or team. It is a high-skill job done
by experienced staff.
Project planning: This is concerned with identifying the activities, milestones and
deliverables produced by a project. A plan is drawn up to guide the development towards
the project goals.
Project Monitoring: The manager must keep track of the progress of the project and
compare actual and planned progress and costs. A skilled manager can form a clear
picture of what is going on through informal discussions with project staff.
Report writing and presentation: Periodical reports on the status of the project are
prepared. Project manager should have necessary skills and ability to present reports.
Project Staffing: It may not be possible to appoint the ideal people to work on a project.
Project budget may not allow for the use of highly-paid staff. Staff with the appropriate
experience may not be available. An organization may wish to develop employee skills
on a software project, thus inexperienced staff may be assigned to a project to learn.
Managers have to work within these constraints especially when there are shortages of
trained staff.
Project Planning
Types of Plan
At the beginning of a planning process, you should assess the constraints affecting the
project. In connection with this, we should estimate project parameters such as its
structure, size, and distribution of functions. Next we define the progress milestones and
deliverables. The process then enters a loop. We draw up an estimated schedule for the
project and the activities defined in the schedule are started or given permission to
continue. After some time, usually about two to three weeks, we should review progress
and note discrepancies from the planned schedule. Because initial estimates of project
parameters are tentative, we will always have to modify the original plan.
The project plan sets out the resources available to the project, the work breakdown and a
schedule for carrying out the work. The details of the project plan vary depending on the
type of project and organization. Most plans include the following sections:
• Introduction:
– Objectives and constraints of project
• Project organization:
– Team, hierarchy and roles of members
• Risk analysis:
– Types of risks, Strategies to solve
• Hardware and software resources requirements:
– Hardware, software required with cost estimates
• Work breakdowns structure:
– Number of activities, milestones and deliverables
• Project schedule:
– Activities on time scale, sequence and interdependencies among various
activities
• Monitoring and reporting:
– Status of project at various stages to check progress and quality
Activity Organization
Project Scheduling
This activity splits the project into tasks. It estimates time and resources required to
complete each task. It organizes tasks concurrently to make optimal use of workforce. It
minimizes task dependencies to avoid delays caused by one task waiting for another to
complete. Project scheduling is dependent on project manager’s intuition and experience.
Scheduling Problems
• Estimating the difficulty of problems and hence the cost of developing a solution
is hard.
• Productivity is not proportional to the number of people working on a task.
• Adding people to a late project makes it later because of communication
overheads.
• Unexpected events always happen, so always allow contingency in planning.
Graphical notations are used to illustrate the project schedule, showing the breakdown of
the project into tasks. Tasks should not be too small; they should take about a week or
two. Activity charts show task dependencies and the critical path. Bar charts show the
schedule against calendar time.
The table below shows activities, their duration, and activity interdependencies. We can
see that Activity T3 is dependent on Activity T1. This means that T1 must be completed
before T3 starts. Given the dependencies and estimated duration of activities, an activity
chart that shows activity sequences may be generated. This shows which activities can
be carried out in parallel and which must be executed in sequence because of a
dependency on an earlier activity. Activities are represented as rectangles; milestones
and project deliverables are shown with rounded corners. Dates show the start date of the
activity. The chart should be read from left to right and from top to bottom.
[Figure: An activity network for the activities in the table.]
The longest path through the network is the project duration and is known as the critical
path. Activities on this path are critical activities, and the overall schedule depends on the
critical path: any delay to a critical activity delays the entire project.
[Example: in the activity network shown, the critical path is 55 working days, or 11 weeks.]
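The critical-path idea can be sketched as a longest-path computation over the dependency graph. The durations and dependencies below are illustrative values, not those of the example table:

```python
from functools import lru_cache

# Hypothetical activity network: duration in days plus a list of
# predecessor activities (illustrative values only).
activities = {
    "T1": (8, []),
    "T2": (15, []),
    "T3": (15, ["T1"]),
    "T4": (10, []),
    "T5": (10, ["T2", "T4"]),
    "T6": (5, ["T1", "T2"]),
}

@lru_cache(maxsize=None)
def finish_time(task):
    # Earliest finish = own duration + latest finish among predecessors.
    duration, preds = activities[task]
    return duration + max((finish_time(p) for p in preds), default=0)

# The project duration is the length of the longest (critical) path.
project_duration = max(finish_time(t) for t in activities)
print(project_duration)  # -> 25 (critical path T2 -> T5)
```

Delaying any activity on the longest path increases `project_duration`, which is exactly what "any delay to a critical activity delays the entire project" means.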
Staff Allocation
Activities should be assigned to staff. Activities that take place at the same time cannot be
allocated to the same person. Staff may not work full-time on a single project: they may
work on other projects or take leave. Other factors are the expertise and seniority of the
staff.
Risk Management
Risk management is concerned with identifying risks and drawing up plans to minimise
their effect on a project. A risk is a probability that some adverse circumstance will
occur. There are three related categories of risk:
– Project risks affect schedule or resources
– Product risks affect the quality or performance of the software being
developed
– Business risks affect the organisation developing or procuring the
software
[Figure: The risk management process: risk identification produces a list of potential risks; risk analysis produces a prioritised risk list; risk planning produces risk avoidance and contingency plans; risk monitoring produces a risk assessment.]
Risk Identification
This is the first stage of risk management. It is concerned with discovering possible risks
to the project. Risk identification may be carried out as a team process using a
brainstorming approach or may simply be based on experience. To help the process, a
checklist of different types of risk may be used. There are at least six types of risk that
can arise:
• Personnel risks
– High staff turnover
– Skilled staff not available
– Inadequate training
• Organizational risks
– Change in management
– Financial condition of organization
– Restructuring of organization
• Technology risks
– Advanced technology not available
• Tool risks
– Code generated by CASE tools is inefficient
– CASE tools cannot be integrated
– Other support software is inadequate
• Requirements risks
– Customer requirements changed
– Change in customer business
– Process of managing requirements is not perfect
• Estimation risks
– Size of software underestimated
– Cost of repairing the defects underestimated
– Duration of development underestimated
Some examples of possible risks in each of these categories are listed above.
Risk Analysis
During the risk analysis process, the probability and seriousness of each risk are assessed.
The probability may be very low, low, moderate, high or very high. Risk effects might be
catastrophic, serious, tolerable or insignificant. The following steps are carried out:
• Evaluate each identified risk
• Identify the most important risk areas
• Quantify and prioritize the risks
Once the risks have been analyzed and ranked, we should assess which are most
significant. The judgement must depend on a combination of the probability of the risk
arising and the effects of that risk. In general, catastrophic risks should always be
considered, as should all serious risks that have more than a moderate probability of
occurrence.
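Ranking risks by a combination of probability and effect can be sketched with assumed numeric weights for the qualitative levels. The scales below are assumptions for illustration, not part of the method as stated:

```python
# Assumed numeric scales for the qualitative levels in the text.
PROBABILITY = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}
EFFECT = {"insignificant": 1, "tolerable": 2, "serious": 3, "catastrophic": 4}

# Hypothetical analysed risks: (name, probability, effect).
risks = [
    ("High staff turnover", "high", "serious"),
    ("CASE tools inefficient", "moderate", "tolerable"),
    ("Size underestimated", "high", "catastrophic"),
]

# Rank by probability x effect, most significant first.
ranked = sorted(risks, key=lambda r: PROBABILITY[r[1]] * EFFECT[r[2]],
                reverse=True)
for name, p, e in ranked:
    print(name, PROBABILITY[p] * EFFECT[e])
```

Under these weights, catastrophic risks with moderate-or-higher probability naturally rise to the top of the list, matching the guidance in the text.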
Risk Planning
This process considers each of the key risks that have been identified and identifies
strategies to manage the risk. These strategies fall into three categories:
• Avoidance strategies
– The probability that the risk will arise is reduced, e.g. by employing more
staff or giving staff extra training
• Minimisation strategies
– The impact of the risk on the project or product will be reduced
• Contingency plans
– If the risk arises, contingency plans are plans to deal with that risk
We see here the analogy with the strategies used in critical systems to ensure reliability,
security and safety. Essentially, it is best to use a strategy that avoids the risk. If this is
not possible, we use one that reduces the chances that the risk will have serious effects.
Risk Monitoring
Risk Monitoring involves regularly assessing each of the identified risks to decide
whether or not that risk is becoming more or less probable and whether the effects of the
risk have changed. Each key risk should also be discussed at regular management
progress meetings.
Review Questions
4. Explain bar charts and activity networks with respect to project scheduling, with
appropriate examples.
7. Draw a data flow diagram for a library system considering registration, enquiry of
books issued and issue of books.
10. Describe inheritance models and object aggregation with suitable examples.
Chapter V
Software Design
Architectural Design
Software Architecture
The design process for identifying the sub-systems making up a system and the
framework for sub-system control and communication is architectural design. The output
of this design process is a description of the software architecture. Architectural design is
thus the process of designing the global organization of a software system: dividing the
software into subsystems, deciding how these will interact and determining their
interfaces.
Architectural Design
It is an early stage of the system design process. It represents the link between
specification and design processes. It is often carried out in parallel with some
specification activities. It involves identifying major system components and their
communications. The advantages of designing and documenting a software architecture
are:
• Stakeholder communication
– Architecture may be used as a focus of discussion by system stakeholders.
• System analysis
– Means that analysis of whether the system can meet its non-functional
requirements is possible.
• Large-scale reuse
– The architecture may be reusable across a range of systems.
Security: A layered structure for the architecture should be used, with the most critical
assets protected in the innermost layers and with a high level of security validation
applied to these layers.
Safety: The architecture should be designed so that safety-related operations are all
located in either a single sub-system or in a small number of sub-systems. This reduces
the costs and problems of safety validation and makes it possible to provide related
protection systems.
System Structuring
Systems are often documented with a block diagram in which each box represents a
sub-system. Boxes within boxes indicate that the sub-system has itself been decomposed
into sub-systems. Arrows mean that data and/or control signals are passed from sub-system
to sub-system in the direction of the arrows. One example is an abstract block model of the
architecture for a packing robot system.
Architectural design is a creative process so the process differs depending on the type of
system being developed. However, a number of common decisions span all design
processes such as the following:
• Similarity of the system being developed with any generic application
architecture.
• Distribution of the system across a number of processors.
• Appropriate architectural style for the system.
• Approach to structure the system.
• Decomposition of the system into modules.
• Control strategy of the units in the system.
• Method of evaluation of architectural design.
• Documentation of the system architecture.
Architectural Styles
Various architectural models may be developed during the design process:
• Static structural model that shows the major system components.
• Dynamic process model that shows organization of the processes in the system.
• Interface model that defines sub-system interfaces.
• Relationships model such as a data-flow model that shows sub-system
relationships.
• Distribution model that shows how sub-systems are distributed across
computers.
System Organization
The organization of a system reflects the basic strategy that is used to structure a system.
The system organization may be directly reflected in the sub-system structure. Three
organizational styles are widely used:
– A shared data repository style
– A shared services and servers style
– An abstract machine or layered style
Sub-systems making up a system must exchange information so that they can work
together effectively. There are two fundamental ways in which this can be done:
– Shared data is held in a central database or repository and may be accessed
by all sub-systems
– Each sub-system maintains its own database and passes data explicitly to
other sub-systems
The majority of systems that use large amounts of data are organized around a shared
database or repository. This model is therefore suited to applications where data is
generated by one sub-system and used by another. An example of CASE toolset
architecture is shown below.
[Figure: The architecture of a CASE toolset organized around a shared repository, with tools such as a design editor, code generator, design analyser and report generator.]
The client-server model is a system model where the system is organized as a set of
services and associated servers and clients that access and use the services. The major
components of this model are:
• The set of servers that offer services to other sub-systems. Examples of servers
are print servers that offer printing services, file servers that offer file
management services and a compile server, which offers programming language
compilation services.
• A set of clients that call on the services offered by servers. These are normally
sub-systems in their own right. There may be several instances of a client
program executing concurrently.
• A network that allows the clients to access these services. This is not strictly
necessary as both the clients and the servers could run on a single machine.
The below figure shows an example of a system that is based on the client-server
model.
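The client-server organization can be sketched with a toy 'print server' offering one service over a local socket. The service name, message format and use of an ephemeral port are assumptions for illustration:

```python
import socket
import threading

# A minimal 'print server' offering one service over a TCP socket.
def print_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        job = conn.recv(1024).decode()
        conn.sendall(f"printed: {job}".encode())  # the offered service

# Port 0 asks the OS for a free port; the server runs in its own thread.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=print_server, args=(server,), daemon=True).start()

# A client calls on the service across the network (here, localhost).
client = socket.create_connection(server.getsockname())
client.sendall(b"report.pdf")
reply = client.recv(1024).decode()
client.close()
print(reply)  # -> printed: report.pdf
```

As the text notes, the network is not strictly necessary: here client and server run on the same machine, but the same code works across machines.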
A disadvantage of the layered approach is that structuring systems in this way can be
difficult. Inner layers may provide basic facilities, such as file management, that are
required at all levels. Services required by a user of the top level may therefore have to
punch through adjacent layers to get access to services that are provided several layers
beneath it. This subverts the model, as the outer layer in the system does not just depend
on its immediate predecessor.
There are two main strategies that can be used when decomposing a sub-system into
modules:
• Object-oriented decomposition:
– A system is decomposed into interacting objects.
• Function-oriented pipelining:
– A system is decomposed into functional modules that transform a stream
of inputs into outputs.
Object-Oriented Decomposition
System is structured into a set of loosely coupled objects with well-defined interfaces.
Object-oriented decomposition is concerned with identifying object classes, their
attributes and operations. When implemented, objects are created from these classes and
some control model used to coordinate object operations. Figure below is an example of
an object-oriented architectural model of an invoice processing system.
[Figure: An object model of an invoice processing system. Classes include Customer (customer#, name, address, credit period), Invoice (invoice#, date, amount, customer; operations issue(), sendReminder(), acceptPayment(), sendReceipt()), Payment (invoice#, date, amount, customer#) and Receipt (invoice#, date, amount, customer#).]
Advantages
• Objects are loosely coupled so their implementation can be modified without
affecting other objects.
• It is easily understandable because objects reflect real-world entities.
• OO implementation languages are widely used.
• Objects can be reused
Disadvantages
• Objects must explicitly refer the names and interfaces of other objects offering
services.
Function-oriented pipelining
In this model, the system is decomposed into functional transformations that process their
inputs and produce outputs; data flows from one transformation to the next in a pipeline.
[Figure: A pipeline model of an invoice processing system: 'Read issued invoices' feeds 'Identify payments' and 'Find payments due'; receipts are issued for paid invoices and payment reminders for overdue ones, drawing on the Invoices and Payments data stores.]
Advantages:
• It supports transformation reuse.
• It shows intuitive organization for stakeholder communication.
• Easy to add new transformations.
• Relatively simple to implement as either a concurrent or sequential system.
Disadvantages
• It requires a common format for data transfer along the pipeline and difficult to
support event-based interaction.
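A pipeline can be sketched with chained generators, each acting as one transformation. The stage names follow the invoice example, but the data records and the matching rule are assumptions:

```python
# Hypothetical input data: issued invoices and recorded payments.
invoices = [{"invoice": 1, "amount": 100}, {"invoice": 2, "amount": 250}]
payments = {1: 100}  # invoice number -> amount paid (assumed)

def read_issued_invoices(source):
    # First stage: produce the stream of issued invoices.
    yield from source

def identify_payments(stream):
    # Second stage: attach the payment received for each invoice.
    for inv in stream:
        inv["paid"] = payments.get(inv["invoice"], 0)
        yield inv

def find_payments_due(stream):
    # Third stage: pass on only invoices that are not fully paid.
    for inv in stream:
        if inv["paid"] < inv["amount"]:
            yield inv  # a reminder would be issued downstream

pipeline = find_payments_due(identify_payments(read_issued_invoices(invoices)))
reminders = [inv["invoice"] for inv in pipeline]
print(reminders)  # -> [2]
```

Each stage consumes and produces the same record format, which illustrates the common-format requirement noted as a disadvantage of this style.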
Control Styles
The models for structuring a system are concerned with how a system is decomposed into
sub-systems. To work as a system, sub-systems must be controlled so that their services
are delivered to the right place at the right time. The sub-systems should be organized
according to some control model that supplements the structure model that is used. There
are two generic control styles that are used:
• Centralised control
– One sub-system has overall responsibility for control and starts and stops
other sub-systems.
• Event-driven control
– Each sub-system can respond to externally generated events from other
sub-systems or the system’s environment.
Centralized Control
In a centralized control model, one sub-system is designated as the system controller and
has responsibility for managing the execution of other sub-systems. Centralized control
models fall into two classes, depending on whether the controlled sub-systems execute
sequentially or in parallel:
1. Call-return model
2. Manager model
Call-return Model
In this model, the control starts at the top of a subroutine hierarchy and, through
subroutine calls, passes to lower levels in the tree. The subroutine model is only
applicable to sequential systems. This model is shown below.
[Figure: A call-return model of control, with a main program at the top of a subroutine hierarchy.]
The main program can call routines 1, 2 and 3; routine 1 can call routines 1.1 or 1.2;
routine 3 can call routines 3.1 or 3.2; and so on.
The rigid and restricted nature of this model is both a strength and a weakness. It is a
strength because it is relatively simple to analyze control flows and work out how the
system will respond to particular inputs. It is a weakness because exceptions to normal
operation are awkward to handle.
The manager model is applicable to concurrent systems: one system process is designated
as a system controller and co-ordinates the starting, stopping and scheduling of other
system processes.
[Figure: A manager model of control, in which a system controller manages sensor processes, actuator processes and other sub-systems.]
Event-driven Systems
Event driven control models are driven by externally generated events. Event may be a
signal that can take a range of values or a command input from a menu. Two event-
driven control models:
• Broadcast models
• Interrupt-driven models
Broadcast Model
An event is broadcast to all sub-systems. Any sub-system that can handle the event may
respond to it. Broadcasting every event to every sub-system incurs a large processing
overhead.
Selective broadcast
The event and message handler maintains a register of sub-systems and the events of
interest to them. Sub-systems register an interest in specific events. When these events
occur, control is transferred to the sub-system which can handle the event. Control policy
is not embedded in the event and message handler. Sub-systems decide on events of
interest to them. An illustration is shown below.
Advantages:
• It is effective in integrating sub-systems on different computers in a network.
• Evolution is easy; any subsystem can be integrated to handle any activity.
Disadvantages:
• Different subsystems may register for similar events which may cause conflict at
the time of event generation
• Sub-systems don’t know if or when an event will be handled.
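Selective broadcast can be sketched with a registry that maps event names to the sub-systems that registered for them. The event names and handlers below are hypothetical:

```python
from collections import defaultdict

class EventHandler:
    """Sketch of a selective-broadcast event and message handler."""
    def __init__(self):
        self.registry = defaultdict(list)  # event name -> sub-systems

    def register(self, event, subsystem):
        # A sub-system declares an interest in a specific event.
        self.registry[event].append(subsystem)

    def broadcast(self, event, data=None):
        # Control is transferred only to sub-systems registered for
        # this event; unregistered events are simply ignored.
        for subsystem in self.registry[event]:
            subsystem(event, data)

log = []
handler = EventHandler()
handler.register("sensor-update", lambda e, d: log.append(("display", d)))
handler.register("shutdown", lambda e, d: log.append(("controller", e)))

handler.broadcast("sensor-update", 21.5)
handler.broadcast("unknown-event")  # nobody registered: nothing happens
print(log)  # -> [('display', 21.5)]
```

Note that the control policy lives in the sub-systems' registrations, not in the handler itself, exactly as the text describes.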
Interrupt-driven models
Interrupt types are identified with their respective handler. Each interrupt type is
associated with a memory location and a hardware switch causes transfer to its handler.
At the occurrence of an interrupt, control is transferred to its handler. The handler takes
control of the processor to respond to the signal received from the interrupt vector. An
illustration is as shown below.
Advantages
• These are useful in real-time systems where fast response to an event is essential.
Disadvantages
• It is complex to program and difficult to validate
• Number of interrupts to be handled is limited by hardware
• Addition of new interrupt is difficult.
Reference Architectures
Architectural models may be specific to some application domain. These are domain-
specific models. There are two types of domain-specific architectural models:
Generic models: These are abstractions from a number of real systems and which
encapsulate the principal characteristics of these systems. Design of these models can be
directly reused.
Reference models: These are more abstract, idealised models. They provide a means of
communicating information about that class of system and of comparing different
architectures.
Reference models are derived from a study of the application domain rather than from
existing systems. These may be used as a basis for system implementation or to compare
different systems. These act as a standard against which systems can be evaluated. OSI
model is a layered model for communication systems which is shown below.
[Figure: The seven-layer OSI reference model for open systems interconnection: application, presentation, session, transport, network, data link and physical layers.]
Another proposed reference model is a reference model for CASE environments. The
five levels of service in the CASE reference model are:
Data repository services: These provide facilities for the storage and management of data
items and their relationships.
Data integration services: These provide facilities for managing groups or the
establishment of relationships between them. These services and data repository services
are the basis of data integration in the environment.
Task management services: These provide facilities for the definition and enactment of
process models. They support process integration.
Messaging services: These provide facilities for tool-tool and tool-environment
communication. They support control integration.
User interface services: These provide facilities for user interface development. They
support presentation integration.
Object-Oriented Design
Characteristics of OOD: objects are abstractions of real-world or system entities that
manage themselves; they are independent and encapsulate state and representation
information; system functionality is expressed in terms of object services; shared data
areas are eliminated, with objects communicating by message passing; and objects may be
distributed and may execute sequentially or in parallel.
[Figure: A system of interacting objects o1..o6, instances of classes C1, C3, C4 and C5, each with its own state and operations.]
Disadvantages
• All domains may not be suited to OOD
• The original SRS is generally written in a function-oriented style
• Before OOD, the SRS should be translated into an object-oriented analysis (OOA)
An object is an entity that has a state and a defined set of operations which operate on
that state. The state is represented as a set of object attributes. The operations associated
with the object provide services to other objects (clients) which request these services
when some computation is required. Objects are created according to object class
definition. An object class definition serves as a type specification and as a template for
objects. It includes declarations of all the attributes and services which should be
associated with an object of that class. Object classes may inherit attributes and services
from other object classes.
Several different notations for describing object-oriented designs were proposed in the
1980s and 1990s. The Unified Modelling Language is an integration of these notations.
It describes notations for a number of different models that may be produced during OO
analysis and design. Now it is a standard for OO modelling.
[Figure: The Employee object class in UML notation, with attributes name, address, dateOfBirth, employeeNo, socialSecurityNo, department, manager, salary, status {current, left, retired} and taxCode, and operations join(), leave(), retire() and changeDetails().]
The class Employee defines a number of attributes that hold information about employees
including their name and address, social security number, tax code and so on.
Generalisation and Inheritance
Objects are members of classes that define attribute types and operations. Classes may
be arranged in a class hierarchy where one class (a super-class) is a generalisation of one
or more other classes (sub-classes). A sub-class inherits the attributes and operations
from its super class and may add new methods or attributes of its own. Generalisation in
the UML is implemented as inheritance in OO programming languages. A generalization
hierarchy is shown as below.
[Figure: A generalisation hierarchy in which Manager (budgetsControlled, dateAppointed) and Programmer (project, progLanguage) are sub-classes of Employee.]
The above figure shows an example of an object class hierarchy where different classes
of employee are shown. Classes lower down the hierarchy have the same attributes and
operations as their parent classes but may add new attributes and operations, or modify
some of those from their parent classes. This means that there is one-way interchangeability.
If the name of a parent class is used in a model, the object in the system may either be
defined as of that class or of any of its descendants.
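The generalisation hierarchy above can be sketched directly in an OO language. Attribute names follow the figure; types, defaults and sample values are assumptions:

```python
class Employee:
    def __init__(self, name):
        self.name = name

    def change_details(self, name):
        self.name = name

class Manager(Employee):
    def __init__(self, name, date_appointed):
        super().__init__(name)           # inherited attributes
        self.date_appointed = date_appointed
        self.budgets_controlled = []     # attributes added by the sub-class

class Programmer(Employee):
    def __init__(self, name, project, prog_language):
        super().__init__(name)
        self.project = project
        self.prog_language = prog_language

# One-way interchangeability: a Manager or Programmer can be used
# wherever an Employee is expected.
staff: list[Employee] = [Manager("Ali", "2004-05-01"),
                         Programmer("Ben", "LIBSYS", "Java")]
print([type(s).__name__ for s in staff])  # -> ['Manager', 'Programmer']
```

Both sub-classes inherit `name` and `change_details()` without redefining them, which is what the hierarchy in the figure expresses.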
Advantages of Inheritance
• It is an abstraction mechanism that may be used to classify entities.
• It is a reuse mechanism at both the design and the programming level.
Disadvantages:
• Object classes are not self-contained: they cannot be understood without reference
to their super-classes.
• Designers have a tendency to reuse the inheritance graph created during analysis;
this can lead to significant inefficiency.
• The inheritance graphs of analysis, design and implementation have different
functions and should be separately maintained.
UML Associations
Objects and object classes participate in relationships with other objects and object
classes. In the UML, a relationship is indicated by an association. Associations may be
annotated with information that describes the association. Associations are general but
may indicate that an attribute of an object is an associated object or that a method relies
on an associated object. An association model is shown below.
Concurrent Objects
The nature of objects as self-contained entities makes them suitable for concurrent
implementation. The message-passing model of object communication can be
implemented directly if objects are running on separate processors in a distributed
system. There are two kinds of concurrent object implementation:
1. Servers where the object is realised as a parallel process with methods
corresponding to the defined object operations. Methods start up in response to
an external message and may execute in parallel with methods associated with
other objects. When they have completed their operation, the object suspends
itself and waits for further requests for service.
2. Active objects where the state of the object may be changed by internal
operations executing within the object itself. The process representing the object
continually executes these operations so never suspends itself.
Servers are most useful in a distributed environment where the calling and the called
object may execute on different computers. The response time for the service that is
requested is unpredictable, so, wherever possible, we should design the system so that the
object that has requested a service does not have to wait for the service to be completed.
They can also be used in a single machine where a service takes some time to complete.
Active objects are used when an object needs to update its own state at specified
intervals. This is common in real-time systems where objects are associated with
hardware devices that collect information about the system’s environment. The object’s
methods allow other objects access to the state information.
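The active-object idea can be sketched in Python, where a thread inside the object continually updates its own state while other objects read it through a method. The class name and the counter-based "sampling" are illustrative assumptions, not from the text:

```python
import threading
import time

class ActiveThermometer:
    """Sketch of an active object: a thread inside the object
    continually updates its own state; other objects only read the
    state through a method. The counter stands in for hardware sampling."""

    def __init__(self, interval=0.01):
        self._lock = threading.Lock()
        self._reading = 0.0
        self._running = True
        self._interval = interval
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Internal operation executed continually by the object's own process.
        while self._running:
            with self._lock:
                self._reading += 1.0
            time.sleep(self._interval)

    def current(self):
        # Other objects access the state information via this method.
        with self._lock:
            return self._reading

    def stop(self):
        self._running = False
        self._thread.join()
```

A server object, by contrast, would suspend after each method call and wake only when the next request message arrives.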
The above figure shows the layers and the layer name is included in a UML package
symbol that has been denoted as a sub-system. A UML package represents a collection
of objects and other packages. It is used here to show that each layer includes a number
of other components.
[Figure: The weather mapping system shown as UML packages — a «subsystem» Data collection package (Observer, Satellite, Comms, Weather station, Balloon) and a «subsystem» Data display package (User interface, Map, Map display, Map printer), together with Data checking, Data integration and Data storage components (Map store, Data store).]
The first stage in any software design process is to develop an understanding of the
relationships between the software that is being designed and its external environments.
The system context and the model of system use represent two complementary models of
the relationships between a system and its environment:
• System context
– A static model that describes other systems in the environment.
– Use a subsystem model to show other systems.
• Models of system use
– A dynamic model that describes how the system interacts with its
environment.
– Use use-cases to show interactions
The context model of a system may be represented using associations where a simple
block diagram of the overall system architecture is produced. We then develop this by
deriving a sub-system model using UML packages as shown in figure above. This model
shows that the context of the weather station system is within a sub-system concerned
with data collection. It also shows other sub-systems that make up the weather mapping
system.
Use-case models are used to represent each interaction with the system. A use-case
model shows the system features as ellipses and the interacting entity as a stick figure.
[Figure: Use-case model of the weather station, showing the use-cases Startup, Shutdown, Report, Calibrate and Test as ellipses linked to the interacting external entity.]
The use-case model for the weather station is shown in the figure above. This shows that
the weather station interacts with external entities for startup and shutdown, for reporting
the weather data that has been collected, and for instrument testing and calibration.
Each of these use-cases can be described in structured natural language. This helps
designers identify objects in the system and gives them an understanding of what the
system is intended to do. The use-case description helps to identify objects and
operations in the system.
Architectural design
Once interactions between the system and its environment have been understood, you use
this information for designing the system architecture. A layered architecture is
appropriate for the weather station. The three layers in the weather station software are:
Interface layer: This is concerned with all communications with other parts of the
system and with providing the external interfaces of the system.
Data collection layer: This is concerned with managing the collection of data from the
instruments and with summarizing the weather data before transmission to the mapping
system.
Instruments layer: This is concerned with the encapsulation of all of the instruments
used to collect raw data about the weather conditions.
There should normally be no more than 7 entities in an architectural model. The weather
station architecture is shown below:
Object Identification
This process is actually concerned with identifying object classes. The design is
described in terms of these classes. Some of the proposals made about how to identify
object classes are as follows:
1. Use a grammatical analysis of a natural language description of a system.
Objects and attributes are nouns; operations or services are verbs.
2. Use tangible entities in the application domain such as aircraft, roles such as
manager, events such as request, interactions such as meetings, locations such as
offices, organizational units such as companies and so on.
3. Use a behavioral approach where the designer first understands the overall
behavior of the system. The various behaviors are assigned to different parts of
the system and an understanding is derived of who initiates and participates in
these behaviors. Participants who play significant roles are recognized as objects.
4. Use a scenario-based analysis where various scenarios of system use are
identified and analyzed in turn. As each scenario is analyzed, the team
responsible for analysis must identify the required objects, attributes and
operations. A method of analysis called CRC cards where analysis and designers
take on the role of objects is effective in supporting this scenario-based approach.
The object classes identified for the weather station include:
1. The WeatherStation object class provides the basic interface of the weather
station with its environment.
2. The WeatherData object class encapsulates the summarized data from the
instruments in the weather station. Its associated operations are concerned with
collecting and summarizing the data that is required.
3. The Ground thermometer, Anemometer and Barometer object classes are
directly related to instruments in the system. They reflect tangible hardware
entities in the system and the operations are concerned with controlling that
hardware.
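These classes can be given a rough Python sketch; attribute and method names beyond those stated in the text are assumptions for illustration:

```python
class WeatherData:
    """Encapsulates the summarised instrument data (sketch)."""

    def __init__(self):
        self.readings = []

    def collect(self, reading):
        self.readings.append(reading)

    def summarise(self):
        # A minimal summary: the mean of the collected readings.
        return sum(self.readings) / len(self.readings) if self.readings else None


class WeatherStation:
    """Provides the station's basic interface with its environment."""

    def __init__(self, identifier):
        self.identifier = identifier
        self.data = WeatherData()

    def report_weather(self):
        return (self.identifier, self.data.summarise())
```

Instrument classes such as Ground thermometer, Anemometer and Barometer would wrap the corresponding hardware in the same style.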
Design Models
Design models show the objects and object classes and relationships between these
entities. Static models describe the static structure of the system in terms of object
classes and relationships. Dynamic models describe the dynamic interactions between
objects. Some of the models are as follows:
• Sub-system models that show logical groupings of objects into coherent
subsystems. These are static models.
• Sequence models that show the sequence of object interactions. These are
dynamic models.
• State machine models that show how individual objects change their state in
response to events. These are dynamic models.
• Other models include use-case models, aggregation models, generalisation
models, etc.
Figure below shows the objects in the sub-systems in the weather station. Each object is
associated with one or more objects in this package. A package model plus an object
class model should describe the logical groupings in the system.
A sub-system model is a useful static model as it shows how the design may be organized
into logically related groups of objects. The UML packages are encapsulation
constructs and do not directly reflect entities in the system that is developed.
[Figure: Weather station packages — «subsystem» Interface (CommsController, WeatherStation), «subsystem» Data collection (WeatherData, Instrument Status), «subsystem» Instruments (Air thermometer, Ground thermometer, RainGauge, Anemometer, Barometer, WindVane).]
Sequence models are dynamic models that document, for each mode of interaction, the
sequence of object interactions that take place. Figure below is an example of a sequence
model that shows the operations involved in collecting the data from a weather station.
[Figure: Sequence of operations for weather-data collection — request(report), acknowledge(), report(), summarise(), send(report), reply(report), acknowledge().]
In a sequence model:
– Objects are arranged horizontally across the top
– Time is represented vertically so models are read top to bottom
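The top-to-bottom message ordering of the data-collection sequence can be replayed as a simple trace, a sketch with the participating objects noted in comments:

```python
def collect_weather_data(trace):
    """Replays, in order, the messages from the sequence model for
    collecting data from a weather station (actors in comments)."""
    trace.append("request(report)")   # CommsController -> WeatherStation
    trace.append("acknowledge()")     # WeatherStation acknowledges the request
    trace.append("report()")          # WeatherStation asks WeatherData to report
    trace.append("summarise()")       # WeatherData summarises the raw readings
    trace.append("send(report)")      # summary handed back for transmission
    trace.append("reply(report)")     # WeatherStation replies with the report
    trace.append("acknowledge()")     # CommsController acknowledges receipt
    return trace
```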
Statecharts
A statechart shows how objects respond to different service requests and the state
transitions triggered by these requests.
– If object state is Shutdown then it responds to a Startup() message
– In the waiting state the object is waiting for further messages
– If reportWeather () then system moves to summarising state
– If calibrate () the system moves to a calibrating state
– A collecting state is entered when a clock signal is received
Figure below shows a statechart for the WeatherStation object that shows how it responds
to requests for various services.
[Figure: Statechart for the WeatherStation object, with states Shutdown, Waiting, Testing, Transmitting, Calibrating, Summarising and Collecting, and transitions labelled startup(), shutdown(), test(), calibrate(), reportWeather(), clock, calibration OK, test complete, transmission done, collection done and weather summary complete.]
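A statechart like this translates naturally into a transition table. The sketch below encodes the states and events named in the text; transition targets that the description does not spell out are marked as assumptions:

```python
class WeatherStationState:
    """Transition table distilled from the statechart (a sketch)."""

    TRANSITIONS = {
        ("Shutdown", "startup()"): "Waiting",
        ("Waiting", "shutdown()"): "Shutdown",
        ("Waiting", "reportWeather()"): "Summarising",
        ("Waiting", "calibrate()"): "Calibrating",
        ("Waiting", "test()"): "Testing",
        ("Waiting", "clock"): "Collecting",
        ("Calibrating", "calibration OK"): "Waiting",      # assumed target
        ("Testing", "test complete"): "Transmitting",
        ("Transmitting", "transmission done"): "Waiting",
        ("Collecting", "collection done"): "Waiting",      # assumed target
        ("Summarising", "weather summary complete"): "Transmitting",
    }

    def __init__(self):
        self.state = "Shutdown"

    def handle(self, event):
        # Unlisted (state, event) pairs leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```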
Object interfaces have to be specified so that the objects and other components can be
designed in parallel. Designers should avoid designing the interface representation but
should hide this in the object itself. Objects may have several interfaces which are
viewpoints on the methods provided. The UML uses class diagrams for interface
specification; a language such as Java may be used for the implementation.
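Although the text mentions Java, the idea of one object offering several interface viewpoints while hiding its representation can be sketched with Python abstract base classes (all names here are illustrative):

```python
from abc import ABC, abstractmethod

class Reporter(ABC):
    """One viewpoint on the station: reporting operations."""
    @abstractmethod
    def report_weather(self): ...

class Remote(ABC):
    """A second viewpoint: remote-control operations."""
    @abstractmethod
    def startup(self): ...
    @abstractmethod
    def shutdown(self): ...

class Station(Reporter, Remote):
    """One object implementing both interfaces; the representation
    of its state stays hidden inside the object."""

    def __init__(self):
        self._log = []          # hidden representation

    def report_weather(self):
        return "summary"

    def startup(self):
        self._log.append("up")
        return "running"

    def shutdown(self):
        self._log.append("down")
        return "stopped"
```

Clients that only need reporting can be written against `Reporter`, so the two sides can be designed in parallel.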
Design Evolution
Hiding information inside objects means that changes made to an object do not affect
other objects in an unpredictable way. Assume pollution monitoring facilities are to be
added to weather stations. These sample the air and compute the amount of different
pollutants in the atmosphere. The figure below shows WeatherStation and the new objects
added to the system.
[Figure: The extended WeatherStation class (identifier; reportWeather(), reportAirQuality(), calibrate(instruments), test(), startup(instruments), shutdown(instruments)) with a new Air quality object (NOData, smokeData, benzeneData; collect(), summarise()) and new hardware classes NOmeter, SmokeMeter and BenzeneMeter.]
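The benefit of information hiding for this kind of evolution can be sketched as a subclass that adds the air-quality behaviour without touching the existing interface (a Python illustration; names follow the figure loosely):

```python
class WeatherStation:
    def __init__(self, identifier):
        self.identifier = identifier
        self._readings = []              # hidden representation

    def report_weather(self):
        return f"{self.identifier}: weather"

class PollutionWeatherStation(WeatherStation):
    """Adds air-quality monitoring. Because WeatherStation hides its
    representation, existing clients are unaffected by the extension."""

    def __init__(self, identifier):
        super().__init__(identifier)
        # Attribute names follow the figure: NOData, smokeData, benzeneData.
        self._air_quality = {"NOData": [], "smokeData": [], "benzeneData": []}

    def report_air_quality(self):
        return f"{self.identifier}: air quality"
```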
7. What are the characteristics that make object oriented approach as the most
preferred approach in modern development scenario?
Chapter VI
Development
• Management problems
– Producing entire documentation may not be cost-effective
– Progress can be hard to judge in absence of documentation
– Fast development and delivery may require unfamiliar or advanced
technologies, and skilled staff may not be available.
• Contractual problems
– A normal fixed cost-and-schedule contract requires a documented
specification.
– With incremental development, specification documents are produced for
different increments at different times, so it is difficult to fix the cost and
schedule.
– It is difficult to accommodate all the changes suggested by the customer at
a fixed price.
• Validation problems
– Difficult to generate test plan without specifications
• Maintenance problems
– Continual change tends to corrupt the software structure, making it more
expensive to change and evolve to meet new requirements.
Prototyping
For some large systems, incremental iterative development and delivery may be
impractical; this is especially true when multiple teams are working on different sites.
Prototyping: An experimental system is developed as a basis for formulating the
requirements. This system is thrown away when the system specification has been
agreed.
Evolutionary development
• The objective of evolutionary or incremental development is to deliver a working
system to end-users. The development starts with those requirements which are
best understood.
Throw-away prototyping
• The objective of throw-away prototyping is to validate or derive the system
requirements. The prototyping process starts with those requirements which are
poorly understood.
Agile Methods
For large and critical systems, a well-planned and controlled software development
process is required, with heavy use of design and analysis methods. This plan-based
approach is a heavyweight process that incurs a large overhead and may not be
appropriate for small or medium-sized applications. Dissatisfaction with the overheads
involved in such design methods led to the creation of agile methods.
• It can be difficult to keep the interest of customers who are involved in the
process; success of the system depends on customer’s willingness and availability
to participate.
• Team members may be unsuited to the intense involvement that characterizes
agile methods.
• Prioritizing changes can be difficult where there are multiple stakeholders.
• Maintaining simplicity requires extra work.
• Contracts may be a problem as with other approaches to iterative development.
Extreme Programming
Extreme Programming is the best-known and most widely used agile method. Extreme
Programming (XP) takes an ‘extreme’ approach to iterative development and customer
involvement.
– All requirements are expressed as scenarios which are directly
implemented as a series of tasks
– Programmers work in pairs and develop test cases for each task before
writing code
– Increments are delivered to customers with short time gaps between
releases
– All tests must be run for every build and the build is only accepted if tests
run successfully
Figure below illustrates the XP process to produce an increment of the system that is
being developed.
[Figure: The XP release cycle — select user stories for this release, break down stories into tasks, plan the release, then develop, integrate, test and release the software.]
When a programmer builds the software to create a new version, he or she must run all existing automated tests as well as the tests for the
new functionality. The new build of the software is accepted only if all tests execute
successfully.
Testing in XP
To avoid some of the problems of testing and system validation, XP places more
emphasis than other agile methods on the testing process. The key features of testing in
XP are:
• Test-first development
• Incremental test development from scenarios
• User involvement in test development and validation
• Use of automated test environment
Test-first development is one of the most important innovations in XP. Writing tests
first defines both an interface and a specification of behavior for the functionality being
developed. Problems of requirements and interface misunderstandings are reduced. This
approach can be adopted in any process where there is a clear relationship between a
system requirement and the code implementing that requirement. In XP, the story cards
representing the requirements are broken down into tasks, and the tasks are the principal
unit of implementation.
User requirements in XP are expressed as scenarios or stories and the user prioritizes
these for development. The development team assesses each scenario and breaks it down
into tasks. Each task represents a discrete feature of the system and a unit test can then
be designed for that task. For example, some of the task cards developed from the story
card for document downloading are shown below.
Each task generates one or more unit tests that check the implementation described in that
task. For example, the figure below is a shortened description of a test case that has been
developed to check that credit card validation has been implemented correctly.
Writing tests before code clarifies the requirements to be implemented. Tests are written
as programs rather than data so that they can be executed automatically. All previous and
new tests are automatically run when new functionality is added, checking that the new
functionality has not introduced errors. Some of the disadvantages of testing in XP are as
follows. Programmers prefer programming rather testing, thus they may not write
complete tests. Sometimes writing test first may not be possible e.g. for complex user
interface. It may be difficult to judge the completeness of set of test cases. Customer
may not be available full time with XP team for acceptance test plan. Customer may be
reluctant to be part of testing team.
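The credit-card test case mentioned above can be illustrated with a test-first sketch. The Luhn checksum is used here as a plausible stand-in for whatever validation the story card actually specified; the tests are written before (and drive) the implementation:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum, a common credit-card validity check (a stand-in
    for the validation the task card calls for)."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Test-first: these executable tests define the expected behaviour.
def test_valid_card():
    assert luhn_valid("4539 1488 0343 6467")

def test_invalid_card():
    assert not luhn_valid("4539 1488 0343 6468")
```

Because the tests are programs rather than data, they can be rerun automatically on every build.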
Pair Programming
In XP, programmers work in pairs, sitting together to develop code. The same pairs of
members do not always work together; pairs are created dynamically so that each team
member may work with all other team members.
• This helps develop common ownership of code and spreads knowledge across the
team.
• It serves as an informal review process as each line of code is looked at by
more than one person.
• It encourages refactoring as the whole team can benefit from this.
• Measurements suggest that development productivity with pair programming is
similar to that of two people working independently.
A RAD Environment
[Figure: A RAD environment — an interface generator, office systems, a DB programming language and a report generator, built on top of a database management system.]
Many applications are based around complex forms and developing these forms manually
is a time-consuming activity. RAD environments include support for screen generation
including:
– Interactive form definition using drag and drop techniques
– Form linking where the sequence of forms to be presented is specified
– Field verification where the allowed ranges of form fields are defined
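Field verification of the kind a screen generator attaches to a form can be sketched as a small validator factory (an illustration, not any particular RAD tool's API; the 'age' field is hypothetical):

```python
def make_range_check(low, high):
    """Builds a field validator enforcing an allowed numeric range,
    the kind a RAD screen generator would attach to a form field."""
    def check(value):
        try:
            v = float(value)
        except ValueError:
            return False        # non-numeric input fails verification
        return low <= v <= high
    return check

# e.g. a hypothetical 'age' field restricted to the range 0..130
age_ok = make_range_check(0, 130)
```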
Visual Programming
Scripting languages such as Visual Basic support visual programming where the
prototype is developed by creating a user interface from standard items and associating
components with these items. A large library of components exists to support this type of
development. These may be tailored to suit the specific application requirements. The
figure below shows an application screen including menus along the top, input fields,
output fields and buttons. It also shows the components that are associated with some of
the display elements.
[Figure: A visually programmed application screen — a menu component, a date component ("12th January 2000"), a range-checking script on a numeric index field (3.876), a user-prompt component, a draw-canvas script component and a tree-display component.]
The main advantage of this approach is that a lot of application functionality can be
implemented quickly at a very low cost. Users who are already familiar with the
applications making up the system do not have to learn how to use new features.
Software Prototyping
A prototype is an initial version of a system used to demonstrate concepts and try out
design options. A prototype can be used in:
– The requirements engineering process to help with requirements elicitation
and validation
– In design processes to explore options and develop a UI design
– In the testing process to run back-to-back tests
Benefits of Prototyping
The objectives of prototyping should be made explicit from the start of the process. If the
objectives are left unstated, management or end-users may misunderstand the function of
the prototype. So they may not get the benefits that they expected from the prototype
development.
The next stage is to decide what to put into and, what to leave out of the prototype
system. Some of the functionality may be left out of the prototype to reduce costs and
accelerate the delivery schedule. Some non-functional requirements such as response
time and memory utilization may be relaxed. Standards of reliability and program quality
may be reduced.
The final stage of the process is prototype evaluation. Provisions must be made during
this stage for user training, and the prototype objectives should be used to derive a plan
for evaluation. Users need time to become comfortable with a new system and to settle
into a normal pattern of usage. Once they are using the system normally, they then
discover requirements errors and omissions.
Throw-away prototypes
Prototypes should be discarded after development as they are not a good basis for a
production system because
– It may be impossible to tune the system to meet non-functional
requirements
– Prototypes are normally undocumented
– The prototype structure is usually degraded through rapid change
– The prototype probably will not meet normal organizational quality
standards
Software Evolution
The majority of changes are a consequence of new requirements that are generated in
response to changing business and user needs. The spiral model of evolution is shown in
the figure below.
[Figure: The spiral model of evolution — from the start, the system spirals outward through Release 1, Release 2, Release 3, and so on.]
We start by creating Release 1 of the system. Once delivered, changes are proposed and
the development of Release 2 starts almost immediately. The need for evolution may
become obvious even before the system is deployed so that later releases of the software
may be under development before the initial version has been released. This is an
idealized model of software evolution that can be applied in situations where a single
organization is responsible for both the initial software development and the evolution of
the software.
Program evolution dynamics is the study of the processes of system change. After major
empirical studies, Lehman and Belady proposed that there were a number of ‘laws’ which
applied to all systems as they evolved. They are applicable to large systems developed
by large organisations, and less applicable in other cases.
Lehman’s Laws
• Continuing change
• Increasing complexity
• Large program evolution
• Organisational stability
• Conservation of familiarity
• Continuing growth
• Declining quality
• Feedback system
The first law states that system maintenance is an inevitable process. As the system’s
environment changes, new requirements emerge and the system must be modified. When
the modified system is re-introduced to the environment, this promotes more
environmental changes, so the evolution process recycles.
The second law states that, as a system is changed, its structure is degraded. The only
way to avoid this happening is to invest in preventative maintenance where we spend
time improving the software structure without adding to its functionality.
The third law suggests that large systems have a dynamic of their own that is established
at an early stage in the development process. This determines the gross trends of the
system maintenance process and limits the number of possible system changes.
The fourth law suggests that most large programming projects work in what is termed a
saturated state. That is, a change to resources or staffing has imperceptible effects on the
long-term evolution of the system. This is consistent with the third law, which suggests
that program evolution is largely independent of management decisions. This law
confirms that large software development teams are often unproductive because
communication overheads dominate the work of the team.
The fifth law is concerned with the change increments in each system release. Adding
new functionality to a system inevitably introduces new system faults. The more
functionality added in each release, the more faults there will be. Therefore, a large
increment in functionality in one system release means that this will have to be followed
by a further release where the new system faults are repaired. The law suggests that we
should not budget for large functionality increments in each release without taking into
account the need for fault repair.
The sixth and seventh laws are similar and essentially say that users of software will
become increasingly unhappy with it unless it is maintained and new functionality is
added to it.
The final law reflects the most recent work on feedback processes, although it is not yet
clear how this can be applied in practical software development.
Software Maintenance
Software maintenance is modifying a program after it has been put into use.
Maintenance does not normally involve major changes to the system’s architecture.
Changes are implemented by modifying existing components and adding new
components to the system. Systems are tightly coupled with their environment. The
system requirements are likely to change while the system is being developed because
the environment is changing. Systems MUST be maintained if they are to remain useful
in that environment.
Types of Maintenance
Surveys suggest that about 65% of maintenance is concerned with implementing new
requirements, 18% with changing the system to adapt it to a new operating environment
and 17% to correcting system faults. This is shown below.
[Figure: Distribution of maintenance effort — fault repair (17%), software adaptation (18%), functionality addition or modification (65%).]
[Figure: Development and maintenance costs (in $) compared for System 1 and System 2.]
The key factors that distinguish development and maintenance, and which lead to higher
maintenance costs, are:
• Team stability
– Maintenance costs are reduced if the same staff are involved with them for
some time.
• Contractual responsibility
– The developers of a system may have no contractual responsibility for
maintenance so there is no incentive to design for future change.
• Staff skills
– Maintenance staff are often inexperienced and have limited domain
knowledge.
Maintenance Prediction
Maintenance prediction is concerned with assessing which parts of the system may cause
problems and have high maintenance costs:
– Change acceptance depends on the maintainability of the components
affected by the change
– Implementing changes degrades the system and reduces its maintainability
– Maintenance costs depend on the number of changes and costs of change
depend on maintainability
To evaluate the relationships between a system and its environment, we should assess:
1. The number and complexity of system interfaces: The larger the number of
interfaces and the more complex they are, the more likely it is that demands for
change will be made.
2. The number of inherently volatile system requirements: Requirements that
reflect organizational policies and procedures are likely to be more volatile than
requirements that are based on stable domain characteristics.
3. The business processes in which the system is used: As business processes
evolve, they generate system change requests. The more business processes that
use a system, the more the demands for system change.
Examples of process metrics that can be used for assessing maintainability are:
2. Average time required for impact analysis: This reflects the number of
program components that are affected by the change request. If this time
increases, it implies that more and more components are affected and
maintainability is decreasing.
3. Average time taken to implement a change request: This is not the same as
the time for impact analysis, although it may correlate with it. This is the amount
of time needed to actually modify the system and its documentation, after
assessing which components are affected. An increase in the time needed
to implement a change may indicate a decline in maintainability.
4. Number of outstanding change requests: An increase in this number over time
may imply a decline in maintainability.
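As an illustration of how such a process metric might be tracked, the sketch below computes the average impact-analysis time per release and flags a consistently rising trend; the function and its data are invented for illustration:

```python
def impact_analysis_trend(times_per_release):
    """Average impact-analysis time (e.g. in hours) for each release.
    A consistently rising average suggests that more components are
    affected per change, i.e. declining maintainability."""
    averages = [sum(ts) / len(ts) for ts in times_per_release]
    rising = all(a < b for a, b in zip(averages, averages[1:]))
    return averages, rising
```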
Evolution Processes
The evolution process includes the fundamental activities of change analysis, release
planning, system implementation and releasing a system to customers. The cost and
impact of these changes are assessed to see how much of the system is affected by the
change and how much it might cost to implement the change. If the proposed changes
are accepted, a new release of the system is planned. During release planning, all
proposed changes are considered. A decision is then made on which changes to
implement in the next version of the system. The process then iterates, with a
new set of changes proposed for the next release. The figure below shows an overview of
this process.
this process.
[Figure: The system evolution process — change requests → impact analysis → release planning (fault repair, platform adaptation, system enhancement) → change implementation → system release.]
Ideally, the change implementation stage of this process should modify the system
specification, design and implementation to reflect the changes to the system.
[Figure: Change implementation — proposed changes → requirements analysis → requirements updating → software development.]
Change requests sometimes relate to system problems that have to be tackled very
urgently. These urgent changes can arise for three reasons:
1. If a serious system fault occurs that has to be repaired to allow normal operation
to continue
2. If changes to the system’s operating environment have unexpected effects that
disrupt normal operation.
3. If there are unanticipated changes to the business running the system, such as the
emergence of new competitors or the introduction of new legislation.
System re-engineering
• Reduced cost
– The cost of re-engineering is often significantly less than the costs of
developing new software.
The critical distinction between re-engineering and new software development is the
starting point for the development. Rather than starting with a written specification, the
old system acts as a specification for the new system. The distinction between forward
engineering and software re-engineering is shown below:
[Figure: Forward engineering and software re-engineering compared.]
Forward engineering starts with a system specification and involves the design and
implementation of a new system. Re-engineering starts with an existing system and the
development process for the replacement is based on understanding and transforming the
original system. Figure below shows the re-engineering process.
[Figure: The re-engineering process — source code translation, reverse engineering, program modularisation, program structure improvement and data re-engineering, producing a re-engineered system from the original program and data.]
The costs of re-engineering obviously depend on the extent of the work that is carried
out. There is a spectrum of possible approaches to re-engineering as shown below.
[Figure: The re-engineering cost spectrum — from automated program restructuring (cheapest) to full program and data restructuring (most expensive).]
The factors that affect re-engineering costs include:
1. The quality of the software to be re-engineered: The lower the quality of the
software and its associated documentation (if any), the higher the re-engineering
costs.
2. The tool support available for re-engineering: It is not normally cost-effective
to re-engineer a software system unless we can use CASE tools to automate most
of the program changes.
3. The extent of data conversion required: If re-engineering requires large
volumes of data to be converted, the process cost increases significantly.
4. The availability of expert staff: If the staff responsible for maintaining the
system cannot be involved in the re-engineering process, the costs will increase
because system re-engineers will have to spend a great deal of time understanding
the system.
Organisations that rely on legacy systems must choose a strategy for evolving these
systems
– Scrap the system completely and modify business processes so that it is no
longer required
– Continue maintaining the system
– Transform the system by re-engineering to improve its maintainability
To assess the business value of a system, we have to identify system stakeholders, such
as end-users of the system and their managers, and ask a series of questions about the
system. There are four basic issues to discuss:
1. The use of the system: If systems are only used occasionally or by a small
number of people, they may have a low business value. A legacy system may
have been developed to meet a business need that has either changed or that can
now be met more effectively in other ways.
2. The business processes that are supported: When a system is introduced, business
processes to exploit that system may be designed. However, changing these
processes may be impossible because the legacy system can’t be adapted.
Therefore, a system may have a low business value because new processes cannot
be introduced.
3. The system dependability: System dependability is not only a technical problem
but also a business problem. If a system is not dependable and the problems
directly affect the business customers or mean that people in the business are
diverted from other tasks to solve these problems, the system has a low business
value.
4. The system outputs: The key issue here is the importance of the system outputs to
the successful functioning of the business. If the business depends on these
outputs, then the system has a high business value. Conversely, if these outputs
can be easily generated in some other way or if the system produces outputs that
are rarely used, then its business value may be low.
We may collect quantitative data to make an assessment of the quality of the application
system. Some of the quantitative data are:
1. The number of system change requests: System changes tend to corrupt the
system structure and make further changes more difficult. The higher this value,
the lower the quality of the system.
2. The number of user interfaces: This is an important factor in forms-based-
systems where each form can be considered as a separate user interface. The
more interfaces, the more likely that there will be inconsistencies and
redundancies in these interfaces.
3. The volume of data used by the system: The higher the volume of data, the more
complex the system.
1. What are Agile methods? Explain in detail the principles of the same.
5. State and explain all the Lehman’s Laws related to Program Evolution Dynamics.
8. Explain with a neat diagram the different stages involved in ‘The system
evolution processes’.
10. Explain the different strategies that can be employed for the evolution of Legacy
Systems.
11. Explain the different factors involved in the assessment of Legacy Systems.
Chapter VII
Verification and Validation
Verification:
"Are we building the product right”.
• The software should conform to its specification.
Validation:
"Are we building the right product”.
• The software should do what the user really requires.
The ultimate goal of the verification and validation process is to establish confidence that
the software system is ‘fit for purpose’. This means that the system must be good enough
for its intended use. The level of required confidence depends on the system’s purpose,
the expectations of the system users and the current marketing environment for the
system.
Within the V & V process, there are two complementary approaches to system checking
and analysis:
1. Software inspections or peer reviews analyse and check system representations
such as the requirements document, design diagrams and the program source
code. Inspections can be used at all stages of the process. Inspections may be
supplemented by some automatic analysis of the source text of a system or
associated documents. Software inspections and automated analyses are static V
& V techniques, as we do not need to run the software on a computer.
2. Software testing involves running an implementation of the software with test
data. We examine the outputs of the software and its operational behaviour to
check that it is performing as required. Testing is a dynamic technique of
verification and validation.
Software inspections and testing play complementary roles in the software process. This
is shown below.
(Figure: software inspections examine static system representations such as the requirements specification, design and program source code; program testing and prototype testing examine executable versions of the system.)
Testing reveals the presence of errors, NOT their absence. Testing is the only validation technique for non-functional requirements, as the software has to be executed to see how it behaves. Testing should be used in conjunction with static verification to provide full V & V coverage. There are basically two types of testing that may be used at different stages in the software process:
Defect testing
– Tests are designed to discover system defects.
– A successful defect test is one which reveals the presence of defects in a
system.
Validation testing
– It is intended to show that the software meets its requirements.
– A successful test is one that shows that a requirement has been properly
implemented.
Defect testing and debugging are distinct processes. Verification and validation is
concerned with establishing the existence of defects in a program. Debugging is
concerned with locating and repairing these errors. Debugging involves formulating a
hypothesis about program behaviour then testing these hypotheses to find the system
error. The debugging process is shown below:
Locate error → Design error repair → Repair error → Retest program
Careful planning is required to get the most out of testing and inspection processes.
Planning should start early in the development process. The plan should identify the
balance between static verification and testing. Test planning is about defining standards
for the testing process rather than describing product tests. The V-model of development
is shown below.
It is an instantiation of the generic waterfall model and shows that test plans should be
derived from the system specification and design. This model also breaks down system V
& V into a number of stages. Each stage is driven by tests that have been defined to
check the conformance of the program with its design and specification.
(Figure: the V-model. The development activities, requirements specification, system specification, system design and detailed design, drive the corresponding test plans: the acceptance test plan, the system integration test plan, the sub-system integration test plan, and module and unit code and test. These plans are then applied in the sub-system integration test, the system integration test and the acceptance test, after which the system goes into service.)
Test planning is concerned with establishing standards for the testing process, not just
with describing product tests. Tests plans are intended for software engineers involved in
designing and carrying out system tests. They help technical staff get an overall picture
of the system tests and place their own work in this context. The major components of a
test plan are shown below.
Tests plans are not a static document but evolve during the development process. Test
plans change because of delays at other stages in the development process. If part of a
system is incomplete, the system as a whole cannot be tested. We then have to revise the
test plan to redeploy the testers to some other activity and bring them back when the
software is once again available.
Software Inspections
These involve people examining the source representation with the aim of discovering anomalies and defects. Inspections do not require execution of a system, so they may be used before implementation.
Inspections and testing are complementary and not opposing verification techniques.
Both should be used during the V & V process. Inspections can check conformance with
a specification but not conformance with the customer’s real requirements. Inspections
cannot check non-functional characteristics such as performance, usability, etc.
Program inspections are reviews whose objective is program defect detection. The key
difference between program inspections and other types of quality review is that the
specific goal of inspections is to find program defects rather than to consider broader
design issues.
The program inspection is a formal process that is carried out by a team of at least four people. Team members analyze the code and point out possible defects. In Fagan’s original proposals, he suggested roles such as author, reader, tester and moderator. The reader reads the code aloud to the inspection team, the tester inspects the code from a testing perspective and the moderator organizes the process. Figure below shows the different roles.
The inspection team moderator is responsible for inspection planning. This involves
selecting an inspection team, organizing a meeting room and ensuring that the material to
be inspected and its specifications are complete. The program to be inspected is
presented to the inspection team during the overview stage when the author of the code
describes what the program is intended to do. This is followed by a period of individual
preparation. Each inspection team member studies the specification and the program and
looks for defects in the code.
Following the inspection, the program’s author should make changes to it to correct the
identified problems. In the follow-up stage, the moderator should decide whether a
reinspection of the code is required. He or she may decide that a complete reinspection is
not required and that the defects have been successfully fixed. The program is then
approved by the moderator for release. Figure below shows the inspection process.
(Figure: the inspection process: planning, overview, individual preparation, inspection meeting, rework and follow-up.)
Inspection Rate
• 500 statements/hour during overview.
• 125 source statements/hour during individual preparation.
• 90-125 statements/hour during the inspection meeting itself.
• Inspection is therefore an expensive process.
• Inspecting 500 lines costs about 40 person-hours of effort.
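The 40 person-hour figure can be reproduced with a rough calculation. The split per stage below is an assumption for illustration (a four-person team, with the rates taken from the list above); the original text only states the total.

```python
# Assumed breakdown of the "about 40 person-hours for 500 lines" estimate.
team_size = 4
overview_hours = 500 / 500       # 1 hour, whole team present
preparation_hours = 500 / 125    # 4 hours, done individually by each member
meeting_hours = 500 / 100        # ~5 hours at 90-125 statements/hour, whole team
total_person_hours = team_size * (overview_hours + preparation_hours + meeting_hours)
print(total_person_hours)  # 40.0
```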
Inspection check-lists
Checklist of common errors should be used to drive the inspection. Error checklists are
programming language dependent and reflect the characteristic of errors that are likely to
arise in the language. In general, the 'weaker' the type checking, the larger the checklist.
Examples: Initialisation, Constant naming, loop termination, array bounds, etc. A list of
inspection check list is shown below.
Static analysers are software tools for source text processing. They parse the program
text and try to discover potentially erroneous conditions and bring these to the attention
of the V & V team. They are very effective as an aid to inspections: they supplement, but do not replace, inspections. The intention of automatic
analysis is to draw an inspector’s attention to anomalies in the program, such as variables
that are used without initialization, variables that are unused or data whose value could
go out of range. Some of the checks that can be detected by static analysis are shown
below.
• Control flow analysis: Checks for loops with multiple exit or entry points, finds
unreachable code, etc.
• Data use analysis: Detects uninitialized variables, variables written twice
without an intervening assignment, variables which are declared but never used,
etc.
• Interface analysis: Checks the consistency of routine and procedure declarations
and their use.
• Information flow analysis: Identifies the dependencies of output variables. Does
not detect anomalies itself but highlights information for code inspection or
review
• Path analysis: Identifies paths through the program and sets out the statements
executed in that path. Again, potentially useful in the review process
Static analysis is particularly valuable when a language such as C is used, which has weak typing, so that many errors go undetected by the compiler. It is less cost-effective for languages like Java that have strong type checking and can therefore detect many errors during compilation. A static analyser called LINT can be used for C programming in Unix/Linux environments.
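As a minimal sketch of the data-use analysis described above, the following uses Python's ast module to flag names that are assigned but never read. The function name and the heuristic are illustrative only; this is not how LINT itself works.

```python
import ast

def find_unused_assignments(source):
    """Very simple data-use analysis: report names assigned but never read."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):   # name is being written
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):  # name is being read
                used.add(node.id)
    return sorted(assigned - used)

sample = "x = 1\ny = 2\nprint(x)\n"
print(find_unused_assignments(sample))  # ['y']
```

A real static analyser would also track control flow and scopes; this sketch only shows the kind of anomaly such a tool brings to the inspector's attention.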
Cleanroom Software Development
The name is derived from the 'Cleanroom' process in semiconductor fabrication. The philosophy is defect avoidance rather than defect removal. A model of the cleanroom process is shown in figure below.
The cleanroom approach is based on five key strategies:
1. Formal specification: The software to be developed is formally specified.
2. Incremental development: The software is partitioned into increments that are developed and validated separately.
3. Structured programming: Only a limited number of control and data abstraction constructs are used, and the aim is to systematically transform the specification to create the program code.
4. Static verification: The developed software is statically verified using rigorous
software inspections. There is no unit or module testing process for code
components.
5. Statistical testing of the system: The integrated software increment is tested
statistically to determine its reliability. These statistical tests are based on an
operational profile, which is developed in parallel with the system specification.
There are three teams involved when the cleanroom process is used for large system
development:
• Specification team:
– Responsible for developing and maintaining the system specification.
• Development team:
– Responsible for developing and verifying the software. The software is
NOT executed or even compiled during this process.
• Certification team:
– Responsible for developing a set of statistical tests to exercise the software
after development.
Disadvantages:
The results of using the Cleanroom process have been very impressive with few
discovered faults in delivered systems. Independent assessment shows that the process is
no more expensive than other approaches. There were fewer errors than in a 'traditional'
development process. However, the process is not widely used. It is not clear how this
approach can be transferred to an environment with less skilled or less motivated
software engineers.
Software Testing
• System testing
• Component testing
• Test case design
• Test automation
Testing is a process of executing a program with the intent of finding errors. The testing objectives are to show that the requirements specifications from which the software was designed are met (the software meets the customers’ requirements) and to show that the design and coding correctly respond to those requirements (the software conforms to its specification). A more abstract view of software testing is shown in figure below:
(Figure: component testing followed by system testing.)
The two fundamental testing activities are component testing – testing the parts of the
system – and system testing – testing the system as a whole. A general model of the
testing process is shown in figure below.
Design test cases → Prepare test data → Run program with test data → Compare results to test cases
(producing, in turn: test cases, test data, test results and test reports)
Test cases are specifications of the inputs to the test and the expected output from the system, plus a statement of what is being tested. Test data are the inputs that have been devised to test the system. Test data can sometimes be generated automatically, but automatic test case generation is impossible: the expected outputs can only be predicted by people who understand what the system should do.
Note that where user input is required, all functions must be tested with both correct and incorrect input.
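The distinction between test cases and test data can be sketched as follows. The TestCase structure and run_tests helper are illustrative, not part of any standard framework.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    description: str    # a statement of what is being tested
    inputs: tuple       # the test data devised for this case
    expected: object    # the output predicted by someone who knows the spec

def run_tests(func, cases):
    """Run each test case and report whether the actual output matched."""
    return [(c.description, func(*c.inputs) == c.expected) for c in cases]

# Testing a trivial component with both correct and incorrect (negative) input.
add = lambda a, b: a + b
cases = [
    TestCase("valid input within range", (2, 3), 5),
    TestCase("negative (incorrect) input still handled", (-1, 3), 2),
]
print(run_tests(add, cases))  # both cases pass
```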
System Testing
System testing involves integrating two or more components that implement system
functions or features and then testing this integrated system. In an iterative development
process, system testing is concerned with testing an increment to be delivered to the
customer; in a waterfall process, system testing is concerned with testing the entire
system. There are two distinct phases to system testing:
1. Integration testing, where the test team has access to the source code of the
system. When a problem is discovered, the integration team tries to find the
source of the problem and identify the components that have to be debugged.
2. Release testing, where a version of the system that could be released to users is
tested. Here, the test team is concerned with validating that the system meets its
requirements and with ensuring that the system is dependable. Release testing is
usually ‘black-box’ testing where the test team is simply concerned with
demonstrating that the system does or does not work properly.
Integration Testing
The process of system integration involves building a system from its components and
testing the resultant system for problems that arise from component interactions. The
components that are integrated may be off-the-shelf components, reusable components
that have been adapted for a particular system or newly developed components.
System integration involves identifying clusters of components that deliver some system
functionality and integrating these by adding code that makes them work together.
Sometimes, the overall skeleton of the system is developed first, and components are
added to it. This is called top-down integration. Alternatively, we may first integrate
infrastructure components that provide common services, such as network and database
access, and then add the functional components. This is bottom-up integration.
A major problem that arises during integration testing is localizing errors. There are
complex interactions between the system components and, when an anomalous output is
discovered, we may find it hard to identify where the error occurred. To make it easier to
locate errors, we should always use an incremental approach to system integration and
testing. An example of incremental integration testing is shown below.
(Figure: incremental integration testing. Test sequence 1 integrates components A and B and runs tests T1, T2 and T3; test sequence 2 adds component C and runs T1 to T4; test sequence 3 adds component D and runs T1 to T5. Repeating the existing tests after each increment makes it easier to localize errors to the newly added component.)
Release Testing
Release testing is the process of testing a release of the system that will be distributed to
customers. The primary goal of this process is to increase the supplier’s confidence that
the system meets its requirements. If so, it can be released as a product or delivered to the
customer. To demonstrate that the system meets its requirements, we have to show that it
delivers the specified functionality, performance and dependability, and that it does not
fail during normal use.
Release testing is usually a black-box testing process where the tests are derived from
the system specification. The system is treated as a black box whose behavior can only
be determined by studying its inputs and the related outputs. Another name for this is
functional testing because the tester is only concerned with the functionality and not the
implementation of the software. Figure below illustrates such a model.
(Figure: black-box testing. Input test data Ie, including inputs causing anomalous behaviour, is fed to the system; the output test results Oe include outputs which reveal the presence of defects.)
Testing guidelines are hints for the testing team to help them choose tests that will reveal
defects in the system
– Choose inputs that force the system to generate all error messages
– Design inputs that cause buffers to overflow
– Repeat the same input or input series several times
– Force invalid outputs to be generated
– Force computation results to be too large or too small
Performance testing
Part of release testing may involve testing the emergent properties of a system, such as performance and reliability. An operational profile is generated that reflects the actual mix of work the system will have to handle. Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.
Stress testing
Stress testing exercises the system beyond its maximum design load. It is testing on
minimal configuration at peak load.
• Defect detection
– Stressing the system often causes defects to
come to light.
• Failure behaviour
– Stress testing checks for unacceptable loss of service or data.
– Systems should not fail catastrophically
Stress testing is particularly relevant to distributed systems that can exhibit severe
degradation as a network becomes overloaded.
Component Testing
Consider, for example, a weather station object with the following interface:

WeatherStation
  identifier
  reportWeather()
  calibrate(instruments)
  test()
  startup(instruments)
  shutdown(instruments)

The identifier attribute only needs a test that checks whether it has been set up. We need to define test cases for reportWeather, calibrate, test, startup, and shutdown. Using a state model, we identify sequences of state transitions to be tested and the event sequences that cause these transitions.
• For example:
– Waiting -> Calibrating -> Testing -> Transmitting -> Waiting
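The transition sequence above can be exercised by a test like the following. The WeatherStation class here is a hypothetical sketch: the original only gives the interface and the state names, so the mapping of events to transitions is an assumption.

```python
class WeatherStation:
    """Hypothetical sketch: only the state machine, no real instrumentation."""
    def __init__(self):
        self.state = "Waiting"

    def calibrate(self, instruments=None):
        assert self.state == "Waiting"
        self.state = "Calibrating"

    def test(self):
        assert self.state == "Calibrating"
        self.state = "Testing"

    def reportWeather(self):
        assert self.state == "Testing"
        self.state = "Transmitting"

    def shutdown(self, instruments=None):
        assert self.state == "Transmitting"
        self.state = "Waiting"

# Exercise Waiting -> Calibrating -> Testing -> Transmitting -> Waiting.
ws = WeatherStation()
for event in (ws.calibrate, ws.test, ws.reportWeather, ws.shutdown):
    event()
print(ws.state)  # Waiting
```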
Interface testing
Objectives are to detect faults due to interface errors or invalid assumptions about
interfaces. It is particularly important for object-oriented development as objects are
defined by their interfaces.
There are different types of interfaces between program components and, consequently
different types of interface errors that can occur:
1. Parameter interfaces: These are interfaces where data or sometimes function
references are passed from one component to another.
2. Shared memory interfaces: These are interfaces where a block of memory is
shared between components. Data is placed in the memory by one sub-system
and retrieved from there by other sub-systems.
3. Procedural interfaces: These are interfaces where one component encapsulates a set of procedures that can be called by other components. Objects and reusable components have this form of interface.
4. Message passing interfaces: These are interfaces where one component requests
a service from another component by passing a message to it. A return message
includes the results of executing the service.
Test Case Design
Test case design involves designing the test cases (inputs and outputs) used to test the system. The goal of test case design is to create a set of tests that are effective in validation and defect testing. Some of the test case design approaches are:
– Requirements-based testing
– Partition testing
– Structural testing
Requirements-based testing
Requirements-based testing is a validation technique where each requirement is considered in turn and a set of tests is derived for it. Consider, for example, the following requirements:
• The user shall be able to search either all of the initial set of databases or
select a subset from it.
• The system shall provide appropriate viewers for the user to read
documents in the document store.
• Every order shall be allocated a unique identifier (ORDER_ID) that
the user shall be able to copy to the account’s permanent storage area.
Possible tests for the search requirement include:
• Initiate user searches for items that are known to be
present and known not to be present, where the set of databases includes
1 database.
• Initiate user searches for items that are known to be present and
known not to be present, where the set of databases includes 2 databases
• Initiate user searches for items that are known to be present and
known not to be present where the set of databases includes more than 2
databases.
• Select one database from the set of databases and initiate user
searches for items that are known to be present and known not to be
present.
• Select more than one database from the set of databases and
initiate searches for items that are known to be present and known not to
be present.
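The search tests above can be sketched in code. The search(databases, item) function and the database contents are assumptions made for illustration; the real system's API is not given in the text.

```python
def search(databases, item):
    """Hypothetical search component: is item in any of the given databases?"""
    return any(item in db for db in databases)

db1, db2, db3 = {"alpha"}, {"beta"}, {"gamma"}

# One database: item known to be present / known not to be present.
assert search([db1], "alpha")
assert not search([db1], "missing")
# Two databases.
assert search([db1, db2], "beta")
assert not search([db1, db2], "missing")
# More than two databases.
assert search([db1, db2, db3], "gamma")
assert not search([db1, db2, db3], "missing")
# A selected subset of the databases.
assert search([db2], "beta")
assert not search([db2], "alpha")
print("all requirements-based tests passed")
```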
Partition Testing
Input data and output results often fall into different classes where all members of a class
are related. Each of these classes is an equivalence partition or domain where the
program behaves in an equivalent way for each class member. Test cases should be
chosen from each partition.
Equivalence Partitioning: It is not possible to check each and every input. Instead, all input data is divided into groups whose members are related; these groups are equivalence classes, and this technique is called equivalence partitioning. The tester needs to run only one or a few tests for each equivalence class, i.e. for representative members of the class. In the figure below, each
equivalence partition is shown as an ellipse. Input equivalence partitions are sets of data
where all of the set members should be processed in an equivalent way. Output
equivalence partitions are program outputs that have common characteristics, so they can
be considered as a distinct class. We also identify partitions where the inputs are outside
the other partitions that we have chosen. These test whether the program handles invalid
input correctly. Valid and invalid inputs also form equivalence partitions.
Once we have identified the set of partitions, we can choose test cases from each of these
partitions. A good rule of thumb for test case selection is to choose test cases on the
boundaries of the partitions plus cases close to the mid-point of the partition.
(Figure: valid and invalid input equivalence partitions are processed by the system to produce output partitions.)
We identify the partitions by using the program specification or user documentation and,
from experience, where we predict the classes of input value that are likely to detect
errors. For example, say a program specification states that the program accepts 4 to 8
inputs that are five-digit integers greater than 10,000. Figure below shows the partitions
for this situation and possible test input values.
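The partitions for this specification can be sketched as a validity-checking function. The boundary handling (strictly greater than 10,000 and at most five digits) follows the wording above; the function itself is illustrative.

```python
def valid(inputs):
    """Accept 4 to 8 inputs, each a five-digit integer greater than 10,000."""
    return 4 <= len(inputs) <= 8 and all(10000 < x <= 99999 for x in inputs)

# Mid-point of the valid count partition.
assert valid([50000] * 6)
# Boundary values for the number of inputs: 3, 4, 8, 9.
assert not valid([50000] * 3)
assert valid([50000] * 4)
assert valid([50000] * 8)
assert not valid([50000] * 9)
# Boundary values for the input values: 10000, 10001, 99999, 100000.
assert not valid([10000] * 5)
assert valid([10001] * 5)
assert valid([99999] * 5)
assert not valid([100000] * 5)
print("partition boundary tests passed")
```

Note how the chosen test values sit on the partition boundaries plus one near the mid-point, following the rule of thumb above.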
To illustrate the derivation of test cases, consider the specification of a search component
shown below.
Sequence Element
Single value In sequence
Single value Not in sequence
More than 1 value First element in sequence
More than 1 value Last element in sequence
More than 1 value Middle element in sequence
More than 1 value Not in sequence
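These partitions translate directly into test cases. The find function below is an assumed linear-search stand-in for the search component, since the component's actual code is not shown here.

```python
def find(seq, key):
    """Return the index of key in seq, or -1 if absent (illustrative component)."""
    for i, value in enumerate(seq):
        if value == key:
            return i
    return -1

# One test case per partition from the table above.
assert find([17], 17) == 0                          # single value, in sequence
assert find([17], 0) == -1                          # single value, not in sequence
assert find([17, 29, 21, 23], 17) == 0              # first element in sequence
assert find([41, 18, 9, 31, 30, 16, 45], 45) == 6   # last element in sequence
assert find([21, 23, 29, 41, 38], 29) == 2          # middle element in sequence
assert find([21, 23, 29, 33, 38], 25) == -1         # not in sequence
print("all partition test cases passed")
```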
Structural Testing
Structural (white-box) testing derives test cases from knowledge of the program's structure and implementation.
(Figure: in structural testing, tests are derived from the component code, and running the tests against the test data produces the test outputs.)
As an example, consider the following binary search routine. The equivalence classes
can be shown as below.
(Figure: equivalence classes for binary search: elements before the mid-point of the array, the mid-point itself, and elements after the mid-point.)
This leads to a revised set of test cases for the search routine, as shown below.
Path testing is a structural testing strategy, usually applied during unit testing, in which every independent execution path through a component is exercised; this becomes difficult after integration because the number of paths grows rapidly. Both outcomes of every conditional statement are tested. If loops are present, the number of paths can be very large, so it is not possible to check each path; instead, inputs are chosen so that every statement is exercised at least once. All logical paths should be defined, and test cases to execute all logical paths are derived.
The starting point for path testing is a program flow graph that shows nodes representing
program decisions and arcs representing the flow of control. Statements with conditions
are therefore nodes in the flow graph.
Cyclomatic Complexity: Flow graphs are used as the basis of cyclomatic complexity.
• CC(G) = E - N + 2, where E is the number of edges and N the number of nodes in the flow graph G.
• CC(G) = P + 1, where P is the number of predicate (decision) nodes.
• The cyclomatic complexity equals the number of independent logical paths through the program.
Consider the binary search flow graph below:
(Figure: flow graph for the binary search procedure, with nodes 1 to 14.)
The flow graph for the binary search procedure is shown in figure above where each node
represents a line in the program with an executable statement. The paths through the
binary search flow graph are:
• 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14
• 1, 2, 3, 4, 5, 14
• 1, 2, 3, 4, 5, 6, 7, 11, 12, 5, …
• 1, 2, 3, 4, 6, 7, 2, 11, 13, 5, …
If all of these paths are executed, we can be sure that every statement in the method has
been executed at least once and that every branch has been exercised for true and false
conditions.
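The formula CC(G) = E - N + 2 can be checked mechanically. The edge-list representation below is an assumption made for illustration.

```python
def cyclomatic_complexity(edges):
    """CC(G) = E - N + 2 for a connected flow graph given as an edge list."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# A simple if-then-else: node 1 is the decision, nodes 2 and 3 the branches,
# node 4 the join. One predicate node, so CC should equal P + 1 = 2.
if_then_else = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic_complexity(if_then_else))  # 2
```

Here E = 4 and N = 4, so CC = 4 - 4 + 2 = 2, matching the two independent paths 1-2-4 and 1-3-4.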
Test Automation
Test automation reduces testing costs by supporting the testing process with a range of software tools, often integrated into a testing workbench.
(Figure: a testing workbench, whose tools include a test data generator driven by the specification, a test manager, an oracle, a file comparator, a report generator, a dynamic analyser and a simulator, as described below.)
• Test Manager
– It manages running of program tests.
– It keeps track of test data, expected results and program functionalities
tested.
• Test data generator
– It generates test data for the program to be tested.
• Oracle
– It predicts expected results.
• File comparator
– It compares the program test results with results of Oracle and reports the
difference.
• Report Generator
– It provides report generation facilities for test results
• Dynamic analyzer
– It presents an execution profile showing how each program statement has
been executed.
• Simulator
– Testing workbenches may include different types of simulator such as
target simulators, simulators for input/output, user interface simulators.
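The interplay of the oracle and file comparator can be sketched as follows. A trusted reference implementation stands in for the oracle, and all names here are illustrative rather than taken from any real workbench.

```python
def oracle(data):
    """Predict the expected result (a trusted reference model of the spec)."""
    return sorted(data)

def file_comparator(actual, predicted):
    """Report (position, actual, predicted) wherever the results differ."""
    return [(i, a, p) for i, (a, p) in enumerate(zip(actual, predicted)) if a != p]

# A deliberately buggy program under test: it sorts in the wrong order.
program_under_test = lambda data: sorted(data, reverse=True)

data = [3, 1, 2]
differences = file_comparator(program_under_test(data), oracle(data))
print(differences)  # [(0, 3, 1), (2, 1, 3)]
```

The report generator would then format such difference lists into the test report given to the team.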
1. Describe software verification and software validation. How they are achieved?
8. Give and explain with a neat diagram the model of the software testing process.
14. Define black box testing and white box testing. What is the difference between
the two?
17. What is Path Testing? Explain and bring out the concept of cyclomatic
complexity and its use in path testing.
Chapter VIII
Management
• Selecting staff
• Motivating people
• Managing groups
• The people capability maturity model
People Management
People are vital to the effective functioning of an organization and are thus its most important assets. The tasks of a manager are essentially people-oriented, so successful management requires a good understanding of people. Where people are poorly managed, project failure is likely. Four critical factors in people management are:
1. Consistency: People in a project team should all be treated in a comparable way.
While no one expects all rewards to be identical, people should not feel that their
contribution to the organization is undervalued.
2. Respect: Different people have different skills and managers should respect
these differences. All members of the team should be given an opportunity to
make a contribution. In some cases, people simply do not fit into a team and
cannot continue, but it is important not to jump to conclusions about this.
3. Inclusion: People contribute effectively when they feel that others listen to them
and take account of their proposals. It is important to develop a working
environment where all views, even those of the most junior staff, are considered.
4. Honesty: As a manager, we should always be honest about what is going well
and what is going badly in the team. We should be honest about our level of
technical knowledge and be willing to defer to staff with more knowledge when
necessary. If we are less than honest, we will eventually be found out and will
lose the respect of the group.
Selecting staff
One of the most important project management tasks is team selection. The decision on
who to appoint to a project is usually made using three types of information:
1. Information provided by candidates about their background and experience (their
resume or CV). This is usually the most reliable evidence that you have available
to judge whether candidates are likely to be suitable.
2. Information gained by interviewing candidates. Interviews can give a good impression of whether a candidate is a good communicator and whether he or she has good social skills. However, studies have shown that interviewers are liable to accept or reject candidates on the basis of rapid, subjective judgments; interviews are not a reliable method for assessing technical capabilities.
3. Recommendations from people who have worked with the candidates. This can
be effective when you know the people making the recommendation. Otherwise,
the recommendations cannot be trusted.
Motivating People
People working in software development organizations are not usually hungry or thirsty
and generally do not feel physically threatened by their environment. Therefore, ensuring
the satisfaction of social, esteem and self-realization needs is most significant from a
management point of view.
1. To satisfy social needs, we need to give people time to meet their co-workers and to provide places for them to meet. This is relatively easy when all of the members of a development team work in the same place, but harder when team members are not located in the same building or even the same town or state. They may work for different organizations or from home most of the time. In these cases, electronic communications such as e-mail and teleconferencing may be used.
2. To satisfy esteem needs, we need to show people that they are valued by the
organization. Public recognition of achievements is a simple yet effective way of
doing this. People must also feel that they are paid at a level that reflects their
skills and experience.
3. To satisfy self-realization needs, we need to give people responsibility for their
work, assign them demanding (but not impossible) task and provide a training
programme where people can develop their skills.
There are also some drawbacks to Maslow’s theory. The theory is highly subjective; people may not be driven simply by the compulsion of basic needs, and may instead be influenced by the work of a dedicated group of people.
Managing groups
Most professional software is developed by project teams ranging in size from two to
several hundred people. However, as it is clearly impossible for all these people to work
together on a single problem, these large teams are usually split into a number of groups.
Each group is responsible for part of the overall system. As a general rule, software
engineering project groups should normally have no more than eight or ten members.
When small groups are used, communication problems are reduced. There are a number
of factors that influence group working:
• Group composition
– Right balance of technical skills, experiences and personalities
• Group cohesiveness
– Thinking for group, not for individual
• Group communications
– Effective communication necessary
• Group organization
– Feeling of being important and satisfied in their role
Group Composition
Group composed of members who share the same motivation can be problematic. A
group that has complementary personalities may work better than a group selected solely
on technical ability. People who are motivated by the work are likely to be the strongest
technically. People who are self-oriented will probably be best at pushing the work
forward to finish the job. People who are interaction-oriented help facilitate
communications within the group.
The group leader has an important role. He or she may be responsible for providing
technical direction and project administration. Group leaders must keep track of the day-
to-day work of their group, ensure that people are working effectively and work closely
with project managers on project planning.
Group Cohesiveness
In a cohesive group, members think of the group as more important than the individual in
it. Members of a well-led, cohesive group are loyal to the group. They identify with
group goals and with other group members. They attempt to protect the group, as an
entity, from outside interference. This makes the group robust and able to cope with
problems and unexpected situations. The advantages of a cohesive group are:
• Establishment of group quality standards
– Group quality standards can be developed by consensus
• Intimate working environment
– Group members work closely together so inhibitions caused by ignorance
are reduced
• Transparency in the work
– Team members learn from each other and get to know each other’s work
• Egoless programming
– Where members strive to improve each other’s programs
Egoless programming is a style of group working where designs, programs and other
documents are regarded as group property rather than personal. One member’s work is
inspected by others. Without any ego people should be ready for criticism to improve
programming.
Advantages
• Improves quality of work
• Improves communication among people
• Facilitates healthy interaction
• Creates team spirit
Group Communications
Good communications are essential for effective group working. Information must be
exchanged on the status of work, design decisions and changes to previous decisions.
Good communications also strengthens group cohesion as it promotes understanding.
Some key factors that influence the effectiveness of communication are:
1. Group size: As a group increases in size, ensuring that all members
communicate effectively with each other becomes more difficult. The number of
one-way communication links is n * (n – 1), where n is the group size.
2. Group structure: People in informally structured groups communicate more
effectively than people in groups with a formal, hierarchical structure. In
hierarchical groups, communications tend to flow up and down the hierarchy.
People at the same level may not talk to each other. This is a particular problem
in a large project with several development groups. When people working on
different sub-systems communicate only through their managers, the project may
suffer delays and misunderstandings.
3. Group composition: People with the same personality types may clash and
communications may be inhibited. Communication is also usually better in
mixed-gender groups than in single-gender groups. Women tend to be more
interaction-oriented than men and may act as interaction controllers and
facilitators for the group.
4. The physical work environment: The organization of the workplace is a major
factor in facilitating or inhibiting communications.
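The effect of group size (factor 1 above) can be illustrated with a short calculation showing how quickly the number of one-way communication links grows:

```python
def communication_links(n):
    """One-way communication links in a group of n people: n * (n - 1)."""
    return n * (n - 1)

# Doubling the group size roughly quadruples the communication burden.
for size in (4, 8, 16):
    print(size, communication_links(size))  # 4 12, 8 56, 16 240
```

A group of eight has 56 one-way links to maintain; a group of sixteen has 240, which is one reason small groups communicate more effectively.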
Group organization
Small programming groups are usually organized in a fairly informal way. The group
leader gets involved in the software development with the other group members. A
technical leader may emerge who effectively controls software production. In an
informal group, the work to be carried out is discussed by the group as a whole, and tasks
are allocated according to ability and experience. More senior group members may be
responsible for the architectural design.
Informal groups can be very successful, particularly when the majority of group members
are experienced and competent. Such a group makes decisions by consensus, which
improves group spirit, cohesiveness and performance. If a group is composed mostly of
inexperienced or incompetent members, informality can be a hindrance because no
definite authority exists to direct the work, causing a lack of coordination between group
members and, possibly, eventual project failure.
To make the most effective use of highly skilled programmers, teams should be
built around an individual, highly skilled chief programmer. The underlying principle of
the chief programmer team is that skilled and experienced staff should be responsible for
all software development. They should not be concerned with routine matters and should
have good technical and administrative support for their work. They should focus on the
software to be developed and should not get involved in external meetings. However, a
serious problem with this organization is that the team is over-dependent on the chief
programmer and their assistant.
Other team members, who are not given sufficient responsibility, become unmotivated
because they feel their skills are underused.
Working Environments
The physical workplace provision has an important effect on individual productivity and
satisfaction. Health and safety considerations must be taken into account, such as:
– Comfort
– Privacy
– Facilities
– Lighting
– Heating & cooling
– Furniture
The most important environmental factors identified in the design study are:
• Privacy
– Each engineer requires an area for uninterrupted work
• Outside awareness
– People prefer to work in natural light and with a view of the outside
environment
• Personalization
– Individuals adopt different working practices and like to organize their
working environment in their own ways
Workspaces should provide private spaces where people can work without interruption.
Providing individual offices for staff has been shown to increase productivity. However,
teams working together also require spaces where formal and informal meetings can be
held. Grouping individual offices round larger group meeting rooms is considered one of
the best ways to reconcile some of the conflicting requirements. An office layout is
shown below.
(Figure: an office layout in which individual offices are grouped around a shared
meeting room and a shared documentation area.)
(Figure: the five levels of the People Capability Maturity Model (P-CMM) and their
key process areas.)
• Optimizing: continuously improve methods for developing personal and
organisational competence. Key process areas: continuous workforce innovation,
coaching, personal competency development.
• Managed: quantitatively manage organisational growth in workforce capabilities
and establish competency-based teams. Key process areas: organisational
performance alignment, organisational competency management, team-based
practices, team building, mentoring.
• Defined: identify primary competencies and align workforce activities with
them. Key process areas: participatory culture, competency-based practices,
career development, competency development, workforce planning, knowledge and
skills analysis.
• Repeatable: instil basic discipline into workforce activities. Key process
areas: compensation, training, performance management, staffing, communication,
work environment.
• Initial: ad hoc workforce practices.
Software Cost Estimation
• Software productivity
• Estimation techniques
• Algorithmic cost modelling
• Project duration and staffing
Software Productivity
Lines of source code per programmer-month (LOC/pm) is a widely used software
productivity metric. It is computed by counting the total number of lines of source
code delivered and dividing by the total time, in programmer-months, required to
complete the project. This time includes all the activities involved in software
development, not just programming.
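The computation can be sketched as follows; the project figures used are hypothetical:

```python
def productivity_loc_pm(delivered_loc, programmer_months):
    """Software productivity in delivered source lines per programmer-month."""
    return delivered_loc / programmer_months

# Hypothetical project: 36,000 delivered lines, 120 programmer-months spent
# on ALL development activities (specification, design, testing), not just coding.
print(productivity_loc_pm(36_000, 120))  # 300.0 LOC/pm
```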
An alternative to using code size as the estimated product attribute is to use some
measure of the functionality of the code. This avoids the anomaly that a program
written in a lower-level language, which needs more lines for the same functionality,
appears more productive; functionality is independent of implementation language.
Productivity is expressed as the number of
function points that are implemented per person-month. A function point is not a single
characteristic but is composed by combining several different measurements or estimates.
We compute the total number of function points in a program by measuring or estimating
the following program features: external inputs and outputs, user interactions, external
interfaces and files used by the system.
A weight is associated with each of these and the unadjusted function point count (UFC)
is computed by multiplying each raw count by the weight and summing all values.
The function point count is modified by the complexity of the project. FPs can be used
to estimate LOC based on the average number of LOC per FP for a given language:
LOC = AVC * number of function points, where AVC is a language-dependent factor
varying from 200 to 300 for assembly language down to 2 to 40 for a 4GL. FP counts
are subjective and depend on the judgement of the estimator.
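The unadjusted function point computation can be sketched as below. The weights used are typical average-complexity values; real FP counting adjusts them per item, so treat them as illustrative assumptions:

```python
# Unadjusted function point count (UFC): multiply each raw count by its
# weight and sum.  Weights here are assumed average-complexity values.
WEIGHTS = {
    "external inputs": 4,
    "external outputs": 5,
    "user interactions": 4,
    "external interfaces": 7,
    "files used": 10,
}

def unadjusted_fp(counts):
    return sum(counts[feature] * weight for feature, weight in WEIGHTS.items())

counts = {"external inputs": 10, "external outputs": 7,
          "user interactions": 12, "external interfaces": 3,
          "files used": 5}
ufc = unadjusted_fp(counts)
print(ufc)  # 40 + 35 + 48 + 21 + 50 = 194

# FPs can then be converted to an LOC estimate: LOC = AVC * FP, with AVC
# (LOC per FP) taken from language tables; AVC = 20 here is hypothetical.
print(ufc * 20)
```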
Object points are an alternative to function points. Object points are NOT the same as
object classes. The number of object points in a program is a weighted estimate of
– The number of separate screens that are displayed
– The number of reports that are produced by the system
– The number of program modules that must be developed to supplement
the database code
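A count based on the three weighted measures above can be sketched as follows; the weights are the commonly quoted ranges (screens 1–3, reports 2–8, 3GL modules 10), and the mid-range values used here are assumptions:

```python
# Object-point count: a weighted sum of screens, reports and modules.
# The default weights below are assumed mid-range values.
def object_points(screens, reports, modules,
                  screen_weight=2, report_weight=5, module_weight=10):
    return (screens * screen_weight
            + reports * report_weight
            + modules * module_weight)

print(object_points(screens=6, reports=4, modules=2))  # 12 + 20 + 20 = 52
```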
Object points are easier to estimate from a specification than function points as they are
simply concerned with screens, reports and programming modules. They can therefore
be estimated at a fairly early point in the development process, whereas it is very difficult
to estimate the number of lines of code in a system. Productivity is also affected by
factors such as application domain experience, process quality, project size,
technology support and the working environment.
Estimation techniques
There is no simple way to make an accurate estimate of the effort required to develop a
software system. Initial estimates are based on inadequate information in a user
requirements definition. The software may run on unfamiliar computers or use new
technology. The people in the project may be unknown. Changing technologies may
mean that previous estimating experience does not carry over to new systems
– Distributed object systems rather than mainframe systems
– Use of web services
– Use of ERP or database-centered systems
– Use of off-the-shelf software
– Development for and with reuse
– Development using scripting languages
– The use of CASE tools and program generators
There are several approaches to cost estimation, including algorithmic cost modelling,
expert judgement, estimation by analogy and pricing to win. Any of these can be tackled
using either a top-down or a bottom-up approach. A top-down approach starts at the
system level. We start by examining the overall functionality of the product and how that
functionality is provided by interacting sub-functions. The costs of system-level
activities such as integration, configuration management and documentation are taken
into account.
The bottom-up approach, by contrast, starts at the component level. The system is
decomposed into components, and we estimate the effort required to develop each of
these components. We then add these component costs to compute the effort required for
the whole system development.
Each method has strengths and weaknesses, so estimation should be based on several
methods; if the results diverge widely, more information should be gathered before
estimating again. Pricing to win, where the estimated cost is whatever the customer
has available to spend, is a commonly used method. This approach may seem unethical
and unbusinesslike, but when detailed information is lacking it may be the only
appropriate strategy. The project cost is agreed on the basis of an outline proposal,
and the development is constrained by that cost. A detailed specification may be
negotiated, or an evolutionary approach used for system development.
The most commonly used product attribute for cost estimation is code size. Most
algorithmic models take the form Effort = A * Size^B * M, but use different values
for A, B and M.
The size of a software system can only be known accurately when it is finished. Several
factors influence the final size
– Use of COTS and components
– Programming language
– Distribution of system
As the development process progresses, the size estimate becomes more accurate.
The accuracy of the estimates produced by an algorithmic model depends on the system
information that is available. As the software process proceeds, more information
becomes available so estimates become more and more accurate. If the initial estimate of
effort required is x months of effort, this range may be from 0.25x to 4x when the system
is first proposed. This narrows during the development process, as shown in the figure
below.
(Figure: estimate uncertainty narrows through the process. The range runs from 0.25x
to 4x at feasibility and shrinks through requirements, design and code, converging on
x at delivery.)
The first version of the COCOMO model (COCOMO 81) was a three-level model where
the levels corresponded to the detail of the analysis of the cost estimate. The first level
(basic) provided an initial rough estimate; the second level modified this using a number
of project and process multipliers, and the most detailed level produced estimates for
different phases of the project. Figure below shows the basic COCOMO formula for
different types of projects. The multiplier M reflects product, project and team
characteristics.
COCOMO 81 was developed with the assumption that a waterfall process would be used,
and that all software would be developed from scratch in languages such as C or
FORTRAN. There have been many changes in software engineering practice since then,
such as prototyping, reuse and incremental development, and COCOMO 2 is designed to
accommodate these different approaches to software development.
Post-architecture model: based on the number of lines of source code; used to estimate
development effort based on the system design specification.
In the early design model, estimates can be made after the requirements have been
agreed. It is based on the standard formula for algorithmic models:
Effort = A * Size^B * M
where
– M = PERS * RCPX * RUSE * PDIF * PREX * FCIL * SCED
– A = 2.94 in initial calibration,
– Size is in KSLOC; function points are estimated first, then converted to
KSLOC using standard tables for different languages.
– B varies from 1.1 to 1.24 depending on novelty of the project,
development flexibility, risk management approaches and the process
maturity.
Multipliers reflect the capability of the developers, the non-functional requirements, the
familiarity with the development platform, etc.
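The early design computation can be sketched as follows. A = 2.94 is the initial calibration from the text; the B value and the seven multiplier values below are illustrative assumptions, not calibrated ratings:

```python
# Early design estimate sketch: Effort = A * Size^B * M.
# A = 2.94 (initial calibration); Size in KSLOC.
A = 2.94

def early_design_effort(ksloc, b, multipliers):
    """Effort in person-months; M is the product of the seven multipliers."""
    m = 1.0
    for value in multipliers.values():
        m *= value
    return A * (ksloc ** b) * m

# Illustrative multiplier ratings (values < 1 reduce effort, > 1 increase it).
multipliers = {"PERS": 0.8, "RCPX": 1.1, "RUSE": 1.0, "PDIF": 1.2,
               "PREX": 0.9, "FCIL": 1.0, "SCED": 1.0}
print(round(early_design_effort(ksloc=50, b=1.15, multipliers=multipliers), 1))
```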
The COCOMO 2 reuse model takes into account black-box code that is reused without
change and code that has to be adapted to integrate it with new code. There are two
versions of reuse:
• Black-box reuse
– Where code is not modified and reused without understanding.
– Development effort for black box code is taken to be zero.
• White-box reuse
– Where code is understood and modified to adapt to integrate with new
code or other reused components.
– Many systems include automatically generated source code
– Program translators generate code from system models where standard
templates are embedded in the generator.
For generated code:
PMauto = (ASLOC * AT/100)/ATPROD
– ASLOC is the number of lines of automatically generated code
– AT is the percentage of code automatically generated.
– ATPROD is the productivity of engineers in integrating this code.
When code has to be understood and integrated, equivalent number of lines of source
code is estimated.
ESLOC = ASLOC * (1-AT/100) * AAM.
– ESLOC is equivalent number of lines of source code
– ASLOC is the number of lines of automatically generated code
– AT is the percentage of code automatically generated.
– AAM is the adaptation adjustment multiplier computed from the costs of
changing the reused code (AAF), the costs of understanding how to
integrate the code (SU) and the costs of reuse decision making (AA).
AAM = AAF + AA + SU
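The two reuse computations can be sketched together as below; all parameter values are illustrative:

```python
# COCOMO 2 reuse model sketch.  All parameter values here are illustrative.
def pm_auto(asloc, at, atprod):
    """Effort for automatically generated code:
    PM_auto = (ASLOC * AT/100) / ATPROD."""
    return (asloc * at / 100) / atprod

def esloc(asloc, at, aaf, aa, su):
    """Equivalent new source lines for adapted code:
    ESLOC = ASLOC * (1 - AT/100) * AAM, where AAM = AAF + AA + SU."""
    aam = aaf + aa + su
    return asloc * (1 - at / 100) * aam

# 20,000 adapted lines, 30% automatically generated, and an integration
# productivity (ATPROD) of 2400 LOC per person-month.
print(pm_auto(20_000, 30, 2400))  # 2.5 person-months for the generated part
# AAM components expressed as fractions of the cost of equivalent new code.
print(round(esloc(20_000, 30, aaf=0.3, aa=0.05, su=0.1)))  # 6300 ESLOC
```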
The post-architecture model uses the same formula as the early design model but with
17 rather than 7 associated
multipliers. The code size is estimated as:
– Number of lines of new code to be developed (LOC)
– Estimate of equivalent number of lines of new code computed using the
reuse model (ESLOC)
The scale factors used in the COCOMO II exponent computation are as follows:
A company takes on a project in a new domain. The client has not defined the process to
be used and has not allowed time for risk analysis. The company has a CMM level 2
rating.
– Precedentedness - new project (4)
– Development flexibility - no client involvement - very high (1)
– Architecture/risk resolution - no risk analysis - very low (5)
– Team cohesion - new team - nominal (3)
– Process maturity - some control - nominal (3)
• The exponent B is therefore 1.01 + (4 + 1 + 5 + 3 + 3)/100 = 1.17
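The exponent computation above can be sketched directly:

```python
# COCOMO 2 exponent: B = 1.01 + (sum of the five scale factor ratings,
# each rated 0-5) / 100.
def exponent_b(ratings):
    return 1.01 + sum(ratings) / 100

# Ratings from the example: precedentedness 4, development flexibility 1,
# architecture/risk resolution 5, team cohesion 3, process maturity 3.
print(round(exponent_b([4, 1, 5, 3, 3]), 2))  # 1.17
```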
Algorithmic cost models provide a basis for project planning as they allow alternative
strategies to be compared.
• Embedded spacecraft system
– Must be reliable
– Must minimise weight (number of chips)
– Multipliers on reliability and computer constraints > 1
• Cost components
– Target hardware
– Development platform
– Development effort
As well as effort estimation, managers must estimate the calendar time required to
complete a project and when staff will be required. Calendar time (TDEV: development
time) can be estimated using a COCOMO 2 formula:
TDEV = 3 * (PM)^(0.33 + 0.2*(B - 1.01))
– PM is the effort computation and B is the exponent computed as in
COCOMO 2 (B is 1 for the early prototyping model). This computation
predicts the nominal schedule for the project.
The time required is independent of the number of people working on the project.
Planned schedule may be shorter or longer than the nominal predicted schedule and can
be adjusted as:
TDEV = 3 * (PM)^(0.33 + 0.2*(B - 1.01)) * SCED percentage/100
The staff required cannot be computed simply by dividing the total effort by the
development schedule. The number of people working on a project varies depending on
the phase of
the project. The more people who work on the project, the more total effort is usually
required. A very rapid build-up of people often correlates with schedule slippage.
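The schedule computation can be sketched as below; the PM and B values used are illustrative:

```python
# Nominal schedule sketch: TDEV = 3 * PM^(0.33 + 0.2*(B - 1.01)),
# scaled by the SCED percentage if the schedule is compressed or extended.
def tdev(pm, b, sced_percentage=100):
    exponent = 0.33 + 0.2 * (b - 1.01)
    return 3 * (pm ** exponent) * sced_percentage / 100

# Hypothetical project: 60 person-months of effort, B = 1.17.
months = tdev(pm=60, b=1.17)
print(round(months, 1))       # nominal calendar months
print(round(60 / months, 1))  # implied average staffing level
```

Note that the nominal schedule depends only on the effort and the exponent, not on the team size, which is the point made above.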
3. List and briefly explain the factors that influence group working.
4. Explain how working environment can affect the effective management of people
in an organization.
5. What are the issues involved in managing people? How can people be retained?
How does PCMM influence higher productivities and motivation for software
personnel?