
PAPER NO.

CT 52
SECTION 5

CERTIFIED
INFORMATION COMMUNICATION
TECHNOLOGISTS
(CICT)

SOFTWARE ENGINEERING

STUDY TEXT



KASNEB SYLLABUS

LEARNING OUTCOMES

A candidate who passes this paper should be able to:


 Identify appropriate software system design tools
 Design appropriate software systems
 Describe software system testing
 Document and commission software
 Evaluate software acquisition techniques
 Maintain software

CONTENT page

1. Introduction to software systems development………………………………………….4


- Software systems development concepts
- Software development life cycle

2. Software process models…………………………………………………………………10


- Linear/waterfall model
- Rapid prototyping
- Evolutionary models
- Component based models
- Other models

3. Software requirements analysis…………………………………………………….…..19


- Overview of requirements concepts
- Requirement analysis process
- Requirements specification

4. Design tools and methods………………………………………………………………25


- System flowcharts
- Case tools
- Functional decomposition
- Modules design
- Structured walkthrough
- Decision tables
- Structured charts
- Data flow diagrams
- Object Oriented design tools

5. Software quality………………………………………………………….……………63
- Quality control and assurance
- Software quality factors and metrics
- Formal technical reviews
- Verification and validation
- Cost of quality

6. Software coding………………………………………………………………………82
- Coding styles and characteristics
- Coding in high-level languages
- Coding standards
- User interface



7. Software testing…………………………………………………………………….94
- Software testing life cycle
- Software testing methods (Black box testing and White box testing)
- Software testing levels (unit, integration, system and acceptance testing)
- Other forms of testing

8. Software acquisition methods……………………………………………………102


- Software costing
- Software outsourcing
- Open-source software engineering and customization
- In-house development
- Commercial Off The Shelf software (COTS)
- Budgeting for information systems
- Financial cost benefit analysis
- Business case approach
- Total cost of ownership
- Balanced scorecard/activity based costing and expected value
- Tracking and allocating costs

9. Conversion strategies…………………………….…………………….………131
- Conversion planning
- Parallel running
- Direct cut over
- Pilot study
- Phased approach

10. Documentation and commissioning………………………………………….138


- Objectives of systems documentation
- Use of systems documentation
- Qualities of a good documentation
- Types of documentation
- Software commissioning

11. Software maintenance and evolution......................................................…...146


- Types of software changes
- Software change identification
- Software change implementation

12. Auditing information systems………………………………………………162


- Overview of information systems audit
- Auditing computer resources
- Audit techniques
- Audit applications

13. Emerging issues and trends………………………………………………….



TOPIC 1

INTRODUCTION TO SOFTWARE SYSTEMS DEVELOPMENT

 Software systems development concepts


IEEE defines software engineering as:
The application of a systematic, disciplined, quantifiable approach to the development,
operation and maintenance of software; that is, the application of engineering to
software.

Fritz Bauer, a German computer scientist, defines software engineering as:
Software engineering is the establishment and use of sound engineering principles in
order to obtain economically software that is reliable and works efficiently on real
machines.

Software Evolution
The process of developing a software product using software engineering principles and
methods is referred to as software evolution. This includes the initial development of the
software and its maintenance and updates, until the desired software product is developed
and satisfies the expected requirements.

Evolution starts from the requirement gathering process. After that, developers create a
prototype of the intended software and show it to the users to get their feedback at an
early stage of software product development. The users suggest changes, and several
consecutive updates and maintenance releases follow; this process of changing the
original software continues until the desired software is accomplished.
Even after the user has the desired software in hand, advancing technology and changing
requirements force the software product to change accordingly. Re-creating the software
from scratch to match every new requirement is not feasible. The only feasible and
economical solution is to update the existing software so that it matches the latest
requirements.
Software Evolution Laws
Lehman has given laws for software evolution. He divided software into three different
categories:
 S-type (static-type) - This is software which works strictly according to defined
specifications and solutions. The solution and the method to achieve it are both
understood before coding begins. S-type software is the least subject to change and
hence the simplest of all; for example, a calculator program for mathematical
computation (see the sketch after this list).
 P-type (practical-type) - This is software with a collection of procedures, defined
by exactly what those procedures can do. In this software the specifications can be
described, but the solution is not obvious instantly; for example, gaming software.
 E-type (embedded-type) - This software works closely with the requirements of a
real-world environment. It has a high degree of evolution, as there are various
changes in laws, taxes, etc. in real-world situations; for example, online trading
software.
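
To make the S-type category concrete, the following is a minimal illustrative sketch in Python; the function and values are hypothetical and not part of the syllabus text. Its specification is fully understood before coding, so the program rarely needs to change, whereas an E-type system such as online trading software must keep evolving with changing rules.

# Illustrative S-type program: the specification ("add two numbers")
# is fully known before coding and is not expected to change.
def add(a, b):
    """Return the sum of a and b, exactly as specified."""
    return a + b

if __name__ == "__main__":
    print(add(2, 3))  # prints 5; behaviour is fixed by the specification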

E-Type software evolution


Lehman has given eight laws for E-type software evolution:
 Continuing change - An E-type software system must continue to adapt to
real-world changes, else it becomes progressively less useful.
 Increasing complexity - As an E-type software system evolves, its complexity
tends to increase unless work is done to maintain or reduce it.
 Conservation of familiarity - Familiarity with the software, and knowledge of
how and why it was developed in a particular manner, must be retained in order
to implement changes in the system.
 Continuing growth - For an E-type system to keep resolving the business
problem it was built for, its functional content must continue to grow in line with
changes in the business.
 Reducing quality - An E-type software system declines in quality unless
rigorously maintained and adapted to a changing operational environment.
 Feedback systems - E-type software systems constitute multi-loop, multi-level
feedback systems and must be treated as such to be successfully modified or
improved.
 Self-regulation - E-type system evolution processes are self-regulating, with the
distribution of product and process measures close to normal.
 Organizational stability - The average effective global activity rate in an
evolving E-type system is invariant over the lifetime of the product.

Software Paradigms
Software paradigms refer to the methods and steps taken while designing software.
Many methods have been proposed and are in use today, but we need to see where these
paradigms stand within software engineering. They can be grouped into categories,
each of which is contained within another:



The programming paradigm is a subset of the software design paradigm, which is in turn
a subset of the software development paradigm.

Software Development Paradigm


This paradigm is known as the software engineering paradigm, where all the engineering
concepts pertaining to the development of software are applied. It includes various
research activities and requirement gathering, which help to build the software product. It
consists of:
 Requirement gathering
 Software design
 Programming

Software Design Paradigm


This paradigm is a part of Software Development and includes:
 Design
 Maintenance
 Programming

Programming Paradigm
This paradigm is related closely to programming aspect of software development. This
includes:
 Coding
 Testing
 Integration

Need for Software Engineering


The need for software engineering arises because of the high rate of change in user
requirements and in the environment in which the software operates.
 Large software - It is easier to build a wall than a house or building; likewise,
as the size of software becomes large, engineering has to step in to give it a
scientific process.
 Scalability - If the software process were not based on scientific and engineering
concepts, it would be easier to re-create new software than to scale an existing
one.
 Cost - The hardware industry has shown its skill, and large-scale manufacturing
has lowered the price of computer and electronic hardware; but the cost of
software remains high if a proper process is not adopted.
 Dynamic nature - The always growing and adapting nature of software hugely
depends upon the environment in which the user works. If the nature of the
software is always changing, new enhancements need to be made to the existing
one. This is where software engineering plays a good role.
 Quality management - A better process of software development provides a
better quality software product.

Characteristics of good software


A software product can be judged by what it offers and how well it can be used. The
software must be satisfactory on the following grounds:
 Operational
 Transitional
 Maintenance
Well-engineered and crafted software is expected to have the following characteristics:

Operational
This tells us how well software works in operations. It can be measured on:
 Budget
 Usability
 Efficiency
 Correctness
 Functionality
 Dependability
 Security
 Safety

Transitional
This aspect is important when the software is moved from one platform to another:
 Portability
 Interoperability
 Reusability
 Adaptability

Maintenance
This aspect describes how well the software can maintain itself in an ever-changing
environment:
 Modularity
 Maintainability



 Flexibility
 Scalability
In short, software engineering is a branch of computer science which uses well-defined
engineering concepts to produce efficient, durable, scalable, on-budget and on-time
software products.

 Software development life cycle


The software development life cycle (SDLC) is an approach to making software for the
developer, user and customer. SDLC covers the phases from the initial phase to the end
phase of making a particular piece of software. It generally involves the analyst and the
corresponding clients. SDLC has a number of specific phases. These are:

1) Project identification
2) Feasibility study
3) System analysis
4) System design
5) System development
6) System testing
7) System implementation
8) System maintenance
9) System documentation

1) Project identification: - In this phase the analyst focuses on the basic objectives and
identifies the need for the corresponding software. The analyst sets up meetings with
the corresponding client to establish the desired software.
2) Feasibility study: - Feasibility is examined from three views for the particular
software being made for the client:
 Technical feasibility
 Financial feasibility
 Social feasibility
3) System analysis: - Analysis defines how, and what type of, software is to be made
for the client. It is largely a pen-and-paper exercise through which the analyst
focuses on the desired goals.
4) System design: - In this phase the analyst draws the diagrams related to the
particular software. The design takes forms such as flow charts, data flow diagrams
and net relationship diagrams (NRD).
5) System development: - Development takes the form of coding, error checking and
debugging of the particular software. This phase covers the developer activity
needed to make the software successful.
6) System testing: - Testing checks that whatever the analyst and developer have done
is correct and that the desired software is error free. In software engineering there
are several testing techniques with which we can check whether the project is error
free.



The main testing techniques are (a brief unit-test illustration follows the list):
o white box testing
o black box testing
o ad hoc testing
o system testing
o unit testing
o alpha testing
o beta testing
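
As a simple illustration of the unit testing technique listed above, the sketch below uses Python's built-in unittest module; the add() function and its test cases are hypothetical examples, not part of the study text. Each test feeds inputs and checks expected outputs, in the spirit of black box testing.

# Hypothetical unit-test sketch using Python's standard unittest module.
import unittest

def add(a, b):
    """The unit under test: returns the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()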

7) System implementation: - After completing the testing phase we implement the
particular product or system according to the customer's needs. In the
implementation phase some design and other user-facing parts may be changed as
per the customer's needs.
8) System maintenance: - After implementation the users use the particular software in
their corresponding operations to carry out their jobs. In this phase the software is
maintained, from the user or developer side, after some period of use of the
particular software. The related hardware, software and other utilities are also
maintained.
9) System documentation: - Documentation provides the approach and guidelines for
the user as well as the customer of the related software. It gives written instructions
on how to use the software, the related hardware requirements, and maintenance
considerations for the users.



TOPIC 2

SOFTWARE PROCESS MODELS

A software process is a coherent set of activities for specifying, designing, implementing
and testing software systems. It is a structured set of activities required to develop a
software system:

• Specification
• Design
• Validation
• Evolution

A software process model is an abstract representation of a process. It presents a
description of a process from some particular perspective.

Generic software process models

a) The waterfall model - Separate and distinct phases of specification and development.
b) Evolutionary development - Specification and development are interleaved.
c) Formal systems development - A mathematical system model is formally
transformed to an implementation.
d) Reuse-based development - The system is assembled from existing components

 Linear/waterfall model

This model assumes that everything in the previous stage was carried out and took place
perfectly as planned, and that there is no need to think about past issues that may arise in
the next phase. This model does not work smoothly if some issues are left over from the
previous step. The sequential nature of the model does not allow us to go back and undo
or redo our actions.

This model is best suited when developers have already designed and developed similar
software in the past and are aware of all its domains. The drawbacks of the waterfall
model are:

 The difficulty of accommodating change after the process is underway.


 Inflexible partitioning of the project into distinct stages.
 This makes it difficult to respond to changing customer requirements.
 Therefore, this model is only appropriate when the requirements are well-
understood.

 Rapid prototyping

Rapid Prototyping (RP) can be defined as a group of techniques used to quickly fabricate
a scale model of a part or assembly using three-dimensional computer aided design
(CAD) data. What is commonly considered to be the first RP technique,
Stereolithography, was developed by 3D Systems of Valencia, CA, USA. The company
was founded in 1986, and since then, a number of different RP techniques have become
available.

Rapid Prototyping has also been referred to as solid free-form manufacturing, computer-
automated manufacturing, and layered manufacturing. RP has obvious use as a vehicle
for visualization. In addition, RP models can be used for testing, such as when an airfoil
shape is put into a wind tunnel. RP models can be used to create male models for tooling,
such as silicone rubber molds and investment casts. In some cases, the RP part can be the
final part, but typically the RP material is not strong or accurate enough. When the RP
material is suitable, highly convoluted shapes (including parts nested within parts) can be
produced because of the nature of RP.

The reasons for Rapid Prototyping are:

 To increase effective communication.


 To decrease development time.
 To decrease costly mistakes.
 To minimize sustaining engineering changes.
 To extend product lifetime by adding necessary features and eliminating
redundant features early in the design.

Rapid Prototyping decreases development time by allowing corrections to a product to


be made early in the process. By giving engineering, manufacturing, marketing, and
purchasing a look at the product early in the design process, mistakes can be corrected
and changes can be made while they are still inexpensive. The trends in manufacturing
industries continue to emphasize the following:

• Increasing number of variants of products.


• Increasing product complexity.
• Decreasing product lifetime before obsolescence.
• Decreasing delivery time.

Rapid Prototyping improves product development by enabling better communication in a
concurrent engineering environment.

Methodology of Rapid Prototyping

The basic methodology for all current rapid prototyping techniques can be summarized
as follows:

1. A CAD model is constructed, and then converted to STL format. The resolution
can be set to minimize stair stepping.
2. The RP machine processes the .STL file by creating sliced layers of the model.
3. The first layer of the physical model is created. The model is then lowered by the
thickness of the next layer, and the process is repeated until completion of the
model.
4. The model and any supports are removed. The surface of the model is then
finished and cleaned.

 Evolutionary models

This approach is based on the idea of rapidly developing an initial software
implementation from very abstract specifications and modifying it in the light of the
customer's appraisal. Each program version inherits the best features from earlier
versions. Each version is refined based upon customer feedback to produce a system
which satisfies the customer's needs. At this point the system may be delivered, or it may
be re-implemented using a more structured approach to enhance robustness and
maintainability. Specification, development and validation activities are concurrent, with
strong feedback between each.



There are two types of evolutionary development:

1. Exploratory programming
Here, the objective of the process is to work with the customer to explore their
requirements and deliver a final system. Development starts with the better
understood components of the system. The software evolves by adding new
features as they are proposed.

2. Throwaway prototyping
Here, the purpose of the evolutionary development process is to understand the
customer's requirements and thus develop a better requirements definition for the
system. The prototype concentrates on experimenting with those components of the
requirements which are poorly understood.

Advantages
This is the only method appropriate for situations where a detailed system specification
is unavailable. Effective in rapidly producing small systems, software with short life
spans, and developing sub-components of larger systems.

Disadvantages
It is difficult to measure progress and produce documentation reflecting every version of
the system as it evolves. This paradigm usually results in badly structured programs due
to continual code modification. Production of good quality software using this method
requires highly skilled and motivated programmers.



Problems of evolutionary development include:

• Lack of process visibility.
• Systems are often poorly structured.
• Special skills (e.g. in languages for rapid prototyping) may be required.

Evolutionary development is usually employed:

• For small or medium-size interactive systems
• For parts of large systems (e.g. the user interface)
• For short-lifetime systems

 Component based models

It is the creation, integration, and re-use of components of program code, each of which
has a common interface for use by multiple systems.
It is based on systematic reuse, where systems are integrated from existing components or
COTS (commercial off-the-shelf) systems. Process stages include:

• Component analysis
• Requirements modification
• System design with reuse
• Development and integration

This approach is becoming more important, but there is still limited experience with it.

 Other models

Process Iteration

System requirements ALWAYS evolve in the course of a project, so process iteration,
where earlier stages are reworked, is always part of the process for large systems.
Iteration can be applied to any of the generic process models. Two (related) approaches:
• Incremental development

• Spiral development

Incremental Development

Rather than deliver the system as a single delivery, the development and delivery is
broken down into increments, with each increment delivering part of the required
functionality. User requirements are prioritized and the highest priority requirements are
included in early increments. Once the development of an increment is started, its
requirements are frozen, though requirements for later increments can continue to evolve.

Advantages of incremental development include:

• Customer value can be delivered with each increment so system functionality is


available earlier.
• Early increments act as a prototype to help elicit requirements for later
increments.
• Lower risk of overall project failure.

The highest priority system services tend to receive the most testing.
Spiral Development

The process is represented as a spiral rather than as a sequence of activities with
backtracking. Each loop in the spiral represents a phase in the process. There are no
fixed phases such as specification or design; loops in the spiral are chosen depending on
what is required. Risks are explicitly assessed and resolved throughout the process.

The spiral model sectors include:

• Objective setting - Specific objectives for the phase are identified.


• Risk assessment and reduction - Risks are assessed and activities put in place to
reduce the key risks.
• Development and validation - A development model for the system is chosen,
which can be any of the generic models.
• Planning - The project is reviewed and the next phase of the spiral is planned.

CASE

Computer-aided software engineering (CASE) is software to support software
development and evolution processes. Activity automation includes:
• Graphical editors for system model development
• Data dictionary to manage design entities
• Graphical UI builder for user interface construction
• Debuggers to support program fault finding
• Automated translators to generate new versions of a program



CASE Technology
CASE technology has led to significant improvements in the software process, though not
the order of magnitude improvements that were once predicted. Reasons include:

• Software engineering requires creative thought - this is not readily automatable.


• Software engineering is a team activity and, for large projects, much time is spent
in team interactions. CASE technology does not really support these.

CASE Classification

Classification helps us understand the different types of CASE tools and their support for
process activities.

• Functional perspective - Tools are classified according to their specific function.


• Process perspective - Tools are classified according to process activities that are
supported.
• Integration perspective - Tools are classified according to their organization into
integrated units.

Functional Tool Classification


Tool type: Examples
Planning tools: PERT tools, estimation tools, spreadsheets
Editing tools: Text editors, diagram editors, word processors
Change management tools: Requirements traceability tools, change control systems
Configuration management tools: Version management systems, system building tools
Prototyping tools: Very high-level languages, user interface generators
Method-support tools: Design editors, data dictionaries, code generators
Language-processing tools: Compilers, interpreters
Program analysis tools: Cross-reference generators, static analyzers, dynamic analyzers
Testing tools: Test data generators, file comparators
Debugging tools: Interactive debugging systems
Documentation tools: Page layout programs, image editors
Re-engineering tools: Cross-reference systems, program restructuring systems



CASE Integration
Tools - Support individual process tasks such as design consistency checking, text
editing, etc.

Workbenches - Support a process phase such as specification or design; normally
include a number of integrated tools.

Environments - Support all or a substantial part of an entire software process; normally
include several integrated workbenches.

TOPIC 3

SOFTWARE REQUIREMENTS ANALYSIS

 Overview of requirements concepts

Software Requirement
1. A condition or capability needed by a user to solve a problem or achieve an objective.
2. A condition or capability that must be met or possessed by a system to satisfy a contract,
standard, specification, or other formally imposed document.

1. Known requirements: something a stakeholder believes is to be implemented.
2. Unknown requirements: forgotten by the stakeholder because they are not needed right
now or are needed only by another stakeholder.
3. Undreamt requirements: the stakeholder may not be able to think of new requirements due
to limited knowledge.

Known, unknown and undreamt requirements may be functional or non-functional.


• Functional requirements: - These describe what the software has to do. They are often
called product features, and they depend on the type of software, the expected users and
the type of system in which the software is used.
• Non-functional requirements: - These are mostly quality requirements that stipulate how
well the software does what it has to do. They define system properties and constraints,
e.g. reliability, response time and storage requirements. Constraints include I/O device
capability, system representations, etc.
• User requirements: - Statements in natural language, plus diagrams, of the services the
system provides and its operational constraints.
• System requirements: - A structured document setting out detailed descriptions of the
system's functions, services and operational constraints. It defines what should be
implemented, so it may be part of a contract between client and contractor.



Requirements engineering process
• The process of finding out, analyzing, documenting and checking these services and constraints
is called requirements engineering.
• RE produces one large document, written in a natural language, that contains a description of
what the system will do without saying how it will do it.
• The input to RE is the problem statement prepared by the customer, and the output is the SRS
prepared by the developer.
• The requirements themselves are the descriptions of the system services and constraints that are
generated during the requirements engineering process.

Definition 2
Requirements engineering processes: -The processes used for RE vary widely depending on
the application domain, the people involved and the organisation developing the requirements.
However, there are a number of generic activities common to all processes.
 Requirements elicitation
 Requirements analysis
 Requirements documentation
 Requirements review

• Requirement Elicitation: - Known as the gathering of requirements. Here requirements are

identified with the help of the customer and existing system processes, if they are available.
• Requirement Analysis: - Analysis of requirements starts with requirement elicitation.
Requirements are analyzed in order to identify inconsistencies, defects, etc.
• Requirement Documentation: - This is the end product of requirement elicitation and
analysis. Documentation is very important as it will be the foundation for the design of
the software. The documentation is known as the SRS.



• Requirement Review: - A review process is carried out to improve the quality of the SRS. It
may also be called verification. It should be a continuous activity that is incorporated into
the elicitation, analysis and documentation activities.
The primary output of the requirements engineering process is the system requirements
specification (SRS).

Feasibility Study: - This is the process of evaluating or analyzing the potential impact of a
proposed project or program.
Five common factors (TELOS)
 Technical feasibility: - Is it technically feasible to provide direct communication
connectivity through space from one location of the globe to another?
 Economic feasibility: - Are the project's cost assumptions realistic?
 Legal feasibility: - Is the business model legally acceptable?
 Operational feasibility: - Is there a market for the product?
 Schedule feasibility: - Are the project's schedule assumptions realistic?

Elicitation: - This is also called requirements discovery. Requirements are identified with the help
of the customer and existing system processes, if they are available.
Requirements elicitation is the most difficult, perhaps the most critical, most error-prone and most
communication-intensive aspect of software development.

Various methods of Requirements Elicitation

1. Interviews
• After receiving the problem statement from the customer, the first step is to arrange a meeting
with the customer.
• During the meeting or interview, both the parties would like to understand each other.
• The objective of conducting an interview is to understand the customer’s expectation from the
software

Selection of stakeholder
1. Entry level personnel
2. Middle level stakeholder
3. Managers
4. Users of the software (Most important)

2. Brainstorming Sessions
• Brainstorming is a group technique that may be used during requirements elicitation to
understand the requirements.
• Every idea is documented in a way that everyone can see it.



• After the brainstorming session a detailed report is prepared and reviewed by the facilitator.

3. Facilitated Application Specification Technique (FAST)


• FAST is similar to a brainstorming session, and its objective is to bridge the expectation gap:
the difference between what the developers think they are supposed to build and what the
customers think they are going to get.
• In order to reduce this gap, a team-oriented approach to requirements gathering was developed,
called FAST.

4. Quality Function Deployment (QFD)


It incorporates the voice of the customer and converts it into a document.

 Requirement analysis process


The requirement analysis phase analyzes, refines and scrutinizes the requirements in order to
make them consistent and unambiguous.

1 Draw the context diagram


The context diagram is a simple model that defines the boundaries and interfaces of the
proposed system.
2 Development of prototype
A prototype helps the client to visualize the proposed system and increases the understanding
of the requirements. The prototype may help the parties to take the final decision.
3 Model the requirements
This step usually consists of various graphical representations of the functions, data entities,
external entities and the relationships between them. This graphical view may help to find
incorrect, inconsistent or missing requirements. Such models include the data flow diagram,
entity relationship diagram, data dictionary and state transition diagram.
4 Finalize the requirements
After modeling the requirements, inconsistencies and ambiguities are identified and corrected,
and the flow of data among the various modules is analyzed. The requirements are then
finalized, and the next step is to document them in the prescribed format.



Documentation
This is the way of representing the requirements in a consistent format. The SRS serves many
purposes, depending upon who is writing it.

 Requirements specification
Requirements specification is a complete description of the behavior of a system to be
developed. It includes a set of use cases that describe all the interactions the users will have with
the software. Use cases are also known as functional requirements. In addition to use cases, the
SRS also contains non-functional (or supplementary) requirements. Nonfunctional requirements
are requirements which impose constraints on the design or implementation (such as
performance engineering requirements, quality standards, or design constraints).

Need for Software Requirement Specification (SRS)


• The problem is that the client usually does not understand software or the software
development process, and the developer often does not understand the client’s problem
and application area
• This causes a communication gap between the parties involved in the development
project. A basic purpose of software requirements specification is to bridge this
communication gap.

Characteristics of good SRS document: -Some of the identified desirable qualities of the
SRS documents are the following-
• Concise- The SRS document should be concise and at the same time unambiguous,
consistent, and complete. An SRS is unambiguous if and only if every requirement stated
has one and only one interpretation.
• Structured- The SRS document should be well-structured. A well-structured document
is easy to understand and modify. In practice, the SRS document undergoes several
revisions to cope with the customer requirements.
• Black-box view- It should only specify what the system should do and refrain from
stating how to do. This means that the SRS document should specify the external
behaviours of the system and not discuss the implementation issues.
• Conceptual integrity- The SRS document should exhibit conceptual integrity so that the
reader can easily understand the contents.
• Verifiable- All requirements of the system as documented in the SRS document should
be verifiable. This means that it should be possible to determine whether or not the
requirements have been met in an implementation.



Requirements Validation
Requirements validation is used to check the document for:

 Completeness & consistency


 Conformance to standards
 Requirements conflicts
 Technical errors
 Ambiguous requirements



TOPIC 4

DESIGN TOOLS AND METHODS

 System flowcharts

System flowcharts are a way of displaying how data flows in a system and how decisions are
made to control events.

To illustrate this, symbols are used. They are connected together to show what happens to data
and where it goes. The basic ones include:

Symbols used in flow charts

Note that system flow charts are very similar to data flow charts. Data flow charts do not include
decisions, they just show the path that data takes, where it is held, processed, and then output.

Using system flowchart ideas

This system flowchart is a diagram for a 'cruise control' for a car. The cruise control keeps the
car at a steady speed that has been set by the driver.



A system flowchart for cruise control on a car

The flowchart shows what the outcome is if the car is going too fast or too slow. The system is
designed to add fuel, or take it away and so keep the car's speed constant. The output (the car's
new speed) is then fed back into the system via the speed sensor.
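
To connect the flowchart to code, the minimal Python sketch below models one pass around the cruise-control feedback loop described above; the target speed, tolerance and function name are hypothetical, chosen purely for illustration.

# Illustrative sketch of the cruise-control decision described by the flowchart.
TARGET_SPEED = 100.0  # km/h, the speed set by the driver (hypothetical value)
TOLERANCE = 2.0       # acceptable deviation in km/h

def control_step(current_speed):
    """Decide the action for one pass around the feedback loop."""
    if current_speed < TARGET_SPEED - TOLERANCE:
        return "add fuel"      # car is going too slow
    if current_speed > TARGET_SPEED + TOLERANCE:
        return "reduce fuel"   # car is going too fast
    return "hold"              # speed is within tolerance

# The new speed measured by the speed sensor is fed back in on the next call.
print(control_step(95.0))   # add fuel
print(control_step(104.0))  # reduce fuel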

Other examples of uses for system diagrams include:

 aircraft control
 central heating
 automatic washing machines
 booking systems for airlines

Types of flowchart

Sterneckert (2003) suggested that flowcharts can be modeled from the perspective of different
user groups (such as managers, system analysts and clerks) and that there are four general types:

 Document flowcharts, showing controls over a document-flow through a system


 Data flowcharts, showing controls over a data-flow in a system
 System flowcharts showing controls at a physical or resource level
 Program flowchart, showing the controls in a program within a system

Notice that every type of flowchart focuses on some kind of control, rather than on the particular
flow itself.



However, there are several other classifications. For example, Andrew Veronis (1978) named
three basic types of flowcharts: the system flowchart, the general flowchart, and the detailed
flowchart. That same year Marilyn Bohl (1978) stated that "in practice, two kinds of flowcharts are
used in solution planning: system flowcharts and program flowcharts." More recently, Mark A.
Fryman (2001) stated that there are more differences: "Decision flowcharts, logic flowcharts,
systems flowcharts, product flowcharts, and process flowcharts are just a few of the different
types of flowcharts that are used in business and government".

In addition, many diagram techniques exist that are similar to flowcharts but carry a different
name, such as UML activity diagrams.

Flowchart building blocks

Common Shapes

The following are some of the commonly used shapes used in flowcharts. Generally, flowcharts
flow from top to bottom and left to right.

Shape name: Description

Flow Line: An arrow coming from one symbol and ending at another symbol represents that
control passes to the symbol the arrow points to. The line for the arrow can be solid or dashed.
The meaning of an arrow with a dashed line may differ from one flowchart to another and can be
defined in the legend.

On-Page Connector: Generally represented with a circle, showing where multiple control flows
converge in a single exit flow. It will have more than one arrow coming into it, but only one
going out. In simple cases, one may simply have an arrow point to another arrow instead. These
are useful to represent an iterative process (what in computer science is called a loop). A loop
may, for example, consist of a connector where control first enters, processing steps, a
conditional with one arrow exiting the loop, and one going back to the connector. For additional
clarity, wherever two lines accidentally cross in the drawing, one of them may be drawn with a
small semicircle over the other, showing that no connection is intended.

Annotation: Annotations represent comments or remarks about the flowchart. Like comments
found in high-level programming languages, they have no effect on the interpretation or
behavior of the flowchart. Sometimes the shape consists of a box with dashed (or dotted) lines.

Terminal: Represented as circles, ovals, stadiums or rounded (fillet) rectangles. They usually
contain the word "Start" or "End", or another phrase signaling the start or end of a process, such
as "submit inquiry" or "receive product".

Decision: Represented as a diamond (rhombus) showing where a decision is necessary,
commonly a Yes/No question or True/False test. The conditional symbol is peculiar in that it has
two arrows coming out of it, usually from the bottom point and right point, one corresponding to
Yes or True, and one corresponding to No or False. (The arrows should always be labeled.)
More than two arrows can be used, but this is normally a clear indicator that a complex decision
is being taken, in which case it may need to be broken down further or replaced with the
"predefined process" symbol. Decisions can also help in the filtering of data.

Input/Output: Represented as a parallelogram. Involves receiving data and displaying processed
data. Can only move from input to output and not vice versa. Examples: Get X from the user;
display X.

Predefined Process: Represented as rectangles with double-struck vertical edges; these are used
to show complex processing steps which may be detailed in a separate flowchart. Example:
PROCESS-FILES. One subroutine may have multiple distinct entry points or exit flows (see
coroutine). If so, these are shown as labeled 'wells' in the rectangle, and control arrows connect
to these 'wells'.

Process: Represented as rectangles. This shape is used to show that something is performed.
Examples: "Add 1 to X", "replace identified part", "save changes", etc.

Preparation: Represented as a hexagon. May also be called initialization. Shows operations
which have no effect other than preparing a value for a subsequent conditional or decision step.
Alternatively, this shape is used to replace the Decision shape in the case of conditional looping.

Off-Page Connector: Represented as a home plate-shaped pentagon. Similar to the On-Page
Connector except that it allows for placing a connector that connects to another page.



Other Shapes

A typical flowchart from older basic computer science textbooks may have the following kinds
of symbols:

Labeled connectors

Represented by an identifying label inside a circle. Labeled connectors are used in complex or
multi-sheet diagrams to substitute for arrows. For each label, the "outflow" connector must
always be unique, but there may be any number of "inflow" connectors. In this case, a junction in
control flow is implied.

Concurrency symbol

Represented by a double transverse line with any number of entry and exit arrows. These
symbols are used whenever two or more control flows must operate simultaneously. The exit
flows are activated concurrently, when all of the entry flows have reached the concurrency
symbol. A concurrency symbol with a single entry flow is a fork; one with a single exit flow is a
join.

Data-flow extensions

A number of symbols have been standardized for data flow diagrams to represent data flow,
rather than control flow. These symbols may also be used in control flowcharts (e.g. to substitute
for the parallelogram symbol).

 A Document represented as a rectangle with a wavy base;


 A Manual input represented by quadrilateral, with the top irregularly sloping up from left
to right. An example would be to signify data-entry from a form;
 A Manual operation represented by a trapezoid with the longest parallel side at the top, to
represent an operation or adjustment to process that can only be made manually.
 A Data File represented by a cylinder.

 Case tools
Computer-aided software engineering (CASE) is the domain of software tools used to design
and implement applications. CASE tools are similar to and were partly inspired by Computer
Aided Design (CAD) tools used to design hardware products. CASE tools are used to develop
software that is high-quality, defect-free, and maintainable. CASE software is often associated
with methodologies for the development of information systems together with automated tools
that can be used in the software development process.



CASE software
CASE software is classified into 3 categories:

1. Tools support specific tasks in the software life-cycle.


2. Workbenches combine two or more tools focused on a specific part of the software life-
cycle.
3. Environments combine two or more tools or workbenches and support the complete
software life-cycle.

Tools

CASE tools support specific tasks in the software development life-cycle. They can be divided
into the following categories:

1. Business and Analysis modeling. Graphical modeling tools. E.g., E/R modeling, object
modeling, etc.
2. Development. Design and construction phases of the life-cycle. Debugging environments.
E.g., GNU Debugger.
3. Verification and validation. Analyze code and specifications for correctness,
performance, etc.
4. Configuration management. Control the check-in and check-out of repository objects and
files. E.g., SCCS, CMS.
5. Metrics and measurement. Analyze code for complexity, modularity (e.g., no "go to's"),
performance, etc.
6. Project management. Manage project plans, task assignments, scheduling.

Another common way to distinguish CASE tools is the distinction between Upper CASE and
Lower CASE. Upper CASE Tools support business and analysis modeling. They support
traditional diagrammatic languages such as ER diagrams, Data flow diagram, Structure charts,
Decision Trees, Decision tables, etc. Lower CASE Tools support development activities, such as
physical design, debugging, construction, testing, component integration, maintenance, and
reverse engineering. All other activities span the entire life-cycle and apply equally to upper and
lower CASE.

Workbenches

Workbenches integrate two or more CASE tools and support specific software-process activities.
Hence they achieve:

 A homogeneous and consistent interface (presentation integration).


 Seamless integration of tools and tool chains (control and data integration).



An example workbench is Microsoft's Visual Basic programming environment. It incorporates
several development tools: a GUI builder, smart code editor, debugger, etc. Most commercial
CASE products tended to be such workbenches that seamlessly integrated two or more tools.
Workbenches also can be classified in the same manner as tools; as focusing on Analysis,
Development, Verification, etc. as well as being focused on upper case, lower case, or processes
such as configuration management that span the complete life-cycle.

Environments

An environment is a collection of CASE tools or workbenches that attempts to support the
complete software process. This contrasts with tools that focus on one specific task or a specific
part of the life-cycle. CASE environments are classified as follows:

1. Toolkits. Loosely coupled collections of tools. These typically build on operating system
workbenches such as the Unix Programmer's Workbench or the VMS VAX set. They
typically perform integration via piping or some other basic mechanism to share data and
pass control. The strength of easy integration is also one of the drawbacks. Simple
passing of parameters via technologies such as shell scripting can't provide the kind of
sophisticated integration that a common repository database can.
2. Fourth generation. These environments are also known as 4GL standing for fourth
generation language environments due to the fact that the early environments were
designed around specific languages such as Visual Basic. They were the first
environments to provide deep integration of multiple tools. Typically these environments
were focused on specific types of applications. For example, user-interface driven
applications that did standard atomic transactions to a relational database. Examples are
Informix 4GL, and Focus.
3. Language-centered. Environments based on a single often object-oriented language such
as the Symbolics Lisp Genera environment or Visual Works Smalltalk from Parcplace. In
these environments all the operating system resources were objects in the object-oriented
language. This provides powerful debugging and graphical opportunities but the code
developed is mostly limited to the specific language. For this reason, these environments
were mostly a niche within CASE. Their use was mostly for prototyping and R&D
projects. A common core idea for these environments was the model-view-controller user
interface that facilitated keeping multiple presentations of the same design consistent
with the underlying model. The MVC architecture was adopted by the other types of
CASE environments as well as many of the applications that were built with them.
4. Integrated. These environments are an example of what most IT people tend to think of
first when they think of CASE. Environments such as IBM's AD/Cycle, Andersen
Consulting's FOUNDATION, the ICL CADES system, and DEC Cohesion. These
environments attempt to cover the complete life-cycle from analysis to maintenance and
provide an integrated database repository for storing all artifacts of the software process.



The integrated software repository was the defining feature for these kinds of tools. They
provided multiple different design models as well as support for code in
heterogeneous languages. One of the main goals for these types of environments was
"round trip engineering": being able to make changes at the design level and have those
automatically reflected in the code and vice versa. These environments were also
typically associated with a particular methodology for software development. For
example the FOUNDATION CASE suite from Andersen was closely tied to the
Andersen Method/1 methodology.
5. Process-centered. This is the most ambitious type of integration. These environments
attempt to not just formally specify the analysis and design objects of the software
process but the actual process itself and to use that formal process to control and guide
software projects. Examples are East, Enterprise II, Process Wise, Process Weaver, and
Arcadia. These environments were by definition tied to some methodology since the
software process itself is part of the environment and can control many aspects of tool
invocation.

In practice, the distinction between workbenches and environments was flexible. Visual Basic
for example was a programming workbench but was also considered a 4GL environment by
many. The features that distinguished workbenches from environments were deep integration via
a shared repository or common language and some kind of methodology (integrated and process-
centered environments) or domain (4GL) specificity.

Major CASE Risk Factors

Some of the most significant risk factors for organizations adopting CASE technology include:

 Inadequate standardization. Organizations usually have to tailor and adopt methodologies


and tools to their specific requirements. Doing so may require significant effort to
integrate both divergent technologies as well as divergent methods. For example, before
the adoption of the UML standard the diagram conventions and methods for designing
object-oriented models were vastly different among followers of Jacobsen, Booch,
Rumbaugh, etc.
 Unrealistic expectations. The proponents of CASE technology—especially vendors
marketing expensive tool sets—often hype expectations that the new approach will be a
silver bullet that solves all problems. In reality no such technology can do that and if
organizations approach CASE with unrealistic expectations they will inevitably be
disappointed.
 Inadequate training. As with any new technology, CASE requires time to train people in
how to use the tools and to get up to speed with them. CASE projects can fail if
practitioners are not given adequate time for training or if the first project attempted with
the new technology is itself highly mission critical and fraught with risk.



 Inadequate process control. CASE provides significant new capabilities to utilize new
types of tools in innovative ways. Without the proper process guidance and controls these
new capabilities can cause significant new problems as well.

 Functional decomposition

Functional Decomposition is the process of taking a complex process and breaking it down into
its smaller, simpler parts. For instance, think about using an ATM. You could decompose the
process into:

1. Walk up to the ATM


2. Insert your bank card
3. Enter your pin

Well...you get the point.

You can think of programming the same way. Think of the software running that ATM:

1. Code for reading the card


2. PIN verification
3. Transfer Processing

Each of which can be broken down further. Once you've reached the most decomposed pieces of
a subsystem, you can think about how to start coding those pieces. You then compose those
small parts into the greater whole.
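
As an illustration of this idea (not taken from the study text), the Python sketch below decomposes the ATM example into small functions and then composes them; all names, values and checks are hypothetical.

# Hypothetical functional decomposition of the ATM withdrawal process.
def read_card(card_number):
    """Decomposed piece 1: read (here, just validate) the card."""
    if len(card_number) != 16 or not card_number.isdigit():
        raise ValueError("invalid card")
    return card_number

def verify_pin(entered_pin, stored_pin):
    """Decomposed piece 2: PIN verification as its own small unit."""
    return entered_pin == stored_pin

def process_withdrawal(balance, amount):
    """Decomposed piece 3: transfer processing, returning the new balance."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def atm_withdraw(card, pin, stored_pin, balance, amount):
    """Compose the small pieces back into the greater whole."""
    read_card(card)
    if not verify_pin(pin, stored_pin):
        raise ValueError("wrong PIN")
    return process_withdrawal(balance, amount)

print(atm_withdraw("1234567812345678", "4321", "4321", 500.0, 120.0))  # 380.0

Each decomposed piece can be developed and tested on its own, which is the benefit described in the next paragraph.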

The benefit of functional decomposition is that once you start coding, you are working on the
simplest components you can possibly work with for your application. Therefore developing and
testing those components becomes much easier (not to mention you are better able to architect
your code and project to fit your needs).

The obvious downside is the time investment. To perform functional decomposition on a
complex system takes more than a trivial amount of time BEFORE coding begins.

Overview

There are different types of decomposition defined in computer sciences:

 In structured programming, algorithmic decomposition breaks a process down into
well-defined steps.



 Structured analysis breaks down a software system from the system context level to
system functions and data entities as described by Tom DeMarco.
 Object-oriented decomposition, on the other hand, breaks a large system down into
progressively smaller classes or objects that are responsible for some part of the problem
domain.
 According to Booch, algorithmic decomposition is a necessary part of object-oriented
analysis and design, but object-oriented systems start with and emphasize decomposition
into classes.

More generally, functional decomposition in computer science is a technique for mastering the
complexity of the function of a model. A functional model of a system is thereby replaced by a
series of functional models of subsystems.

Decomposition topics

Decomposition paradigm

A decomposition paradigm in computer programming is a strategy for organizing a program as a
number of parts, and it usually implies a specific way to organize a program text. Usually the
aim of using a decomposition paradigm is to optimize some metric related to program
complexity, for example the modularity of the program or its maintainability.

Most decomposition paradigms suggest breaking down a program into parts so as to minimize
the static dependencies among those parts, and to maximize the cohesiveness of each part. Some
popular decomposition paradigms are the procedural, modules, abstract data type and object
oriented ones.

The concept of decomposition paradigm is entirely independent and different from that of model
of computation, but the two are often confused, most often in the cases of the functional model
of computation being confused with procedural decomposition, and of the actor model of
computation being confused with object oriented decomposition.



Decomposition diagram

Decomposition Structure

A decomposition diagram shows a high-level function, process, organization, data subject area,
or other type of object broken down into lower-level, more detailed components. For example,
decomposition diagrams may represent organizational structure or functional decomposition into
processes. Decomposition diagrams provide a logical hierarchical decomposition of a system.

 Modules design

Modular design, or "modularity in design", is a design approach that subdivides a system into
smaller parts called modules or skids that can be independently created and then used in different
systems. A modular system can be characterized by functional partitioning into discrete scalable,
reusable modules, rigorous use of well-defined modular interfaces, and making use of industry
standards for interfaces.
Modular programming is a software design technique that emphasizes separating the
functionality of a program into independent, interchangeable modules, such that each contains
everything necessary to execute only one aspect of the desired functionality.



A module interface expresses the elements that are provided and required by the module. The
elements defined in the interface are detectable by other modules. The implementation contains
the working code that corresponds to the elements declared in the interface. Modular
programming is closely related to structured programming and object-oriented programming, all
having the same goal of facilitating construction of large software programs and systems by
decomposition into smaller pieces, and all originating around the 1960s. While historically usage
of these terms has been inconsistent, today "modular programming" refers to high-level
decomposition of the code of an entire program into pieces, structured programming to the low-
level code use of structured control flow, and object-oriented programming to the data use of
objects, a kind of data structure.

In object-oriented programming, the use of interfaces as an architectural pattern to construct
modules is known as interface-based programming.
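
As a small illustration (a hypothetical sketch, not taken from the syllabus text), the separation between a module's interface and its implementation can be shown in a short Python module; the names inventory, add_item and total_value are invented for the example.

    # inventory.py - a small module: the documented functions form its interface,
    # while the leading-underscore helper is an implementation detail.

    _items = []  # internal state, not meant to be used directly by other modules

    def add_item(name, unit_price, quantity):
        """Interface: record an item in the inventory."""
        _items.append({"name": name, "unit_price": unit_price, "quantity": quantity})

    def total_value():
        """Interface: return the total value of all recorded items."""
        return sum(_line_value(item) for item in _items)

    def _line_value(item):
        # Implementation detail: other modules should not rely on this helper.
        return item["unit_price"] * item["quantity"]

Another module would import inventory and call only add_item and total_value; the internal list and the helper can change freely without affecting callers, which is exactly what a well-defined modular interface is meant to guarantee.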

Terminology

The term package is sometimes used instead of module (as in Dart, Go, or Java). In other
implementations, this is a distinct concept; in Python a package is a collection of modules, while
the upcoming Java 9 is planned to introduce a new concept of module (a collection of packages
with enhanced access control).

Furthermore, the term "package" has other uses in software. A component is a similar concept,
but typically refers to a higher level; a component is a piece of a whole system, while a module is
a piece of an individual program. The scale of the term "module" varies significantly between
languages; in Python it is very small-scale and each file is a module, while in Java 9 it is planned
to be large-scale, where a module is a collection of packages, which are in turn collections of
files.

Other terms for modules include unit, used in Pascal dialects.

Module design, which is also called "low-level design", has to consider the programming
language that will be used for implementation, since this determines the kind of interfaces you
can use and a number of other design issues.

 Structured walkthrough

In software engineering, a walkthrough or walk-through is a form of software peer review "in


which a designer or programmer leads members of the development team and other interested
parties through a software product, and the participants ask questions and make comments about
possible errors, violation of development standards, and other problems".



"Software product" normally refers to some kind of technical document. As indicated by the
IEEE definition, this might be a software design document or program source code, but use
cases, business process definitions, test case specifications, and a variety of other technical
documentation may also be walked through.

A walkthrough differs from software technical reviews in its openness of structure and its
objective of familiarization. It differs from software inspection in its ability to suggest direct
alterations to the product reviewed, its lack of a direct focus on training and process
improvement, and its omission of process and product measurement.

Objectives and participants

In general, a walkthrough has one or two broad objectives: to gain feedback about the technical
quality or content of the document; and/or to familiarize the audience with the content.

A walkthrough is normally organized and directed by the author of the technical document. Any
combination of interested or technically qualified personnel (from within or outside the project)
may be included as seems appropriate.

IEEE 1028 recommends three specialist roles in a walkthrough:

 The Author, who presents the software product in step-by-step manner at the walk-
through meeting, and is probably responsible for completing most action items;
 The Walkthrough Leader, who conducts the walkthrough, handles administrative tasks,
and ensures orderly conduct (and who is often the Author); and
 The Recorder, who notes all anomalies (potential defects), decisions, and action items
identified during the walkthrough meetings.

What is Structured Walkthrough?

A structured walkthrough is a static testing technique performed in an organized manner between a
group of peers to review and discuss the technical aspects of the software development process. The
main objective of a structured walkthrough is to find defects in order to improve the quality of
the product.

Structured walkthroughs are usually NOT used for technical discussions or to discuss the
solutions for the issues found. As explained, the aim is to detect errors, not to correct them.
When the walkthrough is finished, the author of the output is responsible for fixing the issues.

Benefits:

 Saves time and money as defects are found and rectified very early in the lifecycle.



 This provides value-added comments from reviewers with different technical
backgrounds and experience.

 It notifies the project management team about the progress of the development process.

 It creates awareness about different development or maintenance methodologies, which
can provide professional growth to participants.

Structured Walkthrough Participants:

 Author - The Author of the document under review.

 Presenter - The presenter usually develops the agenda for the walkthrough and presents
the output being reviewed.

 Moderator - The moderator facilitates the walkthrough session, ensures the walkthrough
agenda is followed, and encourages all the reviewers to participate.

 Reviewers - The reviewers evaluate the document under test to determine if it is
technically accurate.

 Scribe - The scribe is the recorder of the structured walkthrough outcomes who records
the issues identified and any other technical comments, suggestions, and unresolved
questions.

 Decision tables

Decision tables are a precise yet compact way to model complex rule sets and their
corresponding actions.

Decision tables, like flowcharts and if-then-else and switch-case statements, associate conditions
with actions to perform, but in many cases do so in a more elegant way.

In the 1960s and 1970s a range of "decision table based" languages such as Filetab were popular
for business programming.

Structure

The four quadrants


Conditions Condition alternatives
Actions Action entries



Each decision corresponds to a variable, relation or predicate whose possible values are listed
among the condition alternatives. Each action is a procedure or operation to perform, and the
entries specify whether (or in what order) the action is to be performed for the set of condition
alternatives the entry corresponds to. Many decision tables include in their condition alternatives
the don't care symbol, a hyphen. Using don't cares can simplify decision tables, especially when
a given condition has little influence on the actions to be performed. In some cases, entire
conditions thought to be important initially are found to be irrelevant when none of the
conditions influence which actions are performed.

Aside from the basic four quadrant structure, decision tables vary widely in the way the
condition alternatives and action entries are represented. Some decision tables use simple
true/false values to represent the alternatives to a condition (akin to if-then-else), other tables
may use numbered alternatives (akin to switch-case), and some tables even use fuzzy logic or
probabilistic representations for condition alternatives. In a similar way, action entries can
simply represent whether an action is to be performed (check the actions to perform), or in more
advanced decision tables, the sequencing of actions to perform (number the actions to perform).

Example

The limited-entry decision table is the simplest to describe. The condition alternatives are simple
Boolean values, and the action entries are check-marks, representing which of the actions in a
given column are to be performed.

A technical support company writes a decision table to diagnose printer problems based upon
symptoms described to them over the phone from their clients.

The following is a balanced decision table (created by Systems Made Simple).

Printer troubleshooter
                                                  Rules 1-8
Conditions   Printer does not print                  Y Y Y Y N N N N
             A red light is flashing                 Y Y N N Y Y N N
             Printer is unrecognized                 Y N Y N Y N Y N
Actions      Check the power cable                   X (rule 3)
             Check the printer-computer cable        X X (rules 1, 3)
             Ensure printer software is installed    X X X X (rules 1, 3, 5, 7)
             Check/replace ink                       X X X X (rules 1, 2, 5, 6)
             Check for paper jam                     X X (rules 2, 4)



Of course, this is just a simple example (and it does not necessarily correspond to the reality of
printer troubleshooting), but even so, it demonstrates how decision tables can scale to several
conditions with many possibilities.
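
As an illustrative sketch (assuming the rule-to-action mapping shown in the table above), the same decision table can be embedded directly in Python, with each combination of the three condition values mapped to the list of actions to perform.

    # Conditions, in order: (printer does not print, red light flashing, printer unrecognized)
    RULES = {
        (True,  True,  True):  ["Check the printer-computer cable",
                                "Ensure printer software is installed",
                                "Check/replace ink"],
        (True,  True,  False): ["Check/replace ink", "Check for paper jam"],
        (True,  False, True):  ["Check the power cable",
                                "Check the printer-computer cable",
                                "Ensure printer software is installed"],
        (True,  False, False): ["Check for paper jam"],
        (False, True,  True):  ["Ensure printer software is installed", "Check/replace ink"],
        (False, True,  False): ["Check/replace ink"],
        (False, False, True):  ["Ensure printer software is installed"],
        (False, False, False): [],
    }

    def diagnose(does_not_print, red_light, unrecognized):
        """Look up the actions for the given combination of symptoms."""
        return RULES[(does_not_print, red_light, unrecognized)]

    print(diagnose(True, False, True))   # rule 3: power cable, cable, software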

Software engineering benefits

Decision tables, especially when coupled with the use of a domain-specific language, allow
developers and policy experts to work from the same information: the decision tables themselves.

Tools to render nested if statements from traditional programming languages into decision tables
can also be used as a debugging tool.

Decision tables have proven to be easier to understand and review than code, and have been used
extensively and successfully to produce specifications for complex systems.

Program embedded decision tables

Decision tables can be, and often are, embedded within computer programs and used to 'drive'
the logic of the program. A simple example might be a lookup table containing a range of
possible input values and a function pointer to the section of code to process that input.

Static decision table


Input Function Pointer
'1' Function 1 (initialize)
'2' Function 2 (process 2)
'9' Function 9 (terminate)

Multiple conditions can be coded for in similar manner to encapsulate the entire program logic in
the form of an 'executable' decision table or control table.
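
A minimal sketch of such a program-embedded table in Python, using a dictionary in place of function pointers (the function names here are invented for the illustration):

    def initialize():
        print("initialize")

    def process_2():
        print("process 2")

    def terminate():
        print("terminate")

    # Static decision table: input value -> function to call
    DISPATCH = {
        '1': initialize,
        '2': process_2,
        '9': terminate,
    }

    def handle(code):
        action = DISPATCH.get(code)
        if action is None:
            raise ValueError("no rule for input %r" % code)
        action()          # 'drive' the program logic from the table

    handle('2')           # prints "process 2"

Adding a new behaviour then only requires adding a row to DISPATCH, which is exactly the maintainability benefit that embedded decision tables aim for.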

Implementations

 Filetab, originally from the NCC


 DETAB/65, 1965, ACM
 FORTAB from Rand in 1962, designed to be embedded in FORTRAN
 A Ruby implementation exists using MapReduce to find the correct actions based on
specific input values.

Decision table provides a handy and compact way to represent complex business logic. In a
decision table, business logic is well divided into conditions, actions (decisions) and rules for
representing the various components that form the business logic.



 Structured charts
A Structure Chart (SC) in software engineering and organizational theory is a chart which
shows the breakdown of a system to its lowest manageable levels. They are used in structured
programming to arrange program modules into a tree. Each module is represented by a box,
which contains the module's name. The tree structure visualizes the relationships between
modules.

Overview

The organization chart is a diagram showing graphically the relation of one official to another, or
others, of a company. It is also used to show the relation of one department to another, or others,
or of one function of an organization to another, or others. This chart is valuable in that it enables
one to visualize a complete organization, by means of the picture it presents.

A company's organizational chart typically illustrates relations between people within an
organization. Such relations might include managers to sub-workers, directors to managing
directors, chief executive officer to various departments, and so forth. When an organization
chart grows too large it can be split into smaller charts for separate departments within the
organization. The different types of organization charts include:

 Hierarchical
 Matrix
 Flat (also known as Horizontal)

There is no accepted form for making organization charts other than putting the principal
official, department or function first, or at the head of the sheet, and the others below, in the
order of their rank. The titles of officials and sometimes their names are enclosed in boxes or
circles. Lines are generally drawn from one box or circle to another to show the relation of one
official or department to the others.

History



Organization Chart of Tabulating Machine Co., 1917

The Scottish-American engineer Daniel McCallum (1815–1878) is credited with creating the first
organizational charts of American business around 1854. This chart was drawn by George Holt
Henshaw.

The term "organization chart" came into use in the early twentieth century. In 1914
Brintondeclared "organization charts are not nearly so widely used as they should be. As
organization charts are an excellent example of the division of a total into its components, a
number of examples are given here in the hope that the presentation of organization charts in
convenient form will lead to their more widespread use." In those years industrial engineers
promoted the use of organization charts.

In the 1920s a survey revealed that organizational charts were still not common among ordinary
business concerns, but they were beginning to find their way into administrative and business
enterprises.

The term "organigram" originates in the 1960s.

Limitations

There are several limitations of organizational charts:

 If updated manually, organizational charts can very quickly become out-of-date,
especially in large organizations that change their staff regularly.
 They only show "formal relationships" and tell nothing of the pattern of human (social)
relationships which develop. They also often do not show horizontal relationships.
 They provide little information about the managerial style adopted (e.g. "autocratic",
"democratic" or an intermediate style)
 In some cases, an organigraph may be more appropriate, particularly if one wants to show
non-linear, non-hierarchical relationships in an organization.
 They often do not include customers.



Examples

A military example chart for explanation purposes.

The example on the right shows a simple hierarchical organizational chart.

An example of a "line relationship" (or chain of command in military relationships) in this chart
would be between the general and the two colonels - the colonels are directly responsible to the
general.

An example of a "lateral relationship" in this chart would be between "Captain A", and "Captain
B" who both work on level and both report to the "Colonel B".

Various shapes such as rectangles, squares, triangles, circles can be used to indicate different
roles. Color can be used both for shape borders and connection lines to indicate differences in
authority and responsibility, and possibly formal, advisory and informal links between people. A
department or position yet to be created or currently vacant might be shown as a shape with a
dotted outline. Importance of the position may be shown both with a change in size of the shape
in addition to its vertical placement on the chart.

 A structure chart (module chart, hierarchy chart) is a graphic depiction of the decomposition
of a problem. It is a tool to aid in software design. It is particularly helpful on large problems.

A structure chart illustrates the partitioning of a problem into subproblems and shows the
hierarchical relationships among the parts. A classic "organization chart" for a company is an
example of a structure chart.

The top of the chart is a box representing the entire problem, the bottom of the chart shows a
number of boxes representing the less complicated subproblems. (Left-right position on the chart
is irrelevant.)

A structure chart is NOT a flowchart. It has nothing to do with the logical sequence of tasks. It
does NOT show the order in which tasks are performed. It does NOT illustrate an algorithm.
Each block represents some function in the system, and thus should contain a verb phrase, e.g.
"Print report heading."

 Data flow diagrams

A data flow diagram is a graphical representation of the flow of data in an information system. It is
capable of depicting incoming data flow, outgoing data flow and stored data. The DFD does not,
however, say anything about how the data is processed as it flows through the system.

There is a prominent difference between DFD and Flowchart. The flowchart depicts flow of
control in program modules. DFDs depict flow of data in the system at various levels. DFD does
not contain any control or branch elements.



Types of DFD

Data Flow Diagrams are either Logical or Physical.

 Logical DFD - This type of DFD concentrates on the system process and the flow of data in
the system. For example, in a banking software system, how data is moved between
different entities.
 Physical DFD - This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and close to the implementation.

DFD Components

A DFD can represent the source, destination, storage and flow of data using the following set of
components -

 Entities - Entities are the source and destination of information data. Entities are represented
by a rectangle with their respective names.
 Process - Activities and actions taken on the data are represented by circles or round-edged
rectangles.
 Data Storage - There are two variants of data storage: it can either be represented as a
rectangle with both smaller sides absent, or as an open-sided rectangle with only one
side missing.
 Data Flow - Movement of data is shown by pointed arrows. Data movement is shown
from the base of the arrow (its source) towards the head of the arrow (its destination).

Levels of DFD

 Level 0 - Highest abstraction level DFD is known as Level 0 DFD, which depicts the
entire information system as one diagram concealing all the underlying details. Level 0
DFDs are also known as context level DFDs.



 Level 1 - The Level 0 DFD is broken down into the more specific Level 1 DFD. Level 1
DFD depicts basic modules in the system and the flow of data among various modules. Level
1 DFD also mentions basic processes and sources of information.



 Level 2 - At this level, DFD shows how data flows inside the modules mentioned in
Level 1.

Higher-level DFDs can be transformed into more specific lower-level DFDs with a deeper level of
understanding, until the desired level of specification is achieved.

What is a data flow diagram (DFD)?

Data Flow Diagrams (DFD) helps us in identifying existing business processes. It is a technique
we benefit from particularly before we go through business process re-engineering.

At its simplest, a data flow diagram looks at how data flows through a system. It concerns things
like where the data will come from and go to as well as where it will be stored. But you won't
find information about the processing timing (e.g. whether the processes happen in sequence or
in parallel).

We usually begin with drawing a context diagram, a simple representation of the whole system.
To elaborate further from that, we drill down to a level 1 diagram with additional information
about the major functions of the system. This could continue to evolve to become a level 2
diagram when further analysis is required. Progression to level 3, 4 and so on is possible but
anything beyond level 3 is not very common. Please bear in mind that the level of detail asked
for depends on your process change plan.

Diagram Notations

Now we'd like to briefly introduce to you a few diagram notations which you'll see in the tutorial
below.

External Entity

An external entity can represent a human, system or subsystem. It is where certain data comes
from or goes to. It is external to the system we study, in terms of the business process. For this
reason, people used to draw external entities on the edge of a diagram.



Process

A process is a business activity or function where the manipulation and transformation of data
takes place. A process can be decomposed to a finer level of detail, to represent how data is
being processed within the process.

Data Store

A data store represents the storage of persistent data required and/or produced by the process.
Here are some examples of data stores: membership forms, database table, etc.

Data Flow

A data flow represents the flow of information, with its direction indicated by an arrowhead
shown at the end(s) of the flow connector.

What will we do in this tutorial?

In this tutorial we will show you how to draw a context diagram, along with a level 1 diagram.

Note: The software we are using here is Visual Paradigm Standard Edition. You are welcome to
download a free 30-day evaluation copy of Visual Paradigm to walk through the example below.
No registration, email address or obligation is required.

Steps to Draw a Context Diagram

1. To create new DFD, select Diagram > New from the toolbar.
2. In the New Diagram window, select Data Flow Diagram and click Next.
3. Enter Context as diagram name and click OK to confirm.



4. We'll now draw the first process. From the Diagram Toolbar, drag Process onto the
diagram. Name the new process System.

5. Next, let's create an external entity. Place your mouse pointer over System. Press and
drag out the Resource Catalog button at top right.

6. Release the mouse button and select Bidirectional Data Flow -> External Entity from the
Resource Catalog.

7. Name the new external entity Customer.

8. Now we'll model the database accessed by the system. Use the Resource Catalog to create a
Data Store from System, with a bidirectional data flow in between.



9. Name the new data store Inventory.

10. Create two more data stores, Customer and Transaction, as shown below. We have just
completed the Context diagram.



Steps to Draw a Level 1 DFD

1. Instead of creating another diagram from scratch, we will decompose the System process
to form a new DFD. Right click on System and select Decompose from the popup menu.

2. The data stores and/or external entities connected to the selected process (System) would
be referred to in the level 1 DFD. So when you are prompted to add them to the new
diagram, click Yes to confirm.
Note: The new DFD should look very similar to the Context diagram initially. Every
element should remain unchanged, except that the System process (from which this new
DFD decomposes) is now gone and replaced by a blank space (to be elaborated).
3. Rename the new DFD. Right click on its background and select Rename.... In the
diagram's name box, enter Level 1 DFD and press ENTER.
4. Create three processes (Process Order, Ship Good, Issue Receipt) in the center as shown
below. That is the old spot for the System process and we place them there to elaborate
System.

Wiring with connection lines for data flows

The remaining steps in this section are about connecting the model elements in the diagram. For
example, Customer provides order information when placing an order for processing.

1. Place your mouse pointer over Customer. Drag out the Resource Catalog icon and
release your mouse button on Process Order.



2. Select Data Flow from Resource Catalog.

3. Enter order information as the caption of the flow.

4. Meanwhile, the Process Order process also receives customer information from the
database in order to process the order.
Use the Resource Catalog to create a data flow from Customer to Process Order.

Optional: You can label the data flow "customer information" if you like. But since this
data flow is quite self-explanatory visually, we are going to omit it here.
5. By combining the order information from Customer (external entity) and the customer
information from Customer (data store), Process Order (process) then creates a
transaction record in the database. Create a data flow from Process Order to Transaction.



Drawing Tips:
To rearrange a connection line, place your mouse pointer over where you want to add a
pivot point to it. You'll then see a bubble at your mouse point. Click and drag it to where
you need.



Up to this point, your diagram should look something like this.

6. Once a transaction is stored, the shipping process follows. Therefore, create a data flow
from Process Order (process) to Ship Good (process).

7. Ship Good needs to read the transaction information (i.e. the order) in order to pack the
right product for delivery. Create a data flow from Transaction (data store) to Ship Good
(process).

Note: If there is a lack of space, feel free to move the shapes around to make room.



8. Ship Good also needs to read the customer information for his/her shipping address.
Create a data flow from Customer (data store) to Ship Good (process).

9. Ship Good then updates the Inventory database to reflect the goods shipped. Create a data
flow from Ship Good (process) to Inventory (data store). Name it updated product
record.

10. Once the order arrives in the customer's hands, the Issue Receipt process begins. In it, a
receipt is prepared based on the transaction record stored in the database. So let's create a
data flow from Transaction (data store) to Issue Receipt (process).

11. Then a receipt is issued to the customer. Let's create a data flow from Issue Receipt
(process) to Customer (external entity). Name the data flow receipt.



You have just finished drawing the level 1 diagram which should look something like
this.

Steps to Make the Level 1 Diagram Easier to Read

The completed diagram above looks a bit rigid and busy. In this section we are going to make
some changes to the connectors to increase readability.



1. Right click on the diagram (Level 1 DFD) and select Connectors > Curve. Connectors
in the diagram are now shown as curved lines.

2. Move the shapes around so that the diagram looks less crowded.

 Object Oriented design tools

Object-oriented analysis and design (OOAD) is a popular technical approach for analyzing and
designing an application, system, or business by applying the object-oriented



paradigm and visual modeling throughout the development life cycles to foster better
stakeholder communication and product quality.

According to the popular guide Unified Process, OOAD in modern software engineering is best
conducted in an iterative and incremental way. Iteration by iteration, the outputs of OOAD
activities, analysis models for OOA and design models for OOD respectively, will be refined and
evolve continuously, driven by key factors like risks and business value.

The software life cycle is typically divided up into stages going from abstract descriptions of the
problem to designs, then to code and testing, and finally to deployment. The earliest stages of this
process are analysis and design. The analysis phase is also often called "requirements
acquisition".

The Waterfall Model.

OOAD is conducted in an iterative and incremental manner, as formulated by the Unified
Process.

In some approaches to software development, known collectively as waterfall models, the
boundaries between each stage are meant to be fairly rigid and sequential. The term "waterfall"
was coined for such methodologies to signify that progress went sequentially in one direction



only, i.e., once analysis was complete then and only then was design begun and it was rare (and
considered a source of error) when a design issue required a change in the analysis model or
when a coding issue required a change in design.

The alternative to waterfall models is iterative models. This distinction was popularized by Barry
Boehm in a very influential paper on his Spiral Model for iterative software development. With
iterative models it is possible to do work in various stages of the model in parallel. So for
example it is possible—and not seen as a source of error—to work on analysis, design, and even
code all on the same day and to have issues from one stage impact issues from another. The
emphasis on iterative models is that software development is a knowledge-intensive process and
that things like analysis can't really be completely understood without understanding design
issues, that coding issues can affect design, that testing can yield information about how the code
or even the design should be modified, etc.

Although it is possible to do object-oriented development using a waterfall model, in practice
most object-oriented systems are developed with an iterative approach. As a result, in object-
oriented processes "analysis and design" are often considered at the same time.

The object-oriented paradigm emphasizes modularity and re-usability. The goal of an object-
oriented approach is to satisfy the "open closed principle". A module is open if it supports
extension, that is, if the module provides standardized ways to add new behaviors or describe new
states. In the object-oriented paradigm this is often accomplished by creating a new subclass of
an existing class. A module is closed if it has a well-defined stable interface that all other
modules must use and that limits the interaction and potential errors that can be introduced into
one module by changes in another. In the object-oriented paradigm this is accomplished by
defining methods that invoke services on objects. Methods can be either public or private, i.e.,
certain behaviors that are unique to the object are not exposed to other objects. This reduces a
source of many common errors in computer programming.
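
A small hedged sketch of this idea in Python (the Shape, Circle and Rectangle names are purely illustrative): Shape is closed behind a stable interface, area(), yet the design is open because new behaviour is added by subclassing rather than by editing existing code.

    import math

    class Shape:
        """Closed: callers rely only on this stable interface."""
        def area(self):
            raise NotImplementedError

    class Circle(Shape):
        """Open: new state and behaviour are added via a subclass."""
        def __init__(self, radius):
            self._radius = radius              # private-by-convention state

        def area(self):
            return math.pi * self._radius ** 2

    class Rectangle(Shape):
        def __init__(self, width, height):
            self._width, self._height = width, height

        def area(self):
            return self._width * self._height

    def total_area(shapes):
        # Works unchanged for any future Shape subclass.
        return sum(s.area() for s in shapes)

    print(total_area([Circle(1.0), Rectangle(2.0, 3.0)]))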

The distinction between analysis and design is often described
as "what vs. how". In analysis developers work with users and domain experts to define what the
system is supposed to do. Implementation details are supposed to be mostly or totally (depending
on the particular method) ignored at this phase. The goal of the analysis phase is to create a
functional model of the system regardless of constraints such as appropriate technology. In
object-oriented analysis this is typically done via use cases and abstract definitions of the most
important objects. The subsequent design phase refines the analysis model and makes the needed
technology and other implementation choices. In object-oriented design the emphasis is on
describing the various objects, their data, behavior, and interactions. The design model should
have all the details required so that programmers can implement the design in code.



Object-oriented analysis

The purpose of any analysis activity in the software life-cycle is to create a model of the system's
functional requirements that is independent of implementation constraints.

The main difference between object-oriented analysis and other forms of analysis is that by the
object-oriented approach we organize requirements around objects, which integrate both
behaviors (processes) and states (data) modeled after real world objects that the system interacts
with. In other, more traditional analysis methodologies, the two aspects, processes and data, are
considered separately. For example, data may be modeled by ER diagrams, and behaviors by
flow charts or structure charts.

The primary tasks in object-oriented analysis (OOA) are:

 Find the objects


 Organize the objects
 Describe how the objects interact
 Define the behavior of the objects
 Define the internals of the objects

Common models used in OOA are use cases and object models. Use cases describe scenarios for
standard domain functions that the system must accomplish. Object models describe the names,
class relations (e.g. Circle is a subclass of Shape), operations, and properties of the main objects.
User-interface mockups or prototypes can also be created to help understanding.

Object-oriented design

During object-oriented design (OOD), a developer applies implementation constraints to the


conceptual model produced in object-oriented analysis. Such constraints could include the
hardware and software platforms, the performance requirements, persistent storage and
transactions, usability of the system, and limitations imposed by budgets and time. Concepts in
the analysis model which is technology independent are mapped onto implementing classes and
interfaces resulting in a model of the solution domain, i.e., a detailed description of how the
system is to be built on concrete technologies.

Important topics during OOD also include the design of software architectures by applying
architectural patterns and design patterns with object-oriented design principles.

Object-oriented modeling

Object-oriented modeling (OOM) is a common approach to modeling applications, systems, and
business domains by using the object-oriented paradigm throughout the entire development life



cycles. OOM is a main technique heavily used by both OOA and OOD activities in modern
software engineering.

Object-oriented modeling typically divides into two aspects of work: the modeling of dynamic
behaviors like business processes and use cases, and the modeling of static structures like classes
and components. OOA and OOD are the two distinct abstract levels (i.e. the analysis level and
the design level) during OOM. The Unified Modeling Language (UML) and SysML are the two
popular international standard languages used for object-oriented modeling.

The benefits of OOM are:

Efficient and effective communication

Users typically have difficulties in understanding comprehensive documents and programming
language codes well. Visual model diagrams can be more understandable and can allow users
and stakeholders to give developers feedback on the appropriate requirements and structure of
the system. A key goal of the object-oriented approach is to decrease the "semantic gap" between
the system and the real world, and to have the system be constructed using terminology that is
almost the same as the stakeholders use in everyday business. Object-oriented modeling is an
essential tool to facilitate this.

Useful and stable abstraction

Modeling helps coding. A goal of most modern software methodologies is to first address "what"
questions and then address "how" questions, i.e. first determine the functionality the system is to
provide without consideration of implementation constraints, and then consider how to make
specific solutions to these abstract requirements, and refine them into detailed designs and codes
by constraints such as technology and budget. Object-oriented modeling enables this by
producing abstract and accessible descriptions of both system requirements and designs, i.e.
models that define their essential structures and behaviors like processes and objects, which are
important and valuable development assets with higher abstraction levels above concrete and
complex source code.



TOPIC 5

SOFTWARE QUALITY

• The degree to which a system, component, or process meets specified requirements.


• The degree to which a system, component or process meets customer or user needs or
expectations

 Quality control and assurance

Software Quality Control is the set of procedures used by organizations to ensure that a
software product will meet its quality goals at the best value to the customer, and to continually
improve the organization’s ability to produce software products in the future.

Software quality control refers to specified functional requirements as well as non-functional
requirements such as supportability, performance and usability. It also refers to the ability for
software to perform well in unforeseeable scenarios and to keep a relatively low defect rate.

Definition 2

Software Quality Control is a function that checks whether a software component or supporting
artifact meets requirements, or is "fit for use". Software Quality Control is commonly referred to
as Testing.

Quality Control Activities

 Check that assumptions and criteria for the selection of data and the different factors
related to data are documented.
 Check for transcription errors in data input and reference.
 Check the integrity of database files.
 Check for consistency in data.
 Check that the movement of inventory data among processing steps is correct.
 Check for uncertainties in data, database files etc.
 Undertake review of internal documentation.
 Check methodological and data changes resulting in recalculations.
 Undertake completeness checks.
 Compare Results to previous Results.



Software Control Methods

 Rome laboratory Software framework


 Goal Question Metric Paradigm
 Risk Management Model
 The Plan-Do-Check-Action Model of Quality Control
 Total Software Quality Control
 Spiral Model Of Software Developments

Software development requires quality control.

These specified procedures and outlined requirements lead to the ideas of verification and
validation and of software testing.

It is distinct from software quality assurance, which encompasses processes and standards for
ongoing maintenance of high quality in products, e.g. software deliverables, documentation and
processes - avoiding defects; whereas software quality control is a validation of artifacts'
compliance against established criteria - finding defects.

 Software quality assurance (SQA) consists of a means of monitoring the software
engineering processes and methods used to ensure quality. The methods by which this is
accomplished are many and varied, and may include ensuring conformance to one or more
standards, such as ISO 9000, or a model such as CMMI.

SQA encompasses the entire software development process, which includes processes such as
requirements definition, software design, coding, source code control, code reviews, software
configuration management, testing, release management, and product integration. SQA is
organized into goals, commitments, abilities, activities, measurements, and verifications



Differences between Software Quality Assurance (SQA) and Software Quality Control
(SQC):

Many people still use the term Quality Assurance (QA) and Quality Control (QC)
interchangeably but this should be discouraged.

Criteria      Software Quality Assurance (SQA) vs. Software Quality Control (SQC)

Definition    SQA is a set of activities for ensuring quality in software engineering processes
              (that ultimately result in quality in software products); the activities establish and
              evaluate the processes that produce products. SQC is a set of activities for
              ensuring quality in software products; the activities focus on identifying defects in
              the actual products produced.
Focus         SQA is process focused; SQC is product focused.
Orientation   SQA is prevention oriented; SQC is detection oriented.
Breadth       SQA is organization wide; SQC is product/project specific.
Scope         SQA relates to all products that will ever be created by a process; SQC relates to
              a specific product.
Activities    SQA: process definition and implementation, audits, training.
              SQC: reviews, testing.

 Software quality factors and metrics


Quality Factors
 Functionality - A set of attributes that bear on the existence of a set of functions and their
specified properties. The functions are those that satisfy stated or implied needs.
o Suitability
o Accuracy
o Interoperability
o Compliance
o Security

 Reliability - A set of attributes that bear on the capability of software to maintain its level of
performance under stated conditions for a stated period of time.
o Maturity
o Recoverability



 Usability- A set of attributes that bear on the effort needed for use, and on the individual
assessment of such use, by a stated or implied set of users.
o Learnability
o Understandability
o Operability

 Efficiency- A set of attributes that bear on the relationship between the level of performance
of the software and the amount of resources used, under stated conditions.
o Time Behavior
o Resource Behavior

 Maintainability- A set of attributes that bear on the effort needed to make specified
modifications.
o Stability
o Analyzability
o Changeability
o Testability

 Portability- A set of attributes that bear on the ability of software to be transferred from one
environment to another.
o Installability
o Replaceability
o Adaptability

A Framework for Technical Software Metrics

General principles for selecting product measures and metrics are discussed in this section. The
generic measurement process activities parallel the scientific method taught in natural science
classes (formulation, collection, analysis, interpretation, feedback).
If the measurement process is too time consuming, no data will ever be collected during the
development process. Metrics should be easy to compute or developers will not take the time to
compute them.
The tricky part is that, in addition to being easy to compute, the metrics need to be perceived as
being important for predicting whether product quality can be improved or not.

Measures, Metrics and Indicators

 A measure provides a quantitative indication of the extent, amount, dimension, capacity, or
size of some attribute of a product or process



 The IEEE glossary defines a metric as "a quantitative measure of the degree to which a
system, component, or process possesses a given attribute."
 An indicator is a metric or combination of metrics that provide insight into the software
process, a software project, or the product itself

Measurement Principles

 The objectives of measurement should be established before data collection begins;


 Each technical metric should be defined in an unambiguous manner;
 Metrics should be derived based on a theory that is valid for the domain of application
(e.g., metrics for design should draw upon basic design concepts and principles and
attempt to provide an indication of the presence of an attribute that is deemed desirable);
 Metrics should be tailored to best accommodate specific products and processes.

Measurement Process

 Formulation. The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
 Collection. The mechanism used to accumulate data required to derive the formulated
metrics.
 Analysis. The computation of metrics and the application of mathematical tools.
 Interpretation. The evaluation of metrics results in an effort to gain insight into the
quality of the representation.
 Feedback. Recommendations derived from the interpretation of product metrics
transmitted to the software team.

S/W metrics will be useful only if they are characterized effectively and validated so that their
worth is proven.

 A metric should have desirable mathematical properties.


 When a metric represents a S/W characteristic that increases when positive traits occur or
decreases when undesirable traits are encountered, the value of the metric should increase
or decrease in the same manner.
 Each metric should be validated empirically in a wide variety of contexts before being
published or used to make decisions.



Goal-Oriented Software Measurements

 The Goal/Question/Metric Paradigm


 Establish an explicit measurement goal that is specific to the process activity or
product characteristic that is to be assessed
 Define a set of questions that must be answered in order to achieve the goal, and
 Identify well-formulated metrics that help to answer these questions.

A goal definition template can be used to define each measurement goal.

 Goal definition template


 Analyze {the name of activity or attribute to be measured}
 for the purpose of {the overall objective of the analysis}
 with respect to {the aspect of the activity or attribute that is considered}
 from the viewpoint of {the people who have an interest in the measurement}
 In the context of {the environment in which the measurement takes place}.
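
For example, one possible (purely illustrative) instantiation of the template would be: analyze the source code of the billing module, for the purpose of improving maintainability, with respect to the number and severity of defects found in review, from the viewpoint of the maintenance team, in the context of the current release of an in-house billing system.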

The Attributes of Effective S/W Metrics

 Simple and computable. It should be relatively easy to learn how to derive the metric, and
its computation should not demand inordinate effort or time
 Empirically and intuitively persuasive. The metric should satisfy the engineer’s intuitive
notions about the product attribute under consideration
 Consistent and objective. The metric should always yield results that are unambiguous.
 Consistent in its use of units and dimensions. The mathematical computation of the
metric should use measures that do not lead to bizarre combinations of units.
 Programming language independent. Metrics should be based on the analysis model, the
design model, or the structure of the program itself.
 An effective mechanism for quality feedback. That is, the metric should provide a
software engineer with information that can lead to a higher quality end product

Metrics for the Analysis Model

Collection and Analysis Principles

 Whenever possible, data collection and analysis should be automated;


 Valid statistical techniques should be applied to establish relationships between internal
product attributes and external quality characteristics
 Interpretative guidelines and recommendations should be established for each metric



Analysis Metrics

 Function-based metrics: use the function point (FP) as a normalizing factor or as a
measure of the "size" of the specification. FP can be used to:
1. Estimate the cost required to design, code, and test
2. Predict the number of errors that will be encountered during testing
3. Forecast the number of components and/or the number of project source lines in the
implemented system.
 Specification metrics: used as an indication of quality by measuring number of
requirements by type

 The function point metric (FP), first proposed by Albrecht [ALB79], can be used
effectively as a means for measuring the functionality delivered by a system.
 Function points are derived using an empirical relationship based on countable (direct)
measures of software's information domain and assessments of software complexity
 Information domain values are defined in the following manner:
o number of external inputs (EIs)
o number of external outputs (EOs)
o number of external inquiries (EQs)
o number of internal logical files (ILFs)
o number of external interface files (EIFs)

Computing Function Points

Information Domain Value            Count     Weighting factor (simple / average / complex)

External Inputs (EIs)                 3    x     3 / 4 / 6     =
External Outputs (EOs)                3    x     4 / 5 / 7     =
External Inquiries (EQs)              3    x     3 / 4 / 6     =
Internal Logical Files (ILFs)         3    x     7 / 10 / 15   =
External Interface Files (EIFs)       3    x     5 / 7 / 10    =

Count total                                                    =
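
As a hedged worked sketch (assuming the commonly quoted Albrecht-style adjustment formula FP = count total x [0.65 + 0.01 x sum(Fi)], where the fourteen Fi complexity adjustment values are each rated from 0 to 5), the computation can be written in a few lines of Python; the counts and Fi ratings below are invented for the illustration.

    # (count, weight) pairs for each information domain value, using 'average' weights
    domain_values = {
        "EI":  (3, 4),    # external inputs
        "EO":  (3, 5),    # external outputs
        "EQ":  (3, 4),    # external inquiries
        "ILF": (3, 10),   # internal logical files
        "EIF": (3, 7),    # external interface files
    }

    count_total = sum(count * weight for count, weight in domain_values.values())

    # Fourteen complexity adjustment values Fi, each rated 0 (no influence) to 5 (essential)
    f_values = [3] * 14                      # invented ratings for the example

    fp = count_total * (0.65 + 0.01 * sum(f_values))
    print(count_total, round(fp, 1))         # 90 and 90 * 1.07 = 96.3
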
Metrics for the Design Model

Design metrics for computer S/W, like all other S/W metrics, are not perfect. And yet,
design without measurement is an unacceptable alternative.

Architectural Design Metrics



 Structural complexity = g(fan-out), fan-out is defined as the number of modules
immediately subordinate to the module, that is, the number of modules that are directly
invoked by module i. Fan-in is defined as the number of modules that directly invoked
module i.
 Data complexity = f (input & output variables, fan-out), provides an indication of the
complexity in the internal interface for a module i.
 System complexity = h (structural & data complexity), is defined as the sum of structural
and data complexity.
 HK metric: architectural complexity as a function of fan-in and fan-out
 Morphology metrics: a function of the number of modules and the number of
interfaces between modules
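
One common concrete formulation of the first three metrics (commonly attributed to Card and Glass, and assumed here for the sketch) is structural complexity S(i) = fan_out(i)^2, data complexity D(i) = v(i) / (fan_out(i) + 1) where v(i) is the number of input and output variables of module i, and system complexity C(i) = S(i) + D(i). A minimal Python sketch with invented module data:

    modules = {
        # module name: (fan_out, number of input/output variables)
        "read_input":   (1, 4),
        "process_data": (3, 6),
        "write_report": (0, 5),
    }

    def complexities(fan_out, io_vars):
        structural = fan_out ** 2                    # S(i) = fan_out(i)^2
        data = io_vars / (fan_out + 1.0)             # D(i) = v(i) / (fan_out(i) + 1)
        return structural, data, structural + data   # C(i) = S(i) + D(i)

    for name, (fan_out, io_vars) in modules.items():
        s, d, c = complexities(fan_out, io_vars)
        print(name, s, round(d, 2), round(c, 2))
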
Metrics for OO Design
Whitmire [WHI97] describes nine distinct and measurable characteristics of an OO design:
• Size: Size is defined in terms of four views: population, volume, length, and functionality
• Complexity: How classes of an OO design are interrelated to one another
• Coupling: The physical connections between elements of the OO design
• Sufficiency: “the degree to which an abstraction possesses the features required of it, or the
degree to which a design component possesses features in its abstraction, from the point of
view of the current application.”
• Completeness: An indirect implication about the degree to which the abstraction or design
component can be reused.

 Cohesion: The degree to which all operations working together to achieve a single, well-
defined purpose
 Primitiveness: Applied to both operations and classes, the degree to which an operation
is atomic
 Similarity: The degree to which two or more classes are similar in terms of their
structure, function, behavior, or purpose
 Volatility: Measures the likelihood that a change will occur

Class-Oriented Metrics--The CK Metrics Suite

Weighted methods per class (WMC): The number of methods and their complexity are
reasonable indicator of the amount of effort required to implement and test a class.
Depth of the inheritance tree (DIT): The maximum length from the node to root of the tree.



Number of children (NOC): The subclasses that are immediately subordinate to a class in the
class hierarchy are termed its children.
Coupling between object classes (CBO): is the number of collaborations listed for a class on its
CRC card. Keep CBO low.
Response for a class (RFC): is a set of methods that can potentially be executed in response to a
message received by an object of that class. RFC is the number of methods in the response set.
Keep RFC low.
Lack of cohesion in methods (LCOM): is the number of methods that access one or more of
the same attributes. Keep LCOM low.
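
As a small illustrative sketch (not a full CK metrics tool), two of these measures, depth of inheritance tree (DIT) and number of children (NOC), can be approximated for Python classes using the interpreter's own class information; the account class hierarchy below is invented for the example.

    class Account: pass
    class SavingsAccount(Account): pass
    class CheckingAccount(Account): pass
    class StudentSavingsAccount(SavingsAccount): pass

    def dit(cls):
        # Depth of inheritance tree, counting the built-in root 'object'
        # (an approximation; with multiple inheritance the true DIT is the
        # longest path to the root).
        return len(cls.__mro__) - 1

    def noc(cls):
        # Number of children: classes immediately subordinate to this class.
        return len(cls.__subclasses__())

    print(dit(StudentSavingsAccount))   # 3
    print(noc(Account))                 # 2 (SavingsAccount and CheckingAccount)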

 Formal technical reviews

A software technical review is a form of peer review in which a team of qualified personnel ...
examines the suitability of the software product for its intended use and identifies discrepancies
from specifications and standards. Technical reviews may also provide recommendations of
alternatives and examination of various alternatives.

"Software product" normally refers to some kind of technical document. This might be a
software design document or program source code, but use cases, business process definitions,
test case specifications, and a variety of other technical documentation, may also be subject to
technical review.

Technical review differs from software walkthroughs in its specific focus on the technical quality
of the product reviewed. It differs from software inspection in its ability to suggest direct
alterations to the product reviewed, and its lack of a direct focus on training and process
improvement.

The term formal technical review is sometimes used to mean a software inspection. A
'Technical Review' may also refer to an acquisition lifecycle event or Design review.

Objectives and participants

The purpose of a technical review is to arrive at a technically superior version of the work
product reviewed, whether by correction of defects or by recommendation or introduction of
alternative approaches. While the latter aspect may offer facilities that software inspection lacks,
there may be a penalty in time lost to technical discussions or disputes which may be beyond the
capacity of some participants.

IEEE 1028 recommends the inclusion of participants to fill the following roles:



 Decision Maker (the person for whom the technical review is conducted) determines if
the review objectives have been met.
 Review Leader is responsible for performing administrative tasks relative to the review,
ensuring orderly conduct, and ensuring that the review meets its objectives.
 Recorder documents anomalies, action items, decisions, and recommendations made by
the review team.
 Technical staff are active participants in the review and evaluation of the software
product.
 Management staff may participate for the purpose of identifying issues that require
management resolution.
 Customer or user representatives may fill roles determined by the Review Leader prior
to the review.

A single participant may fill more than one role, as appropriate.

Process

A formal technical review will follow a series of activities similar to that specified in clause 5 of
IEEE 1028, essentially summarized in the article on software review.

 Verification and validation

Software Validation

Validation is the process of examining whether or not the software satisfies the user
requirements. It is carried out at the end of the SDLC. If the software matches requirements for
which it was made, it is validated.

 Validation ensures the product under development is as per the user requirements.
 Validation answers the question: Are we developing the product which attempts all that
user needs from this software?
 Validation emphasizes on user requirements.

Software Verification

Verification is the process of confirming if the software is meeting the business requirements,
and is developed adhering to the proper specifications and methodologies.

 Verification ensures the product being developed is according to design specifications.



 Verification answers the question: "Are we developing this product by firmly following
all design specifications?"
 Verifications concentrate on the design and system specifications.

Target of the test are:

 Errors: These are actual coding mistakes made by developers. In addition, a difference
between the output of the software and the desired output is considered an error.
 Fault: When an error exists, a fault occurs. A fault, also known as a bug, is the result of an
error and can cause the system to fail.
 Failure: Failure is the inability of the system to perform the desired task. Failure occurs
when a fault exists in the system.

Verification: “Are we building the product right?” The software should conform to its
specification.

Validation: “Are we building the right product?” The software should do what the user really
needs / wants. Types of V & V include:

Static V&V - Software inspections / reviews: where one analyzes static system representations
such as requirements, design, source code, etc.

Dynamic V&V - Software testing: where one executes an implementation of the software to
examine outputs and operational behavior.



Program Testing

There are two types of program testing:

a) Defect testing: Tests designed to discover system defects. Sometimes referred to as


coverage testing.

b) Statistical testing: Tests designed to assess system reliability and performance under
operational conditions. Makes use of an operational profile.

V & V Goals
Verification and validation should establish confidence that the software is “fit for purpose”.
This does NOT usually mean that the software must be completely free of defects. The level of
confidence required depends on at least three factors:
1. Software function / purpose: Safety-critical systems require a much higher level of
confidence than demonstration-of-concept prototypes.

2. User expectations: Users may tolerate shortcomings when the benefits of use are high.

3. Marketing environment: Getting a product to market early may be more important than
finding additional defects.

V&V versus Debugging


V&V and debugging are distinct processes. V&V is concerned with establishing the existence of
defects in a program. Debugging is concerned with locating and repairing these defects. Defect
locating is analogous to detective work or medical diagnosis.

Software Inspections / Reviews

These involve people examining a system representation (requirements, design, source code,
etc.) with the aim of discovering anomalies and defects. They do not require execution so may be
used before system implementation. Can be more effective than testing after system
implementation.

Why code inspections can be so effective

• Many different defects may be discovered in a single inspection. (In testing, one defect
may mask others so several executions may be required.)



• They reuse domain and programming language knowledge. (Reviewers are likely to have
seen the types of error that commonly occur.)

Inspections and testing are complementary in that inspections can be used early with non-
executable entities and with source code at the module and component levels while testing can
validate dynamic behaviour and is the only effective technique at the sub-system and system
code levels. Inspections cannot directly check nonfunctional requirements such as performance,
usability, etc.

Program Inspections /Reviews

This is a formalized approach to document walkthroughs or desk-checking. It is intended
exclusively for defect DETECTION, not correction. Defects may be logical errors, anomalies in
the code that might indicate an erroneous condition, or non-compliance with standards.
Inspection pre-conditions (“entry criteria”)

A precise specification must be available.

Team members must be familiar with the organization's standards.

Syntactically correct code must be available for code inspections.

Inspection pre-conditions

• An error checklist should be prepared.

• Management must accept the fact that inspection will increase costs early in the software
process.

• Management must not use inspections results for staff appraisals.

Inspection Process



Inspection Procedure

• System overview presented to inspection team.


• Code and associated documents are distributed to inspection team in advance.
• Inspection takes place and discovered errors are noted.
• Modifications are made to repair discovered errors (by owner).
• Re-inspection may or may not be required.

Inspection Teams

An inspection team is typically made up of 4-7 members.

• Author (owner) of the element being inspected.


• Inspectors who find errors, omissions and inconsistencies.
• Reader who steps through the element being reviewed with the team.
• Moderator who chairs the meeting and notes discovered errors.
• Other roles are Scribe and Chief Moderator.

Code inspection checklists

A checklist of common errors should be used to drive individual preparation. The error checklist is
programming-language dependent: the "weaker" the type checking (by the compiler), the larger
the checklist. Examples: initialization, constant naming, loop termination, array bounds, etc.
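
As an illustration, the hypothetical Java fragment below shows the kind of loop-termination /
array-bounds defect such a checklist is meant to catch during preparation; the class and data are
invented for the example:

public class ChecklistExample {
    public static void main(String[] args) {
        int[] scores = {70, 85, 90};
        int total = 0;
        // Checklist items "loop termination" and "array bounds":
        // writing the condition as i <= scores.length would read one element
        // past the end of the array and fail at run time.
        for (int i = 0; i < scores.length; i++) {   // correct bound
            total += scores[i];
        }
        System.out.println("total = " + total);
    }
}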

Cleanroom Software Development


The name is derived from the 'cleanroom' process used in semiconductor fabrication; the
philosophy is defect avoidance rather than defect removal. It is a process based on:

• Incremental development (if appropriate)
• Formal specification
• Static verification using correctness arguments
• Statistical testing to certify program reliability
• NO defect testing!



Cleanroom Process Teams

• Specification team: responsible for developing and maintaining the system specification
• Development team: responsible for developing and verifying the software. The software
is NOT executed or even compiled during this process.
• Certification team: responsible for developing a set of statistical tests to measure
reliability after development.

Cleanroom Process Evaluation

Results in IBM and elsewhere have been very impressive with very few discovered faults in
delivered systems.

Independent assessment shows that the process is no more expensive than other approaches. It is,
however, not clear how this approach can be transferred to an environment with less skilled
engineers.

 Cost of quality

Definition

Cost of Quality (COQ) is a measure that quantifies the cost of control/conformance and the cost
of failure of control/non-conformance. In other words, it sums up the costs related to prevention
and detection of defects and the costs due to occurrences of defects.

 Definition by ISTQB: cost of quality: The total costs incurred on quality activities and
issues and often split into prevention costs, appraisal costs, internal failure costs and
external failure costs.
 Definition by QAI: Money spent beyond expected production costs (labor, materials, and
equipment) to ensure that the product the customer receives is a quality (defect free)
product. The Cost of Quality includes prevention, appraisal, and correction or repair
costs.

Explanation

 Cost of Control (Also known as Cost of Conformance)


o Prevention Cost
 The cost arises from efforts to prevent defects.
 Example: Quality Assurance costs
o Appraisal Cost



 The cost arises from efforts to detect defects.
 Example: Quality Control costs
 Cost of Failure of Control (Also known as Cost of Non-Conformance)
o Internal Failure Cost
 The cost arises from defects identified internally and efforts to correct
them.
 Example: Cost of Rework (Fixing of internal defects and re-testing)
o External Failure Cost
 The cost arises from defects identified by the client or end-users and
efforts to correct them.
 Example: Cost of Rework (Fixing of external defects and re-testing) and
any other costs due to external defects (Product service/liability/recall,
etc.)

FORMULA / CALCULATION

Cost of Quality (COQ) = Cost of Control + Cost of Failure of Control

where

Cost of Control = Prevention Cost + Appraisal Cost

and

Cost of Failure of Control = Internal Failure Cost + External Failure Cost
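
As an illustration, the hypothetical Java sketch below applies the formula; every figure in it is
invented, and the class name is made up for this example:

public class CostOfQuality {
    public static void main(String[] args) {
        double preventionCost      = 12000;  // e.g. quality planning, training
        double appraisalCost       = 18000;  // e.g. reviews, testing effort
        double internalFailureCost =  9000;  // e.g. rework before release
        double externalFailureCost = 25000;  // e.g. field fixes, support, recalls

        double costOfControl          = preventionCost + appraisalCost;
        double costOfFailureOfControl = internalFailureCost + externalFailureCost;
        double costOfQuality          = costOfControl + costOfFailureOfControl;

        double totalProjectCost = 200000;    // assumed total cost of the project
        System.out.printf("COQ = %.0f (%.1f%% of total project cost)%n",
                costOfQuality, 100 * costOfQuality / totalProjectCost);
    }
}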

NOTES

 In its simplest form, COQ can be calculated in terms of effort (hours/days).



 A better approach will be to calculate COQ in terms of money (converting the effort into
money and adding any other tangible costs like test environment setup).
 The best approach will be to calculate COQ as a percentage of total cost. This allows for
comparison of COQ across projects or companies.
 To ensure impartiality, it is advised that the Cost of Quality of a project/product be
calculated and reported by a person external to the core project/product team (Say,
someone from the Accounts Department).
 It is desirable to keep the Cost of Quality as low as possible. However, this requires a fine
balancing of costs between Cost of Control and Cost of Failure of Control. In general, a
higher Cost of Control results in a lower Cost of Failure of Control. But, the law of
diminishing returns holds true here as well.

Cost of quality
A very significant question is: does quality assurance add any value? That is, is it worth spending a
lot of money on quality assurance practices? In order to understand the impact of quality
assurance practices, we have to understand the cost of quality (or lack thereof) in a system.
Quality has a direct and indirect cost in the form of cost of prevention, appraisal, and failure.
If we try to prevent problems, obviously we will have to incur cost. This cost includes:
• Quality planning
• Formal technical reviews
• Test equipment
• Training
We will discuss these in more detail in the later sections.
The cost of appraisal includes activities to gain insight into the product condition. It involves in-
process and inter-process inspection and testing.
And finally, failure cost. Failure cost has two components: internal failure cost and external
failure cost. Internal failure cost requires rework, repair, and failure mode analysis. On the other
hand, external failure cost involves cost for complaint resolution, product return and
replacement, help-line support, warranty work, and law suits.
It is trivial to see that cost increases as we go from prevention to detection to internal failure to
external failure. This is demonstrated with the help of the following example:
Let us assume that a total of 7053 hours were spent inspecting 200,000 lines of code, with the
result that 3112 potential defects were prevented. Assuming a programmer cost of $40 per hour,
the total cost of preventing 3112 defects was $282,120, or roughly $91 per defect.
Let us now compare these numbers to the cost of defect removal once the product has been
shipped to the customer. Suppose that there had been no inspections, and the programmers had
been extra careful and only one defect per 1000 lines escaped into the product shipment. That
would mean that 200 defects would still have to be fixed in the field. At an estimated cost of
$25,000 per fix, the cost would be $5 million, or approximately 18 times more expensive than the
total cost of defect prevention.
That means, quality translates to cost savings and an improved bottom line.
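
The arithmetic can be checked with a short Java sketch; it simply reproduces the figures assumed
in the example above (hourly rate, defect counts and cost per field fix), nothing more:

public class DefectCostComparison {
    public static void main(String[] args) {
        double inspectionHours = 7053;
        double hourlyRate      = 40;              // dollars per hour
        int defectsPrevented   = 3112;

        double preventionCost = inspectionHours * hourlyRate;            // 282,120
        double costPerDefect  = preventionCost / defectsPrevented;       // about 91

        int linesOfCode    = 200000;
        int escapedDefects = linesOfCode / 1000;  // one defect per 1000 lines
        double fieldFixCost = 25000;              // cost per defect fixed in the field
        double failureCost  = escapedDefects * fieldFixCost;             // 5,000,000

        System.out.printf("Prevention: $%.0f ($%.0f per defect)%n",
                preventionCost, costPerDefect);
        System.out.printf("Field fixes: $%.0f (about %.0f times more expensive)%n",
                failureCost, failureCost / preventionCost);
    }
}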
SQA Activities
There are two different groups involved in SQA related activities:
• Software engineers who do the technical work
• SQA group who is responsible for QA planning, oversight, record keeping, analysis, and
reporting
Software engineers address quality by applying solid technical methods and measures,
conducting formal and technical reviews, and performing well planned software testing.

The SQA group assists the software team in achieving a high quality product.

SQA Group Activities


An SQA plan is developed for the project during project planning and is reviewed by all
stakeholders. The plan includes the identification of:
• Evaluations to be performed
• Audits and reviews to be performed
• Standards that are applicable to the project
• Procedures for error reporting and tracking
• Documents to be produced by the SQA group
• Amount of feedback provided to the software project team
The group participates in the development of the project’s software process description.
The software team selects the process and SQA group reviews the process description for
compliance with the organizational policies, internal software standards, externally imposed
standards, and other parts of the software project plan.

The SQA group also reviews software engineering activities to verify compliance with the
defined software process. It identifies, documents, and tracks deviations from the process and
verifies that the corrections have been made. In addition, it audits designated software work
products to verify compliance with those defined as part of the software process. It reviews
selected work products, identifies, documents, and tracks deviations; verifies that corrections
have been made; and reports the results of its work to the project manager.

The basic purpose is to ensure that deviations in software work and work products are
documented and handled according to documented procedures. These deviations may be
encountered in the project plan, process description, applicable standards, or technical work
products. The group records any non-compliance and reports it to senior management; non-
compliant items are recorded and tracked until they are resolved.
Another very important role of the group is to coordinate the control and management of change
and help to collect and analyze software metrics.



Quality Control
The next question that we need to ask is, once we have defined how to assess quality, how are
we going to make sure that our processes deliver the product with the desired quality. That is,
how are we going to control the quality of the product?
The basic principle of quality control is to control variation; variation control is the heart of
quality control. The variation of interest includes variation in resource and time estimates, in test
coverage, in the number of bugs, and in support.
From one project to another, we want to minimize the variation between the predicted resources
and calendar time needed to complete a project and the actual figures. Quality control involves a
series of inspections, reviews, and tests, and includes a feedback loop. Quality control is thus a
combination of measurement and feedback, and a combination of automated tools and manual
interaction.



TOPIC 6

SOFTWARE CODING
Coding
• Coding is undertaken once the design phase is complete and the design documents have
been successfully reviewed.
• In the coding phase every module identified and specified in the design document is
independently coded and unit tested.
• Good software development organizations normally require their programmers to adhere
to some well-defined and standard style of coding called coding standards.
• Most software development organizations formulate their own coding standards that suit
them most, and require their engineers to follow these standards rigorously.

Good software development organizations usually develop their own coding standards and
guidelines depending on what best suits their organization and the type of products they develop.
The sections below discuss:
• representative coding standards
• representative coding guidelines

Coding Standards
Programmers spend more time reading code than writing code
• They read their own code as well as other programmers’ code.
• Readability is enhanced if some coding conventions are followed by all.
• Coding standards provide these guidelines for programmers.
• Coding standards generally cover naming conventions, file organization, and the layout of
statements and declarations.

Coding Guidelines
 Package names should be in lower case (mypackage, edu.iitk.maths).
 Type names should be nouns and start with upper case (Day, DateOfBirth, …).
 Variable names should be nouns in lower case; variables with large scope should have long
names; loop iterators should be i, j, k, …
 Constant names should be in all caps.
 Method names should be verbs starting with lower case (e.g. getValue()).
 The prefix "is" should be used for Boolean methods (see the sketch below).
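
A minimal Java sketch illustrating these conventions; the class, field and method names are
invented for the example:

package mypackage;                         // package name in lower case

public class DateOfBirth {                 // type name is a noun, starts with upper case
    private static final int MAX_YEAR = 2100;   // constant name in all caps

    private int birthYear;                 // variable name is a noun in lower case

    public int getValue() {                // method name is a verb, starts with lower case
        return birthYear;
    }

    public boolean isValid() {             // Boolean method uses the "is" prefix
        return birthYear > 0 && birthYear <= MAX_YEAR;
    }
}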

 Coding styles and characteristics

Programming style is a set of rules or guidelines used when writing the source code for a
computer program. It is often claimed that following a particular programming style will help



programmers to read and understand source code conforming to the style, and help to avoid
introducing errors.

A classic work on the subject was The Elements of Programming Style, written in the 1970s, and
illustrated with examples from the FORTRAN and PL/I languages prevalent at the time.

The programming style used in a particular program may be derived from the coding
conventions of a company or other computing organization, as well as the preferences of the
author of the code. Programming styles are often designed for a specific programming language
(or language family): style considered good in C source code may not be appropriate for BASIC
source code, and so on. However, some rules are commonly applied to many languages.

Elements of good style

Good style is a subjective matter, and is difficult to define. However, there are several elements
common to a large number of programming styles. The issues usually considered as part of
programming style include the layout of the source code, including indentation; the use of white
space around operators and keywords; the capitalization or otherwise of keywords and variable
names; the style and spelling of user-defined identifiers, such as function, procedure and variable
names; and the use and style of comments.

Code appearance

Programming styles commonly deal with the visual appearance of source code, with the goal of
readability. Software has long been available that formats source code automatically, leaving
coders to concentrate on naming, logic, and higher techniques. As a practical point, using a
computer to format source code saves time, and it is possible to then enforce company-wide
standards without debates.

Indentation

Indent styles assist in identifying control flow and blocks of code. In some programming
languages indentation is used to delimit logical blocks of code; correct indentation in these cases
is more than a matter of style. In other languages indentation and white space do not affect
function, although logical and consistent indentation makes code more readable.

Coding styles- Coding guidelines provide only general suggestions regarding the coding style to
be followed.
1) Do not use a coding style that is too clever or too difficult to understand: Code should be
easy to understand. Clever coding can obscure meaning of the code and hamper
understanding. It also makes maintenance difficult.



2) Avoid obscure side effects: The side effects of a function call include modification of
parameters passed by reference, modification of global variables, and I/O operations. An
obscure side effect is one that is not obvious from a casual examination of the code. Obscure
side effects make it difficult to understand a piece of code.
3) Do not use an identifier for multiple purposes: Programmers often use the same
identifier to denote several temporary entities. There are several things which are wrong with
this approach, and hence it should be avoided. Some of the problems caused by the use of
variables for multiple purposes are as follows (a short illustration follows this list):
 Each variable should be given a descriptive name indicating its purpose.
This is not possible if an identifier is used for multiple purposes. Use of a variable
for multiple purposes can lead to confusion and make it difficult to read and
understand the code.
 Use of variables for multiple purposes usually makes future enhancements more
difficult.
4) The code should be well-documented: As a rule of thumb, there should be at least one
comment line, on average, for every three source lines.
5) Do not use goto statements: Use of goto statements makes a program unstructured and very
difficult to understand.
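
As a small illustration of guideline 3, the hypothetical Java fragment below first reuses one
variable for two unrelated purposes and then shows the clearer single-purpose alternative:

public class IdentifierReuse {
    // Poor style: "temp" is used first as a count and then as a sum,
    // which obscures the meaning of the code.
    static void poorStyle(int[] values) {
        int temp = values.length;
        System.out.println("count = " + temp);
        temp = values[0] + values[values.length - 1];
        System.out.println("sum of end values = " + temp);
    }

    // Better style: each quantity gets its own descriptive name.
    static void betterStyle(int[] values) {
        int elementCount = values.length;
        System.out.println("count = " + elementCount);
        int sumOfEndValues = values[0] + values[values.length - 1];
        System.out.println("sum of end values = " + sumOfEndValues);
    }

    public static void main(String[] args) {
        int[] sample = {3, 4, 5};
        poorStyle(sample);
        betterStyle(sample);
    }
}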

 Coding in high-level languages

Very early in the development of computers attempts were made to make programming easier by
reducing the amount of knowledge of the internal workings of the computer that was needed to
write programs. If programs could be presented in a language that was more familiar to the
person solving the problem, then fewer mistakes would be made. High-level programming
languages allow the specification of a problem solution in terms closer to those used by human
beings. These languages were designed to make programming far easier, less error-prone and to
remove the programmer from having to know the details of the internal structure of a particular
computer. These high-level languages were much closer to human language. One of the first of
these languages was Fortran II which was introduced in about 1958. In Fortran II our program
above would be written as:

C=A+B
which is obviously much more readable, quicker to write and less error-prone. As with assembly
languages the computer does not understand these high-level languages directly and hence they
have to be processed by passing them through a program called a compiler which translates
them into internal machine language before they can be executed.

Another advantage accrues from the use of high-level languages if the languages are
standardized by some international body. Then each manufacturer produces a compiler to
compile programs that conform to the standard into their own internal machine language. Then it



should be easy to take a program which conforms to the standard and implement it on many
different computers merely by re-compiling it on the appropriate computer. This great advantage
of portability of programs has been achieved for several high-level languages and it is now
possible to move programs from one computer to another without too much difficulty.
Unfortunately many compiler writers add new features of their own which means that if a
programmer uses these features then their program becomes non-portable. It is well worth
becoming familiar with the standard and writing programs which obey it, so that your programs
are more likely to be portable.

As with assembly language human time is saved at the expense of the compilation time required
to translate the program to internal machine language. The compilation time used in the
computer is trivial compared with the human time saved, typically seconds as compared with
weeks.

Many high level languages have appeared since Fortran II (and many have also disappeared!),
among the most widely used have been:

COBOL Business applications
FORTRAN Engineering & Scientific Applications
PASCAL General use and as a teaching tool
C & C++ General Purpose - currently most popular
PROLOG Artificial Intelligence
JAVA General Purpose - gaining popularity rapidly

All these languages are available on a large variety of computers.

 Coding standards
Coding conventions are a set of guidelines for a specific programming language that
recommend programming style, practices and methods for each aspect of a program
written in that language. These conventions usually cover file organization, indentation,
comments, declarations, statements, white space, naming conventions, programming practices,
programming principles, programming rules of thumb, architectural best practices, etc. These are
guidelines for software structural quality. Software programmers are highly recommended to
follow these guidelines to help improve the readability of their source code and make software
maintenance easier. Coding conventions are only applicable to the human maintainers and peer
reviewers of a software project. Conventions may be formalized in a documented set of rules that
an entire team or company follows, or may be as informal as the habitual coding practices of an
individual. Coding conventions are not enforced by compilers. As a result, not following some or
all of the rules has no impact on the executable programs created from the source code.



Coding standards

Where coding conventions have been specifically designed to produce high-quality code, and
have then been formally adopted, they then become coding standards. Specific styles,
irrespective of whether they are commonly adopted, do not automatically produce good quality
code. It is only if they are designed to produce good quality code that they actually result in good
quality code being produced, i.e., they must be very logical in every aspect of their design -
every aspect justified and resulting in quality code being produced.

Good procedures, good methodology and good coding standards can be used to drive a project
such that the quality is maximized and the overall development time and development and
maintenance cost is minimized.

 User interface

Software User Interface Design

User interface is the front-end application view to which user interacts in order to use the
software. User can manipulate and control the software as well as hardware by means of user
interface. Today, user interface is found at almost every place where digital technology exists,
right from computers, mobile phones, cars, music players, airplanes, ships etc.

The user interface is part of the software and is designed in such a way that it is expected to give
the user insight into the software. The UI provides the fundamental platform for human-computer
interaction. A UI can be graphical, text-based, or audio-video based, depending upon the underlying
hardware and software combination. A UI can be hardware or software or a combination of both.
The software becomes more popular if its user interface is:

 Attractive
 Simple to use
 Responsive in short time
 Clear to understand
 Consistent on all interfacing screens

UI is broadly divided into two categories:

 Command Line Interface


 Graphical User Interface



Command Line Interface (CLI)

CLI has been a great tool of interaction with computers until the video display monitors came
into existence. CLI is first choice of many technical users and programmers. CLI is minimum
interface software can provide to its users.

A CLI provides a command prompt, the place where the user types a command and feeds it to the
system. The user needs to remember the syntax of each command and its use. Earlier CLIs were not
programmed to handle user errors effectively. A command is a text-based reference to a set of
instructions which are expected to be executed by the system. There are methods like macros and
scripts that make it easy for the user to operate.

CLI uses less amount of computer resource as compared to GUI.

CLI Elements

A text-based command line interface can have the following elements:



 Command Prompt - It is a text-based notifier that mostly shows the context in which
the user is working. It is generated by the software system.
 Cursor - It is a small horizontal line or a vertical bar of the height of a line, used to represent
the position of the character while typing. The cursor is mostly found in a blinking state. It
moves as the user writes or deletes something.
 Command - A command is an executable instruction. It may have one or more
parameters. Output on command execution is shown inline on the screen. When output is
produced, the command prompt is displayed on the next line.

Graphical User Interface

A Graphical User Interface provides the user with graphical means to interact with the system. A
GUI can be a combination of both hardware and software. Using a GUI, the user interprets the
software.

Typically, GUI is more resource consuming than that of CLI. With advancing technology, the
programmers and designers create complex GUI designs that work with more efficiency,
accuracy and speed.

GUI Elements

A GUI provides a set of components with which to interact with software or hardware.

Every graphical component provides a way to work with the system. A GUI system has
elements such as:



 Window - An area where contents of application are displayed. Contents in a window
can be displayed in the form of icons or lists, if the window represents file structure. It is
easier for a user to navigate in the file system in an exploring window. Windows can be
minimized, resized or maximized to the size of screen. They can be moved anywhere on
the screen. A window may contain another window of the same application, called child
window.
 Tabs - If an application allows executing multiple instances of itself, they appear on the
screen as separate windows. Tabbed Document Interface has come up to open multiple
documents in the same window. This interface also helps in viewing preference panel in
application. All modern web-browsers use this feature.
 Menu - Menu is an array of standard commands, grouped together and placed at a visible
place (usually top) inside the application window. The menu can be programmed to
appear or hide on mouse clicks.
 Icon - An icon is a small picture representing an associated application. When these icons
are clicked or double-clicked, the application window is opened. Icons display the applications
and programs installed on a system in the form of small pictures.
 Cursor - Interacting devices such as mouse, touch pad, digital pen are represented in
GUI as cursors. On screen cursor follows the instructions from hardware in almost real-
time. Cursors are also named pointers in GUI systems. They are used to select menus,
windows and other application features.

Application specific GUI components

A GUI of an application contains one or more of the listed GUI elements:

 Application Window - Most application windows use the constructs supplied by
operating systems, but many use their own custom-created windows to contain the
contents of the application.
 Dialogue Box - It is a child window that contains a message for the user and a request for
some action to be taken. For example, an application generates a dialogue box to get
confirmation from the user before deleting a file.



 Text-Box - Provides an area for the user to type and enter text-based data.
 Buttons - They imitate real-life buttons and are used to submit inputs to the software.

 Radio-button - Displays available options for selection. Only one can be selected among
all offered.
 Check-box - Functions similar to a list-box. When an option is selected, the box is marked
as checked. Multiple options represented by check-boxes can be selected.
 List-box - Provides list of available items for selection. More than one item can be
selected.



Other impressive GUI components are:

 Sliders
 Combo-box
 Data-grid
 Drop-down list

User Interface Design Activities

There are a number of activities performed in designing a user interface. The process of GUI
design and implementation is similar to the SDLC. Any model can be used for GUI implementation,
among them the Waterfall, Iterative or Spiral model.

A model used for GUI design and development should fulfil these GUI-specific steps.

 GUI Requirement Gathering - The designers may like to have a list of all functional and
non-functional requirements of the GUI. These can be taken from the users and their existing
software solution.
 User Analysis - The designer studies who is going to use the software GUI. The target
audience matters, as the design details change according to the knowledge and
competency level of the user. If the user is technically savvy, an advanced and complex GUI
can be incorporated. For a novice user, more how-to information about the software is included.
 Task Analysis - Designers have to analyze what task is to be done by the software
solution. Here in GUI design, it does not matter how it will be done. Tasks can be represented
in a hierarchical manner, taking one major task and dividing it further into smaller sub-tasks.



Tasks provide goals for GUI presentation. Flow of information among sub-tasks
determines the flow of GUI contents in the software.
 GUI Design & Implementation - Designers, after having information about the
requirements, tasks and user environment, design the GUI, implement it in code and
embed the GUI with working or dummy software in the background. It is then self-tested
by the developers.
 Testing - GUI testing can be done in various ways: in-house inspection, direct
involvement of users and release of a beta version are a few of them.
Testing may include usability, compatibility, user acceptance, etc.

GUI Implementation Tools

There are several tools available using which the designers can create entire GUI on a mouse
click. Some tools can be embedded into the software environment (IDE).

GUI implementation tools provide a powerful array of GUI controls. For software customization,
designers can change the code accordingly. There are different segments of GUI tools according
to their different uses and platforms.

Example

Mobile GUI, Computer GUI, Touch-Screen GUI etc. Here is a list of few tools which come
handy to build GUI:

 FLUID
 AppInventor (Android)
 LucidChart
 Wavemaker
 Visual Studio

User Interface Golden rules

The following rules are mentioned to be the golden rules for GUI design, described by
Shneiderman and Plaisant in their book (Designing the User Interface).

 Strive for consistency - Consistent sequences of actions should be required in similar


situations. Identical terminology should be used in prompts, menus, and help screens.
Consistent commands should be employed throughout.
 Enable frequent users to use short-cuts - The user’s desire to reduce the number of
interactions increases with the frequency of use. Abbreviations, function keys, hidden
commands, and macro facilities are very helpful to an expert user.



 Offer informative feedback - For every operator action, there should be some system
feedback. For frequent and minor actions, the response must be modest, while for
infrequent and major actions, the response must be more substantial.
 Design dialog to yield closure - Sequences of actions should be organized into groups
with a beginning, middle, and end. The informative feedback at the completion of a
group of actions gives the operators the satisfaction of accomplishment, a sense of relief,
the signal to drop contingency plans and options from their minds, and this indicates that
the way ahead is clear to prepare for the next group of actions.
 Offer simple error handling - As much as possible, design the system so the user will
not make a serious error. If an error is made, the system should be able to detect it and
offer simple, comprehensible mechanisms for handling the error.
 Permit easy reversal of actions - This feature relieves anxiety, since the user knows that
errors can be undone. Easy reversal of actions encourages exploration of unfamiliar
options. The units of reversibility may be a single action, a data entry, or a complete
group of actions.
 Support internal locus of control - Experienced operators strongly desire the sense that
they are in charge of the system and that the system responds to their actions. Design the
system to make users the initiators of actions rather than the responders.
 Reduce short-term memory load - The limitation of human information processing in
short-term memory requires the displays to be kept simple, multiple page displays be
consolidated, window-motion frequency be reduced, and sufficient training time be
allotted for codes, mnemonics, and sequences of actions.



TOPIC 7

SOFTWARE TESTING

 Software testing life cycle

Contrary to popular belief, software testing is not just a single activity. It consists of a series of
activities carried out methodically to help certify a software product. These activities
(stages) constitute the Software Testing Life Cycle (STLC).

The different stages in the Software Testing Life Cycle are outlined below.

Each of these stages has definite entry and exit criteria, activities and deliverables associated
with it.

In an ideal world you will not enter the next stage until the exit criteria for the previous stage are
met, but practically this is not always possible. Here, we focus on the activities
and deliverables for the different stages in the STLC.

What is Software Testing Life Cycle (STLC)

Software Testing Life Cycle refers to a testing process which has specific steps to be executed in
a definite sequence to ensure that the quality goals have been met. In STLC process, each
activity is carried out in a planned and systematic way. Each phase has different goals and
deliverables. Different organizations have different phases in STLC; however the basis remains
the same.



Below are the phases of STLC:

1. Requirements phase
2. Planning Phase
3. Analysis phase
4. Design Phase
5. Implementation Phase
6. Execution Phase
7. Conclusion Phase
8. Closure Phase

1. Requirement Phase:

During this phase of the STLC, analyze and study the requirements. Hold brainstorming sessions
with other teams and try to find out whether the requirements are testable or not. This phase
helps to identify the scope of the testing. If any feature is not testable, communicate it during this
phase so that the mitigation strategy can be planned.

2. Planning Phase:

In practical scenarios, Test planning is the first step of the testing process. In this phase we
identify the activities and resources which would help to meet the testing objectives. During
planning we also try to identify the metrics, the method of gathering and tracking those metrics.

On what basis is the planning done? Only the requirements?

The answer is NO. Requirements do form one of the bases but there are 2 other very important
factors which influence test planning. These are:

– Test strategy of the organization.


– Risk analysis / Risk Management and mitigation.

3. Analysis Phase:

This STLC phase defines “WHAT” to be tested. We basically identify the test conditions
through the requirements document, product risks and other test basis. The test condition should
be traceable back to the requirement. There are various factors which affect the identification of
test conditions:

– Levels and depth of testing


– Complexity of the product
– Product and project risks



– Software development life cycle involved.
– Test management
– Skills and knowledge of the team.
– Availability of the stakeholders.

We should try to write down the test conditions in a detailed way. For example, for an e-
commerce web application, you can have a test condition such as “User should be able to make a
payment”, or you can detail it out by saying “User should be able to make a payment through
NEFT, debit card and credit card”. The most important advantage of writing a detailed test
condition is that it increases test coverage: since the test cases will be written on the basis of
the test condition, these details will lead to more detailed test cases, which will eventually
increase the coverage. Also identify the exit criteria of the testing, i.e. determine the conditions
under which you will stop the testing.

 Software testing methods (Black box testing and White box testing)

Manual vs automated testing


Testing can either be done manually or using an automated testing tool:

 Manual - This testing is performed without taking help of automated testing tools. The
software tester prepares test cases for different sections and levels of the code, executes
the tests and reports the result to the manager.

Manual testing is time and resource consuming. The tester needs to confirm whether or
not right test cases are used. Major portion of testing involves manual testing.

 Automated - This testing is a testing procedure done with the aid of automated testing tools.
The limitations of manual testing can be overcome using automated test tools.

A test needs to check if a webpage can be opened in Internet Explorer. This can be easily done
with manual testing. But to check if the web-server can take the load of 1 million users, it is quite
impossible to test manually.

There are software and hardware tools which help the tester in conducting load testing, stress
testing and regression testing.

Testing Approaches
Tests can be conducted based on two approaches –

 Functionality testing



 Implementation testing

When functionality is being tested without taking the actual implementation into account, it is
known as black-box testing. The other side is known as white-box testing, where not only
functionality is tested but the way it is implemented is also analyzed.

Exhaustive testing is the ideal method for perfect testing: every single possible value in
the range of the input and output values is tested. It is not possible to test each and every value in
a real-world scenario if the range of values is large.

Black-box testing

It is carried out to test the functionality of the program. It is also called ‘behavioural’ testing. The
tester in this case has a set of input values and the respective desired results. On providing an
input, if the output matches the desired results, the program is tested ‘ok’, and problematic
otherwise.

In this testing method, the design and structure of the code are not known to the tester, and
testing engineers and end users conduct this test on the software.

Black-box testing techniques:

 Equivalence class - The input is divided into similar classes. If one element of a class
passes the test, it is assumed that the whole class passes.
 Boundary values - The input is divided into higher and lower end values. If these values
pass the test, it is assumed that all values in between may pass too (see the sketch after this
list).
 Cause-effect graphing - In both of the previous methods, only one input value at a time is
tested. Cause (input) – effect (output) graphing is a testing technique where combinations of
input values are tested in a systematic way.
 Pair-wise testing - The behaviour of software depends on multiple parameters. In
pair-wise testing, the multiple parameters are tested pair-wise for their different values.
 State-based testing - The system changes state on provision of input. These systems are
tested based on their states and inputs.
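
A small Java sketch of equivalence classes and boundary values for a hypothetical input that must
lie between 1 and 100; the function under test and the test values are invented for illustration:

public class BoundaryValueExample {
    // Hypothetical function under test: accepts marks from 1 to 100 inclusive.
    static boolean isValidMark(int mark) {
        return mark >= 1 && mark <= 100;
    }

    public static void main(String[] args) {
        // Equivalence classes: one representative below, inside and above the valid range.
        int[] classRepresentatives = {-5, 50, 130};
        // Boundary values: just outside, on, and just inside each boundary.
        int[] boundaryValues = {0, 1, 2, 99, 100, 101};

        for (int value : classRepresentatives) {
            System.out.println(value + " -> " + isValidMark(value));
        }
        for (int value : boundaryValues) {
            System.out.println(value + " -> " + isValidMark(value));
        }
    }
}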



White-box testing

It is conducted to test program and its implementation, in order to improve code efficiency or
structure. It is also known as ‘Structural’ testing.

In this testing method, the design and structure of the code are known to the tester. Programmers
of the code conduct this test on the code.

Below are some white-box testing techniques:

 Control-flow testing - The purpose of control-flow testing is to set up test cases which
cover all statements and branch conditions. The branch conditions are tested for both being
true and false, so that all statements can be covered (a small sketch follows this list).
 Data-flow testing - This testing technique emphasizes covering all the data variables included
in the program. It tests where the variables were declared and defined and where they were used
or changed.
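
A minimal control-flow illustration in Java: the hypothetical method below has a single branch,
so full branch coverage needs at least two test cases, one driving the condition true and one
driving it false:

public class BranchCoverageExample {
    // Hypothetical method under test with a single branch condition.
    static String classify(int age) {
        if (age >= 18) {          // this condition must be exercised both ways
            return "adult";
        }
        return "minor";
    }

    public static void main(String[] args) {
        // Test case 1 drives the condition true, test case 2 drives it false,
        // so together they cover both branches and all statements.
        System.out.println(classify(30));   // expected: adult
        System.out.println(classify(10));   // expected: minor
    }
}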

 Software testing levels (unit, integration, system and acceptance testing)

Testing Levels

Testing itself may be defined at various levels of the SDLC. The testing process runs parallel to
software development. Before jumping to the next stage, a stage is tested, validated and verified.

Testing separately is done just to make sure that there are no hidden bugs or issues left in the
software. Software is tested at the following levels:

Unit Testing



While coding, the programmer performs some tests on that unit of the program to know if it is
error-free. Testing is performed under the white-box testing approach. Unit testing helps developers
confirm that individual units of the program are working as per requirements and are error-free.
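
A minimal sketch of a unit test written as a plain Java assertion; no particular test framework is
assumed, and the unit under test is invented for illustration:

public class AdderTest {
    // Unit under test: a single, independently coded module.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // A unit test exercises one unit in isolation against its expected output.
        int expected = 5;
        int actual = add(2, 3);
        if (actual != expected) {
            throw new AssertionError("add(2, 3) returned " + actual);
        }
        System.out.println("Unit test passed");
    }
}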

Integration Testing

Even if the units of software are working fine individually, there is a need to find out whether the
units, if integrated together, would also work without errors; for example, argument passing and
data updating.

System Testing

The software is compiled as product and then it is tested as a whole. This can be accomplished
using one or more of the following tests:

 Functionality testing - Tests all functionalities of the software against the requirement.
 Performance testing - This test proves how efficient the software is. It tests the
effectiveness and average time taken by the software to do desired task. Performance
testing is done by means of load testing and stress testing where the software is put under
high user and data load under various environment conditions.
 Security & Portability - These tests are done when the software is meant to work on
various platforms and accessed by number of persons.

Acceptance Testing

When the software is ready to hand over to the customer, it has to go through the last phase of
testing, where it is tested for user interaction and response. This is important because even if the
software matches all user requirements, if the user does not like the way it appears or works, it
may be rejected.

 Alpha testing - The team of developers themselves performs alpha testing by using the
system as if it is being used in a work environment. They try to find out how a user would
react to some action in the software and how the system should respond to inputs.
 Beta testing - After the software is tested internally, it is handed over to the users to use
it in their production environment for testing purposes only. This is not yet the
delivered product. Developers expect that users at this stage will surface the minor problems
that were previously overlooked.

Regression Testing



Whenever a software product is updated with new code, feature or functionality, it is tested
thoroughly to detect if there is any negative impact of the added code. This is known as
regression testing.

Testing Documentation

Testing documents are prepared at different stages

Before Testing

Testing starts with test case generation. The following documents are needed for reference:

 SRS document - Functional Requirements document


 Test Policy document - This describes how far testing should take place before releasing
the product.
 Test Strategy document - This mentions detail aspects of test team, responsibility
matrix and rights/responsibility of test manager and test engineer.
 Traceability Matrix document - This is SDLC document, which is related to
requirement gathering process. As new requirements come, they are added to this matrix.
These matrices help testers know the source of requirement. They can be traced forward
and backward.

While Being Tested

The following documents may be required while testing is being carried out:

 Test Case document - This document contains list of tests required to be conducted. It
includes Unit test plan, Integration test plan, System test plan and Acceptance test plan.
 Test description - This document is a detailed description of all test cases and
procedures to execute them.
 Test case report - This document contains test case report as a result of the test.
 Test logs - This document contains test logs for every test case report.

After Testing

The following documents may be generated after testing:

 Test summary - This test summary is collective analysis of all test reports and logs. It
summarizes and concludes if the software is ready to be launched. The software is
released under version control system if it is ready to launch.



Testing vs. Quality Control, Quality Assurance and Audit

We need to understand that software testing is different from software quality assurance,
software quality control and software auditing.

 Software quality assurance - This is a means of monitoring the software development
process, by which it is assured that all the measures are taken as per the standards of the
organization. This monitoring is done to make sure that proper software development
methods were followed.
 Software quality control - This is a system to maintain the quality of software product.
It may include functional and non-functional aspects of software product, which enhance
the goodwill of the organization. This system makes sure that the customer is receiving
a quality product for their requirements and that the product is certified as ‘fit for use’.
 Software audit - This is a review of the procedures used by the organization to develop the
software. A team of auditors, independent of the development team, examines the software
process, procedures, requirements and other aspects of the SDLC. The purpose of a software
audit is to check that the software and its development process both conform to standards,
rules and regulations.

 Other forms of testing



TOPIC 8

SOFTWARE ACQUISITION METHODS

 Software costing

Introduction

Surveys show that nearly one-third of projects overrun their budget and are delivered late, and
two-thirds of all major projects substantially overrun their original estimates. The accurate
prediction of software development costs is a critical issue for making good management
decisions and accurately determining how much effort and time a project requires, both for
project managers and for system analysts and developers. Without reasonably accurate cost
estimation capability, project managers cannot determine how much time and manpower cost the
project should take and that means the software portion of the project is out of control from its
beginning; system analysts cannot make realistic hardware-software tradeoff analyses during the
system design phase; software project personnel cannot tell managers and customers that their
proposed budget and schedule are unrealistic. This may lead to optimistic over promising on
software development and the inevitable overruns and performance compromises as a
consequence. But, actually huge overruns resulting from inaccurate estimates are believed to
occur frequently.

The overall process of developing a cost estimate for software is not different from the process
for estimating any other element of cost. There are, however, aspects of the process that are
peculiar to software estimating. Some of the unique aspects of software estimating are driven by
the nature of software as a product. Other problems are created by the nature of the estimating
methodologies. Software cost estimation is a continuing activity which starts at the proposal
stage and continues through the lift time of a project. Continual cost estimation is to ensure that
the spending is in line with the budget.

Cost estimation is one of the most challenging tasks in project management. It is to accurately
estimate needed resources and required schedules for software development projects. The
software estimation process includes estimating the size of the software product to be produced,
estimating the effort required, developing preliminary project schedules, and finally, estimating
overall cost of the project.

It is very difficult to estimate the cost of software development. Many of the problems that
plague the development effort itself are responsible for the difficulty encountered in estimating
that effort. One of the first steps in any estimate is to understand and define the system to be
estimated. Software, however, is intangible, invisible, and intractable. It is inherently more



difficult to understand and estimate a product or process that cannot be seen and touched.
Software grows and changes as it is written. When hardware design has been inadequate, or
when hardware fails to perform as expected, the "solution" is often attempted through changes to
the software. This change may occur late in the development process, and sometimes results in
unanticipated software growth.

After 20 years of research, there are many software cost estimation methods available, including
algorithmic methods, estimating by analogy, the expert judgment method, the price-to-win method,
the top-down method, and the bottom-up method. No one method is necessarily better or worse than
the others; in fact, their strengths and weaknesses are often complementary to each other. To
understand their strengths and weaknesses is very important when you want to estimate your
projects.

Expert Judgment Method

Expert judgment techniques involve consulting with a software cost estimation expert or a group
of experts to use their experience and understanding of the proposed project to arrive at an
estimate of its cost.

Generally speaking, a group consensus technique, the Delphi technique, is the best one to use. Its
strengths and weaknesses are complementary to the strengths and weaknesses of the algorithmic
method.

To provide a sufficiently broad communication bandwidth for the experts to exchange the
volume of information necessary to calibrate their estimates with those of the other experts, a
wideband Delphi technique is introduced over the standard Delphi technique.

The estimating steps using this method:

1. Coordinator presents each expert with a specification and an estimation form.


2. Coordinator calls a group meeting in which the experts discuss estimation issues with the
coordinator and each other.
3. Experts fill out forms anonymously
4. Coordinator prepares and distributes a summary of the estimation on an iteration form.
5. Coordinator calls a group meeting, specially focusing on having the experts discuss
points where their estimates varied widely.
6. Experts fill out forms, again anonymously, and steps 4 to 6 are iterated for as many
rounds as appropriate.



The wideband Delphi Technique has subsequently been used in a number of studies and cost
estimation activities. It has been highly successful in combining the free-discussion advantages of
the group meeting technique with the advantage of anonymous estimation of the standard Delphi
technique.

The advantages of this method are:


• The experts can factor in differences between past project experience and the requirements
of the proposed project.
• The experts can factor in project impacts caused by new technologies, architectures,
applications and languages involved in the future project, and can also factor in exceptional
personnel characteristics and interactions, etc.

The disadvantages include:

• This method cannot be quantified.


• It is hard to document the factors used by the experts or experts-group.
• Experts may be biased, optimistic or pessimistic, even though such effects are reduced by
the group consensus.
• The expert judgment method always complements the other cost estimating methods, such
as the algorithmic method.

Estimating by Analogy

Estimating by analogy means comparing the proposed project to previously completed similar
projects where the project development information is known. Actual data from the completed
projects are extrapolated to estimate the proposed project. This method can be used either at
system level or at component level.

Estimating by analogy is relatively straightforward. Actually, in some respects it is a systematic
form of expert judgment, since experts often search for analogous situations so as to inform their
opinion.

The steps using estimating by analogy are:

1. Characterizing the proposed project.


2. Selecting the most similar completed projects whose characteristics have been stored in
the historical data base.
3. Deriving the estimate for the proposed project from the most similar completed projects
by analogy.



The main advantages of this method are:

• The estimates are based on actual project characteristic data.
• The estimator's past experience and knowledge, which are not easy to quantify, can be
used.
• The differences between the completed and the proposed project can be identified and
impacts estimated.

However, there are also some problems with this method:

• Using this method, we have to determine how best to describe projects.
The choice of variables must be restricted to information that is available at the point
the prediction is required. Possibilities include the type of application domain, the number
of inputs, the number of distinct entities referenced, the number of screens and so forth.
• Even once we have characterized the project, we have to determine the similarity and
how much confidence we can place in the analogies. Too few analogies might lead to
maverick projects being used; too many might lead to the dilution of the effect of the
closest analogies. Martin Shepperd et al. introduced the method of finding the analogies by
measuring Euclidean distance in n-dimensional space, where each dimension corresponds
to a variable. Values are standardized so that each dimension contributes equal weight to
the process of finding analogies (a small sketch follows this list). Generally speaking, two
analogies are the most effective.
• Finally, we have to derive an estimate for the new project by using known effort values
from the analogous projects. Possibilities include means and weighted means which will
give more influence to the closer analogies.
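
A small Java sketch of the distance calculation described above: each completed project is a
vector of standardized characteristics, and the projects nearest to the proposed project are taken
as its analogies. The project data below are invented:

public class AnalogyDistance {
    // Euclidean distance between two projects described by standardized variables.
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // Invented, already-standardized characteristics:
        // {number of inputs, distinct entities referenced, number of screens}
        double[] proposed = {0.6, 0.4, 0.7};
        double[] projectA = {0.5, 0.5, 0.8};   // completed project A
        double[] projectB = {0.1, 0.9, 0.2};   // completed project B

        System.out.printf("distance to A = %.3f%n", distance(proposed, projectA));
        System.out.printf("distance to B = %.3f%n", distance(proposed, projectB));
        // The nearer project (A here) would contribute more to the estimate.
    }
}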

It has been suggested that estimating by analogy is a superior technique to estimation via
algorithmic models in at least some circumstances. It is a more intuitive method, so it is easier to
understand the reasoning behind a particular prediction.

Top-Down and Bottom-Up Methods

Top-Down Estimating Method

Top-down estimating method is also called Macro Model. Using top-down estimating method,
an overall cost estimation for the project is derived from the global properties of the software
project, and then the project is partitioned into various low-level components. The leading
method using this approach is the Putnam model. This method is more applicable to early cost
estimation when only global properties are known. In the early phases of software
development it is very useful because no detailed information is available.



The advantages of this method are:

• It focuses on system-level activities such as integration, documentation, configuration


management, etc., many of which may be ignored in other estimating methods and it will
not miss the cost of system-level functions.
• It requires minimal project detail, and it is usually faster, easier to implement.

The disadvantages are:

• It often does not identify difficult low-level problems that are likely to escalate costs, and it
sometimes tends to overlook low-level components.
• It provides no detailed basis for justifying decisions or estimates. Because it provides a
global view of the software project, it usually embodies some effective features such as the
cost-time trade-off capability that exists in the Putnam model.

Bottom-up Estimating Method

Using the bottom-up estimating method, the cost of each software component is estimated and the
results are then combined to arrive at an estimated cost of the overall project. It aims at constructing
the estimate of a system from the knowledge accumulated about the small software components and
their interactions. The leading method using this approach is COCOMO's detailed model.

The advantages:

• It permits the software group to handle an estimate in an almost traditional fashion and to
handle estimate components for which the group has a feel.
• It is more stable because the estimation errors in the various components have a chance to
balance out.

The disadvantages:

• It may overlook many of the system-level costs (integration, configuration management,
quality assurance, etc.) associated with software development.
• It may be inaccurate because the necessary information may not be available in the early
phases.
• It tends to be more time-consuming.
• It may not be feasible when either time or personnel are limited.



Algorithmic Method

The algorithmic method is designed to provide some mathematical equations to perform software
estimation. These mathematical equations are based on research and historical data and use
inputs such as Source Lines of Code (SLOC), number of functions to perform, and other cost
drivers such as language, design methodology, skill-levels, risk assessments, etc. The algorithmic
methods have been studied extensively, and many models have been developed, such as the
COCOMO models, the Putnam model, and function-point-based models.

General advantages:

• It is able to generate repeatable estimations.


• It is easy to modify input data, refine and customize formulas.
• It is efficient and able to support a family of estimations or a sensitivity analysis.
• It is objectively calibrated to previous experience.

General disadvantages:

• It is unable to deal with exceptional conditions, such as exceptional personnel in a
software cost estimating exercise, exceptional teamwork, or an exceptional match
between skill levels and tasks.
• Poor sizing inputs and inaccurate cost driver rating will result in inaccurate estimation.
• Some experience and factors cannot be easily quantified.

COCOMO Models

One very widely used algorithmic software cost model is the Constructive Cost Model
(COCOMO). The basic COCOMO model has a very simple form:

MAN-MONTHS = K1 * (Thousands of Delivered Source Instructions)^K2

Where K1 and K2 are two parameters dependent on the application and development
environment.
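As a small, hedged illustration of this form, the sketch below plugs in the commonly quoted "organic mode" constants (K1 = 2.4, K2 = 1.05); a real project would use values of K1 and K2 calibrated to its own environment.

    def basic_cocomo_effort(kdsi, k1=2.4, k2=1.05):
        """Effort in person-months for a project of `kdsi` thousand
        delivered source instructions (basic COCOMO form)."""
        return k1 * kdsi ** k2

    print(round(basic_cocomo_effort(32), 1))   # a 32 KDSI project -> roughly 91 person-months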

Estimates from the basic COCOMO model can be made more accurate by taking into account
other factors concerning the required characteristics of the software to be developed, the
qualification and experience of the development team, and the software development
environment. Some of these factors are:



1. Complexity of the software
2. Required reliability
3. Size of data base
4. Required efficiency (memory and execution time)
5. Analyst and programmer capability
6. Experience of team in the application area
7. Experience of team with the programming language and computer
8. Use of tools and software engineering practices

Many of these factors affect the person months required by an order of magnitude or more.
COCOMO assumes that the system and software requirements have already been defined, and
that these requirements are stable. This is often not the case.

The COCOMO model is a regression model, based on the analysis of 63 selected projects. The
primary input is KDSI. The problems are:

1. In the early phases of the system life cycle, size is estimated with great uncertainty, so an
accurate cost estimate cannot be arrived at.
2. The cost estimation equation is derived from the analysis of 63 selected projects, so it
usually has problems outside of that particular environment. For this reason,
recalibration is necessary.

According to Kemerer's research, the average error for all versions of the model is 601%.

The Detailed and Intermediate models do not seem much better than the Basic model.

The first version of the COCOMO model was originally developed in 1981. It has been
experiencing increasing difficulties in estimating the cost of software developed to new life-cycle
processes and capabilities, including rapid-development process models, reuse-driven approaches,
object-oriented approaches and the software process maturity initiative.

For these reasons, the newest version, COCOMO 2.0, was developed. The major new modeling
capabilities of COCOMO 2.0 are a tailorable family of software size models, involving object
points, function points and source lines of code; nonlinear models for software reuse and
reengineering; an exponent-driver approach for modeling relative software diseconomies of
scale; and several additions, deletions, and updates to previous COCOMO effort-multiplier cost
drivers. This new model is also serving as a framework for an extensive current data collection
and analysis effort to further refine and calibrate the model's estimation capabilities.



Putnam model

Another popular software cost model is the Putnam model. The form of this model is:

Technical constant: C = Size / (B^(1/3) * T^(4/3))

Total person-months: B = (1/T^4) * (Size/C)^3

T = required development time in years

Size is estimated in LOC

where C is a parameter dependent on the development environment; it is determined on the basis
of historical data from past projects.

Rating: C = 2,000 (poor), C = 8,000 (good), C = 12,000 (excellent).

The Putnam model is very sensitive to the development time: decreasing the development time
can greatly increase the person-months needed for development.
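The following small numeric sketch illustrates that sensitivity, using the "good environment" rating C = 8,000 from above; the size and schedule figures are invented.

    def putnam_effort(size_loc, t_years, c=8000):
        """Total effort B (as defined above) for a given size in LOC and a
        required development time T in years."""
        return (1.0 / t_years ** 4) * (size_loc / c) ** 3

    size = 100_000
    for t in (2.0, 1.5):
        print(t, round(putnam_effort(size, t), 1))
    # Compressing the schedule from 2.0 to 1.5 years multiplies the effort
    # by (2.0 / 1.5) ** 4, i.e. roughly 3.2 times, for the same size.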

One significant problem with the Putnam model is that it is based on knowing, or being able
to estimate accurately, the size (in lines of code) of the software to be developed. There is often
great uncertainty in software size, which may result in inaccurate cost estimation.
According to Kemerer's research, the error percentage of SLIM, a Putnam-model-based method,
is 772.87%.

Function Point Analysis Based Methods

From the above two algorithmic models, we find that they require the estimators to estimate the
number of SLOC in order to get person-month and duration estimates. Function Point
Analysis is another method of quantifying the size and complexity of a software system, in terms
of the functions that the system delivers to the user. A number of proprietary models for cost
estimation have adopted a function point type of approach, such as ESTIMACS and SPQR/20.

The function point measurement method was developed by Allan Albrecht at IBM and published
in 1979. He believes function points offer several significant advantages over SLOC counts of
size measurement. There are two steps in counting function points:

• Counting the user functions. The raw function counts are arrived at by considering a
linear combination of five basic software components: external inputs, external outputs,
external inquiries, internal logical files, and external interfaces, each at one of three
complexity levels: simple, average or complex. The sum of these numbers, weighted
according to the complexity level, is the number of function counts (FC).
• Adjusting for environmental processing complexity. The final function-point count is arrived
at by multiplying FC by an adjustment factor that is determined by considering 14 aspects
of processing complexity. This adjustment factor allows the FC to be modified by at most
±35% (a small sketch of both steps follows this list).
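A minimal sketch of the two counting steps is shown below; the component weights are the commonly published Albrecht/IFPUG values for simple, average and complex items and should be treated as illustrative, and the example counts are invented.

    WEIGHTS = {                        # (simple, average, complex)
        "external_inputs":     (3, 4, 6),
        "external_outputs":    (4, 5, 7),
        "external_inquiries":  (3, 4, 6),
        "internal_files":      (7, 10, 15),
        "external_interfaces": (5, 7, 10),
    }
    LEVEL = {"simple": 0, "average": 1, "complex": 2}

    def function_points(counts, complexity_ratings):
        # Step 1: raw (unadjusted) function counts, FC
        fc = sum(n * WEIGHTS[comp][LEVEL[level]]
                 for (comp, level), n in counts.items())
        # Step 2: adjustment factor from the 14 processing-complexity
        # ratings (0-5 each), giving the +/-35% range mentioned above.
        adjustment = 0.65 + 0.01 * sum(complexity_ratings)
        return fc * adjustment

    counts = {("external_inputs", "average"): 20,
              ("external_outputs", "simple"): 15,
              ("internal_files", "complex"): 4}
    print(function_points(counts, [3] * 14))   # FC = 200, adjustment = 1.07 -> 214 FP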

The collection of function point data has two primary motivations. One is the desire by managers
to monitor levels of productivity. Another use of it is in the estimation of software development
cost.

There are some cost estimation methods which are based on a function point type of
measurement, such as ESTIMACS and SPQR/20. SPQR/20 is based on a modified function
point method. Whereas traditional function point analysis is based on evaluating 14 factors,
SPQR/20 separates complexity into three categories: complexity of algorithms, complexity of
code, and complexity of data structures. ESTIMACS is a proprietary system designed to give
development cost estimate at the conception stage of a project and it contains a module which
estimates function point as a primary input for estimating cost.

The advantages of function point analysis based model are:

1. Function points can be estimated from requirements specifications or design
specifications, thus making it possible to estimate development cost in the early phases of
development.
2. Function points are independent of the language, tools, or methodologies used for
implementation.
3. Non-technical users have a better understanding of what function points are measuring,
since function points are based on the system user's external view of the system.

From Kemerer's research, the mean error percentage of ESTIMACS is only 85.48%. Considering
the 601% for COCOMO and roughly 772% for SLIM, the function point based cost estimation
methods are therefore the better approach, especially in the early phases of development.

The Selection and Use of Estimation Methods

The selection of Estimation methods

From the above comparison, we know that no one method is necessarily better or worse than
another; in fact, their strengths and weaknesses are often complementary to each other. According
to experience, it is recommended that a combination of algorithmic models with analogy or
expert judgment estimation methods be used to get reliable, accurate cost estimates for software
development.

For known projects and project parts, we should use the expert judgment or analogy
method if suitable similarities can be found, since these are fast and, under those circumstances,
reliable. For large, lesser-known projects, it is better to use an algorithmic model; in this case,
many researchers recommend estimation models that do not require SLOC as an input.
COCOMO 2.0 is a strong first candidate because it can use not only source lines of code (SLOC)
but also object points and unadjusted function points as metrics for sizing a project. If we
approach cost estimation by parts, we may use expert judgment for some known parts. This way
we can take advantage of both the rigor of models and the speed of expert judgment or analogy.
Because the advantages and disadvantages of each technique are complementary, a combination
will reduce the negative effect of any one technique, augment their individual strengths and help
to cross-check one method against another.

Use of Estimation Methods

It is very common to apply some cost estimation method to estimate the cost of software
development, but it is also very important to continually re-estimate cost and to compare targets
against actual expenditure at each major milestone. This keeps the status of the project visible
and helps to identify necessary corrections to budget and schedule as soon as deviations occur.

At every estimation and re-estimation point, iteration is an important tool to improve estimation
quality. The estimator can use several estimation techniques and check whether their estimates
converge. The other advantages are as follows:

• Different estimation methods may use different data. This results in better coverage of the
knowledge base for the estimation process. It can help to identify cost components that
cannot be dealt with or were overlooked in one of the methods.
• Different viewpoints and biases can be taken into account and reconciled. A competitive
contract bid, a high business priority to keep costs down, or a small market window with
the resulting tight deadlines tends to have optimistic estimates. A production schedule
established by the developers is usually more on the pessimistic side to avoid committing
to a schedule and budget one cannot meet.

It is also very important to compare actual cost and time to the estimates, even if only one or two
techniques are used. This will also provide the necessary feedback to improve estimation quality
in the future. Generally, a historical database for cost estimation should be set up for future
use.

Identifying the goals of the estimation process is very important because it will influence the
effort spent in estimating, its accuracy, and the models used. Tight schedules with high risks
require more accurate estimates than loosely defined projects with a relatively open-ended
schedule. The estimators should look at the quality of the data upon which estimates are based
and at the various objectives.

Model Calibration

The act of calibration standardizes a model. Many models are developed for specific situations
and are, by definition, calibrated to that situation. Such models usually are not useful outside of
their particular environment. So, the act of calibration is needed to increase the accuracy of one
of these general models by making it temporarily a specific model for whatever product it has
been calibrated for. Calibration is in a sense customizing a generic model. Items which can be
calibrated in a model include: product types, operating environments, labor rates and factors,
various relationships between functional cost items, and even the method of accounting used by
a contractor. All general models should be standardized (i.e. calibrated), unless used by an
experienced modeler with the appropriate education, skills and tools, and experience in the
technology being modeled.

Calibration is the process of determining the deviation from a standard in order to compute the
correction factors. For cost estimating models, the standard is considered historical actual costs.
The calibration procedure is theoretically very simple. It is simply running the model with
normal inputs (known parameters such as software lines of code) against items for which the
actual costs are known. These estimates are then compared with the actual costs, and the average
deviation becomes a correction factor for the model. In essence, the calibration factor obtained is
really good only for the type of inputs that were used in the calibration runs. For a general total
model calibration, a wide range of components with actual costs needs to be used. Better yet,
numerous calibrations should be performed with different types of components in order to obtain
a set of calibration factors for the various possible expected estimating situations.
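A minimal sketch of this procedure, assuming a simple multiplicative correction factor and invented historical figures, might look as follows (the generic model shown reuses the basic COCOMO form from earlier):

    def calibration_factor(model, completed_projects):
        """completed_projects: list of (known_inputs, actual_cost) pairs.
        Returns the average ratio of actual cost to modelled cost."""
        ratios = [actual / model(inputs) for inputs, actual in completed_projects]
        return sum(ratios) / len(ratios)

    def calibrated(model, factor):
        return lambda inputs: factor * model(inputs)

    generic = lambda kdsi: 2.4 * kdsi ** 1.05          # uncalibrated generic model
    history = [(20, 65.0), (50, 170.0), (10, 28.0)]    # (KDSI, actual person-months)
    factor = calibration_factor(generic, history)
    local_model = calibrated(generic, factor)
    print(round(factor, 2), round(local_model(30), 1))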

Conclusions

The accurate prediction of software development costs is critical both for making good
management decisions and for determining accurately how much effort and time a project will
require, for project managers as well as for system analysts and developers. There are many
software cost estimation methods available, including algorithmic methods, estimating by
analogy, the expert judgment method, the top-down method, and the bottom-up method. No one
method is necessarily better or worse than another; in fact, their strengths and weaknesses are
often complementary to each other. Understanding their strengths and weaknesses is very
important when you want to estimate your projects.

For a specific project, which estimation methods should be used depends on the environment of
the project. According to the weaknesses and strengths of the methods, you can choose which
methods to use. A combination of the expert judgment or analogy method and COCOMO 2.0 is
arguably the best approach. For known projects and project parts, we should use the expert
judgment or analogy method if suitable similarities can be found, since these are fast and, under
those circumstances, reliable. For large, lesser-known projects, it is better to use an algorithmic
model such as COCOMO 2.0. If COCOMO 2.0 is not available, ESTIMACS or other
function-point-based methods are highly recommended, especially in the early phases of the
software life cycle, because SLOC-based methods then carry great uncertainty in the size
estimate. If there is great uncertainty about size, reuse, cost drivers and so on, the analogy
method or wide-band Delphi technique should be considered as the first candidate. COCOMO
2.0 also has capabilities to deal with current software processes and serves as a framework for an
extensive data collection and analysis effort to further refine and calibrate the model's estimation
capabilities. COCOMO 2.0 was developed by Dr. Barry Boehm and his students, who expected
to have it calibrated and usable in early 1997, and it was generally expected to become very
popular.

Some recommendations:

1. Do not depend on a single cost or schedule estimate.


2. Use several estimating techniques or cost models, compare the results, and determine the
reasons for any large variations.
3. Document the assumptions made when making the estimates.
4. Monitor the project to detect when assumptions that turn out to be wrong jeopardize the
accuracy of the estimate.
5. Improve software process: An effective software process can be used to increase
accuracy in cost estimation in a number of ways.
6. Maintain a historical database.

 Software outsourcing

Software Outsourcing and its new Definition

The way of working in the software outsourcing process is changing. Day by day the process
is becoming more and more innovative as new ideas about the business process take hold.



Companies are no longer relying on a single overseas partner; they increasingly distribute the
work among more than one overseas company. In this way they try to reduce the risk of the
overall project: by distributing the work, both geographical and financial risks can be reduced.
Clients and vendors have also become more mature with the growth of the overseas software
outsourcing industry.

In software outsourcing, the main reason behind the failure of a deal is improper or poor
communication between the vendor and the client. A communication gap between these two
parties means the needs and requirements of both sides are not met, and the project fails.
To make the overall process work, an onshore relationship with the overseas service provider has
also become part of the new strategy, and it plays an important role: the client is able to
communicate its needs better during offshore software development. For the deal to run
favourably, all the primary ideas must also be clear in your own mind. The company's needs and
requirements must be clear to you, along with the goal of gaining a lower-price advantage.
Otherwise, you might yourself be the main reason for the failure of the deal, while continuing to
blame the software outsourcing vendor for no reason.

Software Outsourcing

So, in the software outsourcing process, be clear with yourself in order to remove inaccuracy
during the project, and communicate properly with the service provider as and when required,
because even a minor mistake on your side can lead to a poor result at the end of the project.
Having an internal central project office can help make communication smooth; it will help in all
aspects of the business, including evaluation, maintenance and delivery of the overseas project.
Collaboration and proper communication are the most important keys to success in the overall
software outsourcing deal.

In offshore outsourcing, the client previously used to gather all the information and send it to
the overseas service provider, and resources could also be shifted overseas to make the deal work.
Now a different process is taking place in these overseas deals: key management people from
the service-providing companies stay at the client's site and help run the overall software
outsourcing process smoothly.

In short, the overall scenario in the software outsourcing process has changed. With the maturing
of the market, the mindsets of clients and vendors are also changing, and continuing innovation
and the generation of new ideas in overseas deals have changed the whole business process of
software outsourcing.



 Open-source software engineering and customization

Generally, open source refers to a computer program in which the source code is available to the
general public for use and/or modification from its original design. Open-source code is meant to
be a collaborative effort, where programmers improve upon the source code and share the
changes within the community. Typically this is not the case, and code is merely released to the
public under some license. Others can then download, modify, and publish their version (fork)
back to the community. Today you find more projects with forked versions than unified projects
worked on by large teams.

Many large formal institutions have sprung up to support the development of the open-source
movement, including the Apache Software Foundation, which supports projects such as Apache
Hadoop, the open-source framework behind big data, and the open-source HTTP server, Apache
HTTP Server.

The open-source model is based on a more decentralized model of production, in contrast with
more centralized models of development such as those typically used in commercial software
companies.

Customization

Customization is the modification of packaged software to meet individual requirements. Before
an enterprise can automate its operations using packaged software, it must first make sure that the
software caters for all the processes it needs to automate; this step is called implementation. If the
software already includes all the necessary capabilities, it is simply a matter of selecting the
correct configuration (often a complex operation in itself). Adding extra capabilities that are not
included in the off-the-shelf package involves writing additional software code; this is known as
customization.

Open Source Software Customization Benefits for Clients

 Cost savings typically range from 40% to 50% compared to wholly custom development
 Enhanced portability
 Vendor neutrality
 Reduced development times
 Flexibility for meeting specific needs (not available with proprietary licensed software)
 Long term customer support, fixes, enhancements, and software updates from wide user
base



 In-house development

When a company needs a piece of software written they sometimes choose to use programmers
within their own company to write it. This is known as "in-house" development.

Pros
The level of customization is perhaps the biggest benefit of custom software. While a
commercial package may fit many of your business’s needs, it’s doubtful that it will have the
same efficiency as custom software. By meeting your exact specifications, you can cover every
aspect of your business without unnecessary extras. It gives you greater control, which is
important if your business has specific needs that your average commercial product can’t fulfill.
Having customized software should also make the interface more familiar and easy to use.

Because in-house software is developed by a team of your choosing, it also gives you access to
knowledgeable support. Rather than dealing with technicians who may not understand your
unique situation, you can get support from the individuals who have developed your software
firsthand. They will understand any subtle nuances and minimize downtime from technical
errors.

Cons
Your team of in-house developers may lack the knowledge and expertise to create sophisticated
software capable of handling all the tasks you require. If you only need basic software, this
probably won’t be an issue. However, if you need more sophisticated software, this could be
more trouble than it’s worth and lead to bugs and glitches. This may force you to bring in outside
consultants who lack familiarity with your business, which can also be detrimental.

Custom software also tends to lack scalability, and upgrades can be troublesome. Because
technology is constantly evolving, you may have difficulty adapting to new platforms in the
future. Although developed software may work well for a while, it could become defunct in a
few years. This can force you to spend more money on developing new software.

 Commercial Off The Shelf software (COTS)

Commercial Off The Shelf (COTS) software is commercially available specialized software
designed for specific applications (such as legal or medical billing, chemical analysis, or
statistical analysis) that can be used with little or no modification.

Commercial-off-the-shelf (COTS) software and services are usually built and delivered by a
third-party vendor. COTS can be purchased, leased or even licensed to the general public.

COTS provides some of the following strengths


 Applications are provided at a reduced cost.
 The application is more reliable when compared to custom-built software because its
reliability is proven through use by other organizations.
 COTS is more maintainable because the systems documentation is provided with the
application.
 The application is of higher quality because competition improves the product quality.
 COTS is of higher complexity because specialists within the industry have developed the
software.
 The marketplace, not industry, drives the development of the application.
 The delivery schedule is reduced because the basic product is already operational.

 Budgeting for information systems

o Financial cost benefit analysis

• Tangible Costs
- Direct investment in software & hardware (one time)
- IS installation & employee training (one time)
- Operating costs for an IS (recurring) – expenditures on software licences, labor costs of
IS staff, IS maintenance, overhead for facilities, expenses of communications carried out
by computer networks partaking in IS.
- Loss of money and time with new IS that does not perform as expected (opportunity
cost).
- Total Cost of Ownership sums up all the costs in a system life cycle.

• Intangible Costs
- Effort put in learning a new IS and associated processes
- Employees’ loss of work motivation due to new processes/IS
- Employees’ resistance to new processes/IS
- Lower customer satisfaction due to improperly performing IS
- Limitations in decision making when a new IS cannot deliver reports managers need to
make decisions.
- Note that intangible costs may result in tangible costs.

• Tangible Benefits
- Savings on many counts:
- savings on labor expenses
- savings due to reduced process time (e.g., reducing inventory costs in supply
chain process)
- savings due to avoiding adding more employees when an improved process/IS can
carry a larger volume of operations
- Organizational performance gains (new IS/process → organizational productivity → financial returns).
- Better decision making resulting in income increase (e.g., moving into new product and
geographical markets)
- Cutting losses by improved management control (e.g., ERPS case of detecting fraudulent
purchases)
- Data error reduction eliminating waste of business time & labour for repeated tasks.

• Intangible Benefits
- Customer value that does not translate directly into monetary gains for a company
- Better control and decision making, which do not translate readily into monetary gains
- Improvement in the appearance of reports and other business documentation (better
quality but no more money).
- Increased knowledge capabilities (note: these are a condition for making more attractive
products, but before these products are made and sold no monetary gains accrue).

• Financial Assessments of IS Economy


1. Returns’ size focus: Various ratios of how much an IS returns in relation to its costs
(Benefit/Cost Ratio, Net Present Value, Return on Investment):
– The higher the ratios, the more economically valuable the IS
– Present value of money used (future returns as well as costs discounted for some
rate) as finances flow over years (NPV function in Excel)
2. Returns’ timing focus: Assessment of when returns will occur (e.g., Break-Even
Analysis)

• Mixed Methods of Assessing IS Economy


1. Portfolio Analysis
– The focus is on controlling risks that different systems can bring
– In planning IS, IS projects compared on risks they bear (completion within budget
& time, technology demands, size of organizational change required)
– Risk = probability a problem will happen x weight of problem
– Risk can be thought of as a special and critical cost
– Riskier projects: Expensive systems, new technologies, and larger org. changes
(e.g., enterprise systems)
2. Balanced Scorecard
– The focus is on achieving organizational goals
– A combination of tangible and intangible benefits in select areas – finances,
customer relations, key processes, growth potential, anything else important for a
company.
– IS contribution to these performance indicators is assessed periodically.



• Software and Hardware Acquisition
Three options: Make, Buy, Rent
1 In-house Development (company's IS Department does programming, hardware acquisition,
and IS installation)
2 Buy:
– Off-the-shelf software (e.g., Microsoft Office, SAP)
– Buy custom-built software (a software vendor writes software according to the
client company’s requirements).
– Note: If there is a system development capability in the IS Department, the buy
options are called “outsourcing” (sourcing outside of own company)

3 Rent:
– Annual licensing of software or hardware
– Rent via the Cloud (partial or total IS services).
• Cloud Advantages:
– Reduce costs: pay-per-use, avoiding development & maintenance costs
– Client benefits from new IT as vendor keeps updating it to remain competitive →
gains in client's business processes.
• Cloud Disadvantages:
– Synchronizing business processes between client and vendor
– Risk of compromising confidentiality of business data
– Vendor lock-in (it is hard to get out of Cloud as a company relies more on a cloud
vendor)
– Unexpected changes in pricing services.

• Summary
• Costs of IS can be tangible (expressed in monetary terms) & intangible (all other forms).
Examples of tangible costs are investment in computer software and hardware, and
system’s operating costs.
• Benefits of Information Systems can be tangible & intangible. Examples of tangible
benefits are cost reduction and income gains.
• Financial Assessments of IS economy focuses on the size of returns (e.g., NPV) and on
timing of returns (e.g., payback period).
• Mixed Assessments of IS economy cover tangible and intangible C/B (portfolio analysis,
and balanced scorecard).
• Software can be developed by the company’s IS department, purchased, or rented;
hardware is usually purchased or rented. Each option has pros and cons.
• Cloud (cloud computing) is the trendy rental option with significant pros & cons.



o Business case approach

A business case captures the reasoning for initiating a project or task. It is often presented in a
well-structured written document, but may also sometimes come in the form of a short verbal
argument or presentation. The logic of the business case is that, whenever resources such as
money or effort are consumed, they should be in support of a specific business need. An example
could be that a software upgrade might improve system performance, but the "business case" is
that better performance would improve customer satisfaction, require less task processing time,
or reduce system maintenance costs. A compelling business case adequately captures both the
quantifiable and unquantifiable characteristics of a proposed project. Business case depends on
business attitude and business volume.

Business cases can range from comprehensive and highly structured, as required by formal
project management methodologies, to informal and brief. Information included in a formal
business case could be the background of the project, the expected business benefits, the options
considered (with reasons for rejecting or carrying forward each option), the expected costs of the
project, a gap analysis and the expected risks. Consideration should also be given to the option of
doing nothing including the costs and risks of inactivity. From this information, the justification
for the project is derived. Note that it is not the job of the project manager to build the business
case; this task is usually the responsibility of stakeholders and sponsors.

Traditional Capital Budgeting Models


Capital budgeting models are one of several techniques used to measure the value of investing in
long-term capital investment projects. The process of analyzing and selecting various proposals
for capital expenditures is called capital budgeting. Firms invest in capital projects to expand
production to meet anticipated demand or to modernize production equipment to reduce costs.
Firms also invest in capital projects for many noneconomic reasons, such as installing pollution
control equipment, converting to a human resources database to meet some government
regulations, or satisfying nonmarket public demands. Information systems are considered
long-term capital investment projects.
Six capital budgeting models are used to evaluate capital projects:
• The payback method
• The accounting rate of return on investment (ROI)
• The net present value
• The cost-benefit ratio
• The profitability index
• The internal rate of return (IRR)

Capital budgeting methods rely on measures of cash flows into and out of the firm.
Capital projects generate cash flows into and out of the firm. The investment cost is an immediate
cash outflow caused by the purchase of the capital equipment. In subsequent years, the
investment may cause additional cash outflows that will be balanced by cash inflows resulting
from the investment. Cash inflows take the form of increased sales of more products (for reasons
such as new products, higher quality, or increasing market share) or reduced costs in production
and operations. The difference between cash outflows and cash inflows is used for calculating the
financial worth of an investment. Once the cash flows have been established, several alternative
methods are available for comparing different projects and deciding about the investment.
Financial models assume that all relevant alternatives have been examined, that all costs and
benefits are known, and that these costs and benefits can be expressed in a common metric,
specifically, money. When one has to choose among many complex alternatives, these
assumptions are rarely met in the real world, although they may be approximated.

Tangible benefits can be quantified and assigned a monetary value. Intangible benefits, such as
more efficient customer service or enhanced employee goodwill, cannot be immediately
quantified but may lead to quantifiable gains in the long run.
You are familiar with the concept of total cost of ownership (TCO), which is designed to identify
and measure the components of information technology expenditures beyond the initial cost of
purchasing and installing hardware and software. However, TCO analysis provides only part of
the information needed to evaluate an information technology investment because it does not
typically deal with benefits, cost categories such as complexity costs, and "soft" and strategic
factors discussed later in this section.

THE PAYBACK METHOD


The payback method is quite simple: It is a measure of the time required to pay back the initial
investment of a project. The payback period is computed as follows:

Original investment / Annual net cash inflow = Number of years to pay back

The weakness of this measure is its virtue: The method ignores the time value of money, the
amount of cash flow after the payback period, the disposal value (usually zero with computer
systems), and the profitability of the investment.
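A quick worked example of the payback formula, using invented figures:

    original_investment = 120_000
    annual_net_cash_inflow = 40_000
    payback_years = original_investment / annual_net_cash_inflow
    print(payback_years)   # 3.0 years to recover the initial investment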

ACCOUNTING RATE OF RETURN ON INVESTMENT (ROI)


Firms make capital investments to earn a satisfactory rate of return. Determining a satisfactory
rate of return depends on the cost of borrowing money, but other factors can enter into the
equation. Such factors include the historic rates of return expected by the firm.
In the long run, the desired rate of return must equal or exceed the cost of capital in the
marketplace. Otherwise, no one will lend the firm money.
The accounting rate of return on investment (ROI) calculates the rate of return from an
investment by adjusting the cash inflows produced by the investment for depreciation.



It gives an approximation of the accounting income earned by the project.
To find the ROI, first calculate the average net benefit. The formula for the average net benefit is
as follows:

(Total benefits – Total cost – Depreciation) / Useful life = Net benefit

This net benefit is divided by the total initial investment to arrive at ROI. The formula is as
follows:

Net benefit / Total initial investment = ROI

The weakness of ROI is that it can ignore the time value of money. Future savings are simply not
worth as much in today's dollars as are current savings. However, ROI can be modified (and
usually is) so that future benefits and costs are calculated in today's dollars.
(The present value function on most spreadsheets can perform this conversion.)
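A small worked example of the two ROI formulas above, using invented figures:

    total_benefits = 500_000
    total_cost = 300_000
    depreciation = 50_000
    useful_life_years = 5
    total_initial_investment = 200_000

    net_benefit = (total_benefits - total_cost - depreciation) / useful_life_years
    roi = net_benefit / total_initial_investment
    print(round(roi, 3))   # 0.15, i.e. a 15% accounting rate of return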

NET PRESENT VALUE


Evaluating a capital project requires that the cost of an investment (a cash outflow usually in year
0) be compared with the net cash inflows that occur many years later. But these two kinds of
cash flows are not directly comparable because of the time value of money. Money you have
been promised to receive three, four, and five years from now is not worth as much as money
received today. Money received in the future has to be discounted by some appropriate
percentage rate, usually the prevailing interest rate or sometimes the cost of capital. Present
value is the value in current dollars of a payment or stream of payments to be received in the
future. It can be calculated by using the following formula:

Payment * [1 – (1 + interest)^-n] / Interest = Present value

Thus, to compare the investment (made in today’s dollars) with future savings or earnings, you
need to discount the earnings to their present value and then calculate the net present value of the
investment. The net present value is the amount of money an investment is worth, taking into
account its cost, earnings, and the time value of money. The formula for net present value is this:

Present value of expected cash flows – Initial investment cost = Net present value
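A minimal sketch combining the present value and net present value formulas above; the cash-flow figures are invented.

    def present_value(payment, interest, n_years):
        """Present value of a level annual payment received for n_years."""
        return payment * (1 - (1 + interest) ** -n_years) / interest

    initial_investment = 100_000
    annual_cash_inflow = 30_000
    rate = 0.08
    years = 5

    npv = present_value(annual_cash_inflow, rate, years) - initial_investment
    print(round(npv, 2))   # a positive NPV means the inflows are worth more than the cost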



COST-BENEFIT RATIO
A simple method for calculating the returns from a capital expenditure is to calculate the cost-
benefit ratio, which is the ratio of benefits to costs. The formula is

Total benefits / Total costs = Cost-benefit ratio

The cost-benefit ratio can be used to rank several projects for comparison. Some firms establish a
minimum cost-benefit ratio that must be attained by capital projects. The cost-benefit ratio can,
of course, be calculated using present values to account for the time value of money.

PROFITABILITY INDEX
One limitation of net present value is that it provides no measure of profitability. Neither does it
provide a way to rank order different possible investments. One simple solution is provided by
the profitability index. The profitability index is calculated by dividing the present value of the
total cash inflow from an investment by the initial cost of the investment.
The result can be used to compare the profitability of alternative investments.

Present value of cash inflows / Investment = Profitability index
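A small example covering both the cost-benefit ratio of the previous subsection and the profitability index; all figures are invented, and the discounted-inflow figure simply reuses the NPV example above.

    total_benefits = 180_000              # undiscounted benefits
    total_costs = 120_000                 # undiscounted costs
    cost_benefit_ratio = total_benefits / total_costs            # 1.5

    pv_cash_inflows = 119_781             # discounted inflows from the NPV example
    initial_investment = 100_000
    profitability_index = pv_cash_inflows / initial_investment   # about 1.2
    print(round(cost_benefit_ratio, 2), round(profitability_index, 2))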

INTERNAL RATE OF RETURN (IRR)


Internal rate of return (IRR) is defined as the rate of return or profit that an investment is
expected to earn, taking into account the time value of money. IRR is the discount (interest) rate
that will equate the present value of the project's future cash flows to the initial cost of the
project (treated as a negative cash flow in year 0). In other words, the value of R (the discount
rate) is such that Present value – Initial cost = 0.
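Because IRR is defined only implicitly (the rate at which net present value falls to zero), it is normally found numerically. The sketch below does this by simple bisection; the cash flows are invented and the routine is illustrative rather than a standard financial-library function.

    def npv(rate, cash_flows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows, lo=0.0, hi=1.0):
        """Bisection search for the rate between lo and hi where NPV = 0."""
        for _ in range(60):
            mid = (lo + hi) / 2
            if npv(mid, cash_flows) > 0:
                lo = mid      # NPV still positive, so the IRR is higher
            else:
                hi = mid
        return (lo + hi) / 2

    flows = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]   # year 0 outflow, then inflows
    print(round(irr(flows), 4))   # roughly 0.152, i.e. about a 15% internal rate of return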

o Total cost of ownership

Total cost of ownership (TCO) is a financial estimate intended to help buyers and owners
determine the direct and indirect costs of a product or system. It is a management accounting
concept that can be used in full cost accounting or even ecological economics, where it includes
social costs.

TCO, when incorporated in any financial benefit analysis, provides a cost basis for determining
the total economic value of an investment. Examples include: return on investment, internal rate
of return, economic value added, return on information technology, and rapid economic
justification.
A TCO analysis includes total cost of acquisition and operating costs. A TCO analysis is used to
gauge the viability of any capital investment. An enterprise may use it as a product/process
comparison tool. It is also used by credit markets and financing agencies. TCO directly relates to
an enterprise's asset and/or related systems total costs across all projects and processes, thus
giving a picture of profitability over time.

For example, the total cost of ownership of a car is not just the purchase price, but also the
expenses incurred through its use, such as repairs, insurance and fuel. A used car that appears to
be a great bargain might actually have a total cost of ownership that is higher than that of a new
car, if the used car requires numerous repairs while the new car has a three-year warranty.

TCO quantifies the cost of the purchase across the product's entire lifecycle. Therefore, it offers a
more accurate basis for determining the value (cost vs. ROI) of an investment than the purchase
price alone. The overall TCO includes direct and indirect expenses, as well as some intangible
ones that may be assigned a monetary value. For example, a server's TCO might include an
expensive purchase price, a good deal on ongoing support, and low system management time
because of its user-friendly interface.

TCO factors in costs accumulated from purchase to decommissioning. For a data center server,
for example, this means initial acquisition price, repairs, maintenance, upgrades, service or
support contracts, network integration, security, software licenses (such as Windows Server 2012
R2) and user training. It can even include the credit terms on which the company purchased the
product. Through analysis, the purchasing manager might assign a monetary value to intangible
costs, such as systems management time, electricity used, downtime, insurance and other
overhead. The total cost of ownership must be compared to the total benefits of ownership
(TBO) to determine the viability of a purchase.
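A minimal sketch of a TCO comparison between two hypothetical servers (all figures invented) shows how the cheaper purchase can turn out to be the more expensive item to own:

    def total_cost_of_ownership(acquisition, annual_recurring, years):
        """Acquisition price plus recurring costs over the service life."""
        return acquisition + sum(annual_recurring.values()) * years

    server_a = total_cost_of_ownership(
        acquisition=8_000,
        annual_recurring={"support": 1_200, "power": 600, "admin_time": 900, "licences": 700},
        years=5)
    server_b = total_cost_of_ownership(
        acquisition=11_000,
        annual_recurring={"support": 500, "power": 450, "admin_time": 400, "licences": 700},
        years=5)
    print(server_a, server_b)   # 25000 vs 21250: the dearer server is cheaper to own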

There are several methodologies and software tools to calculate total cost of ownership, but the
process is not perfect. Many enterprises fail to define a single methodology, which means they
cannot base purchasing decisions on uniform information. Another problem is that
it is difficult to determine the scope of operating costs for any piece of IT equipment; some cost
factors are easily overlooked or inaccurately compared from one product to another. For
example, support costs on one server include the cost of spare parts. This might make support
cost more than it does on another server, but eliminates an additional cost factor of parts
acquisition.

Cost of ownership analysis generally doesn't anticipate unpredictable rising costs over time, for
example, if upgrade part costs jump substantially more than expected due to a distributor change.
TCO calculations cannot account for the availability of upgrades and services, or the impact of
vendor relationships. If a vendor refuses to offer service after three years, no longer stocks parts
after five years, or ends support for certain software, the business may be subject to unexpected
and significant additional costs which could drive the TCO far beyond its initial estimate.

Enterprise managers and purchasing decision makers complete total cost of ownership analysis
for multiple options, then compare TCOs to determine the best long-term investment. For
example, one server's purchase price might be less expensive than a competitive model, but the
decision maker can see that anticipated upgrades and annual service contracts would drive the
total cost much higher. In turn, one model's TCO may be slightly higher than another model's,
but its TBO far exceeds that of the competitive offering.

Without TCO analysis, enterprises could greatly miscalculate IT budgets, or purchase servers
and other components unsuited to their computing needs, resulting in slow services, uncontrolled
downtime and other problems.

o Balanced scorecard/activity based costing and expected value

The Balanced Scorecard (BSC) is a strategy performance management tool - a semi-standard
structured report, supported by design methods and automation tools, that can be used by
managers to keep track of the execution of activities by the staff within their control and to
monitor the consequences arising from these actions.

The critical characteristics that define a balanced scorecard are:

 Its focus on the strategic agenda of the organization concerned


 The selection of a small number of data items to monitor
 A mix of financial and non-financial data items

 Balanced Scorecard Basics


The balanced scorecard is a strategic planning and management system that is used extensively
in business and industry, government, and nonprofit organizations worldwide to align business
activities to the vision and strategy of the organization, improve internal and external
communications, and monitor organization performance against strategic goals. It was originated
by Drs. Robert Kaplan (Harvard Business School) and David Norton as a performance
measurement framework that added strategic non-financial performance measures to traditional
financial metrics to give managers and executives a more 'balanced' view of organizational
performance. While the phrase balanced scorecard was coined in the early 1990s, the roots of
this type of approach are deep, and include the pioneering work of General Electric on
performance measurement reporting in the 1950’s and the work of French process engineers
(who created the Tableau de Bord – literally, a "dashboard" of performance measures) in the
early part of the 20th century.



Gartner Group suggests that over 50% of large US firms have adopted the BSC. More than half
of major companies in the US, Europe and Asia are using balanced scorecard approaches, with
use growing in those areas as well as in the Middle East and Africa. A recent global study by
Bain & Co listed balanced scorecard fifth on its top ten most widely used management tools
around the world, a list that includes closely-related strategic planning at number one. Balanced
scorecard has also been selected by the editors of Harvard Business Review as one of the most
influential business ideas of the past 75 years.

The balanced scorecard has evolved from its early use as a simple performance measurement
framework to a full strategic planning and management system. The “new” balanced scorecard
transforms an organization’s strategic plan from an attractive but passive document into the
"marching orders" for the organization on a daily basis. It provides a framework that not only
provides performance measurements, but helps planners identify what should be done and
measured. It enables executives to truly execute their strategies.

This new approach to strategic management was first detailed in a series of articles and books by
Drs. Kaplan and Norton. Recognizing some of the weaknesses and vagueness of previous
management approaches, the balanced scorecard approach provides a clear prescription as to
what companies should measure in order to 'balance' the financial perspective. The balanced
scorecard is a management system (not only a measurement system) that enables organizations
to clarify their vision and strategy and translate them into action. It provides feedback around
both the internal business processes and external outcomes in order to continuously improve
strategic performance and results. When fully deployed, the balanced scorecard transforms
strategic planning from an academic exercise into the nerve center of an enterprise.

Kaplan and Norton describe the innovation of the balanced scorecard as follows:

"The balanced scorecard retains traditional financial measures. But financial measures tell the
story of past events, an adequate story for industrial age companies for which investments in
long-term capabilities and customer relationships were not critical for success. These financial
measures are inadequate, however, for guiding and evaluating the journey that information age
companies must make to create future value through investment in customers, suppliers,
employees, processes, technology, and innovation."



Adapted from Robert S. Kaplan and David P. Norton, "Using the Balanced Scorecard as a
Strategic Management System," Harvard Business Review (January-February 1996).

Perspectives

The balanced scorecard suggests that we view the organization from four perspectives, and to
develop metrics, collect data and analyze it relative to each of these perspectives:

The Learning & Growth Perspective


This perspective includes employee training and corporate cultural attitudes related to both
individual and corporate self-improvement. In a knowledge-worker organization, people -- the
only repository of knowledge -- are the main resource. In the current climate of rapid
technological change, it is becoming necessary for knowledge workers to be in a continuous
learning mode. Metrics can be put into place to guide managers in focusing training funds where
they can help the most. In any case, learning and growth constitute the essential foundation for
success of any knowledge-worker organization.

Kaplan and Norton emphasize that 'learning' is more than 'training'; it also includes things like
mentors and tutors within the organization, as well as that ease of communication among
workers that allows them to readily get help on a problem when it is needed. It also includes
technological tools; what the Baldrige criteria call "high performance work systems."



The Business Process Perspective

This perspective refers to internal business processes. Metrics based on this perspective allow the
managers to know how well their business is running, and whether its products and services
conform to customer requirements (the mission). These metrics have to be carefully designed by
those who know these processes most intimately; with our unique missions these are not
something that can be developed by outside consultants.

The Customer Perspective

Recent management philosophy has shown an increasing realization of the importance of
customer focus and customer satisfaction in any business. These are leading indicators: if
customers are not satisfied, they will eventually find other suppliers that will meet their needs.
Poor performance from this perspective is thus a leading indicator of future decline, even though
the current financial picture may look good.

In developing metrics for satisfaction, customers should be analyzed in terms of the kinds of
customers and the kinds of processes for which we are providing a product or service to those
customer groups.

The Financial Perspective

Kaplan and Norton do not disregard the traditional need for financial data. Timely and accurate
funding data will always be a priority, and managers will do whatever necessary to provide it. In
fact, often there is more than enough handling and processing of financial data. With the
implementation of a corporate database, it is hoped that more of the processing can be
centralized and automated. But the point is that the current emphasis on financials leads to the
"unbalanced" situation with regard to other perspectives. There is perhaps a need to include
additional financial-related data, such as risk assessment and cost-benefit data, in this category.

Strategy Mapping

Strategy maps are communication tools used to tell a story of how value is created for the
organization. They show a logical, step-by-step connection between strategic objectives (shown
as ovals on the map) in the form of a cause-and-effect chain. Generally speaking, improving
performance in the objectives found in the Learning & Growth perspective (the bottom row)
enables the organization to improve its Internal Process perspective Objectives (the next row up),
which in turn enables the organization to create desirable results in the Customer and Financial
perspectives (the top two rows).



 Reference: The Institute Way: Simplify Strategic Planning & Management with the
Balanced Scorecard.

Balanced Scorecard Software

The balanced scorecard is not a piece of software. Unfortunately, many people believe that
implementing software amounts to implementing a balanced scorecard. Once a scorecard has
been developed and implemented, however, performance management software can be used to
get the right performance information to the right people at the right time. Automation adds
structure and discipline to implementing the Balanced Scorecard system, helps transform
disparate corporate data into information and knowledge, and helps communicate performance
information. The Balanced Scorecard Institute formally recommends the QuickScore™
Performance Information System developed by Spider Strategies and co-marketed by the
Institute.

Activity-based costing (ABC) is a costing methodology that identifies activities in an
organization and assigns the cost of each activity, with its resources, to all products and services
according to the actual consumption by each.

It is an accounting method that identifies the activities that a firm performs, and then assigns
indirect costs to products. An activity-based costing (ABC) system recognizes the relationship
between costs, activities and products, and through this relationship assigns indirect costs to
products less arbitrarily than traditional methods.
o Tracking and allocating costs

Cost tracking is the maintenance of a historical record (electronic or paper trail) of cost-related information.

Cost allocation is a process of providing relief to shared service organization's cost centers that
provide a product or service. In turn, the associated expense is assigned to internal clients' cost
centers that consume the products and services. For example, the CIO may provide all IT
services within the company and assign the costs back to the business units that consume each
offering.

The core components of a cost allocation system consist of a way to track which organizations
provide a product and/or service, the organizations that consume the products and/or services,
and a list of portfolio offerings (e.g. service catalog). Depending on the operating structure
within a company, the cost allocation data may generate an internal invoice or feed an ERP
system's chargeback module. Accessing the data via an invoice or chargeback module is the
typical method used to drive personnel behavior. In return, the consumption data becomes a great
source of quantitative information to make better business decisions. Today’s organizations face
growing pressure to control costs and enable responsible financial management of resources. In
this environment, an organization is expected to provide services cost-effectively and deliver
business value while operating under tight budgetary constraints. One way to contain costs is to
implement a cost allocation methodology, where your business units become directly
accountable for the services they consume.

An effective cost allocation methodology enables an organization to identify what services are
being provided and what they cost, to allocate costs to business units, and to manage cost
recovery. Under this model, both the service provider and its respective consumers become
aware of their service requirements and usage and how they directly influence the costs incurred.
This information, in turn, improves discipline within the business units and financial discipline
across the entire organization. With the organization articulating the costs of services provided,
the business units become empowered – and encouraged – to make informed decisions about the
services and availability levels they request. They can make trade-offs between service levels
and costs, and they can benchmark internal costs against outsourced providers.
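
As a rough illustration of consumption-based chargeback, the Python sketch below builds an internal invoice for each business unit from a hypothetical service catalogue and usage figures; all service names, unit prices and quantities are assumptions for illustration only.

    # Minimal IT chargeback sketch (hypothetical service catalogue and usage).
    service_catalog = {
        "virtual_server": 120.0,   # monthly cost per server
        "helpdesk_ticket": 15.0,   # cost per ticket handled
        "storage_gb": 0.10,        # cost per GB per month
    }

    usage_by_unit = {
        "Finance":   {"virtual_server": 4, "helpdesk_ticket": 25, "storage_gb": 500},
        "Marketing": {"virtual_server": 2, "helpdesk_ticket": 40, "storage_gb": 1500},
    }

    def build_invoice(usage):
        # Price each consumed service and total the invoice for one business unit.
        lines = []
        for service, quantity in usage.items():
            amount = service_catalog[service] * quantity
            lines.append((service, quantity, amount))
        total = sum(amount for _, _, amount in lines)
        return lines, total

    for unit, usage in usage_by_unit.items():
        lines, total = build_invoice(usage)
        print(f"Internal invoice for {unit}:")
        for service, quantity, amount in lines:
            print(f"  {service}: {quantity} x {service_catalog[service]} = {amount:.2f}")
        print(f"  Total: {total:.2f}")

The same consumption data that produces the invoice can feed an ERP chargeback module, giving business units visibility into the cost of the service levels they request.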



TOPIC 9

CONVERSION STRATEGIES

 Conversion planning

Overview
The Conversion Plan describes the strategies involved in converting data from an existing
system to another hardware or software environment. It is appropriate to reexamine the original
system’s functional requirements for the condition of the system before conversion to determine
if the original requirements are still valid. An outline of the Conversion Plan is shown below.

INTRODUCTION
This section provides a brief description of introductory material.

Purpose and Scope


This section describes the purpose and scope of the Conversion Plan. Reference the information
system name and provide identifying information about the system undergoing conversion.

Points of Contact
This section identifies the System Proponent. Provide the name of the responsible organization
and staff (and alternates, if appropriate) who serve as points of contact for the system conversion.
Include telephone numbers of key staff and organizations.

Project References
This section provides a bibliography of key project references and deliverables that have been
produced before this point in the project development. These documents may have been
produced in a previous development life cycle that resulted in the initial version of the system
undergoing conversion or may have been produced in the current conversion effort as
appropriate.

Glossary
This section contains a glossary of all terms and abbreviations used in the plan. If it is several
pages in length, it may be placed in an appendix.

CONVERSION OVERVIEW
This section provides an overview of the aspects of the conversion effort, which are discussed in
the subsequent sections.



System Overview
This section provides an overview of the system undergoing conversion. The general nature or
type of system should be described, including a brief overview of the processes the system is
intended to support. If the system is a database or an information system, also include a general
discussion of the type of data maintained, the operational sources, and the uses of those data.

System Conversion Overview


This section provides an overview of the planned conversion effort.

Conversion Description
This section provides a description of the system structure and major components. If only
selected parts of the system will undergo conversion, identify which components will and will
not be converted.

If the conversion process will be organized into discrete phases, this section should identify
which components will undergo conversion in each phase. Include hardware, software, and data
as appropriate. Charts, diagrams, and graphics may be included as necessary. Develop and
continuously update a milestone chart for the conversion process.

Type of Conversion
This section describes the type of conversion effort. The software part of the conversion effort
usually falls into one of the following categories:

 Intra-language conversion is a conversion between different versions of the same computer language or different versions of a software system, such as a database management system (DBMS), operating system, or local area network (LAN) management system.
 Inter-language conversion is the conversion from one computer language to another or from one software system to another.
 Same compiler conversions use the same language and compiler versions. Typically,
these conversions are performed to make programs conform to standards, improve
program performance, convert to a new system concept, etc. These conversions may
require some program redesign and generally require some reprogramming.

In addition to the three categories of conversions described above, other types of conversions
may be defined as necessary.



Conversion Strategy
This section describes the strategies for conversion of system hardware, software, and data.
 Hardware Conversion Strategy
o This section describes the strategy to be used for the conversion of system hardware,
if any. Describe the new (target) hardware environment, if appropriate.
 Software Conversion Strategy
o This section describes the conversion strategy to be used for software.
 Data Conversion Strategy
o This section describes the data conversion strategy, data quality assurance, and the
data conversion controls.
 Data Conversion Approach
o This section describes the specific data preparation requirements and the data that must be available for the system conversion. If data will be transported from the original system, provide a detailed description of the data handling, conversion, and loading procedures; a brief illustrative sketch of such a procedure is given after this list. If these data will be transported using machine-readable media, describe the characteristics of those media.
 Interfaces
o In the case of a hardware platform conversion - such as mainframe to client/server -
the interfaces to other systems may need reengineering. This section describes the
affected interfaces and the revisions required in each.
 Data Quality Assurance and Control
o This section describes the strategy to be used to ensure data quality before and after
all data conversions. This section also describes the approach to data scrubbing and
quality assessment of data before they are moved to the new or converted system.
The strategy and approach may be described in a formal transition plan or a document
if more appropriate.
 Conversion Risk Factors
o This section describes the major risk factors in the conversion effort and strategies for
their control or reduction. Descriptions of the risk factors that could affect the
conversion feasibility, the technical performance of the converted system, the
conversion schedule, or costs should be included. In addition, a review should be
made to ensure that the current backup and recovery procedures are adequate as well
as operational.
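
As a rough illustration of a data handling, conversion and loading procedure with basic data scrubbing, the Python sketch below converts customer records exported from a hypothetical legacy system into the layout assumed for the new system. The file names, field names and validation rules are illustrative assumptions, not part of any particular conversion plan.

    import csv

    # Hypothetical sketch: convert legacy customer records (legacy_customers.csv)
    # into the layout expected by the new system, applying simple data-scrubbing
    # rules before loading.

    def scrub(record):
        """Apply basic data quality rules; return a cleaned record or None to reject."""
        name = record.get("CUST_NAME", "").strip()
        email = record.get("EMAIL", "").strip().lower()
        if not name or "@" not in email:
            return None                      # reject records that fail validation
        return {"legacy_id": record.get("CUST_NO", "").zfill(8),
                "full_name": name,
                "email": email}

    def convert(source_path, target_path):
        accepted, rejected = 0, 0
        with open(source_path, newline="") as src, open(target_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=["legacy_id", "full_name", "email"])
            writer.writeheader()
            for record in reader:
                cleaned = scrub(record)
                if cleaned is None:
                    rejected += 1            # rejected records go for manual review
                else:
                    writer.writerow(cleaned)
                    accepted += 1
        print(f"Converted {accepted} records, rejected {rejected} for manual review.")

    # convert("legacy_customers.csv", "new_system_customers.csv")

Keeping counts of accepted and rejected records is one simple form of the data quality assurance and control described above.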

Conversion Tasks
This section describes the major tasks associated with the conversion, including planning and
pre-conversion tasks.



Conversion Planning
This section describes planning for the conversion effort. If planning and related issues have
been addressed in other life-cycle documents, reference those documents in this section. The
following list provides some examples of conversion planning issues that could be addressed:

• Analysis of the workload projected for the target conversion environment to ensure that the
projected environment can adequately handle that workload and meet performance and
capacity requirements
• Projection of the growth rate of the data processing needs in the target environment to ensure
that the system can handle the projected near-term growth, and that it has the expansion
capacity for future needs
• Analysis to identify missing features in the new (target) hardware and software environment
that were supported in the original hardware and software and used in the original system
• Development of a strategy for recoding, reprogramming, or redesigning the components of
the system that used hardware and software features not supported in the new (target)
hardware and software environment but used in the original system

Pre-Conversion Tasks
This section describes all tasks that are logically separate from the conversion effort itself but
that must be completed before the initiation, development, or completion of the conversion
effort. Examples of such pre-conversion tasks include:

• Finalize decisions regarding the type of conversion to be pursued.
• Install changes to the system hardware, such as a new computer or communications
hardware, if necessary.
• Implement changes to the computer operating system or operating system components, such
as the installation of a new LAN operating system or a new windowing system.
• Acquire and install other software for the new environment, such as a new DBMS or document
imaging system.

Major Tasks and Procedures


This section addresses the major tasks associated with the conversion and the procedures
associated with those tasks.
Major Task Name
Provide a name for each major task. Provide a brief description of each major task required for
the conversion of the system, including the tasks required to perform the conversion, preparation
of data, and testing of the system. If some of these tasks are described in other life-cycle
documents, reference those documents in this section.



Procedures
This section should describe the procedural approach for each major task. Provide as much
detail as necessary to describe these procedures.

Conversion Schedule
This section provides a schedule of activities to be accomplished during the conversion. Pre-
conversion tasks and major tasks for all hardware, software, and data conversions described in
Section 2.3, Conversion Tasks, should be described here and should show the beginning and end
dates of each task. Charts may be used as appropriate.

Security
If appropriate for the system to be implemented, provide an overview of the system security
features and the security during conversion.

System Security Features


The description of the system security features, if provided, should contain a brief overview and
discussion of the security features that will be associated with the system when it is converted.
Reference other life-cycle documents as appropriate. Describe the changes in the security
features or performance of the system that would result from the conversion.

Security During Conversion


This section addresses security issues specifically related to the conversion effort.

CONVERSION SUPPORT
This section describes the support necessary to implement the system. If there are additional
support requirements not covered by the categories shown here, add other subsections as needed.

Hardware
This section lists support equipment, including all hardware to be used for the conversion.

Software
This section lists the software and databases required to support the conversion. It describes all
software tools used to support the conversion effort, including the following types of software
tools, if used:

• Automated conversion tools, such as software translation tools for translating among
different computer languages or translating within software families (such as, between
release versions of compilers and DBMSs)



• Automated data conversion tools for translating among data storage formats associated with
the different implementations (such as, different DBMSs or operating systems)
• Quality assurance and validation software for the data conversion that are automated testing
tools
• Computer-aided software engineering (CASE) tools for reverse engineering of the existing
application
• CASE tools for capturing system design information and presenting it graphically
• Documentation tools such as cross-reference lists and data attribute generators
• Commercial off-the-shelf software and software written specifically for the conversion effort

Facilities
This section identifies the physical facilities and accommodations required during the conversion
period.

Materials
This section lists support materials.

Personnel
This section describes personnel requirements and any known or proposed staffing, if
appropriate. Also describe the training, if any, to be provided for the conversion staff.

Personnel Requirements and Staffing


This section describes the number of personnel, length of time needed, types of skills, and skill
levels for the staff required during the conversion period.

Training of Conversion Staff


This section addresses the training, if any, necessary to prepare the staff for converting the
system. It should provide a training curriculum, which lists the courses to be provided, a course
sequence, and a proposed schedule. If appropriate, it should identify by job description which courses should be attended by particular types of staff. Training for users in the operation of the system is not presented in this section, but is normally included in the Training Plan.

 Parallel running
Parallel conversion: The new system is introduced while the old one is still in use.
Both systems process all activity and the results are compared. Once there is confidence that the
new one operates properly, the old one is shut down. Because parallel conversion isn’t as useful
today as some people believe (and many authors suggest), it is discussed separately below.



 Direct cut over
Direct cutover: an entire organization stops using the old system at one time and begins using
the new one immediately thereafter. (This may be after a natural activity break such as
overnight.) The organization ends one day using the old system, begins the next day with the
new.
This is the riskiest method: Lee (2004) discusses how the Nevada Department of Motor Vehicles
gave itself problems by using it. Other methods reduce this risk.

 Pilot study

Pilot conversion: Part of an organization uses the new system while the rest of it continues to
use the old. This localizes problems to the pilot group so support resources can focus on it.
However, there can be interface issues where organizational units share data.

 Phased approach

Phased (modular) conversion: Part of the new system is introduced while the rest of the old
one remains in use. This localizes problems to the new module so support resources can focus on
it. However, there can be interface issues where modules share data.



TOPIC 10

DOCUMENTATION AND COMMISSIONING

 Objectives of systems documentation


 Use of systems documentation

Requirements

The question of what the purpose of system documentation is can be difficult to answer. Below are some possible uses of system documentation.

a) Introduction / overview

System documentation can provide an introduction and overview of systems. New administrators, contractors and other staff may need to familiarize themselves with a system; the first thing that will be requested is any system documentation. To avoid staff having to waste time discovering the purpose of a system, how it is configured, etc., system documentation should provide an introduction.

b) Disaster Recovery

Many systems are supported by disaster recovery arrangements, but even in such circumstances,
the recovery can still fail. There may be a need to re-build a system from scratch at least to the
point where a normal restore from backup can be done. To make a rebuild possible it will be
necessary to have documentation that provides answers to the configuration choices. For
example it is important to re-build the system with the correct size of file systems to avoid trying
to restore data to a file system that has been made too small. In some circumstances certain
parameters can be difficult to change at a later date. When rebuilding a system it may be
important to configure networking with the original parameters both to avoid conflicts with other
systems on the network and because these may be difficult to change subsequently.

c) OS or Application re-load

Even when a disaster has not occurred, there may be times when it is necessary to reload an
Operating System or Application; this can either be as part of a major version upgrade or a
drastic step necessary to solve problems. In such circumstances it is important to know how an
OS or Application has been configured.



d) Trouble shooting aid

The benefits of good system documentation, when trouble shooting, are fairly obvious. A
comprehensive description of how a system should be configured and should behave can be
invaluable when configuration information has become corrupted, when services have failed, or
components have failed.

Good system documentation will include a description of the physical hardware and its physical
configuration, which can avoid the need to shut down a system in order to examine items such as
jumper settings.

e) Planning tool

When planning changes or upgrades it will be necessary to assess the impact of changes on
existing systems. A good understanding of existing systems is necessary for assessing the impact
of any changes and for this good system documentation is required.

f) Other

System documentation can be used for many purposes including Auditing, Inventory,
Maintenance, etc. The documentation of individual systems forms an important component of
the overall network documentation.

Automating the creation of system documentation

Most operating systems include tools to report important system information and often the output
from such tools can be re-directed to a file; this provides a means of automating the creation of
system documentation.

Many Administrators have neither the time nor inclination to produce System Documentation
and given the importance of keeping such documentation current, automation of the creation of
system documentation is very desirable.

There are limitations to the extent to which system documentation can be automated. The
following cannot be documented automatically:

 Textual descriptions that Administrators may create.


 Hardware configuration items that cannot be detected through software (perhaps such as
which slot on a bus a card occupies or how jumpers are set).
 Other items that cannot be detected through software such as serial numbers, the physical
location of a system etc.



The documentation of items that cannot be documented automatically can be facilitated by such
means as having standard template documents and creating such documentation when systems
are first installed. Fortunately this kind of documentation is less susceptible to change and is
often less critical. It may be useful to create a simple file in a standard location (perhaps root) on
systems that contains "external" information such as serial numbers, asset numbers etc.

Most Operating Systems have tools for automating their deployment (e.g. unattend.txt for NT,
JumpStart for Solaris, Kickstart for Redhat Linux etc.). Although these tools are primarily
intended for deploying large numbers of systems they can be used for individual systems. While
the configuration scripts used in these tools are not very readable, they are in text form, and can
be read by technical staff. Such scripts offer the great advantage that a system is documented in
an unambiguous way that guarantees that the system can be rebuilt exactly the same way it was
first built.

For most systems it should be possible to create a simple script (or batch file) that uses several system tools to report system information and output the results to a file. The use of such tools together with simple print commands (e.g. echo "line of text") can readily produce a useful document. It should be possible to adapt such scripts to produce the documentation required in terms of content and level of detail.
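
As a minimal sketch of this idea, the Python script below gathers a few facts that the operating system can report and writes them to a text file. A real script would add disk, network and installed-software details using OS-specific tools; the output file name used here is only an assumption.

    import datetime
    import platform
    import socket

    # Minimal sketch of automated system documentation: gather basic facts that
    # the operating system can report and write them to a text file.

    def gather_system_facts():
        return {
            "Hostname": socket.gethostname(),
            "Operating system": f"{platform.system()} {platform.release()}",
            "Architecture": platform.machine(),
            "Python version": platform.python_version(),
            "Generated on": datetime.datetime.now().isoformat(timespec="seconds"),
        }

    def write_documentation(path="system_documentation.txt"):
        facts = gather_system_facts()
        with open(path, "w") as doc:
            doc.write("SYSTEM DOCUMENTATION (auto-generated)\n")
            doc.write("=" * 40 + "\n")
            for label, value in facts.items():
                doc.write(f"{label}: {value}\n")

    if __name__ == "__main__":
        write_documentation()

Running such a script on a schedule keeps the automatically generated part of the documentation current with little administrator effort.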

If standard tools do not provide sufficient information, there are many third-party tools and free tools that can be used; however, the potential problems of using additional tools, rather than just "built-in" tools, have to be weighed against the advantages. Alternative scripting languages (such as Perl) may provide additional benefits.

 Qualities of a good documentation

Characteristics

Deciding what the characteristics of good system documentation are can be difficult. Some desirable characteristics are described below:

Created for intended audience

The documentation should be created for the intended audience. While it may be appropriate for
a "Management overview" to be created for non-technical people, most system documentation
will be used by System Administrators and other technical people as an important reference.
System documentation should provide sufficient technical detail.



Specific

The system documentation should describe the specific implementation of a given system rather
than provide generic documentation.

Relationship with other documentation

System documentation should be reasonably self-contained; however, it will often be a component of a wider collection of documentation and it is reasonable for it to reference other documents. There is little value in duplicating information from vendors' manuals.

Up to date

The documentation needs to be up to date, but does not necessarily have to be recent. If the
system has remained completely unchanged for a long period of time, the documentation can
remain unchanged for the same period of time. It is important that when systems are changed
documentation is updated to reflect the changes and this should be part of any change control
procedures.

Sufficiently comprehensive

Documentation needs to be sufficiently comprehensive to fulfil its purpose.

Accessible

The documentation must be held in a location and format that makes it accessible. It is obviously
unacceptable to have the only copy of a system's documentation held on a drive that has failed or
on the system itself should it fail.

It is very desirable to hold the documentation in a universal standard format that does not require
access to a particular word processor; ASCII text may be most suitable.

Secure

Because system documentation could be useful to troublemakers, thought may need to be given
to controlling access to the documentation.

Understanding your organisation's level of documentation debt can be difficult, but knowing the standard of good documentation can aid in establishing the quality of your documentation and identifying areas of concern.



Docfacto has identified five qualities of good documentation, which, when in place, eliminate
documentation debt:

1. Coverage: Code that is and is not documented is easily identifiable.


2. Accuracy: The code comments accurately describe the code reflecting the last set of
source code changes.
3. Clarity: The system documentation describes what the code does and why it is written
that way.
4. Maintainability: A single source is maintained to handle multiple output formats,
product variants, localization or translation.
5. Synchronization: The code and documentation are linked to keep them in sync.

Any of these qualities missing from your documentation is a sign of documentation debt.

Coverage

Code that is and is not documented is easily identifiable.

It is essential to know which parts of the code are documented and which are not. Releasing code to other team members, maintenance teams or outside your organisation without understanding the level of documentation coverage can dramatically reduce productivity or increase support issues.

This can affect developers, taking them away from coding new features to fixing documentation or getting involved in support. It also has an impact on on-boarding new developers.
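
One way to make documentation coverage visible, sketched here in Python under the assumption that the code base is also Python, is to parse a source file and list the functions and classes that lack docstrings. The module name in the commented call is hypothetical.

    import ast

    # Illustrative documentation-coverage check: parse a Python source file and
    # report which functions and classes have no docstring.

    def documentation_coverage(path):
        with open(path) as src:
            tree = ast.parse(src.read(), filename=path)
        documented, undocumented = [], []
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                target = documented if ast.get_docstring(node) else undocumented
                target.append(node.name)
        total = len(documented) + len(undocumented)
        coverage = (len(documented) / total * 100) if total else 100.0
        print(f"Documented: {documented}")
        print(f"Undocumented: {undocumented}")
        print(f"Coverage: {coverage:.0f}%")

    # documentation_coverage("my_module.py")

A report like this makes it easy to see, before a release, exactly which parts of the code still need documentation.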

Accuracy

The code comments accurately describe the code reflecting the last set of source code changes.

Code documentation should accurately reflect the actual code for APIs, classes, methods or functions. For example, comments in your structured code documentation, like Javadoc, should accurately reflect the method signature of defined methods, with the specific parameter types and the specific return value type.
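
The text refers to Javadoc; the same principle can be sketched with a Python docstring, where the documented parameters and return value must match the actual signature. The function below is a hypothetical example: if the signature changes and the docstring does not, the documentation becomes inaccurate.

    def monthly_repayment(principal: float, annual_rate: float, months: int) -> float:
        """Return the fixed monthly repayment for a loan.

        Args:
            principal: amount borrowed, in currency units.
            annual_rate: nominal annual interest rate, e.g. 0.12 for 12%.
            months: number of monthly repayments.

        Returns:
            The monthly repayment amount as a float.
        """
        monthly_rate = annual_rate / 12
        if monthly_rate == 0:
            return principal / months
        factor = (1 + monthly_rate) ** months
        return principal * monthly_rate * factor / (factor - 1)

Here the docstring names exactly the three parameters in the signature and describes the returned value, which is what "accuracy" means in practice for code documentation.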

Checking the accuracy of code documentation for each class, method or function within a project is difficult, so it ends up being left to the individual developer. Inaccuracies usually surface when another developer reports a problem after spending days trying to make their code conform and failing, or when a fix interacts with other parts of the software in unexpected ways, causing new bugs. Adding new features also becomes risky and highly likely to cause new bugs.

Accuracy also depends on the documentation reflecting the last set of source code changes. Modern source code management tools make it easy to track code changes across a project and see the impact of a change in one part on another part of the code. With documentation it is completely different, because tracking whether a change in the code makes the document out of date is difficult.

Releasing documentation that does not reflect the functionality of the code might cause some
level of confusion for the user, but in the case of a library or API used by developers it can have
a significant impact on support, reputation and the adoption of code.

Clarity

The system documentation describes what the code does and why it is written that way.

Comments in source code are an important record of what the code does, but often do not explain why it was written that way or how it was implemented by a developer. Without this "why" it is often difficult to understand why a previous developer (or even you) took a certain approach when writing the code, and this can lead to unnecessary refactoring, only to realize the reason and scrap the refactor mid-process.

Part of the problem explaining the “why” is that it’s too difficult and time consuming to explain
in code documentation that only uses characters, words and symbols and lacks enriched,
diagrammatic means. Most developers would prefer to draw diagrams or [x] and embed them
into the code. Visual diagrams, photos or videos could provide key insights needed into the
“why”.

Maintainability

A single source is maintained to handle multiple output formats, product variants, localization or translation.

Producing help documentation would be simple if only one format and one language was needed
for a single product, but with trends like the growth in mobile applications making international
markets easily accessible the requirements of documentation have become more complex,
needing multiple output formats in multiple languages.

Managing multiple copies for each product, language or output can be a nightmare for any
development team, especially when changes are needed. The solution is to have a single source
for all documentation; we think that source is best at the level of the code.

Synchronization

The code and documentation are linked automatically to keep them in sync.



Many organisations treat documentation as a separate manual process that runs alongside development or is done after the coding has finished.

The problem with this approach is that the tools used cannot provide visibility into the level of documentation debt within a project, the developer does not link their code to the relevant documentation, and the process is exposed to all the flaws inherent in a non-automated process. However, when the developer is at the heart of the process, automatically creating links between the code and the documentation, the two become synchronized, providing the visibility needed to eliminate documentation debt.


 Types of documentation

Software documentation, also referred to as source code documentation, is text that describes computer software. It explains how software works, but it can also explain how to use the software properly. Several types of software documentation exist and can be classified into:

User Documentation

Also known as software manuals, user documentation is intended for end users and aims to help
them use the software properly. It is usually arranged in a book style and typically also features a table of contents, an index and, of course, the body, which can be arranged in different ways, depending on
whom the software is intended for. For example, if the software is intended for beginners, it
usually uses a tutorial approach and guides the user step-by-step. Software manuals which are
intended for intermediate users, on the other hand, are typically arranged thematically, while
manuals for advanced users follow reference style.

Besides printed version, user documentation can also be available in an online version or PDF
format. Often, it is also accompanied by additional documentation such as video tutorials,
knowledge based articles, videos, etc.

Requirements Documentation

Requirements documentation, also referred to simply as requirements, explains what the software does and shall be able to do. Several types of requirements exist which may or may not be included in documentation, depending on the purpose and complexity of the system. For example,
applications that don’t have any safety implications and aren’t intended to be used for a longer
period of time may be accompanied by little or no requirements documentation at all. Those that
can affect human safety and/or are created to be used over a longer period of time, on the other hand, come with exhaustive documentation.

Architecture Documentation

Also referred to as a software architecture description, architecture documentation either analyses software architectures or communicates the results of the latter (a work product). It mainly deals with technical issues, but it also covers non-technical issues in order to provide guidance to system developers, maintenance technicians and others involved in the development or use of the architecture, including end users. Architecture
documentation is usually arranged into architectural models which in turn may be organized into
different views, each of which deals with specific issues.

A comparison document is closely related to architecture documentation. It addresses the current situation and proposes alternative solutions with the aim of identifying the best possible outcome. In order to do that, it requires extensive research.

Technical Documentation

Technical documentation is a very important part of software documentation, and many programmers use both terms interchangeably despite the fact that technical documentation is only one of several types of software documentation. It describes code, but it also addresses algorithms, interfaces and other technical aspects of software development and application. Technical documentation is usually created by the programmers with the aid of auto-generating tools.

 Software commissioning
The process by which equipment, a facility, or a plant (which is installed, or is complete or near completion) is tested to verify that it functions according to its design objectives or specifications.



TOPIC 11

SOFTWARE MAINTENANCE AND EVOLUTION

Software maintenance is now widely accepted as part of the SDLC. It stands for all the modifications and updates done after the delivery of a software product. There are a number of reasons why modifications are required; some of them are briefly mentioned below:

 Market Conditions - Policies which change over time, such as taxation, and newly introduced constraints, such as how to maintain bookkeeping, may trigger the need for modification.
 Client Requirements - Over time, the customer may ask for new features or functions in the software.
 Host Modifications - If any of the hardware and/or platform (such as the operating system) of the target host changes, software changes are needed to maintain adaptability.
 Organization Changes - If there is any business-level change at the client end, such as reduction of organization strength, acquisition of another company, or the organization venturing into new business, the need to modify the original software may arise.

Types of maintenance

In a software lifetime, the type of maintenance may vary based on its nature. It may be just a routine maintenance task, such as fixing a bug discovered by a user, or it may be a large event in itself, based on the size or nature of the maintenance. Following are some types of maintenance based on their characteristics:

 Corrective Maintenance - This includes modifications and updates done in order to correct or fix problems, which are either discovered by users or concluded from user error reports.
 Adaptive Maintenance - This includes modifications and updates applied to keep the software product up to date and tuned to the ever-changing world of technology and the business environment.
 Perfective Maintenance - This includes modifications and updates done in order to keep the software usable over a long period of time. It includes new features and new user requirements for refining the software and improving its reliability and performance.
 Preventive Maintenance - This includes modifications and updates to prevent future problems with the software. It aims to attend to problems which are not significant at the moment but may cause serious issues in future.



Cost of Maintenance

Reports suggest that the cost of maintenance is high. A study on estimating software maintenance found that the cost of maintenance is as high as 67% of the cost of the entire software process cycle.

On average, the cost of software maintenance is more than 50% of the cost of all SDLC phases. There are various factors which drive the cost of maintenance higher, such as:

Real-world factors affecting Maintenance Cost

 The standard age of any software is considered to be up to 10 to 15 years.
 Older software, which was meant to work on slow machines with less memory and storage capacity, cannot compete with newly developed, enhanced software running on modern hardware.
 As technology advances, it becomes costly to maintain old software.
 Most maintenance engineers are newcomers and use trial-and-error methods to rectify problems.
 Often, changes made can easily hurt the original structure of the software, making it hard for any subsequent changes.
 Changes are often left undocumented, which may cause more conflicts in future.

Software-end factors affecting Maintenance Cost

 Structure of Software Program


 Programming Language
 Dependence on external environment
 Staff reliability and availability



Maintenance Activities

IEEE provides a framework for sequential maintenance process activities. It can be used in an iterative manner and can be extended so that customized items and processes can be included.

These activities go hand-in-hand with each of the following phases:

 Identification & Tracing - This involves activities pertaining to identification of the requirement for modification or maintenance. It is generated by the user, or the system may itself report it via logs or error messages. The maintenance type is also classified here.
 Analysis - The modification is analyzed for its impact on the system, including safety and security implications. If the probable impact is severe, an alternative solution is looked for. A set of required modifications is then materialized into requirement specifications. The cost of modification/maintenance is analyzed and an estimate is concluded.
 Design - New modules, which need to be replaced or modified, are designed against the requirement specifications set in the previous stage. Test cases are created for validation and verification.
 Implementation - The new modules are coded with the help of the structured design created in the design step. Every programmer is expected to do unit testing in parallel.
 System Testing - Integration testing is done among the newly created modules. Integration testing is also carried out between the new modules and the system. Finally the system is tested as a whole, following regression testing procedures.
 Acceptance Testing - After testing the system internally, it is tested for acceptance with the help of users. If at this stage users report some issues, they are addressed or noted to be addressed in the next iteration.



 Delivery - After acceptance testing, the system is deployed all over the organization, either by a small update package or a fresh installation of the system. The final testing takes place at the client end after the software is delivered.

Training is provided if required, in addition to the hard copy of the user manual.

 Maintenance management - Configuration management is an essential part of system maintenance. It is aided by version control tools to manage versions, semi-versions or patches.

Software Re-engineering

When we need to update the software to keep it current for the market, without impacting its functionality, it is called software re-engineering. It is a thorough process where the design of the software is changed and programs are re-written.

Legacy software cannot keep tuning with the latest technology available in the market. As the hardware becomes obsolete, updating the software becomes a headache. Even if software grows old with time, its functionality does not.

For example, Unix was initially developed in assembly language. When the C language came into existence, Unix was re-engineered in C, because working in assembly language was difficult.

Other than this, sometimes programmers notice that a few parts of the software need more maintenance than others, and these parts also need re-engineering.



Re-Engineering Process

 Decide what to re-engineer. Is it the whole software or a part of it?
 Perform Reverse Engineering, in order to obtain specifications of the existing software.
 Restructure the program if required. For example, changing function-oriented programs into object-oriented programs.
 Re-structure data as required.
 Apply Forward Engineering concepts in order to get the re-engineered software.

There are a few important terms used in software re-engineering:

Reverse Engineering

It is a process to achieve system specification by thoroughly analyzing and understanding the existing system. This process can be seen as a reverse SDLC model, i.e. we try to get a higher abstraction level by analyzing lower abstraction levels.

An existing system is a previously implemented design, about which we know nothing. Designers then do reverse engineering by looking at the code and trying to recover the design. With the design in hand, they try to conclude the specifications. Thus, they go in reverse from code to system specification.

Program Restructuring

It is a process to re-structure and re-construct the existing software. It is all about re-arranging the source code, either in the same programming language or from one programming language to a different one. Restructuring can involve source code restructuring, data restructuring, or both.

Re-structuring does not impact the functionality of the software but enhances reliability and maintainability. Program components which cause errors very frequently can be changed or updated with re-structuring.

The dependency of the software on an obsolete hardware platform can be removed via re-structuring.



Forward Engineering

Forward engineering is a process of obtaining the desired software from the specifications in hand, which were brought down by means of reverse engineering. It assumes that there was some software engineering already done in the past.

Forward engineering is the same as the software engineering process, with only one difference – it is always carried out after reverse engineering.

Component Reusability

A component is a part of software program code which executes an independent task in the system. It can be a small module or a sub-system itself.

Example

The login procedures used on the web can be considered as components; the printing system in software can be seen as a component of the software.

Components have high cohesion of functionality and a lower rate of coupling, i.e. they work independently and can perform tasks without depending on other modules.

In OOP, objects are designed to be very specific to their concern and have fewer chances of being used in some other software.

In modular programming, the modules are coded to perform specific tasks which can be used across a number of other software programs.

There is a whole new vertical which is based on the re-use of software components, and it is known as Component Based Software Engineering (CBSE).



Re-use can be done at various levels:

 Application level - Where an entire application is used as a sub-system of new software.
 Component level - Where a sub-system of an application is used.
 Modules level - Where functional modules are re-used.

Software components provide interfaces, which can be used to establish communication among different components.
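
As a rough illustration of a component exposing a narrow interface, the Python sketch below defines a small credential-checking component that other modules could reuse through its register() and verify() methods without knowing its internals. The class, its methods and the hashing choice are hypothetical illustrations, not a production-ready login component (a real one would, for example, salt its password hashes).

    import hashlib

    # Hypothetical reusable component: a credential checker with a narrow interface.
    # Other modules interact only through register() and verify(), so the component
    # can be reused across applications without coupling to their internals.

    class CredentialChecker:
        def __init__(self):
            self._store = {}          # username -> hashed password (internal detail)

        def register(self, username: str, password: str) -> None:
            self._store[username] = self._hash(password)

        def verify(self, username: str, password: str) -> bool:
            return self._store.get(username) == self._hash(password)

        @staticmethod
        def _hash(password: str) -> str:
            return hashlib.sha256(password.encode("utf-8")).hexdigest()

    # Usage: any application can reuse the component through its public interface.
    checker = CredentialChecker()
    checker.register("alice", "s3cret")
    print(checker.verify("alice", "s3cret"))   # True
    print(checker.verify("alice", "wrong"))    # False

Because callers depend only on the two public methods, the component shows the high cohesion and low coupling described above.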

Reuse Process

Two kinds of method can be adopted: either keeping the requirements the same and adjusting the components, or keeping the components the same and modifying the requirements.

 Requirement Specification - The functional and non-functional requirements which a software product must comply with are specified, with the help of the existing system, user input or both.
 Design - This is also a standard SDLC process step, where requirements are defined in terms of software parlance. The basic architecture of the system as a whole and of its sub-systems is created.



 Specify Components - By studying the software design, the designers segregate the entire system into smaller components or sub-systems. One complete software design turns into a collection of a huge set of components working together.
 Search Suitable Components - The software component repository is referred to by designers to search for matching components, on the basis of functionality and the intended software requirements.
 Incorporate Components - All matched components are packed together to shape them into complete software.

 Types of software changes

Software evolution is the term used in software engineering (specifically software maintenance)
to refer to the process of developing software initially, then repeatedly updating it for various
reasons.

General introduction

Fred Brooks, in his key book The Mythical Man-Month, states that over 90% of the costs of a
typical system arise in the maintenance phase, and that any successful piece of software will
inevitably be maintained.

In fact, Agile methods stem from maintenance-like activities in and around web-based technologies, where the bulk of the capability comes from frameworks and standards.

Software maintenance addresses bug fixes and minor enhancements, while software evolution focuses on adaptation and migration.

Impact

The aim of software evolution would be to implement (and revalidate) the possible major
changes to the system without being able a priori to predict how user requirements will evolve.
The existing larger system is never complete and continues to evolve. As it evolves, the
complexity of the system will grow unless there is a better solution available to solve these
issues. The main objectives of software evolution are ensuring the reliability and flexibility of
the system. During the past 20 years, the lifespan of a system could be on average 6–10 years. However, it was recently found that a system should be evolved once every few months to ensure it is adapted to the real-world environment. This is due to the rapid growth of the World Wide Web and Internet resources that make it easier for users to find related information. The idea of
software evolution leads to open source development, as anybody can download the source code and hence modify it. The positive impact in this case is that large numbers of new ideas are discovered and generated, which helps the system achieve better improvement through a greater variety of choices.

Changes in software evolution models and theories

Over time, software systems, programs as well as applications, continue to develop. These
changes will require new laws and theories to be created and justified. Some models as well
would require additional aspects in developing future programs. Innovations and improvements
do increase unexpected form of software development. The maintenance issues also would
probably change as to adapt to the evolution of the future software. Software process and
development are an ongoing experience that has a never-ending cycle. After going through
learning and refinements, it is always an arguable issue when it comes to matter of efficiency and
effectiveness of the programs.

Types of software maintenance

E.B. Swanson initially identified three categories of maintenance: corrective, adaptive, and perfective. Four categories of maintenance were then catalogued by Lientz and Swanson (1980). These have since been updated and normalized internationally in the ISO/IEC 14764:2006 standard:

 Corrective maintenance: Reactive modification of a software product performed after


delivery to correct discovered problems;
 Adaptive maintenance: Modification of a software product performed after delivery to
keep a software product usable in a changed or changing environment;
 Perfective maintenance: Modification of a software product after delivery to improve
performance or maintainability;
 Preventive maintenance: Modification of a software product after delivery to detect and
correct latent faults in the software product before they become effective faults.

All of the preceding take place when there is a known requirement for change.

Although these categories were supplemented by many authors like Warren et al. (1999) and
Chapin (2001), the ISO/IEC 14764:2006 international standard has kept the basic four
categories.

More recently the description of software maintenance and evolution has been done using
ontologies, which enrich the description of the many evolution activities.

Stage model

Current trends and practices are projected forward using a new model of software evolution called the staged model. The staged model was introduced to replace conventional analysis, which is less suitable for modern software development that changes rapidly and makes it hard to contribute to software evolution. There are five distinct stages in the simple staged model (Initial development, Evolution, Servicing, Phase-out, and Close-down).

 According to K.H. Bennett and V.T. Rajlich, the key contribution is to separate the 'maintenance' phase into an evolution stage followed by servicing and phase-out stages. The first version of the software system, which lacks some features, is developed during initial development, also known as the alpha stage. However, the architecture put in place during this stage carries through to any future changes or amendments. Most references in this stage are based on scenarios or case studies. Knowledge is defined as another important outcome of initial development. Such knowledge includes knowledge of the application domain, user requirements, business rules, policies, solutions, algorithms, etc. Knowledge is also seen as an important factor for the subsequent phase of evolution.
 Once the previous stage has completed successfully (and it must be completed successfully before entering the next stage), the next stage is evolution. Users tend to change their requirements, and they prefer to see improvements or changes. Due to this factor, the software industry faces the challenge of a rapidly changing environment. Hence the goal of evolution is to adapt the application to the ever-changing user requirements and operating environment. The first version of the application created during the previous stage might contain a lot of faults, and those faults are fixed during the evolution stage based on more specific and accurate requirements derived from the case studies or scenarios.
 The software will continuously evolve until it is no longer evolvable and then enters the stage of servicing (also known as software maturity). During this stage, only minor changes are made.
 In the next stage, which is phase-out, there is no more servicing available for that particular software; however, the software is still in production.
 Lastly, close-down: use of the software is discontinued and the users are directed towards a replacement.

Lehman's Laws of Software Evolution

Prof. Meir M. Lehman, who worked at Imperial College London from 1972 to 2002, and his
colleagues have identified a set of behaviours in the evolution of proprietary software. These
behaviours (or observations) are known as Lehman's Laws, and there are eight of them:

1. (1974) "Continuing Change" — an E-type system must be continually adapted or it becomes progressively less satisfactory
2. (1974) "Increasing Complexity" — as an E-type system evolves, its complexity increases
unless work is done to maintain or reduce it



3. (1974) "Self-Regulation" — E-type system evolution processes are self-regulating with the distribution of product and process measures close to normal
4. (1978) "Conservation of Organisational Stability (invariant work rate)" - the average
effective global activity rate in an evolving E-type system is invariant over the product's
lifetime
5. (1978) "Conservation of Familiarity" — as an E-type system evolves, all associated with
it, developers, sales personnel and users, for example, must maintain mastery of its
content and behaviour to achieve satisfactory evolution. Excessive growth diminishes
that mastery. Hence the average incremental growth remains invariant as the system
evolves.
6. (1991) "Continuing Growth" — the functional content of an E-type system must be
continually increased to maintain user satisfaction over its lifetime
7. (1996) "Declining Quality" — the quality of an E-type system will appear to be declining
unless it is rigorously maintained and adapted to operational environment changes
8. (1996) "Feedback System" (first stated 1974, formalized as law 1996) — E-type evolution processes constitute multi-level, multi-loop, multi-agent feedback systems and must be treated as such to achieve significant improvement over any reasonable base.

It is worth mentioning that the applicability of all of these laws to all types of software systems has been studied by several researchers. For example, see a presentation by Nanjangud C Narendra where he describes a case study of an enterprise Agile project in the light of Lehman's laws of software evolution. Some empirical observations coming from the study of open source software development appear to challenge some of the laws.

The laws predict that the need for functional change in a software system is inevitable, and not a
consequence of incomplete or incorrect analysis of requirements or bad programming. They state
that there are limits to what a software development team can achieve in terms of safely
implementing changes and new functionality.

Maturity models specific to software evolution have been developed to improve processes, and they help to ensure continuous rejuvenation of the software as it evolves iteratively.

The "global process" that is made by the many stakeholders (e.g. developers, users, their
managers) has many feedback loops. The evolution speed is a function of the feedback loop
structure and other characteristics of the global system. Process simulation techniques, such as
system dynamics can be useful in understanding and managing such global process.

Software evolution is not likely to be Darwinian, Lamarckian or Baldwinian, but an important phenomenon on its own. Given the increasing dependence on software at all levels of society and
economy, the successful evolution of software is becoming increasingly critical. This is an
important topic of research that hasn't received much attention.



The evolution of software, because of its rapid path in comparison to other man-made entities,
was seen by Lehman as the "fruit fly" of the study of the evolution of artificial systems.

 Software change identification

Problems related to software change management tend to occur in six areas: analysis and identification related problems, communication issues, decision-making challenges, effectiveness roadblocks, traceability issues and problems with tools. If we examine each of these we can see a number of important issues that frequently crop up, especially within third-generation languages (3GL).

This section focuses on a high-level review of analysis and identification related problems in software change management. How can we identify and analyze problems in our software to best understand and realize where we have errors that require correction? More importantly, how can we change our approach to application development in a way that reduces the impact and likelihood of errors?

Analysis and identification problems can be seen in several areas. First of all, problems with
analysis and identification are driven by concurrent and parallel development approaches. The
problems occur because with concurrent efforts it becomes more difficult to determine root
causes of program errors. This is exacerbated by the fact that standalone testing does not find the
problems leading to the error conditions. Solutions can be found by reducing the number of
developers, engaging in more frequent cycles (such as with agile development or SCRUM),
testing without compiling and ultimately by delegating more basic functions to an application
platform in a post-3GL approach.

Another factor driving analysis and identification problems is code optimization. For one thing,
optimized code, especially optimized C, C# and C++ code is very difficult to understand. In
addition, with optimized code, object oriented development tends to create a ripple effect that is
not apparent in typical source. Code optimization issues can be avoided by leveraging pre-
optimized code, i.e., avoiding heavy 3GL development projects with more advanced
development platforms.

A third factor leading to analysis and identification problems comes from the use of shared software components. Here we see impacts across the code base and ripple effects. These can be avoided through better pre-planning, wise use of inheritance principles and by leveraging a platform rather than resorting to line-by-line coding.

The need for high reliability makes the problem of analysis and identification of the impact of
software changes particularly important. It is difficult to predict the impact of changes and at
times corrective actions may seem difficult or impossible. Avoid this sense of being
overwhelmed by engaging in iterative development and testing. Make use of an application platform to better overcome challenges in the analysis and identification of software change management problems.

 Software change implementation

Overcoming organizational politics and resistance to change is a daunting challenge for any
organization implementing new software systems. First, managers have to work hard at agreeing
on the initiative and deciding what would be best for the organization as a whole and not only for
their particular area of expertise. Once on the same page and all politics are set aside, they have
to collaboratively deal with their staff.

Resistance to change is an ongoing problem at both the individual and the organizational level, as it "impairs concerted efforts to improve performance." Management and the project leadership team must find ways to work with this resistance, overcome it, and successfully carry out their new vision. The key to doing so is change management.

As defined by Nancy Lorenzi and Robert Riley, change management is "the process by which an organization gets to its future state, its vision." What makes it different from traditional approaches to planning is that while they "delineate the steps on the journey," change management, by contrast, "attempts to facilitate that journey."

Change management is essential to any successful software implementation process. It is
required to prepare users for the introduction of the new system, to reduce resistance toward the
system, and to influence user attitudes toward that system. Research suggests that attention to the
following 10 tactics can help to avoid unsuccessful implementations and failed investments for
the enterprise.

1) Top Management Support. The foundation of any successful organizational change revolves
around the leadership team, their ability to set politics aside, and to collaborate, agree, and
commit to the change process. Top management support should be included in each step of the
implementation and at all organizational levels.[3] When there is consistent managerial backing at
every level, the entire workforce is being driven toward the common goal of accepting and
adapting to the new system. Effective leadership can sharply reduce the behavioral resistance to
change, especially when dealing with new technologies.

2) Project Team. Teamwork is important when implementing new technologies; similarly, it is
also necessary to support change management processes.[3] Cross-functional teams dedicated to
managing the institution of change are strongly preferred in effective software implementation.
Project teams ensure that the implementation is not lost or forgotten about, and they continuously
offer assistance with the rollout of the new system implementation.



3) Project Champion. The presence of a champion is a critical factor for success in managing
change because of the strong influence they have on the change process within the organization.
Designating someone with authority to support and motivate the new initiative is a good strategy
used to remove cross-departmental political obstacles, strengthen the new implementation, and
signal its importance to staff.

4) Systematic Planning. The presence of a clear plan for change is a great way to boost software
implementation projects. A project vision specifies what the implementation project is meant to
achieve and how it can positively affect the organization and staff. Additionally, assessing the
readiness for change and developing a formal strategy allow for better planning and smoother
implementation.

After a clear vision is established, the leadership team must analyze their organization and assess
its readiness for change by analyzing the culture and behavior of the staff and overall
organization. If the leadership team takes the time to assess their staff and determine the
organization’s readiness for change, they can deal with the implementation and resistance from
staff much more effectively. They can also identify the key drivers of change and tie them into
all areas of the workplace, so that all staff members remain aligned to objectives.

5) Broad Participation. A company wants to engage staff within the whole life cycle of
implementation in order to keep them in the loop and responsive. As Lorenzi and Riley note,
“People who have low psychological ownership in a system and who vigorously resist its
implementation can bring a ‘technically best’ system to its knees.”

Sidney Fuchs of IBM notes that to ensure that all staff adopt a specific change, they must feel
the demand for it. It is critical, therefore, to “make sure each person understands the problems
you are addressing and has a feeling of ownership for the solutions you’re proposing.”[5] If
management can figure out how specific end users will benefit from the new system and convey
that to those users, they will strengthen the project significantly. The project leadership team
needs to work carefully and strategically to overcome resistance to change among staff and lead
a successful implementation process by enhancing user involvement in the process.

6) Effective Communication. Before and during any software implementation, meaningful and
effective communication at all levels of the organization is essential. This is mostly because
substantial communication allows for strong teamwork, effective planning, and end user
involvement.

Ample communication regarding the new implementation project helps to foster understanding
of the project’s vision and thus to overcome resistance to the project.[4] Good communication also
heightens overall awareness of the system.[2] Only with thorough and ongoing communication
between management and staff can the implementation project be successful.



The more extensively end users understand the project, the more willing and able they will be to
use the new system.

7) Feedback. A key to identifying the source of user resistance to a project is the feedback
management receives from staff.[3] Project team leaders, the project champion, and management
should all make sure that they are providing feedback about the new system. More importantly,
they must gather feedback from their staff members and identify the consensus regarding the
new system.

Enforcers of the new system have to overcome change obstacles by considering all end user
complaints. Perhaps they are legitimate, and the system has a glitch. In any case, system issues
must be addressed immediately to avoid excessive pushback from staff. A final argument in
favor of gathering and responding to feedback is that people often respond favorably to the
implementation of a new technology when those in control of the process consider their input.[4]

8) Effective Training and Knowledge Transfer. Training is crucial to the success of software
implementations. All employees should know how the system works and how they personally
relate to the new process.[3] Training should be readily and broadly available to encourage the new
system’s acceptance and use within the company. It can certainly be used as a tool to help
overcome employee resistance.

Training should be offered prior to, during, and after the implementation to ensure operational
end user knowledge. Management must take training seriously to avoid the adoption of an
ineffective system. There is nothing worse than a useless, unused software program after a
company has spent much time and capital investing in it.

9) Incentives. Incentives help develop strong feelings toward accepting and adopting new
systems. Incentives should be offered not only to engage staff and overcome resistance to
change, but also to retain key implementation staff. Revised titles, overtime pay, letters of
merit, and certificates of recognition can be used as forms of incentives to foster staff
involvement and commitment to the new project. Incentives are a great way to encourage end
user involvement, increase participation, encourage training, and strengthen the overall system.

10) Post-Implementation Activities. Following implementation, activities such as mentoring by
super-users, training, help-desk support, end user documentation, newsletters about the
software’s features and functions, and online help are extremely beneficial. Ongoing post-
implementation change management activities can help to foster and maintain competent end
users.



A Culture of Change

Having a strong, well-thought-out implementation process and aligned staff is key to the success
of the new product or system: “Unless these blocks are in place, technology introductions will
fail to satisfy expectations and may even produce adverse results.”[5] The leadership team must
know their staff’s abilities and the culture of their organization. They must know how to assess
their organization and get through to staff members to ensure they get the most out of their
financial investment.

Change management is certainly the most difficult part of the implementation process. Yet once
resistance is phased out strategically, change can be phased in, and metrics can be used to track
the new installation’s progress and success. Leaders have to take the time to understand user
resistance, realize where it is coming from, and figure out a way to remove it from the
implementation process. Companies spend a great deal of time and money determining what new
technology to implement and when. By utilizing change management techniques, they can make
sure they don’t lose out.



TOPIC 12

AUDITING INFORMATION SYSTEMS

 Overview of information systems audit

An IT audit is different from a financial statement audit. While a financial audit's purpose is to
evaluate whether an organization is adhering to standard accounting practices, the purposes of an
IT audit are to evaluate the system's internal control design and effectiveness. This includes, but
is not limited to, efficiency and security protocols, development processes, and IT governance or
oversight. Installing controls is necessary but not sufficient to provide adequate security. People
responsible for security must consider whether the controls are installed as intended, whether they
are effective, whether any breach in security has occurred and, if so, what actions can be taken to
prevent future breaches. These inquiries must be answered by independent and unbiased observers.
These observers are performing the task of information systems auditing. In an Information
Systems (IS) environment, an audit is an examination of information systems, their inputs,
outputs, and processing.

The primary functions of an IT audit are to evaluate the systems that are in place to guard an
organization's information. Specifically, information technology audits are used to evaluate the
organization's ability to protect its information assets and to properly dispense information to
authorized parties. The IT audit aims to evaluate the following:

1. Will the organization's computer systems be available for the business at all times when
required? (availability)
2. Will the information in the systems be disclosed only to authorized users? (security and
confidentiality)
3. Will the information provided by the system always be accurate, reliable, and timely?
(integrity)

In this way, the audit hopes to assess the risk to the company's valuable asset (its information)
and establish methods of minimizing those risks.

Also Known As: Information Systems Audit, ADP audits, EDP audits, computer audits

Types of IT audits

Various authorities have created differing taxonomies to distinguish the various types of IT
audits. Goodman & Lawless state that there are three specific systematic approaches to carry out
an IT audit:

 Technological innovation process audit. This audit constructs a risk profile for
existing and new projects. The audit will assess the length and depth of the
company's experience in its chosen technologies, as well as its presence in
relevant markets, the organization of each project, and the structure of the portion
of the industry that deals with this project or product.
 Innovative comparison audit. This audit is an analysis of the innovative abilities
of the company being audited, in comparison to its competitors. This requires
examination of the company's research and development facilities, as well as its track
record in actually producing new products.
 Technological position audit: This audit reviews the technologies that the
business currently has and that it needs to add. Technologies are characterized as
being either "base", "key", "pacing" or "emerging".

Others describe the spectrum of IT audits with five categories of audits:

 Systems and Applications: An audit to verify that systems and applications are
appropriate, are efficient, and are adequately controlled to ensure valid, reliable, timely,
and secure input, processing, and output at all levels of a system's activity.
 Information Processing Facilities: An audit to verify that the processing facility is
controlled to ensure timely, accurate, and efficient processing of applications under
normal and potentially disruptive conditions.
 Systems Development: An audit to verify that the systems under development meet the
objectives of the organization and to ensure that the systems are developed in
accordance with generally accepted standards for systems development.
 Management of IT and Enterprise Architecture: An audit to verify that IT
management has developed an organizational structure and procedures to ensure a
controlled and efficient environment for information processing.
 Client/Server, Telecommunications, Intranets, and Extranets: An audit to verify
that telecommunications controls are in place on the client (computer receiving
services), server, and on the network connecting the clients and servers.

And some lump all IT audits as being one of only two types: "general control review" audits or
"application control review" audits.

A number of IT Audit professionals from the Information Assurance realm consider there to be
three fundamental types of controls regardless of the type of audit to be performed, especially in
the IT realm. Many frameworks and standards try to break controls into different disciplines or
arenas, terming them "Security Controls", "Access Controls" or "IA Controls" in an effort to
define the types of controls involved. At a more fundamental level, these controls can be shown
to consist of three types of fundamental controls: Protective/Preventative Controls, Detective
Controls and Reactive/Corrective Controls.



In an IS system, there are two types of auditors and audits: internal and external. IS auditing is
usually a part of accounting internal auditing, and is frequently performed by corporate internal
auditors. An external auditor reviews the findings of the internal audit as well as the inputs,
processing and outputs of information systems. The external audit of information systems is
frequently a part of the overall external auditing performed by a Certified Public Accountant
(CPA) firm.

IS auditing considers all the potential hazards and controls in information systems. It focuses on
issues like operations, data integrity, software applications, security, privacy, budgets and
expenditures, cost control, and productivity. Guidelines are available to assist auditors in their
jobs, such as those from the Information Systems Audit and Control Association (ISACA).

IT Audit process

The following are basic steps in performing the Information Technology Audit Process:

1. Planning
2. Studying and Evaluating Controls
3. Testing and Evaluating Controls
4. Reporting
5. Follow-up

Security

Auditing information security is a vital part of any IT audit and is often understood to be the
primary purpose of an IT audit. The broad scope of auditing information security includes such
topics as data centers (the physical security of data centers and the logical security of databases,
servers and network infrastructure components), networks and application security. Like most
technical realms, these topics are always evolving, and IT auditors must constantly continue to
expand their knowledge and understanding of the systems and environments they audit.

Several training and certification organizations have evolved. Currently, the major certifying
bodies in the field are the Institute of Internal Auditors (IIA), the SANS Institute (specifically,
the audit-specific branch of SANS and GIAC) and ISACA. While CPAs and other traditional
auditors can be engaged for IT audits, organizations are well advised to require that individuals
with some type of IT-specific audit certification are employed when validating the controls
surrounding IT systems.



History of IT Auditing

The concept of IT auditing was formed in the mid-1960s. Since that time, IT auditing has gone
through numerous changes, largely due to advances in technology and the incorporation of
technology into business.

Currently, many companies are IT dependent, relying on information technology to operate their
business, e.g. telecommunication or banking companies. For other types of business, IT plays a
major part, for example by applying workflow instead of paper request forms, using application
controls instead of less reliable manual controls, or implementing an ERP application so that the
organization works from a single application. Accordingly, the importance of IT audit has
constantly increased. One of the most important roles of the IT audit is to audit the critical
systems in order to support the financial audit or to support specific regulations, e.g. SOX.

Audit personnel

Qualifications

The CISM and CAP credentials are the two newest security auditing credentials, offered by
ISACA and (ISC)² respectively. Strictly speaking, only the CISA or GSNA title would sufficiently
demonstrate competences regarding both information technology and audit aspects, with the
CISA being more audit focused and the GSNA being more information technology focused.

Outside of the US, various credentials exist. For example, the Netherlands has the RE credential
(as granted by NOREA, the Dutch IT-auditors' association), which among others requires a
post-graduate IT-audit education from an accredited university, subscription to a Code of Ethics,
and adherence to continuing education requirements.

 Auditing computer resources


 Audit techniques
 Audit applications



Introduction

An information system (IS) audit or information technology (IT) audit is an examination of the
controls within an entity's information technology infrastructure. These reviews may be
performed in conjunction with a financial statement audit, internal audit, or other form of
attestation engagement. It is the process of collecting and evaluating evidence of an
organization's information systems, practices, and operations. Evaluation of the obtained
evidence determines whether the organization's information systems safeguard assets, maintain
data integrity, and are operating effectively and efficiently to achieve the organization's goals or
objectives.

An IS audit is not entirely similar to a financial statement audit. An evaluation of internal
controls may or may not take place in an IS audit. Reliance on internal controls is a unique
characteristic of a financial audit. An evaluation of internal controls is necessary in a financial
audit, in order to allow the auditor to place reliance on the internal controls and, therefore,
substantially reduce the amount of testing necessary to form an opinion regarding the financial
statements of the company. An IS audit, on the other hand, tends to focus on determining risks
that are relevant to information assets, and on assessing controls in order to reduce or mitigate
these risks. An IT audit may take the form of a "general control review" or a "specific control
review". Regarding the protection of information assets, one purpose of an IS audit is to review
and evaluate an organization's information system's availability, confidentiality, and integrity by
answering the following questions:

1. Will the organization's computerized systems be available for the business at all times
when required? (Availability)
2. Will the information in the systems be disclosed only to authorized users?
(Confidentiality)
3. Will the information provided by the system always be accurate, reliable, and timely?
(Integrity)

The performance of an IS Audit covers several facets of the financial and organizational
functions of our Clients. The Information Systems Audit flow runs from the financial statements
to the control environment and the information systems platforms.

Information Systems Audit Methodology

PHASE 1: Audit Planning

In this phase we plan the information system coverage to comply with the audit objectives
specified by the Client and to ensure compliance with all laws and professional standards. The
first step is to obtain an Audit Charter from the Client detailing the purpose of the audit and the
management responsibility, authority and accountability of the Information Systems Audit
function, as follows:

1. Responsibility: The Audit Charter should define the mission, aims, goals and objectives
of the Information System Audit. At this stage define the Key Performance Indicators
and an Audit Evaluation process;
2. Authority: The Audit Charter should clearly specify the Authority assigned to the
Information Systems Auditors with relation to the Risk Assessment work that will be
carried out, right to access the Client’s information, the scope and/or limitations to the
scope, the Client’s functions to be audited and the auditee expectations; and
3. Accountability: The Audit Charter should clearly define reporting lines, appraisals,
assessment of compliance and agreed actions.

The Audit Charter should be approved and agreed upon by an appropriate level within the
Client’s Organization.

In addition to the Audit Charter, we should be able to obtain a written representation (“Letter of
Representation”) from the Client’s Management acknowledging:

1. Their responsibility for the design and implementation of the Internal Control Systems
affecting the IT Systems and processes
2. Their willingness to disclose to the Information Systems Auditor their knowledge of
irregularities and/or illegal acts affecting their organisation pertaining to management and
employees with significant roles within the internal audit department.
3. Their willingness to disclose to the IS Auditor the results of any risk assessment indicating
that a material misstatement may have occurred



PHASE 2 – Risk Assessment and Business Process Analysis

Risk is the possibility of an act or event occurring that would have an adverse effect on the
organisation and its information systems. Risk can also be the potential that a given threat will
exploit vulnerabilities of an asset or group of assets to cause loss of, or damage to, the assets. It is
ordinarily measured by a combination of effect and likelihood of occurrence.

More and more organisations are moving to a risk-based audit approach that can be adapted to
develop and improve the continuous audit process. This approach is used to assess risk and to
assist an IS auditor’s decision to do either compliance testing or substantive testing. In a risk
based audit approach, IS auditors are not just relying on risk. They are also relying on internal
and operational controls as well as knowledge of the organisation. This type of risk assessment
decision can help relate the cost/benefit analysis of the control to the known risk, allowing
practical choices.

The process of quantifying risk is called Risk Assessment. Risk Assessment is useful in making
decisions such as:

1. The area/business function to be audited


2. The nature, extent and timing of audit procedures
3. The amount of resources to be allocated to an audit

The following types of risks should be considered:

Inherent Risk: Inherent risk is the susceptibility of an audit area to error which could be
material, individually or in combination with other errors, assuming that there were no related
internal controls. In assessing the inherent risk, the IS auditor should consider both pervasive and
detailed IS controls. This does not apply to circumstances where the IS auditor's assignment is
related to pervasive IS controls only. Pervasive IS controls are general controls which are
designed to manage and monitor the IS environment and which therefore affect all IS-related
activities. Some of the pervasive IS controls that an auditor may consider include:

 The integrity of IS management and IS management experience and knowledge


 Changes in IS management
 Pressures on IS management which may predispose them to conceal or misstate
information (e.g. large business-critical project over-runs, and hacker activity)
 The nature of the organization’s business and systems (e.g., the plans for electronic
commerce, the complexity of the systems, and the lack of integrated systems)
 Factors affecting the organization’s industry as a whole (e.g., changes in technology, and
IS staff availability)



 The level of third party influence on the control of the systems being audited (e.g.,
because of supply chain integration, outsourced IS processes, joint business ventures, and
direct access by customers)
 Findings from and date of previous audits

A detailed IS control is a control over the acquisition, implementation, delivery and support of IS
systems and services. The IS auditor should consider, to the level appropriate for the audit area in
question:

 The findings from and date of previous audits in this area


 The complexity of the systems involved
 The level of manual intervention required
 The susceptibility to loss or misappropriation of the assets controlled by the system (e.g.,
inventory, and payroll)
 The likelihood of activity peaks at certain times in the audit period
 Activities outside the day-to-day routine of IS processing (e.g., the use of operating
system utilities to amend data)
 The integrity, experience and skills of the management and staff involved in applying the
IS controls

Control Risk: Control risk is the risk that an error which could occur in an audit area, and which
could be material, individually or in combination with other errors, will not be prevented or
detected and corrected on a timely basis by the internal control system. For example, the control
risk associated with manual reviews of computer logs can be high because activities requiring
investigation are often easily missed owing to the volume of logged information. The control risk
associated with computerized data validation procedures is ordinarily low because the processes
are consistently applied. The IS auditor should assess the control risk as high unless relevant
internal controls are:

 Identified
 Evaluated as effective
 Tested and proved to be operating appropriately
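
As a hedged illustration of why the control risk of computerized data validation is ordinarily
assessed as low, the Python sketch below applies the same checks to every record in exactly the
same way; the field names and rules are hypothetical examples rather than a prescribed control:

# Hypothetical computerized data validation control: the identical checks run on
# every transaction, which is why such controls are normally more reliable than
# a manual review of voluminous logs.

def validate_transaction(txn):
    """Return a list of validation errors for one transaction (empty list = passes)."""
    errors = []
    if txn.get("amount", 0) <= 0:
        errors.append("non-positive amount")
    if not txn.get("account"):
        errors.append("missing account code")
    if not txn.get("date"):
        errors.append("missing posting date")
    return errors

transactions = [
    {"amount": 150.0, "account": "4010", "date": "2016-03-01"},
    {"amount": -20.0, "account": "", "date": None},  # fails all three checks
]

for txn in transactions:
    problems = validate_transaction(txn)
    if problems:
        print("Rejected:", txn, "->", problems)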

Detection Risk: Detection risk is the risk that the IS auditor’s substantive procedures will not
detect an error which could be material, individually or in combination with other errors. In
determining the level of substantive testing required, the IS auditor should consider both:

 The assessment of inherent risk


 The conclusion reached on control risk following compliance testing



The higher the assessment of inherent and control risk the more audit evidence the IS auditor
should normally obtain from the performance of substantive audit procedures.
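
One common way of expressing this relationship is the audit risk model, which treats overall
audit risk as the product of inherent risk, control risk and detection risk. The Python sketch below
uses illustrative figures only: the higher the combined inherent and control risk, the lower the
detection risk the auditor can tolerate, and therefore the more substantive work is required.

# Illustrative audit risk model: AR = IR x CR x DR, rearranged to show the
# detection risk the substantive procedures may leave uncovered for a target
# level of overall audit risk. The numbers below are made-up examples.

def allowable_detection_risk(target_audit_risk, inherent_risk, control_risk):
    """Detection risk the auditor can accept given the target overall audit risk."""
    return target_audit_risk / (inherent_risk * control_risk)

# Target overall audit risk of 5 per cent.
print(allowable_detection_risk(0.05, inherent_risk=0.9, control_risk=0.8))  # ~0.07 -> extensive substantive testing
print(allowable_detection_risk(0.05, inherent_risk=0.5, control_risk=0.3))  # ~0.33 -> less substantive testing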

Our Risk Based Information Systems Audit Approach

A risk based approach to an Information Systems Audit will enable us to develop an overall and
effective IS Audit plan which will consider all the potential weaknesses and /or absence of
Controls and determine whether this could lead to a significant deficiency or material weakness.

In order to perform an effective Risk Assessment, we will need to understand the Client’s
Business Environment and Operations. Usually the first phase in carrying out a Risk Based IS
Audit is to obtain an understanding of the Audit Universe. In understanding the Audit Universe
we perform the following:

 Identify areas where the risk is unacceptably high


 Identify critical control systems that address high inherent risks
 Assess the uncertainty that exists in relation to the critical control systems

In carrying out the Business Process Analysis we:

 Obtain an understanding of the Client Business Processes


 Map the Internal Control Environment
 Identify areas of Control Weaknesses

The chart to the right summarizes the business process analysis phase.



The template xxx will provide you with a guideline to document an organisation's business sub-
processes identified during the risk analysis phase. For each of the sub-processes, we identify a
list of What Could Go Wrong (WCGW) items. Each WCGW represents a threat existing on a
particular process, and a single process may have multiple WCGWs. For each of the WCGWs
identified in the prior phase we will determine the Key Activities within that process. For each
Key Activity:

1. We will identify the Information Systems Controls.
2. For each of the controls identified, we will rate the impact/effect of the lack of that
control (on a rating of 1 to 5, with 5 indicating the highest impact), and we will then
determine the likelihood of the threat occurring (also on a rating of 1 to 5, with 5
representing the highest likelihood). A minimal scoring sketch follows below.
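
The scoring sketch below is written in Python; the WCGW descriptions and ratings are invented
examples, not prescribed values:

# Hypothetical example: ranking What-Could-Go-Wrong items by impact x likelihood,
# with both factors rated on the 1-5 scales described above.

wcgw_register = [
    {"wcgw": "Unauthorised change to payroll master data", "impact": 5, "likelihood": 3},
    {"wcgw": "Interface file loaded twice",                "impact": 4, "likelihood": 2},
    {"wcgw": "Report distributed to wrong recipients",     "impact": 2, "likelihood": 4},
]

for item in wcgw_register:
    item["risk_score"] = item["impact"] * item["likelihood"]  # from 1 (lowest) to 25 (highest)

# The highest-scoring items receive audit attention first.
for item in sorted(wcgw_register, key=lambda i: i["risk_score"], reverse=True):
    print(item["risk_score"], item["wcgw"])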

<< Outline specific risk assessment methodology here>>

PHASE 3 – Performance of Audit Work

In the performance of Audit Work the Information Systems Audit Standards require us to
provide supervision, gather audit evidence and document our audit work. We achieve this
objective through:



 Establishing an internal review process where the work of one person is reviewed by
another, preferably a more senior person.
 Obtaining sufficient, reliable and relevant evidence through inspection, observation,
inquiry, confirmation and re-computation of calculations.
 Documenting our work by describing the audit work done and the audit evidence
gathered to support the auditors' findings.

Based on our risk assessment and upon the identification of the risky areas, we move ahead to
develop an Audit Plan and Audit Program. The Audit Plan will detail the nature, objectives,
timing and the extent of the resources required in the audit.

See Template for a Sample Audit Plan.

Based on the compliance testing carried out in the prior phase, we develop an audit program
detailing the nature, timing and extent of the audit procedures. In the Audit Plan various Control
Tests and Reviews can be done. They are sub-divided into:

1. General/Pervasive Controls
2. Specific Controls

The chart below shows the control review tests that can be performed for the two categories of
control tests above.

Control Objectives for Information and related Technology (COBIT)

The Control Objectives for Information and related Technology (COBIT) is a set of best
practices (framework) for information technology (IT) management created by the Information Systems
Audit and Control Association (ISACA), and the IT Governance Institute (ITGI) in 1992.

COBIT provides managers, auditors, and IT users with a set of generally accepted measures,
indicators, processes and best practices to assist them in maximizing the benefits derived through
the use of information technology and developing appropriate IT governance and control in a
company.



COBIT helps meet the multiple needs of management by bridging the gaps between business
risks, control needs and technical issues. It provides a best practices framework for managing IT
resources and presents management control activities in a manageable and logical structure. This
framework will help optimize information technology investments and will provide a suitable
benchmark measure.

The Framework comprises a set of 34 high-level Control Objectives, one for each of the IT
processes listed in the framework. These are then grouped into four domains: planning and
organisation, acquisition and implementation, delivery and support, and monitoring. This
structure covers all aspects of information processing and storage and the technology that
supports it. By addressing these 34 high-level control objectives, we will ensure that an adequate
control system is provided for the IT environment. A diagrammatic representation of the
framework is shown below.

We shall apply the COBIT framework in planning, executing and reporting the results of the
audit. This will enable us to review the General Controls Associated with IT Governance Issues.
Our review shall cover the following domains;

 Planning and organisation of information resources;
 The planning and acquisition of systems and their path in the stage growth model of
information systems;
 The delivery and support of the IS/IT, including facilities, operations, utilization and
access;
 Monitoring of the processes surrounding the information systems;
 The level of effectiveness, efficiency, confidentiality, integrity, availability, compliance
and reliability associated with the information held in the systems; and
 The level of utilization of IT resources available within the IS environment, including
people, application systems, interfaces, technology, facilities and data.

The above control objectives will be matched with the business control objectives in order to
apply specific audit procedures that will provide information on the controls built into the
application, indicating areas of improvement that we need to focus on.

Application Control Review

An Application Control Review will provide management with reasonable assurance that
transactions are processed as intended and the information from the system is accurate, complete
and timely. An Application Controls review will check:

 Control effectiveness and efficiency
 Application security
 Whether the application performs as expected



A Review of the Application Controls will cover an evaluation of a transaction life cycle from
Data origination, preparation, input, transmission, processing and output as follows:

1. Data Origination controls are controls established to prepare and authorize data to be
entered into an application. The evaluation will involve a review of source document
design and storage, User procedures and manuals, Special purpose forms, Transaction ID
codes, Cross reference indices and Alternate documents where applicable. It will also
involve a review of the authorization procedures and separation of duties in the data
capture process.
2. Input preparation controls are controls relating to Transaction numbering, Batch serial
numbering, Processing, Logs analysis and a review of transmittal and turnaround
documents
3. Transmission controls involve batch proofing and balancing, processing schedules, review
of error messages, corrections monitoring and transaction security (a minimal sketch of
batch balancing follows this list).
4. Processing controls ensure the integrity of the data as it undergoes the processing phase,
including relational database controls and data storage and retrieval.
5. Output controls involve procedures relating to report distribution, reconciliation, output
error processing and records retention.
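
As a hedged sketch of the batch proofing and balancing mentioned in point 3 (the batch layout,
field names and figures are hypothetical), the following Python fragment compares the control
totals carried on the batch header with the detail records actually received and rejects the batch
when they disagree:

# Hypothetical batch balancing control: the record count and control total on the
# batch header must agree with the detail records, otherwise the batch is rejected
# and logged for correction and re-submission.

batch_header = {"batch_no": "B0142", "record_count": 3, "control_total": 1250.00}
batch_details = [
    {"txn_id": "T1", "amount": 500.00},
    {"txn_id": "T2", "amount": 450.00},
    {"txn_id": "T3", "amount": 280.00},  # detail records only sum to 1230.00
]

actual_count = len(batch_details)
actual_total = round(sum(d["amount"] for d in batch_details), 2)

if actual_count != batch_header["record_count"] or actual_total != batch_header["control_total"]:
    print("Batch %s rejected: count %s/%s, total %s/%s" % (
        batch_header["batch_no"], actual_count, batch_header["record_count"],
        actual_total, batch_header["control_total"]))
else:
    print("Batch %s accepted for processing" % batch_header["batch_no"])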

The use of Computer Aided Audit Techniques (CAATS) in the performance of an IS Audit

The Information Systems Audit Standards require that, during the course of an audit, the IS
auditor should obtain sufficient, reliable and relevant evidence to achieve the audit objectives.
The audit findings and conclusions are to be supported by appropriate analysis and
interpretation of this evidence. CAATs are useful in achieving this objective.

Computer Assisted Audit Techniques (CAATs) are important tools for the IS auditor in
performing audits. They include many types of tools and techniques, such as generalized audit
software, utility software, test data, application software tracing and mapping, and audit expert
systems. For us, our CAATs include ACL Data Analysis Software and the Information Systems
Audit Toolkit (ISAT).

CAATs may be used in performing various audit procedures including:

 Tests of details of transactions and balances (Substantive Tests)


 Analytical review procedures
 Compliance tests of IS general controls
 Compliance tests of IS application controls

CAATs may produce a large proportion of the audit evidence developed on IS audits and, as a
result, the IS auditor should carefully plan for and exhibit due professional care in the use of
CAATs. The major steps to be undertaken by the IS auditor in preparing for the application of the
selected CAATs are:

 Set the audit objectives of the CAATs


 Determine the accessibility and availability of the organization’s IS facilities,
programs/system and data
 Define the procedures to be undertaken (e.g., statistical sampling, recalculation,
confirmation, etc.)
 Define output requirements
 Determine resource requirements, i.e., personnel, CAATs, processing environment
(organization’s IS facilities or audit IS facilities)
 Obtain access to the client’s IS facilities, programs/system, and data, including file
definitions
 Document CAATs to be used, including objectives, high-level flowcharts, and run
instructions
 Make appropriate arrangements with the Auditee and ensure that:

1. Data files, such as detailed transaction files, are retained and made available before the
onset of the audit.
2. You have obtained sufficient rights to the client's IS facilities, programs/system, and data.
3. Tests have been properly scheduled to minimize the effect on the organization's
production environment.
4. The effect of changes to the production programs/system has been properly considered.

See Template here for example tests that you can perform with ACL
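
ACL is a proprietary tool, so as a tool-neutral illustration the Python sketch below performs two
typical CAAT-style tests on an assumed detailed transaction file: identifying possible duplicate
payments and finding gaps in a document number sequence (the file layout and figures are
invented for illustration):

# Illustrative CAAT-style substantive tests on a payments file:
# 1) possible duplicate payments (same supplier and amount),
# 2) gaps in the document number sequence.
from collections import Counter

payments = [
    {"doc_no": 1001, "supplier": "S01", "amount": 300.00},
    {"doc_no": 1002, "supplier": "S02", "amount": 120.00},
    {"doc_no": 1004, "supplier": "S01", "amount": 300.00},  # document 1003 is missing
    {"doc_no": 1005, "supplier": "S01", "amount": 300.00},  # same supplier and amount again
]

# Test 1: possible duplicate payments.
occurrences = Counter((p["supplier"], p["amount"]) for p in payments)
duplicates = [key for key, count in occurrences.items() if count > 1]
print("Possible duplicate payments:", duplicates)

# Test 2: gaps in the document number sequence.
doc_nos = sorted(p["doc_no"] for p in payments)
gaps = [n for n in range(doc_nos[0], doc_nos[-1] + 1) if n not in doc_nos]
print("Missing document numbers:", gaps)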

PHASE 4: Reporting

Upon the performance of the audit tests, the Information Systems Auditor is required to produce
an appropriate report communicating the results of the IS Audit. An IS Audit report should:

1. Identify the organization, intended recipients and any restrictions on circulation
2. State the scope, objectives, period of coverage, nature, timing and extent of the audit
work
3. State findings, conclusions, recommendations and any reservations, qualifications and
limitations
4. Provide audit evidence

