
SYSTEM ANALYSIS AND DESIGN MANUAL

SYSTEMS DEVELOPMENT LIFE CYCLE

Introduction
- Most systems studied in our past lessons go through a life cycle – a sequence of phases from inception to death.
- A computer-based information system goes through an SDLC – an organised approach used in organisations to develop
and implement an information system.
- SDLC consists of the following phases:
1. Preliminary investigation
2. Systems analysis
3. Systems design
4. Systems development
5. Systems implementation and evaluation

[Figure: SDLC overview – a system request triggers PRELIMINARY INVESTIGATION, which produces a preliminary report (or leads to termination); ANALYSIS produces the requirements document; DESIGN produces the design specification document; DEVELOPMENT produces a complete, functioning system; IMPLEMENTATION AND EVALUATION delivers the operational system.]

Preliminary investigation / problem definition phase

 Requests from users / management trigger the start of SDLC.


 The requests identify the nature of the work to be done i.e. they state the problem with the system – deficiencies or
improvements desired in the current system.


The work requested can be:


i. Substantial e.g. requiring a new IS to meet a newly identified requirement or replacement of an existing IS that can
no longer handle changing business requirements. Usually, these requests take many months/years of effort.
ii. Minor e.g. addition of a new report or changes to existing calculations. Usually, minor systems requests require a few hours
of effort.
NB: The purpose of preliminary investigation phase is to identify clearly the nature and scope of the problems mentioned by
the users/management – that’s why we also call this phase problem definition phase.

The result of this phase is a preliminary investigation report – it specifies to the management the problems identified
within the system and the action(s) to be taken to solve them – either
 No further action; or
 Systems development (if problem is minor); or
 Further detailed investigation (to begin systems analysis).
2. Systems analysis

This phase is pursued if the problems identified require detailed investigation.


The purpose of this phase is:
i. To learn exactly what takes place in the current system,
ii. To determine and fully document in detail what should take place, and
iii. To make recommendations to management on the alternative solutions and their costs.
In this phase, therefore, two main activities take place:
i. Requirements determination, and
ii. Requirements analysis
Requirements determination is also called fact-finding/data gathering – here, the analyst defines all the functions performed by
the current system as he/she determines what modifications are needed by the organisation in the improved version of the IS.
Requirements analysis – also called systems definition/general design/logical design.
Once all the facts are obtained, they are analysed and evaluated in a systematic fashion in order to develop alternative plans to
solve problems of the current IS. The end product created in this phase is systems requirements document. It documents:
 All end user and management requirements,
 All alternative plans and their costs, and
 The recommended plan (best plan).
The management chooses the best alternative – Either:
 buy software off-the-shelf, or
 develop a solution in-house / outsource the development process, or
 Terminate the development process (e.g. due to high costs, changing priorities or failure to meet objectives).


3. Systems design

 The purpose of this phase is to determine how to construct the IS to best satisfy the documented requirements.
 The analyst/designer designs all the required IS outputs, files/databases, inputs, programs, and manual procedures.
 He/she designs the internal and external controls – manual and computer-based steps that guarantee the IS is reliable,
accurate, and secure.
 The product of this phase is the systems specification document. This product is presented to management and users for
their review and approval.

4. Systems development
During this phase, the I.S. is actually constructed:
 Application programs are written, tested, and documented;
 Operational documentation and procedures are completed; and
 End user/management review and approval is obtained.
The product of this phase is a functioning and documented IS which has to be reviewed and approved by management.
5. Systems implementation and evaluation

Systems implementation is a phase that follows soon after a functioning and documented system is realised.
In this phase, data is converted to the new system’s files (data conversion), end users are trained and the new system is put into
operation (as the old is retired).
 It is at this point that the end user and management actually begin to use the constructed IS.
 SDLC has a provision for systems evaluation – a process of determining if the IS operates as proposed and if the costs
and benefits are as anticipated.
 Once implemented, the IS enters its operation / maintenance phase in which changes are made to the IS (to remain
useful). This is usually the longest phase in SDLC.
 After some years in operation, the I.S. starts showing signs of obsolescence and a need to be replaced.
 The replacement of IS constitutes the end of its life cycle.

Characteristics of SDLC
There are several characteristics of SDLC that an analyst should keep in mind as he/she builds the system:
a) Complete phases in sequence
 Successful development of an IS requires that the analyst follows the SDLC phases in order i.e. a phase must be
completed before the next phase is started.
 Failure to complete a phase results in problems with the IS.
 Completing phases in sequence does not mean that you must restrict all your thoughts to the current phase’s
activities.


 It is also possible to overlap work in one phase with work for the next phase – especially if the goal is to shorten the
development time.
 Completing a phase doesn’t mean that we are through with it – a change in requirements may mean revisiting phases
done earlier.
b) Focus on the end products
 End products at the end of each phase represent milestones or checkpoints in the SDLC and signal the completion of a
specific phase.
 The management uses each checkpoint to assess where development stands and where it should go next – proceed to
the next phase, terminate etc.
 The analyst should focus on the content and quality of these products – for they are highly visible measures of his/her
progress.
c) Estimate the required resources
 Management is interested in the estimates of the cost of developing and operating the IS.
 At the start of each phase, the analyst should provide an accurate cost estimate for that phase and projected cost estimates for
all succeeding phases and for the operation of the IS – stated broadly at first (e.g. 30,000 – 50,000) and more specifically as
you gain a better understanding of the IS.

PRELIMINARY INVESTIGATION AND SYSTEMS ANALYSIS


Introduction
Information systems help organizations attain their objectives by facilitating the capture, processing, storage, retrieval and
distribution of information (to assist management and end users in decision making).
The failure of an information system to provide good information (for whatever reason) triggers systems requests
(problems, opportunities, or directives) from the users or management. In other words, the users complain if the
information system does not work as expected. Several reasons give rise to systems requests:
a) Changes in an organization’s objectives or formulation of new objectives.
b) Recognition of problems or errors in the current system that undermine the system’s ability to support the
organization’s objectives.
c) Need to improve or enhance service provision to customers and / or the end users of the organization e.g.
simplifying the registration process in MU, streamlining examination processing or revenue collection, etc
d) If the current system is not performing as it must/should e.g. reports take long to be produced or prepared
e) Information produced by the current system might be inadequate or incomplete or not produced when needed
(not timely), etc.
f) Inadequate controls in the current system i.e. allowing erroneous data to enter the system.
Following submission of systems requests by the users or management, preliminary investigation is undertaken to determine
whether or not the complaints were justified.


 The purpose of this investigation is to gather enough information to determine if the problems specified warrant
conducting subsequent phases of the systems development life cycle.
 Shelly et al (1991) argue that there is no need for a comprehensive data gathering at this stage. All you need to do is:
1. Understand the true nature of the problem: problems, opportunities and directives
 Key objective of preliminary investigation is to understand the true nature of the problem and the reason(s) for the
systems requests i.e. is the stated problem real, or is it a symptom of the real problem (e.g. low speed)? Or is it a proposed
solution rather than a problem (e.g. the need for a new report)?
 This means that the underlying cause of the problem or the problem itself might not be identified in the systems
request!
 Therefore, when interviewing users, be careful with the term ‘problem’, for it has a negative connotation – users will
look for faults only. Desirable features or improvements to parts of the system that already work are also ‘problems’ in this sense.
It may be fruitful to ask them about specific improvements or additional capabilities.
 It may be helpful to look at the problem in the light of the organisation’s objectives i.e. the problem should not be
studied in isolation – identify the objectives and examine how the problem fits in with them.
2. Define the scope and constraints of the problem (proposed system project)
 Scope – is the range or extent of the problem. It helps determine the boundary of the systems requests.
To define the scope, the analyst should answer the following:
i. What is the extent or range of the problem? – correct billing errors / payroll is not being produced
accurately / etc.
ii. Who is affected by the problem?
iii. Who is likely to be affected by the solution?
 Constraint – is a condition, restriction, or requirement that must be met for the project to be viable. To determine
this, the analyst should answer:
 What conditions/restrictions/requirements must be met for the system to be viable? – technology (hardware,
software), time, policy, cost, budget, environment, etc. – these are yes/no conditions that affect potential problem
solutions e.g. should the expected solution function on existing equipment?
3. State the benefits/objectives
 Identify and state the benefits that are likely to result from the systems request (if the problem is solved/project is
completed). Analyst should answer:
 What are the tangible benefits? e.g. a decrease in expenses, an increase in revenues, improved cash flow, or a combination of these.
 What about intangible benefits? e.g. relieving the tedium of a task, improved morale and job satisfaction, an improved
decision-making process, better management control, etc.
4. Specify time and money estimates for the next phase
Estimate the time and cost of systems analysis in detail, and state estimates for the subsequent phases broadly.


Consider the following as you estimate:


 Information that must be obtained, and the volume of information to be gathered and analysed;
 The sources of information and the difficulties that will be encountered in gathering and analyzing information from
the sources;
 The people to be interviewed and the amount of time required to interview them;
 Time and number of people required to correlate the data gathered and to prepare a report indicating the findings and
alternative solutions to the problems.
5. Present a report to management describing the problem and recommendation. The report specifies the
identified problems and recommended action(s) to be taken – termination if there is no problem, direct
correction/development if the problem was a minor one, or detailed systems analysis (if problem requires clearer
understanding). On the basis of this report, management will decide whether systems analysis phase should take place.

A Project Initiation Document often contains the following:

 Project Goals

 Scope

 Project Organization

 Business Case

 Constraints

THE FEASIBILITY STUDY PHASE

The aims of the study are to investigate the proposed system and produce a feasibility report which will contain the results of the
study and indicate possible methods of achieving the requirements. The alternatives will be accompanied by cost/benefit
comparisons, to assist the steering committee in deciding which solution to adopt. A very detailed investigation is usually
unnecessary at this stage, as only broad estimates are required. This also keeps the cost of the investigation to a minimum.
A feasibility study should provide management with enough information to:
- Define candidate solutions
- Analyze candidate solutions for feasibility
- Compare feasible candidate solutions to select one or more recommended solutions.
After the feasibility study, management makes a go/no-go decision.
Types of Feasibility

a) Operational -- If the system is developed, will it be used? Includes people-oriented and social issues: internal issues,
such as manpower problems, labour objections, manager resistance, organizational conflicts and policies; also external issues,
including social acceptability, legal aspects and government regulations.


b) Technical -- Is the project feasible within the limits of current technology? Does the technology exist at all? Is it
available within given resource constraints (i.e., budget, schedule,...)?

c) Economic (Cost/Benefits Analysis) -- Is the project possible, given resource constraints? Are the benefits that
will accrue from the new system worth the costs? What are the savings that will result from the system, including tangible and
intangible ones? What are the development and operational costs?

d) Schedule – Will the project be completed within the available time?

e) Social feasibility:

It addresses how the users will be affected by computerization. It addresses issues such as:

- Skills required of the users


- Staff motivation
- Industrial relations
- Personnel policies
f) Legal and contractual feasibility

This area looks at any legal ramifications related to the development of the system e.g.
- Copyright infringement
- Intellectual property rights
g) Political feasibility
Assesses how key stakeholders in the organization view the proposed system. Stakeholders not supporting the project may take
steps to block, disrupt or change the focus of the project.
h) Cost/Benefit Analysis
The purpose of a cost/benefit analysis is to answer questions such as:
 Is the project justified (because benefits outweigh costs)?
 Can the project be done, within given cost constraints?
 What is the minimal cost to attain a certain system?
 What is the preferred alternative, among candidate solutions?
Examples of things to consider:
 Hardware/software selection
 How to convince management to develop the new system
 Selection among alternative financing arrangements (rent/lease/purchase)
 Difficulties -- discovering and assessing benefits and costs; both can be intangible, hidden and/or hard to
estimate, and it is also hard to rank multi-criteria alternatives (a weighted-scoring sketch follows this list).
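One common (though not prescribed by this manual) way to rank multi-criteria alternatives is a weighted scoring model. The Python sketch below is illustrative only; the criteria, weights and scores are assumptions, not figures from this manual.

    # Weighted scoring model for ranking candidate solutions (illustrative only).
    criteria_weights = {"cost": 0.40, "functionality": 0.35, "ease_of_use": 0.25}

    candidates = {
        "off-the-shelf package": {"cost": 8, "functionality": 6, "ease_of_use": 7},
        "in-house development":  {"cost": 5, "functionality": 9, "ease_of_use": 6},
    }

    def weighted_score(scores):
        # Multiply each criterion score (0-10 scale) by its weight and sum the results.
        return sum(criteria_weights[c] * s for c, s in scores.items())

    # Rank the candidates from best to worst total score.
    for name, scores in sorted(candidates.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
        print(name, round(weighted_score(scores), 2))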


Types of Benefits

Examples of particular benefits: cost reductions, error reductions, increased throughput, increased flexibility of operation,
improved operation, better (e.g., more accurate) and more timely information

Benefits may be classified into one of the following categories:

 Monetary -- when $-values can be calculated


 Tangible (Quantified) -- when benefits can be quantified, and assigned a monetary value.
 Intangible – cannot be easily quantified but may lead to quantifiable gains in long run.
 Measurable benefits – these can be defined as "a monetary or financial return which accrues to the organisation as
a result of the operation of the new system".
These are by far the most important benefits to be balanced against the costs of the new system. They are often difficult to
assess, yet effort should be expended to assess their monetary value. One example is a purchase ordering system which may
produce great benefits by ensuring that the best possible terms for bulk purchase discounts and early payment discounts are
obtained.
Tangible benefits: increased productivity, lower operational costs, a reduced workforce, reduced computer expenses.
Intangible benefits: more efficient customer services, improved resource control, increased job satisfaction, a better corporate image.

Types of Costs

A range of costs must be included in the feasibility study:

- Hardware and software purchase costs;


- System development costs such as staff costs
- Installation costs including cabling and bringing in new furniture to house the computers;
- Migration costs such as transferring data from an existing system to the new system.
- Operating costs including maintenance costs of hardware and staff pay or salary
- Training costs
Note that costs may be tangible or intangible, fixed or variable.

Discount Rates

A dollar today is worth more than a dollar tomorrow… The dollar values used in this type of analysis should be normalized to
refer to current-year dollar values. For this, we need a number, the discount rate, which measures the opportunity cost of
investing money in other projects rather than in the information system development project. This number is company- and
industry-specific.


To calculate the present value, i.e., the real dollar value given the discount rate i, n years from now, we use the formula:
Present Value(n) = 1 / (1 + i)^n
For example, if the discount rate is 12%, then
Present Value(1) = 1/(1 + 0.12)^1 = 0.893
Present Value(2) = 1/(1 + 0.12)^2 = 0.797
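The same calculation is easy to automate. The short Python sketch below is illustrative only; the cash-flow figures are assumptions, not values from this manual.

    def present_value(amount, discount_rate, years):
        # Discount a future amount back to current-year value: amount / (1 + i)^n.
        return amount / (1 + discount_rate) ** years

    def net_present_value(cash_flows, discount_rate):
        # Sum the discounted values of (year, amount) cash flows.
        # Positive amounts are benefits, negative amounts are costs.
        return sum(present_value(amount, discount_rate, year) for year, amount in cash_flows)

    # Example: a system costing 40,000 now that saves 20,000 a year for three years, at 12%.
    flows = [(0, -40_000), (1, 20_000), (2, 20_000), (3, 20_000)]
    print(round(present_value(1, 0.12, 1), 3))    # 0.893, as in the text above
    print(round(net_present_value(flows, 0.12)))  # a positive result means benefits outweigh costs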

FORMAT OF A FEASIBILITY REPORT


This is a suitable format for a feasibility report:
1 Abstract
2 Executive summary
3 Contents list (including a separate list of illustrations)
4 Glossary
5. Introduction
 Background leading to the project
 Terms of reference
 Reasons for the study
6. The current system
 Overview of the current system
 Limitations, problems of the current system and constraints

7. Proposed system

 Objectives
 Requirements of the new system
 Scope, resource, and benefits
 Technical implications – the hardware and software needed
 Operational impact – the impact the solution will have on the business in terms of human, organizational and political
aspects.
 Cost implications – both initial (capital) and continuing (operational)
 Cost benefit analysis
- A comparison of costs and benefits prepared using whatever evaluation technique is favored by the organization.

8. Recommendations

 Summary of the previous section of the report.


 Recommendations as to how the client should proceed:
 Proceed with a full detailed analysis, or
 Review the terms of reference and/or the scope of the study before proceeding further.


SYSTEM ANALYSIS AND SOFTWARE / SPECIFICATION

This is the process of establishing what services are required and the constraints on the system’s operation and development. It is
also called the requirements engineering process.

It leads to the production of a requirements document, which is the specification for the system.

It involves the following stages:

• Requirements elicitation (also called requirements determination)


• Requirements specification
• Requirements validation
• Make a decision (prepare systems requirements document).

[Figure: the requirements engineering process – a feasibility study (producing the feasibility report), requirements elicitation and analysis (producing system models), requirements specification (producing user and system requirements), and requirements validation (producing the requirements document).]

Analysts work with end users to establish functional and non-functional requirements.
Fact-finding techniques and prototyping may be used.
It involves the collection of facts (requirements) about what the system should do or look like to be acceptable to the users i.e.
outputs, inputs, processes, timings, and controls; as well as volumes and frequencies for the first three of the foregoing items.
A requirement is a description of a system. It may describe the function (service) of the system, a desirable feature or
characteristic of the system as well as constraints that limit the boundaries of the proposed system. (Constraint on the product
and the process)

10 | P a g e MS. SHIELA MARIE MERINO


SYSTEM ANALYSIS AND DESIGN MANUAL

A requirement may range from a high-level abstract statement of a service or of a system constraint (user requirements) to a
detailed mathematical functional specification (system requirements).

A throw-away prototype is used to explore requirements and design options.

Types of requirements

1. Functional
2. Nonfunctional
3. Domain
4. Usability

1. Functional requirements
A function is a set of related, ongoing business activities. Functions are named with nouns e.g. Planning,
Scheduling.

A function consists of processes that support specific activities. Logical processes can be:

- Calculate Grade
- Generate report
- Create a record etc.
An event is a logical unit of work that must be completed as a whole (A transaction).

Events have triggers (input) and responses (defined output).

A functional requirement is a description of a function or service of the system. When writing functional requirements, use action verbs e.g.:
i. Process a cheque
ii. Calculate grade
iii. Generate report
Example of functional requirement for a library system

i. The system should generate weekly report of books borrowed – output


ii. The system should calculate overdue charges automatically. – Processing
iii. The system should capture student’s Adm No, names, Book ID, Current date etc, when a student
borrows a book.
In other words, with functional requirements, we answer the question of what the system should be able to do in terms of
input, output and processing. The question of how it should do it is left to designers.
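To make the what/how distinction concrete, a minimal Python sketch of requirement (ii) above follows. The daily rate and the formula are hypothetical design decisions, not part of the stated requirement.

    from datetime import date

    DAILY_RATE = 5.00   # hypothetical charge per day overdue (a design choice, not a requirement)

    def overdue_charge(due_date, return_date):
        # Compute the overdue charge for one returned book.
        days_late = (return_date - due_date).days
        return max(days_late, 0) * DAILY_RATE

    # The requirement only says charges must be calculated automatically; the rate
    # and formula above are "how" decisions left to the design phase.
    print(overdue_charge(date(2024, 3, 1), date(2024, 3, 8)))   # 35.0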


Functional requirements are statements of the services the system should provide, how the system should react to particular inputs, and
how the system should behave in particular situations.

User requirements

User requirements are high-level statements of what the system should do.

Functional user requirements may be high-level statements of what the system should do but functional system requirements
should describe the system services in detail.

User requirements should be written using natural language, tables and diagrams, as these can be understood by all users.

Problems with natural language

• Lack of clarity – precision is difficult without making the document difficult to read.
• Requirements confusion – functional and non-functional requirements tend to be mixed up.
• Requirements amalgamation – several different requirements may be expressed together.

Guidelines for writing requirements (a hypothetical example follows this list):

• Invent a standard format and use it for all requirements.
• Use language in a consistent way: use shall for mandatory requirements, should for desirable requirements.
• Use text highlighting to identify key parts of the requirement.
• Avoid the use of computer jargon.
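As an illustration of a standard format (the fields and wording below are hypothetical, not a format prescribed by this manual):

    REQ-014 (mandatory): The system shall generate a weekly report of books borrowed.
    Rationale: needed by the librarian for stock control.
    Priority: high.  Source: circulation desk interviews.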
2. Non-functional requirements
Describe system properties or desirable attributes, constraints on the services or functions offered by the system such as timing
constraints, constraints on the development process, standards, etc.

We can have three types of such requirements namely: Product, Organizational or External requirements

Examples

1. Product (Software)
a. The user interface of the proposed system should be easy to use
b. The user interface should be implemented as simple html
Product requirements may touch on the reliability, portability, efficiency and usability (although usability is covered
separately for the sake of clarity) of the software product.


2. Organizational(The organization developing or procuring the software- its policies)


The system development process and deliverables shall conform to the process and deliverables defined by specific standards,
e.g. IEEE standards.
Organizational requirements may focus on the delivery time of the product, implementation techniques and standards to be adhered to.

3. External
The system shall not disclose personal information of users. Ethical and legislative issues are paramount here.
Note:

A software constraint is something – an event or situation – that limits the flexibility in defining a software solution to your
objectives. A constraint cannot be changed (which is why we state constraints with the word “must”). Deadlines and budgets, among
others, are constraints. A constraint may apply to the software process itself or to the software product to be developed
and delivered into use. The environment where the product will be operational may also be a constraint.

For example you may state a constraint as: “Any system developed must be compatible with the existing Windows XP OS.”
Or “There will be no increase in workforce”.

3. Domain requirements: Requirements that come from the application domain of the system and that
reflect characteristics of that domain. Domain requirements may be new functional requirements, constraints on existing
requirements, or definitions of specific computations.
Domain requirements problems:

If domain requirements are not satisfied, the system may be unworkable.

• Understandability – requirements are expressed in the language of the application domain, which is
often not understood by the software engineers developing the system.
• Implicitness – domain specialists understand the area so well that they do not think of making the domain requirements explicit.
Techniques used in Requirements engineering

• A throw-away prototype is used to explore requirements and design options. (We will cover this under the methodologies topic.)
• Fact-finding techniques may also be used to gather data from users. (Refer to your SAD notes on this.)
• Modeling
Logical system models depict what the system is or what the system does – not how it is physically implemented – to express
the essential requirements of the system. They focus on the logical design of the system and not the physical design, as physical
models do. Models should be highly cohesive, that is, each module should accomplish one and only one function so that modules can
be reusable. Models should also be loosely coupled, i.e. modules should depend on each other minimally. This minimizes the effect
that future changes in one module will have on others.

4. Usability requirements:

The issue of usability has grown more important as a greater number of technically complicated products has become available to
a wider population. While manufacturers have concentrated upon increasing the functionality of their products, users have
grown steadily more frustrated that they cannot operate the machinery that they have bought.
Within the IT industry, this problem has been even more serious. Many software products have had to be abandoned, not
because they did not work but because users could not or would not use them.
Usability is not determined by just one or two constituents but is influenced by a number of factors which interact with one
another in sometimes complex ways. Eason (1984) has suggested a series of concepts that explain what these variables might
be.
So, the usability of a system will depend, not only upon the nature of the user, but also upon the characteristics of the task and
system – the variables of task, system and user all combine to determine the usability of a system.
Usability testing is a technique used to evaluate a product by testing it on users. This can be seen as an irreplaceable usability
practice, since it gives direct input on how real users use the system.

Human-computer interaction (HCI) is the study of interaction between people (users) and computers. It is an interdisciplinary
subject, relating computer science with many other fields of study and research. Interaction between users and computers occurs
at the user interface (or simply interface), which includes both hardware (e.g., general purpose computer peripherals and major
devices such as the Boeing B777) and software, which together present an environment in which humans (from pilots to
surgeons) are provided a wide extension of their native capabilities.

Task 1: Requirements elicitation


Systems analysis involves the following steps:

1) Analysis of the business environment.


Study the nature of the organisation in more depth. This aids in understanding the organisation’s physical aspects,
objectives, management policies, market, products, services, and customer characteristics, as well as the reports, decisions, and
transactions essential for its success.


Also, try to understand the organisation’s resources – capital, personnel, customer good will, research capability, and financial
assets.
2) Determine information requirements.
Examine the requirements for management decision-making, transaction processing, and reporting activities. What is
required to make decisions, to process transactions, and prepare reports? In other words, you should attempt to:
• Determine their output requirements.
• Determine processes needed to provide output.
• Determine the input needed to provide the desired outputs.
• Identify the decision making process, specify information requirements in terms of transactions and reports. Spell out the
characteristics of each type of information.
• Identify communication requirements – how is output communicated?
3) Determine the logical model
• Devise a logical model of the proposed system for use in the subsequent design.
• Determine the most important processes, reports, data and information flows.
• Use DFDs to determine a logical information flow from sources to destinations, via decision-making, reporting, and
transaction processing steps. Among other things, DFDs help you define the data stores needed and, consequently,
to formulate appropriate database structure.
4) Analyse data (or database) requirements (i.e. database content)
This is the analysis of the database/file management requirements and information flow requirements. (develop ERDs and
DDs for the system)
Determine the sort of DBMS to be designed so as to accommodate data required for current and future applications.
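As a small illustration of how entities identified at this stage might later be represented, the Python sketch below uses the hypothetical library example from earlier; the entity names and attributes are assumptions, not a prescribed data model.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Student:
        adm_no: str
        name: str

    @dataclass
    class Book:
        book_id: str
        title: str

    @dataclass
    class Loan:
        # The "student borrows a book" event links a Student to a Book.
        adm_no: str
        book_id: str
        date_borrowed: date
        due_date: date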
There are various reasons why requirements determination is difficult:

• User limitations in terms of their ability to express correct requirements


• Lack of awareness of what can be achieved with an information system (both under- and over-estimating an
IS’s capabilities);
 Different interpretations of requirements by different users;
 Communication issues can result from the complex web of interactions that exists between different users;
 Existence of biases amongst users, so that requirements are identified on the basis of attitude, personality or environment
rather than real business needs.
Task 2: Requirements Analysis

- Upon conclusion of the requirements determination stage of systems analysis, the analyst has a statement of user
requirements – usually descriptive (narrative) and not precise.


- To develop an appropriate system, the designers must have a clear understanding of the user requirements.
This implies that a full understanding of the user needs is required of any analyst – he/she has to deploy a method that
enables him/her to precisely communicate to the user his/her understanding of the requirements (i.e. he/she must develop a
logical (user-oriented) design of the information system).
- The main tools used in modelling system requirements are examined below.
Modeling Tools:
- Much of your work as a systems analyst involves modelling the system that the user wants – using such tools as
DFDs, ERDs, decision tables/trees, structured English, etc.
- These tools are used to:
1. Focus on important system features while downplaying the less important ones;
2. Discuss changes and corrections to the users’ requirements with low cost and minimal risk;
3. Verify that the analyst correctly understands the users’ environment and has fully documented it – for designers and
programmers to understand and build it.
Task 3: Requirements specification

This is the activity of translating the information gathered during the analysis activity into a set of requirements documented in
the Software Requirements Specification (SRS) document, which is used as a reference.

Content of Requirements Document:

1. Management Summary: This section introduces the report and sketches the objectives of the project, development efforts to
date, etc.
2. Introduction
3. Information System Background: This describes the problems with the current system, benefits and objectives of the new
system, scope, and results of preliminary study.
a) Scope
Give a one sentence explanation of what the product will do. Explain why this product is needed. Discuss the specific
problems that this product will attempt to solve. Present any domain-specific background necessary to understand the
problem. Include a measurable statement of the intended benefits or objectives to be achieved by developing this product.
b) Product Features: Give a brief summary or bullet list of the major features that the software will perform.
c) User characteristics: Describe general characteristics of the intended user of the product, including age, education
level, specific experience, and technical expertise.
d) Constraints
Describe any constraint that may limit the choices available to the developers, such as regulatory policies, intellectual property
restrictions, high level language requirements, etc.
e) Assumptions and dependencies
List any factors or dependencies that the developers may assume will exist that may affect the software product.

4. Functional Requirements: This section presents the logical design of the new system – i.e. DFDs, ERDs, DDs, process
descriptions. For large systems, the diagrams, esp., low level DFDs are placed in the appendix.


5. Environmental Requirements: This section documents operating constraints (volumes, size, frequencies, and timings); external
constraints (what state regulations say, tax, etc); hardware and software constraints; control and security requirements.
6. Informational Requirements: This section describes the information requirements from an external perspective only. Only
data that is visible by the end user is described.

a) Data Model: Include one or more visual depictions of the data and relationships in the problem domain using some
standard notation such as Entity-Relationship Model, Data Flow Diagram, State Transition Diagram, etc.
b) Data Dictionary: Provide an organized, alphabetical listing of all data elements that are pertinent to the system, with
precise, rigorous definitions. Each item must follow the data dictionary notation (an illustrative entry is sketched after this outline).

7. Alternatives: This presents alternatives proposed for the new system, reasons for the infeasible options, advantages and
disadvantages of the feasible option(s) and a summary of their costs and plans.
8. Recommended Alternative: This documents the rationale for the alternative option and its details.
9. Time and Cost estimates: For the recommended option, this section presents costs, schedules, and staffing requirements.
10.Appendices: Presents documents that did not appear in earlier sections: systems request, preliminary report,
references/sources of information, questionnaires, interview guides, etc.
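As an illustration of data dictionary notation, using the hypothetical library example (the element names are assumptions; "=" reads "is composed of", "+" reads "and", "{ }" denotes repetition, and "* *" encloses a comment):

    Loan          = Adm_No + Book_ID + Date_Borrowed + Due_Date
    Weekly_Report = Report_Date + {Book_ID + Title + Adm_No}
    Adm_No        = *student admission number; 8 characters*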
Task 4: Requirements validation

• Checks requirements for realism, consistency and completeness.


• The SRS document must be modified to correct any errors that are discovered.
Requirements imprecision

Misunderstanding of user requirements is fatal! Always avoid ambiguous requirements statements and requirement
assumptions.

When requirements are wrong (due to ambiguities, assumptions and misunderstanding):

i. The system may cost more than projected.


ii. The system may be delivered later than promised.
iii. The system may not meet user expectations; dissatisfied users may not use it.
iv. Cost of maintenance and enhancement may be excessively high.
v. The system may be unreliable and prone to errors and downtime.
vi. Reputation of IT staff or the development team may be tarnished.
System requirements should meet the following criteria:

i. Consistent: Requirements should not be conflicting or ambiguous.


ii. Complete: Requirements should describe all possible system inputs and responses.
iii. Feasible: Requirements should be satisfied within available resources and constraints.


iv. Required: The requirements should be truly needed.


v. Accurate: The requirements should be stated correctly
vi. Traceable: The requirements should directly map to the functions and features of the
system.
vii. Verifiable: The requirements should be testable.

• In principle, requirements should be both complete and consistent.


• In practice, it is impossible to produce a complete and consistent requirements document.

In summary, the objectives of the analysis phase are to:


 Understand how the current system operates using suitable fact finding techniques.
 Understand the objectives and terms of reference which reflect the scope of the investigation
 Produce the requirements specification.

DATA GATHERING METHODS


As a fact-finding process, systems analysis uses the following data gathering techniques (both for systems analysis per se and for
evaluation):

• Interviewing
• Questionnaires
• Observation
• Records review
• Prototyping
• Ethnography

a) Sampling:

Before using any of these tools, the population of study must be sampled – picking a manageable number of elements e.g.
people, documents, etc. There are several methods of sampling: simple random, stratified, systematic random,
cluster/area/multiphase/multistage, accidental, quota, purposive, and convenience sampling techniques.

b) Interviewing
 It is the most commonly used and most productive fact-finding technique.
 It involves direct conversation with colleagues, users and management. It is a planned meeting during
which an analyst obtains information from another person.


 This technique requires the analyst to be a good listener.


 It consists of the following steps:
i) Determine who to interview:
 The principle is to “ask the right people the right questions” i.e. users/management at the various levels
in the organization should be asked the right questions in order to get an accurate picture of the system under study.
– Sample your population.
ii) Establish objectives for the interview:
 Determine the general areas to be discussed and the specific facts for each of the areas.
 Plan to solicit ideas, suggestions and opinions from the interviewees (different categories of people will
provide different information – so each interview will have different objectives.)
NB: objectives provide a framework to decide on specific questions to ask.

iii) Prepare for the interview:


 Schedule a meeting with the person to be interviewed for a given day at a given time.
 Call the person one hour or so before the interview to confirm availability.
 Schedule another interview (if interruption occurs).
 Keep departmental heads informed of your meetings with their staff members through memos.
 Prepare interview questions ahead of time (semi-structured interviews are quite okay since unexpected
subjects usually arise). – have open- and close-ended questions.
 Send the list of questions to the interviewees several days before the interview meeting – for them to
prepare – together with the purpose, date, time, and location for the interview.
 Include a list of documents that users/management should bring to the meeting.
iv) Conduct the interview:
Once through with the above, conduct the interview; the following steps are usually followed:
• introduce yourself (and establish a rapport with interviewee),
• summarise the project objectives and progress,
• summarise your interview objectives,
• ask questions (your role is to listen to answers), record/notes,
• review the interview responses and non-verbal cues (summarise the main points in the interview),
• thank interviewee.
v) Document the interview:
• During the interview, take steps to ensure that you do not forget the information from the
interview.


• Record what transpired during the interview – taking notes – main points to aid recall later (and
not everything). Tape recording is also a good way of capturing responses.
• Send memo expressing appreciation of the contribution – before and during the interview. Attach
written answers to questions for correction of any misconception.
vi) Evaluate the interview:
• Analyse the interview and the interviewee (to eliminate bias in answering questions). Distil the
proper answers from those that might not be correct.
• In other words, establish whether some answers were biased (for or against the establishment)?
Valid/reliable? Was termination of unproductive interviews tactful? Etc.
• Advantages

• Analyst can prompt and probe for more in-depth answers.


• Interviewer can reword or adapt the question.
• Analyst can observe the respondent’s non-verbal cues (or behaviour) and probe further.
• Disadvantages

• Time-consuming and therefore costly technique of fact-finding.


• Its success is highly dependent on the interviewer’s human relations skills.
• May be impractical due to the location of interviewees.

c) Questionnaires
• A questionnaire is a structured interview form with questions designed so they can be answered without
a face-to-face encounter – respondents write answers to the questions on the form.
• This requirements acquisition method is useful in gathering attitudes, beliefs/opinions, behaviours and
characteristics of a large number of (widely dispersed) people on, for example, work loads, reports, volume of work,
difficulties, etc.
• Since a questionnaire is usually completed by the respondent alone (without interviewer being present),
it has to be well designed. But designing a good (simple and comprehensive) questionnaire is a difficult task!
• In general, a questionnaire should have three sections: a heading – which describes the purpose of the
questionnaire; classification section – which collects information used for analysing and summarising the total data e.g. age,
sex, grade, job title, location, etc; and data section – which contains questions designed to elicit the specific information
sought by the systems analyst.
Types of Questionnaire:

i. Free-format questionnaire:
• This offers the respondent greater latitude in the answer – it has a space provided after the question.


• It has open-ended questions e.g. What reports do you receive? How do you use the reports? Are there
problems with these reports? Why?
NB: responses from this questionnaire are difficult to tabulate.

ii. Fixed-format questionnaire:

• This contains questions that require selection of predefined responses – the respondent chooses from the
available answers. It is a questionnaire with closed-ended questions.
• Although responses obtained through this type are easier to tabulate, the respondent is restricted to the
available options only – valuable additional information cannot be given!
There are three types of fixed-format questions:

 Multi-choice questions – give respondent several answers to choose from. The respondent should be told if
more than one answer should be selected. They may allow free-format questions when none of the options apply e.g. Are the
reports you receive current? Yes / No. If no, please explain.
 Rating questions – these give the respondent a statement and request him/her to state an opinion by
selecting from the (negative and positive rating) options given e.g. The reports produced (by the IS) are delayed: strongly agree,
agree, no opinion, disagree, strongly disagree.
 Ranking questions – these give respondents several possible answers to be ranked in order of preference or
experience e.g. Rank the following transactions according to the amount of time you spend processing them: __% new orders
__% order cancellations __% order modifications ___% payments
Steps in the preparation of questionnaire:

• Determine the facts and opinions that should be collected and from whom. Consider how the population
should be sampled (if large).
• Determine whether free- or fixed-format questions will produce the best answers – based on the facts and
opinions. One that combines both open and close-ended questions is often used.
• Formulate the questions and examine them for construction errors and possible misinterpretations. Avoid
personal bias or opinions. Edit the questions accordingly.
• Test the questionnaire on a sample of respondents – to ensure validity and reliability. Edit the questions.
• Make adequate copies of the questionnaire and distribute to the respondents.
Advantages
• Respondents can answer at their convenience, which increases the chance of forms being returned quickly.
• Relatively inexpensive to administer to a large group of respondents – saves the time and money required for trained
interviewers.


• Allow respondents to maintain anonymity – hence they are more likely to give real facts rather than what they think you
want to hear.
• Easy to tabulate and analyse responses from this technique.
• It can be used to supplement other techniques – before interviews (to enable respondents assemble the
required information) and after interviews/observation (to verify data collected).
Disadvantages
• Some questionnaires are not returned – low response rate.
• Respondents may find it difficult to write down their requirements, especially for open-response questions – this takes
longer than an interview, so questionnaires are not well suited to open-ended questions.
• Questions are not answered exhaustively (or completely) – there are no follow-up questions, i.e. questionnaires
are inflexible and questions cannot be reworded.
• Other techniques e.g. observation cannot supplement it – analyst cannot observe nonverbal cues.
• Generally, it is difficult to design a questionnaire.
d) Observation
• This technique involves looking at/watching the users of the system as they interact with the system to be
able to understand the information system (their needs, problems and difficulties).
• Often, it is used when validity of data obtained through other methods is in question or if the users cannot
clearly explain certain aspects of the system.
• The analyst should have a checklist of what is to be observed. Some of the things targeted for observation
are the current operations/procedures, all processing steps and results/output, pertinent forms/reports, people who interact
with the system / what they do; completeness, timeliness, accuracy, and form of reports, etc.
• When preparing the checklist/observation guide, consider the following:
i. Ask sufficient pertinent questions to ensure that you thoroughly understand the present operations of the system.
ii. Observe all the steps in the information processing cycle and note the output from each procedural step.
iii. Examine each pertinent form, record, and report. Determine the purpose each item of information serves.
iv. Consider the work of each person associated with the system, keeping in mind the following questions: what is
received from other people? What information does this person generate? What tools does he/she use in certain procedures?
To whom is the information passed? What questions do workers ask each other? Etc.
v. Consult those who receive reports and ask them about the reports’ timeliness, completeness, accuracy, etc. what
should be removed? Added?
Advantages:

• Analyst can see exactly what is being done – complex tasks that could not be clearly explained in words are
identified.


• It is good in obtaining data on, say, the physical environment of the task e.g. layout of documents/reports,
noise level, etc.
• It is relatively inexpensive (compared to interviews and questionnaires) – number of employees involved as
well as amount of copying.
Disadvantages:

• People are uncomfortable when being watched – so they may behave differently. This is called the
Hawthorne effect: being observed alters users’ behaviour, so they may let you see only what they think you want to see.
• The work being observed may not involve the level of difficulty or volume normally experienced by the
users i.e. it may not be representative.

e) Document Review

This is basically reading/studying the existing records or written documents (system documentation such as reports, procedure
manuals, memos, etc.) in order to understand an information system’s operations, procedures, etc.

• As we noted earlier (discussion on problem definition), the documents reveal where the organisation has
been and where management believes it is going as well as its information requirements (information needs and user
complaints/problems).
• Quite often documents are a useful starting point in requirements acquisition – in an analyst’s attempt to
understand the system.
Documents examined fall in two broad categories:

 Internal Documents about the organisation / IS: These are all documents produced within the organisation e.g.
organizational charts, job descriptions and specifications (show tasks carried out), prospectuses, annual and other periodic
reports, procedure manuals, forms, files maintained, memos, letters, handbooks, and so on. They should be examined for
content and/or indications of errors/misconceptions. They also enable the analyst to ask the right questions (later, when
interviewing).
 External Documents about the organisation / IS: Include news articles, external audit reports, etc. that may
reflect on the weaknesses/strengths of the current system.
Advantages
• Quite useful as a starting point in data collection – helps formulate appropriate questions, or decide who to
interview.
• It helps the analyst gather basic facts about the information system and the organisation.


Disadvantages
• Some documents show the official state of affairs; in practice, staff members modify the documents e.g.
procedure manuals.
• Since systems are dynamic, written documents become out-of-date with time.
• Time-consuming and costly to read all documentation.

f) Discovery Prototyping
• Prototyping is the development of a preview (prototype) of the future system – for the users to have a look
and feel of how the system will be.
• Although prototyping is generally a design technique, it can be used to gather and analyse system
requirements. It is used to seek users’ reactions, suggestions, innovations, and revision plans (in order to make improvements
to the prototype).
• The philosophy underlying this fact-finding technique is that “users will recognise their requirements when
they see them”.
• Usually, only areas where requirements are not clearly understood are prototyped – meaning that this fact-
finding technique supplements the other methods of data collection e.g. interviewing, records review, observation, etc.
NOTE:
Generally, technology other than that used for the final software/solution will be used to build discovery
prototypes. It is recommended that the prototype be functional – horizontally and vertically – so that users can experiment with
inputs and outputs.

Advantages:

• Allows users and developers to experiment with software and develop an understanding of how the system
might work.
• Aids in determining the feasibility and usefulness of the system before high development costs are incurred.
• Enhances commitment to the development process and acceptance of the system by the users – i.e. users
feel involved since a culture of democracy is created through involvement.
Disadvantages:

• Require developers/analysts to be trained on the application used.


• Users may develop unrealistic expectations on performance, reliability, and features of the prototype.
What to look for in fact-finding


During data-gathering, attempt to discover the following about the present system:

• Objectives – what it attempts to accomplish;


• Inputs to the system – form, origin, contents, volumes, etc.
• Files maintained by the present system – specific files, frequency of updates, etc.
• Outputs – nature and contents i.e. form, frequency, purpose, whether extremely necessary or not.
• Processing carried out – how input is used to update the files maintained and produce output, the equipment used,
accuracy checks performed when processing, time constraints, etc.
• Organisational structure – data processing department and personnel – some steps/activities may not need to be
followed to achieve some goal(s); problems could be due to dissatisfied staff; determine the general attitude and skills of personnel
– will they need retraining or job restructuring?
• Problems and difficulties encountered – bottlenecks, duplication, and weaknesses.
• Costs of the present system – tangible/intangible, etc.
• Views / Suggestions for improvement – under ideal conditions, what information does management need to
receive from the system? What improvements do users need?
NB: the above reveal detailed operations of the current system and what is required of the new system.

THE DESIGN STAGE

When the requirement specification is complete, it must be approved by management. The document is very important, as the
new system will be judged on the basis of it.

The design stage of development should thus not be started until a formal signed agreement is reached on the requirement
specification.

Differences between a logical design and physical design

A logical design is a blueprint describing how the new system will work. A logical design is
expressed in terms of what the system processes, i.e. the procedures, inputs, outputs, storage and controls the new system has.

Physical design involves taking the logical design blueprint and turning it into actual programs and databases, and
purchasing the hardware and software necessary for the functioning of the system.

Physically designed systems should meet the following criteria:

 Flexible: A good design enables future requirements of the business to be incorporated without too much
difficulty, and the system should reflect business changes.

 Maintainable: A good design should be easy to maintain in order to reduce maintenance costs.


 Portable: A good design is portable – that is, it is capable of being transferred from one machine environment
to another with a minimum amount of effort.

 Ease of use: A good design will result in a system that is user-friendly and easy to understand.

 Reliable: A good system should be reliable, especially in areas such as process control or banking.

 Secure: A good system design is secure. It restricts access to authorized users only e.g. by introducing
passwords.

 Cost-effective: A well-designed system meets user needs cost-effectively.

IMPORTANCE OF REQUIREMENT SPECIFICATION


Before the actual design and implementation of a system, the requirement specification must be agreed with the potential user as
a full statement of facilities needed. When a system meets the requirements stated, then the designer can consider that the
development has been concluded successfully.
Perhaps the most significant reason for the dissatisfaction many managers have with computer systems is the lack of importance
placed on a good requirement specification. It clarifies the purpose of the new system – when one is not produced, or little
attention is paid to it, nobody – user, designer, manager or clerk – will really know the purpose for which the system was
intended. It also has a major advantage for the designers in that they can design a system without interference. (Without an
agreed requirement specification, the user will often not accept that what is produced is what was asked for, and will keep
thinking of "improvements" to the system.) The design stage of development should thus not be started until a formal
agreement is reached on the requirement specification. Ideally, the user department management and the commissioning
management should sign a written statement of agreement.
When a system has been specified, the possibility of using ready-made packages should always be investigated, since this will
save a large amount of development work. If packages which may be suitable are identified, the facilities they offer will need to
be compared with those required by the users. Sometimes it may be more cost-effective for users to get 90% of what they said
they required if this can be obtained from a proven package, rather than waiting for a "100%" system to be developed.

INFLUENCES ON DESIGN
Whether packages are used or a new system developed from scratch, the desired aims can usually be achieved in
several different ways, and so the final design is almost bound to be a compromise based on a whole set of influences: cost,
accuracy, control, security, availability, reliability, and so on. The design must be acceptable to programmers and users.
(a) Cost
This will involve:


 Development cost, including the current design stage. (The feasibility and analysis stages are separately funded.)
 Operations cost, including data preparation, output handling supplies and maintenance.
(b) Accuracy
This means appropriate accuracy, resulting from a compromise between error avoidance and the cost of this.
(c) Control
The design must permit management control over activities, one facility of this type being the provision of control data in the
form of exception reporting.
(d) Security
This is a fairly complicated aspect of design, largely concerning data security and confidentiality.
(e) Availability
This refers to availability of the resources which will comprise the operational system (staff, stationery, software, space, etc.).
This is the responsibility of the analyst in the sense that he or she must be sure that, at the time of implementation, these are
available.
(f) Reliability
The design has to be reliable, that is sufficiently strong in itself to withstand operational problems. In addition, there must be
alternative computing facilities as "back-up" in the case of breakdown, and sufficient staff to deal with peaking of workload.

THE DESIGN PROCESS

We take a quick look at the way in which we might approach the design process.

a) Start with the Results

One approach to design is to consider the output requirement first; this is the main vehicle for achieving the
objectives. We look at the output from the point of view of content (the information to be produced) and of
format (the way in which this
information is to be presented).

b) Organise the Data

We need to input the variable and fixed information for each procedure we want to carry out. Derived
information will be produced by the computer and constant information such as report headings, etc. can be built
into the computer program.

c) Design the logical system

This will be carried out by taking into account a series of considerations as we consider below.

d) Design the physical system

The logical design will be shown to the user for approval or suggestions. When
approval is obtained, the physical design is prepared by the designer. It will consist of:


 Screen presentations;
 Input descriptions;
 Output descriptions;
 Processing descriptions.

The physical design implements the logical design in the technical specifications of the proposed system so as to
optimise the use of hardware and software. All the service requirements must be achieved, both technical and
administrative.

Steps in systems design process

1. REVIEW OF THE SYSTEMS REQUIREMENTS DOCUMENT


I don’t want to dwell on this so much. I am sure Egerton, JKUAT, KNEC and KASNEB students I teach this unit are able to
discuss this point.
2. OUTPUT DESIGN AND ITS CONSIDERATIONS

 Design printed reports i.e. detail, brief, etc.


 Design screen outputs i.e. reports, help, instructions, etc.
 In other words, design the physical layout for every system output and define the physical disposition and
handling of the output.
Output can be presented in many ways. Consider first whether the information needs
to be a permanent record. For example, an on-line enquiry system needs to supply the user
with an immediate report which will be acted upon; after this the information is no longer
required. A screen display would thus be the most appropriate output. On the other hand,
employee wage packets must be printed as a permanent record for the employee.
Four further crucial aspects are:
(a) WHERE is it to be Forwarded?
Which location or departments require the information? In some cases it may be
necessary to send output to people at a different geographical location, and the method
of transmitting it will affect the design of the system.
(b) WHEN is it to be Forwarded?
The timing and related accuracy with which information is made available are crucial.
Not only must the time interval be adhered to (e.g. daily, weekly, etc.) but on several
occasions specific times of day will be stipulated.
(c) WHAT is to be Forwarded?
Quite simply, only that which is required. Any additional information makes the output


unfamiliar to the recipient, and the unexpected is often ignored, hence additional
information – even if thought to be useful – will probably be rejected.
(d) HOW is it to be Forwarded?
This aspect covers such items as the method of delivery and the format in which it is
provided.

3. INPUT DESIGN AND CONSIDERATIONS


Determine how the data will be input to the system, design source documents for capturing the data,
design the audit trails and system security measures.

 Designing or modifying source documents for data capture


 Determine how data will be entered and input.
 Design data entry screens
 Design user interface screens (help screens / instructional messages)
 Design audit trails and system security
Input design considerations
As you are already aware, commercial data processing comprises four main functions: input,
output, processing and storage. The systems designer will consider each and decide the
most appropriate method to be adopted.
The designer must specify a complete mechanism from the origination of the data through to its input to the computer. The following
important factors must be considered:
(a) The data capture system must be reliable and reduce the potential for error to a
minimum. The more separate stages there are, the greater the chance of error.
(b) The system must be cost-effective; the cost of reliability must not exceed the cost of
errors, should they occur. For example, there is little point in spending Ksh. 100,000 per
year on a very reliable system which is error-free when a Ksh. 5,000 system would
eliminate most errors and the remaining errors would cost only Ksh. 1,000 per year to correct.
(c) The geographical location of the places at which the data is originated will have a
great effect on the method chosen.
(d) The response time for output must be considered. The faster the response time, the
faster the data capture system must be.
So, what kind of objectives should the systems analyst have as far as data capture and
output are concerned? We may categorise them like this:
 to minimise the total volume of input, as far as is practicable;
 to minimise the extent of manually prepared input;


 to design input to the system so that the work in preparing it is as simple as possible;
 to minimise the number of steps between origin of the data and its input to the computer.
Aspects of the minimisation of input volume include:
 Data which is repeated needs to be entered into the system only once.
 Editing and spacing can be omitted from input which is keyed; this makes for more
rapid input. Editing (for example, inserting decimal points) can be implied from the
general format when the input is read into the computer.

a) Screen Design

 Here, we are concerned with input and user interface screens.


 All screen displays serve 2 general purposes:
 To present information (i.e. reports); and
 To assist the operator using the system.
When designing screens, we should consider:
 All screen displays should be attractive and uncrowded;
 Information on a single screen should be displayed in a meaningful logical order;
 Screen presentation should be consistent i.e. screen titles, messages/instructions should appear in the same
general locations on all types of screen displays & terminology should be consistent e.g. delete/erase/kill, cancel/quit/exit,
enter/CR/return, etc.
 All messages, including error messages, should be explicit, understandable, and politely stated. Avoid
messages like “Wrong! You have failed!” or “Error 10” that do not help the user.
 Messages should remain on the screen long enough to be read – i.e. require the operator to press a key or click
the mouse (usually an OK button) for the message to disappear.
 Special video effects (e.g. colour, blinking, reverse video) and sound effects should be used sparingly.
 Feedback is important. Error messages provide negative feedback, but reassuring positive feedback is also
necessary e.g. cursor should automatically move to the next screen/box, clearing screen once option has been selected, etc.
If the next screen takes long to load, display an intermediate screen explaining the reason for the delay e.g. Please wait…,
Loading…, Time left: 33 seconds, etc.
 All input screen design layouts must be documented on the screen display layout forms for later use by
programmers.


Types of Screens

i. Data Entry Screen/Input design


Most commonly used data entry technique.
It consists of a form – displayed on the screen for the operator to fill in by entering data, field by field.
Guidelines of design include:
 Restrict access to only those areas on the screen where data is to be entered – not labels.
 Explicitly caption every field to be entered.
 If a field must be entered in a certain format, display the format e.g. Date (ddmmyyyy):__.
 For consistency, require an ending keystroke for every field that is entered e.g. Enter key.
 Don’t require screen users to enter special characters e.g. currency signs, slashes for dates, etc.
 Don’t require that decimal points always be typed e.g. KShs. 100.00, 2000.00.
 For fields with a standard value or relatively constant, display that value as a default – user can either accept or type
another e.g. date of creation, department, province, etc.
 For coded fields with a few valid values, display the values and their meanings e.g. A = Active, C = Completed, X
= Cancelled.
 Provide a means for leaving the data entry screen without creating an input record e.g. pressing ESC key, clicking on
Cancel.
 After the form has been completely filled in and validated, give the screen user a final opportunity to examine and
accept or reject the complete set of data before it is committed for input e.g. Add this Record? Yes/No
 Provide a means for moving from field to field on the form e.g. tab, arrow keys, mouse, etc.
 If the operator will be using some source document during data entry, design the screen form layout to match it (be
similar) e.g. course registration, library user registration, etc.
 Allow the operator to add, change, delete, and view records. Give feedback on the changes e.g. Apply these changes?
Yes/No, Delete this record? Yes/No
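The guidelines above can be illustrated with a small console sketch. This is a minimal, hypothetical example (the field names, prompts and status codes are invented for illustration) showing captioned fields, a displayed format, a default value, a coded field with its valid values displayed, and a final confirmation before the record is committed.

# Minimal data-entry sketch (hypothetical fields) illustrating the guidelines above.

VALID_STATUS = {"A": "Active", "C": "Completed", "X": "Cancelled"}  # coded field values

def prompt(caption, default=None, fmt=None):
    """Explicitly caption the field, show the expected format and any default value."""
    hint = f" ({fmt})" if fmt else ""
    if default is not None:
        hint += f" [default: {default}]"
    value = input(f"{caption}{hint}: ").strip()
    return value or default  # accept the default when the operator just presses Enter

def capture_record():
    record = {
        "name": prompt("Student name"),
        "date": prompt("Registration date", fmt="ddmmyyyy"),
        "dept": prompt("Department", default="ICT"),
    }
    # Coded field: display the valid values and their meanings, then validate the entry.
    print("Status codes:", ", ".join(f"{k} = {v}" for k, v in VALID_STATUS.items()))
    while (status := prompt("Status", default="A")) not in VALID_STATUS:
        print("Invalid code - please choose one of:", ", ".join(VALID_STATUS))
    record["status"] = status
    # Final opportunity to accept or reject the complete set of data before committing it.
    print(record)
    return record if prompt("Add this record? (Yes/No)", default="Yes").lower().startswith("y") else None

if __name__ == "__main__":
    print(capture_record())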

ii. Process control screen/dialogue screen design


Process control or dialogue screens are input screens used for entering end user processing requests. Two methods are
commonly used: menu input and prompted input.

Menu Screens

 Menu screens display a list of processing options and allow the user to select one of the options (by typing or
clicking).


 If there are many processing options, present the options in a logical hierarchy of menus (i.e. nested menus) –
start with main menu which has broad classes of processing choices which when selected lead to submenus – with more
specific processing choices.

Main Menu

1. Student score
2. Class list
3. Report
4. Exit the system

Score Processing (submenu of option 1)

1. Add scores
2. Edit scores
3. Exit to main menu

Report Processing (submenu of option 3)

1. Print ABC report
2. Print CDE report
3. Print FGH report
4. Exit to main menu

NB: May be pull-down/drop-down menus.


 Ensure that menu hierarchy reflects groupings that are logical from the users’ point of view.
 Provide shortcuts to the desired menu for experienced users.
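A sketch of how such a nested menu hierarchy might be driven in code is shown below. The menu labels reuse the illustrative Main Menu / Score Processing / Report Processing example above; everything else (function names, structure) is an assumption made for illustration.

# Nested (hierarchical) menu sketch based on the illustrative menus above.

def run_menu(title, options):
    """Display a menu, read the operator's choice and invoke the selected action."""
    while True:
        print(f"\n{title}")
        for number, (label, _) in enumerate(options, start=1):
            print(f"{number}. {label}")
        choice = input("Select an option: ").strip()
        if not choice.isdigit() or not 1 <= int(choice) <= len(options):
            print("Invalid choice - please try again.")
            continue
        label, action = options[int(choice) - 1]
        if action is None:          # 'Exit' options carry no action
            return
        action()                    # a submenu is simply an action that runs another menu

def add_scores():  print("Adding scores ...")
def edit_scores(): print("Editing scores ...")
def class_list():  print("Printing class list ...")

score_menu = [("Add scores", add_scores), ("Edit scores", edit_scores), ("Exit to main menu", None)]
report_menu = [("Print ABC report", lambda: print("ABC report ...")),
               ("Print CDE report", lambda: print("CDE report ...")),
               ("Exit to main menu", None)]

main_menu = [("Student score", lambda: run_menu("Score Processing", score_menu)),
             ("Class list", class_list),
             ("Report", lambda: run_menu("Report Processing", report_menu)),
             ("Exit the system", None)]

if __name__ == "__main__":
    run_menu("Main Menu", main_menu)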
Prompt screens
With prompted input process control methods, the user types something in response to a prompt that appears on the screen
(e.g. commands, statements) in the required syntax.
Prompt screen display is initially an almost blank screen with a single prompt; when the user responds to the first prompt,
usually the second prompt displays below the first prompt and response e.g.
Do you wish to add, edit, delete, display or print records?
>PRINT
Which report do you want printed?
>2004 graduates
Printing ….


Question/Answer screen is an example of prompted input screen.

Another type of prompted input is the natural language screen – which accepts near-natural English sentences.
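A prompted dialogue of this kind can be sketched as a simple loop in which each response determines the next prompt. The commands and report names below mirror the illustrative dialogue above and are not from any real system.

# Prompted-input sketch mirroring the dialogue example above (hypothetical commands/reports).

def prompted_dialogue():
    action = input("Do you wish to add, edit, delete, display or print records?\n> ").strip().lower()
    if action == "print":
        report = input("Which report do you want printed?\n> ").strip()
        print(f"Printing {report} ....")
    elif action in {"add", "edit", "delete", "display"}:
        print(f"Processing '{action}' request ....")   # each command would lead to its own prompts
    else:
        print("Unrecognised request - please answer add, edit, delete, display or print.")

if __name__ == "__main__":
    prompted_dialogue()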
iii. Help Screen Design
End users occasionally require additional information or assistance – so we design screens that provide this information – help
screens.
Help screens display text that explains concepts, procedures, menu choices, functions keys, formats, etc.
Users request help by typing /clicking on a special character (e.g. ?) or pressing a special key (e.g. F1) or both.
Provide context-sensitive help, i.e. help relevant to what the user was doing when help was requested. You may also
include a list of help topics from which the user can choose the appropriate topic (or both).
All guidelines for designing other screens apply, as well as:
 Provide a direct route for users to return to the point where help was requested.
 Title every help screen to identify the help text that follows.
 Write the help messages/text in an easy to understand, everyday language.
 Present attractive screens, they should not be too crowded.
 Provide examples whenever appropriate.
4. DESIGN THE DATABASE
 During analysis you describe all the data elements in the information system, create DFDs, designate data
stores, assign the data elements to the data stores, and normalise the data store designs. During design, you only evaluate and
refine the data store designs.
Steps:
 Create the initial ERD
 Assign all data elements to entities
 Normalise entities and create the final ERD
 Verify all data dictionary entries (ensure DD is complete).
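As a small illustration of turning a normalised entity design into a physical database, the sketch below creates two related tables with SQLite. The entities (Student, Score) and their attributes are hypothetical examples, not part of the manual's case study.

# Creating a small normalised database (hypothetical Student/Score entities) with SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")           # an in-memory database for illustration
conn.executescript("""
CREATE TABLE student (                        -- one entity per table after normalisation
    student_no   TEXT PRIMARY KEY,            -- primary key identified during ERD design
    name         TEXT NOT NULL,
    department   TEXT NOT NULL
);
CREATE TABLE score (
    student_no   TEXT NOT NULL REFERENCES student(student_no),   -- foreign key links the entities
    unit_code    TEXT NOT NULL,
    mark         INTEGER CHECK (mark BETWEEN 0 AND 100),
    PRIMARY KEY (student_no, unit_code)       -- composite key: one mark per student per unit
);
""")
conn.execute("INSERT INTO student VALUES ('S001', 'Wambui', 'ICT')")
conn.execute("INSERT INTO score VALUES ('S001', 'SAD101', 72)")
for row in conn.execute("SELECT s.name, sc.unit_code, sc.mark FROM student s JOIN score sc USING (student_no)"):
    print(row)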


5. DESIGN THE SYSTEMS PROCESSING

Design the processing that will be accomplished by the systems software and utilities as well as any processing that will be
performed by people – i.e. itemize in detail all the procedures/tasks that will be executed by personnel and machines.
6. PROGRAM DESIGN

Design the program software – programs to accomplish process descriptions that are to be handled by machines/computer.
Use tools like program flow charts, structured English statements, pseudocodes, decision trees/tables etc.
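Of the program design tools listed above, a decision table is one that translates very directly into code. The sketch below shows a hypothetical grading decision table expressed as data and then evaluated in order; the grade boundaries are invented for illustration.

# A small decision table (hypothetical grade boundaries) expressed as data and evaluated rule by rule.

DECISION_TABLE = [
    # (condition on the mark, resulting action/grade)
    (lambda mark: mark >= 70, "A"),
    (lambda mark: mark >= 60, "B"),
    (lambda mark: mark >= 50, "C"),
    (lambda mark: True,       "Fail"),   # the 'else' rule
]

def grade(mark):
    for condition, action in DECISION_TABLE:
        if condition(mark):
            return action

print([grade(m) for m in (82, 65, 40)])   # ['A', 'B', 'Fail']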
7. SECURITY AND CONTROL DESIGN
This involves specifying measures to counter the risks and threats to data and information systems.
8. PREPARE AND PRESENT THE SYSTEMS DESIGN SPECIFICATION

Contents of System Design Specification

(a) General Overview

 This will define, in not too much detail, the facilities which the system provides in terms of the input, output and
storage provisions of the system.

 It will contain a system diagram, and will give the environment and basic operations
of the system and

 Describe its interface with existing systems.

(b) System Operation

 The detailed specification of the logic of the system is given.

 Essentially, this will show how the input data is converted into output information and when the files are updated.

 Each program will be shown and placed in relation to the other programs.

 Exact details of volumes of data and frequencies of processing will be described, together with any special requirements
for recovery and restart in the event of system failure.

(c) Hardware and Software Requirements

This section will define in detail the hardware and systems software which must be
available to enable the new applications system to operate successfully.

(d) Output Layouts

This section will contain detailed plans of the layouts of every output produced by the system.

(e) Input Layouts


This section will contain detailed plans of the layouts of all input to the system.

(f) Data Storage Specifications

Exact plans for every record to be used by the system will be specified, and the structure of the files or database will be given.

(g) Detailed Specifications

For each program or module the following will be given:

 A general functional description specifying the facilities to be provided by the program;

 Details of input;

 Details of output;

 A detailed specification of any particular processing required, e.g. actions in the event of errors, validation
procedures, any formulae to be applied, any standard subroutines to be used.

(h) Manual Procedures

This section of the system design specification will define the clerical procedures for the new system.

(i) Test Requirements

This section will define in detail the tests to which the system will be subjected, to prove that it has been correctly
manufactured.

(j) System Set-Up Procedures

The mechanisms for the transfer of data to the new system from the current one must be carefully designed.

The mechanisms will be both manual and computerised, requiring the definition of special forms, programs, clerical
procedures, etc.

(k) Glossary of Terms

A design specification will use jargon terms in order to be precise, so a glossary is needed to define them.

(l) Index

In a complex system, the specification will be very large, and an index to it will be a great aid.

Processing Considerations

(a) Batch Processing

The logical processes that have to be carried out assist us in breaking up the overall process into far smaller ones. As soon as we


carry out this split in the overall process, we are faced with the need to convey information from one process to the next. We
create an intermediate or work file in one process and use it as part of the input to the next process. Usually, the complete set of
input transactions will be handled by the initial process, and an intermediate file will be created ready for the next stage. This
approach assumes that all transactions will be available for processing at the same time, and is known as the batch processing
approach, as we discussed in a previous module of the course.

(b) Transaction Processing

Many systems require that a transaction be processed as soon as it is received; it cannot wait to be batched. For such a system we
cannot use an intermediate file. In such transaction processing systems (we previously called these on-line processing systems)
we may need to cope with a number of transactions being processed simultaneously and independently through the system.

We can now see that the time taken to react to a transaction - the response time - required of the system will influence the way
we tackle its design. The response time will also
influence the way in which the data is stored.

(c) Storage Considerations

The selection of an appropriate method for organising the stored data is another important feature of the designer's task. The
main points to be considered are:

 Volume of storage required;

 Method of access (sequential or random) of all processes;

 Volatility of the data;


 Activity of the collection of data in relation to all processes using it;

 Access (response) times required;

 Predictable additional volumes and uses.

(d) Maintenance and Expansion

The design of a system must take into account the more or less inevitable maintenance that will be needed to keep the system
up-to-date. If this inevitability is accepted, then design
may be developed in such a way as to make amendments easier to carry out than they
would otherwise be.

(e) System Design Constraints

 The budget: A well-designed system incurs greater expense, so the available budget limits the design options.

 Time: Time taken to produce a very usable system would increase development cost and delay system
delivery.


 Integration with existing system: Existing and planned systems may limit the options and available
features of a system.

 Knowledge and skills: The knowledge and skills of the development team may limit the designer’s options
as might the competence or computer literacy of potential users.

 Standards: Standards may drive the design tasks in a specified direction i.e. the final design may be
constrained by procedures and methods.

File organization methods

File organization is the arrangement of records within a particular file. There are several methods of storing files and retrieving
them from secondary storage devices.

a) Sequential file organization


In this organization records are stored and accessed in a particular order, sorted using a key field. The key field is used to
search for a particular record. Searching commences at the beginning of the file and proceeds towards the 'tail' of the file until the
record is reached.

Mainly used with magnetic tapes.


Advantages

 The approach is simple to understand.


 Easy to organize, maintain and understand.
 Reading a record requires only a key field.
 Inexpensive input/output media devices are used.
Disadvantages

 Entire file must be accessed even when the activity rate is very low.
 Random enquiries are impossible to handle
 Data redundancy is typically high.
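A minimal sketch of sequential organization and retrieval is shown below: records are kept sorted on a key field and a search reads from the head of the file towards the tail until the key is found. The record layout is a hypothetical example.

# Sequential file sketch: records sorted on a key field, searched from the start of the file.

records = sorted(
    [("S003", "Atieno"), ("S001", "Wambui"), ("S002", "Kiprop")],   # hypothetical records
    key=lambda r: r[0],                                             # sort on the key field
)

def sequential_search(key):
    for record in records:                 # start at the 'head' and read record by record
        if record[0] == key:
            return record                  # found - stop reading
        if record[0] > key:
            break                          # passed the point where it would appear in sorted order
    return None

print(sequential_search("S002"))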

b) Random or direct file organization


In this organization records are stored randomly but accessed directly. To access a file stored randomly, a record key is used to
determine where a record is stored on the storage media. Used in magnetic and optical disks.


Advantages

 Records are quickly accessed.


 File update is easily achieved.
 They do not require the use of indexes.
Disadvantages

 Data may be accidentally erased or overwritten unless special precautions are taken.
 Expensive hardware and software resources are required.
 Programming is relatively complex.
 System design around it is complex and costly.
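Direct (random) organization can be sketched with a simple hashing scheme: the record key is transformed into a storage address, so a record is located without reading its neighbours. The hashing function and file size below are assumptions for illustration.

# Direct (random) file sketch: the record key is hashed to a storage 'address' (bucket).

FILE_SIZE = 7                                  # number of storage cells (illustrative)
buckets = [[] for _ in range(FILE_SIZE)]       # each cell holds the records that hash to it

def address(key):
    return hash(key) % FILE_SIZE               # transform the key into a cell number

def store(record):
    buckets[address(record[0])].append(record)

def fetch(key):
    return next((r for r in buckets[address(key)] if r[0] == key), None)   # read only one cell

store(("S001", "Wambui"))
store(("S002", "Kiprop"))
print(fetch("S002"))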

c) Serial file organization


The records are laid out contiguously one after the other in no particular sequence. The records are stored one after the other
in the same order they come into the file and there exists no relationship between contiguous records. Used with magnetic
tapes.


When you select the medium for your sequential file, consider the following:

 Speed of access---Tape is significantly slower than disk. In general, most removable media storage
(magnetic, optical, and so forth) devices are slower than your fixed disks.
 Frequency of use---Use removable media devices to store relatively static files, and save your fixed
disk space for more dynamic files.
 Cost---Fixed disks are generally more expensive than removable media devices. The more
frequently you plan to access the data, the easier it is to justify maintaining the data on your fixed disks. For example, data that
is accessed daily must be kept on readily available disks; quarterly or annual data could be offloaded to removable media.
 Transportability---Use removable media if you need to use the file across systems that have no
common disk devices (this technique is commonly referred to as "sneakernetting").

d) Line-Sequential Organization
Line-sequential files are like sequential files, except that the records can contain only characters as data. Line-sequential files
are maintained by the native byte stream files of the operating system.


A line sequential file consists of records of varying lengths arranged in the order in which they were written to the file. Each
record is terminated with a "new line" character. The new line character is a line feed record terminator ('0A' hex).

Each record in a line sequential file should contain only printable characters and should not be written with a WRITE
statement that contains either a BEFORE ADVANCING or AFTER ADVANCING phrase.
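A quick sketch of line-sequential behaviour: each record is written as printable text terminated by a new line character, and records are read back in the order in which they were written. The file name is an assumption for illustration.

# Line-sequential sketch: variable-length text records, each terminated by a newline ('\n', hex 0A).

with open("students.txt", "w", encoding="ascii") as f:      # hypothetical file name
    for record in ("S001,Wambui,ICT", "S002,Kiprop,Business"):
        f.write(record + "\n")                               # the newline acts as the record terminator

with open("students.txt", "r", encoding="ascii") as f:
    for line in f:                                           # records come back in the order written
        print(line.rstrip("\n").split(","))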

e) Indexed sequential file organization


This method is similar to sequential method, only that an index is used to enable the computer to locate individual records on
the storage media. Used with magnetic disk.


You must define at least one main key, called the primary key, for an indexed file. You may also optionally define from 1 to
254 additional keys called alternate keys. Each alternate key represents an additional data item in each record of the file.
You can also use the key value in any of these alternate keys as a means of identifying the record for retrieval.

You define primary and alternate key values in the Record Description entry. Primary and alternate key values need not be
unique if you specify the WITH DUPLICATES phrase in the file description entry (FD). When duplicate key values are
present, you can retrieve the first record written in the logical sort order of the records with the same key value and any
subsequent records using the READ NEXT phrase. The logical sort order controls the order of sequential processing of the
record.

When you open a file, you must specify the same number and type of keys that were specified when the file was created. If the
number or type of keys does not match, the system will issue a run-time diagnostic when you try to open the file.


Advantages

 Records can be accessed sequentially or randomly.


 Records are not duplicated.
 Accessing of records can be fast if done randomly.
Disadvantages

 Storage medium is rather expensive.


 Accessing records sequentially is time consuming.
 Processing records sequentially may introduce redundancy.
The indexed sequential files may be accessed using 3 methods namely:
 sequential access;
 selective sequential access;
 random access.
 Sequential access is achieved by using the key fields; records are read one after another, until the one matching the
search key is found which then is read into memory.
 Selective sequential access is achieved using indexes with records of interest being accessed.
 Random access is achieved by moving forward and backward in the file in a non-orderly manner to access the record
of interest.
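The three access methods can be sketched together: records are held in key order, and a small index maps key values to positions, so the same data supports sequential, selective sequential, and random access. The records and the index granularity are illustrative assumptions.

# Indexed sequential sketch: sorted records plus an index from key to position.

records = [("S001", "Wambui"), ("S002", "Kiprop"), ("S003", "Atieno"), ("S004", "Mutua")]
index = {key: pos for pos, (key, _) in enumerate(records)}   # primary-key index

def sequential_access():
    return list(records)                        # read every record in key order

def selective_sequential_access(keys):
    return [records[index[k]] for k in sorted(keys) if k in index]   # only the records of interest, in order

def random_access(key):
    pos = index.get(key)                        # jump straight to the record via the index
    return records[pos] if pos is not None else None

print(random_access("S003"))
print(selective_sequential_access({"S004", "S001"}))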

f) Inverted List
In file organization, this is a file that is indexed on many of the attributes of the data itself. The inverted list method has a single
index for each key type. The records are not necessarily stored in a sequence. They are placed in the data storage area, but
indexes are updated for the record keys and locations.
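An inverted list can be sketched as one index per attribute (key type), each mapping an attribute value to the record locations that hold it. The sample records and attributes below are hypothetical.

# Inverted-list sketch: a separate index for each attribute, mapping values to record positions.
from collections import defaultdict

records = [
    {"student_no": "S001", "dept": "ICT", "year": 2},      # hypothetical records
    {"student_no": "S002", "dept": "Business", "year": 1},
    {"student_no": "S003", "dept": "ICT", "year": 1},
]

indexes = {attr: defaultdict(list) for attr in ("dept", "year")}
for pos, record in enumerate(records):                      # records stay where they are stored;
    for attr, inv in indexes.items():                       # only the indexes are updated
        inv[record[attr]].append(pos)

# Query by attribute without scanning the whole file:
print([records[p]["student_no"] for p in indexes["dept"]["ICT"]])    # ['S001', 'S003']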


g) Relative File Organization

A relative file consists of fixed-size record cells and uses a key to retrieve its records. The key, called a relative key, is an
integer that specifies the record's storage cell or record number within the file. It is analogous to the subscript of a table.
Relative file processing is available only on disk devices. In relative file organization, not every cell must contain a record.
Although each cell occupies one record space, a field preceding the record on the storage medium indicates whether or not
that cell contains a valid record. Thus, a file can contain fewer records than it has cells, and the empty cells can be anywhere in
the file.
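Relative organization can be sketched as a list of fixed-size cells addressed by an integer relative key, with a flag in front of each cell showing whether it holds a valid record. The cell count and record layout are assumptions for illustration.

# Relative file sketch: fixed-size cells addressed by an integer relative key; empty cells are allowed.

CELLS = 10
file_cells = [(False, None)] * CELLS            # (valid-record flag, record) for every cell

def write(relative_key, record):
    file_cells[relative_key] = (True, record)   # the relative key is the record's cell number

def read(relative_key):
    valid, record = file_cells[relative_key]
    return record if valid else None            # the flag tells us whether the cell is occupied

write(3, ("S003", "Atieno"))
write(7, ("S007", "Mutua"))
print(read(3), read(5))                          # ('S003', 'Atieno') None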

HUMAN-COMPUTER INTERFACE
Human-computer interaction (HCI) is the study of interaction between people (users)

and computers. It is an interdisciplinary subject, relating computer science with many other fields of study and research.
Interaction between users and computers occurs at the user interface (or simply interface), which includes both hardware (e.g.,
general purpose computer peripherals and major devices such as the Boeing B777) and software, which together present an
environment in which humans (from pilots to surgeons) are provided a wide extension of their native abilities.

Shneiderman (1987) suggests eight rules for HCI dialogue design:


 Dialogue should be consistent
 The system should allow users shortcuts through some parts of familiar dialogue
 Dialogues should offer informative feedback
 Sequence of dialogues should be organised into logical groups
 Systems should offer simple error handling
 Systems should allow actions to be reversed
 Systems should allow experienced users to feel as though they are in control rather than the system
 Systems should aim to reduce short-term memory load – users should not be expected to remember too much
The following are ten general principles for user interface design, known as "heuristics" - Nielsen's Ten Usability Heuristics.
i. Visibility of system status: The system should always keep users informed about what is going on,
through appropriate feedback within reasonable time.
ii. Match between system and the real world: The system should speak the users' language, with words,
phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making
information appear in a natural and logical order.
iii. User control and freedom: Users often choose system functions by mistake and will need a
clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support
undo and redo.


iv. Consistency and standards: Users should not have to wonder whether different words, situations, or
actions mean the same thing. Follow platform conventions.
v. Error prevention: Even better than good error messages is a careful design which prevents a
problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a
confirmation option before they commit to the action.
vi. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and
options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for
use of the system should be visible or easily retrievable whenever appropriate.
vii. Flexibility and efficiency of use: Accelerators -- unseen by the novice user -- may often speed up the
interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to
tailor frequent actions.
viii. Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or
rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes
their relative visibility.
ix. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain
language (no codes), precisely indicate the problem, and constructively suggest a solution.
x. Help and documentation: Even though it is better if the system can be used without documentation, it
may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's
task, list concrete steps to be carried out, and not be too large.


TYPES OF USER INTERFACES/ DIALOG OR INTERACTIVE STYLES

a) Direct manipulation
b) Menu selection
c) Form fill-in
d) Command language
e) Natural language
f) Mixed Interactive Styles
g) Graphical user Interface
h) Voice User interface

(a) Command Language

Command language is one of the oldest and most commonly used dialogue styles. In a command language, the user types in
commands to the computer system and the system then carries out these commands. The user may type
Delete wambuicvfile
and the system may respond with either a prompt to indicate that the command has been carried out or with a message stating
why this command could not be executed.
Command language leaves the user in control of the interaction; the system simply implements the commands that the user
issues.
The difficulty is that the user has to remember many commands and the syntax of these commands.
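A command-language dialogue can be sketched as a small parser loop: the user types a command, the system carries it out and reports back, or explains why it could not. The command names below (delete, list, quit) and the file store are invented for illustration and simply mirror the "Delete wambuicvfile" example.

# Command-language sketch: the user types commands; the system carries them out or explains why not.

files = {"wambuicvfile", "reportq1"}            # hypothetical file store

def execute(line):
    parts = line.split()
    if not parts:
        return "No command entered."
    command, args = parts[0].lower(), parts[1:]
    if command == "delete" and args:
        if args[0] in files:
            files.remove(args[0])
            return "Done."                                   # prompt confirming the command was carried out
        return f"Cannot delete {args[0]}: no such file."     # message stating why it failed
    if command == "list":
        return ", ".join(sorted(files)) or "(no files)"
    if command == "quit":
        return None
    return f"Unknown command '{command}' - the user must remember the valid commands and their syntax."

if __name__ == "__main__":
    while (reply := execute(input("> "))) is not None:
        print(reply)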

b) Menus

In a menu approach, the user simply chooses the command from a list (menu) of possible commands. Where there are many
possible commands and displaying them all might prove difficult, then menus are sometimes organised hierarchically in a tree-
like structure.
Like command language, menus also leave the user in control of the interaction. It should however be noted that the user
never has complete freedom within the dialogue because the total assemblage of possible command alternatives is dictated in
advance by the design of the system.

c) Forms fill-ins

The user is presented with a form where the various portions must be filled-in, leaving the user with few alternatives.
Forms fill-in leaves the user very little control over the dialogue. However an advantage is that the user rarely needs to
remember commands or their syntax.

d) Direct manipulation


The idea of direct manipulation is that the user's actions should directly affect what happens on the screen, to the extent that
there is a feeling of physically manipulating the objects on the screen.
Some of the advantages of direct manipulation interfaces that Shneiderman (1982) has listed are:
 Novices can learn basic functionality quickly, usually through a demonstration by a more experienced user.
 Experts can work extremely rapidly to carry out a wide range of tasks.
 Error messages are rarely needed.
 Users can see immediately if their actions are furthering their goals and if not, they
can simply change the direction of their actions.
 Direct manipulation leaves the user in control of the dialogue although the graphical
presentations necessary for the direct manipulation interface may not be suitable for all tasks
and in some situations may even mislead and confuse users.

Interaction style – Main advantages – Main disadvantages – Application examples

Direct manipulation – Fast and intuitive interaction; easy to learn – May be hard to implement; only suitable where there is a visual metaphor for tasks and objects – Video games, CAD systems

Menu selection – Avoids user error; little typing required – Slow for experienced users; can become complex if there are many menu options – Most general-purpose systems

Form fill-in – Simple data entry; easy to learn; checkable – Takes up a lot of screen space; causes problems where user options do not match the form fields – Stock control, personal loan processing

Command language – Powerful and flexible – Hard to learn; poor error management – Operating systems, command and control systems

Natural language – Accessible to casual users; easily extended – Requires more typing; natural language understanding systems are unreliable – Information retrieval systems

e) Graphical User Interfaces

Graphical user interfaces make computing easier by separating the logical threads of computing from the presentation of those
threads to the user, through visual content on the display device. This is commonly done through a window system that is
controlled by an operating system’s window manager. The WIMP (Windows, Icons, Menus, and Pointers) interface is the most
common implementation of graphical user interfaces today, and will be examined in detail later in this document.

GUI Characteristic – Description

Windows – Multiple windows allow different information to be displayed simultaneously on the user's screen.

Icons – Represent different types of information, e.g. files, programs, etc.

Menus – Commands are selected from a menu rather than typed in a command language.

Pointing – A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.

Graphics – Graphical elements can be mixed with text on the screen display.

Advantages of GUIs
 They are relatively easy to learn and use. Users with no computing experience can learn to use the interface after a
training session
 The user has multiple screens (windows) for system interaction. Switching from one task to another is possible
without losing sight of information generated during the first task.
 Fast full-screen interaction is possible with immediate access to anywhere on the screen.
f) Voice User Interfaces

Voice User Interfaces (VUIs) use speech technology to provide people with access to information and to allow them to
perform transactions. VUI development was driven by customer dissatisfaction with touchtone telephony interactions, the


need for cheaper and more effective systems to meet customer needs, and the advancement of speech technology to the stage
where it was robust and reliable enough to deliver effective interaction.

A Voice User Interface is what a person interacts with when using a spoken language application. Auditory interfaces interact
with the user purely through sound. Speech is input by the user, and speech or nonverbal audio is output by the system.

g) Text-to-Speech (TTS) interface

Text-to-Speech (TTS) Synthesis and Speaker Verification. Speaker Verification involves collecting a small amount of a person’s
voice to create a voice template, which is used to enrol a person into a system and then to compare against future speech. The
system can be used, for example, to replace personal

Internet-based systems

Most of the design considerations already described apply equally well to Internet-
based systems. However, as the user is often a member of the public, rather than an employee, the look-and-feel of the site
takes on added importance. Here are some general guidelines:

 Keep the display simple - avoid clutter

 Keep the display consistent - the styles and layouts of pages should be the same

 Facilitate movement - it should be straightforward to find a way through the


system

 Design an attractive display - make it visually appealing to the user

 Use icons in display designs - a common example in ecommerce sites is the


shopping cart

 Use colour in display design - use contrasting colours for foreground and background and use colour to highlight
important fields. However do not overdo
it.

There are several tools and techniques used for designing. These tools and techniques are:

 Flowchart
 Data flow diagram (DFDs)
 Data dictionary
 Structured English
 Decision table
 Decision tree


SYSTEM CONSTRUCTION
It entails the development, installation and testing of system components.
Construction phase includes tasks such as:
1. Building and testing the database.
2. Building and testing the networks – in case the system runs in a networked environment
3. Installation and testing of software packages
4. Writing and testing new programs (Programming and debugging)
5. Acquiring and testing the hardware
Programming and Debugging
Involves translating a design into a program (programming) and removing errors from that program (debugging).
Programming is a personal activity - there is no generic programming process.
Programmers carry out some program testing to discover faults in the program and remove these faults in the debugging
process.

Diagram: The debugging process


There are 3 types of errors: Semantic errors, Syntax errors and logical errors
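The three kinds of error can be illustrated with small Python fragments: a syntax error is rejected by the interpreter before the program runs, a semantic (run-time) error is raised while it runs, and a logical error runs silently but gives a wrong result. The average() example is hypothetical.

# Illustrating the three error types with a hypothetical average() example.

# Syntax error - rejected before the program runs:
#   def average(values:        <- missing closing parenthesis

def average_semantic(values):
    return sum(values) / len(values)        # semantic/run-time error: ZeroDivisionError when values is empty

def average_logical(values):
    return sum(values) / (len(values) + 1)  # logical error: runs, but the divisor is wrong

print(average_logical([10, 20, 30]))        # prints 15.0 instead of the correct 20.0 - found only by testing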


SOFTWARE VERIFICATION AND VALIDATION


• Validation is intended to ensure that the software meets the requirements of the system users/customer.
• Verification is intended to show that a system conforms to its specification. It involves checking and review
processes and system testing.
Validation: Are we building the right product?
Verification: Are we building the product right?
Verification and validation techniques
• Software inspections
• Testing
Verification Techniques
1. Software inspections
• Analyses and checks system representations such as the requirements documentations, design diagrams and program
source code. They may be applied at all stages of the process unlike testing which can only be used when a prototype or an
executable program is available.
• Inspection techniques include program inspection, automated source code analysis, analysis of associated
documents, and formal verification.
• Software inspections and automated analyses are static techniques, as they do not require the software to be executed.
They can only check the correspondence between a program and its specification (verification). They cannot demonstrate that the
software is operationally useful; nor can they check non-functional characteristics of the software such as its performance and
reliability.
Reviews and inspections have proved to be an effective technique of error detection in components and subsystems: errors
can be found more cheaply through inspection than by extensive program testing. Reasons are:
 Many different defects may be discovered in a single inspection session. The problem with testing is
that it can only detect one error per test because defects can cause the program to crash or interfere with the symptoms of
other program defects.
 They reuse domain and programming language knowledge. In essence, the reviewers are likely to have
seen the types of error that commonly occur in particular programming languages and in particular types of applications.
They can therefore focus on these error types during analysis.
However inspections:
 Cannot validate dynamic behavior of the system.
 It’s often impractical to inspect a complete system that is integrated from a number of different
sub-systems. Testing is the only possible V & V technique at the system level.


The process of inspection is a formal one carried out by a team of at least 4 people. Inspection team members systematically
analyse the code and point out possible defects. Proposals for team members' roles may be:
i. Author or owner- The programmer or designer responsible for producing the program or document.
Also responsible for fixing defects discovered during the inspection process.
ii. Reader- Reads the code aloud to the inspection team
iii. Tester – Inspects the code from a testing perspective
iv. Inspector- Finds errors, omissions and inconsistencies in program and documents.
v. Scribe - Records the results of the inspection meeting
vi. Moderator or chairperson - organizes the inspection process and facilitates the process.
vii. Chief moderator – Responsible for inspection process improvements, checklist updating, and standards
development and so on.

A very general inspection process is shown below

Fig: The inspection process


- The moderator is responsible for inspection planning. This involves selecting an inspection team, organizing a
meeting room and ensuring that the materials to be inspected and its specifications are complete.
- The program to be inspected is presented to the inspection team during overview stage where the author of the code
describes what the program is intended to do.
- This is followed by a period of individual preparation where each inspection team member studies the specification
and the program and looks for defects in the code.
- The inspection is done and should be relatively short (<= 2 hours) and should be exclusively concerned with
identifying defects, anomalies and non-compliance standards. The inspection team should not suggest how these defects should
be corrected nor recommend changes to these components.
- Following inspection, the program is modified by its author to correct the identified problems.
- In the follow-up stage the moderator must decide whether a reinspection of the code is required. If it is not required,
then the document is approved by the moderator for release.
An organization may use the results of the inspection process as a means of process improvement.


The inspection process should always be driven by a checklist of common programmer errors as shown below. These
checklists vary according to the programming language used because of different levels of checking provided by the language
compiler.

2. Structured Walkthroughs
There is nothing mysterious about the basic concept of a walkthrough: it is simply a peer group review of any product – a
computer program or system; thus, we will be referring to walkthroughs of program listings, structure charts, dataflow diagrams,
entity-relationship diagrams, and other models that are associated with the development of information systems. Alternatively, a
walkthrough might be concerned with operational prototypes of a system in order to review the functionality, performance or
user interface.

Walkthroughs are an effective way to improve the quality of the source code of a computer program and the documents that
describe the design, architecture, performance, user interface, or functionality requirements of a system.

Each member of this team has a given role, as follows:

• The presenter, who usually is the person or entity (organization) that "puts on the table" the product they built and is to be
reviewed.

• The coordinator, who usually is a designated team leader for the process.

• The secretary / scribe, who usually records the discussed facts, issues, etc., and takes and distributes minutes.

• The maintenance oracle, who is a person that will review the product from the future maintenance standpoint

• The standards bearer, who makes sure that the pre-established process or standards are being followed.

• The user representative, who, as the name says, is a person representing the user community for whom the product is being built.

• Other reviewers, or walkthrough team members that participate in the review process, have a good technical understanding of
the particular product that is presented for review.

• Time keeper, who, as the title says, keeps the time, in the sense that he/she makes sure that the pre-established time frames
allotted for each step in the process are adhered to. This is a very important task, since there is a tendency to debate issues for too
long, which results in an unproductive use of all team members' time and productivity.

Before the review commences, each of the above team members has to "get ready", i.e., prepare for the review. This usually
consists of pre-reviewing an advance copy distributed to all team members prior to the formal review activity.


During the review process, specially designed forms and procedures are used. These forms and procedures, also known as
checklists, are each designed to meet the requirements of a particular stage of the SDLC.

After the review has ended, the documentation (forms) is prepared and analysed, and recommendations are made to
the walkthrough team members. Usually one of three outcomes is possible: the product is fully accepted and it is
recommended to be taken to the next stage; or the product is rejected, and a series of recommendations is given to
the development team; or the product is (partially) accepted but with a series of minor recommendations to be implemented
before passing to the next stage.

To begin walkthroughs:

 Be prepared to waste time the first few attempts


 Rely on the group's sense of responsibility
 Enforce the time limit
 Everybody signs the report!
 Make sure everybody has the standards for style
 Managers should stay out, and not count bugs

To build teams;

 Find a project,
 Let teams evolve naturally,
 Don't disband the team at the end of a project
 Encourage fresh blood
 Respect and trust the team
 Reward and punish the team equally (not quite so relevant)
 Protect the team from outside pressure (also not relevant)

There are a number of types of walkthroughs:

• Specification walkthroughs are, as the name implies, a review of the user requirements, or specifications, of an information
system. This type of walkthrough is concerned with the functionality of the system.

• Design walkthroughs assume that the functional requirements and user implementation model of the system are correct.

• Code walkthroughs often attract the most attention in organizations simply because code used to be private, and because code is
the final tangible product of the development project.


• Test walkthroughs are conducted to ensure the adequacy of the test data for the system, not to examine the output from the
test run.

Benefits of walkthroughs

 Improves software quality


 Reduces risks of discontinuity
 Provides training for junior personnel
 Time-effective and cost-effective

Checklists

Checklist: System Specification

 Are major functions defined in a bounded and unambiguous fashion?


 Are interfaces between system elements defined?
 Have performance bounds been established for the system as a whole and for each element?
 Are design constraints established for each element?
 Has the best alternative been selected?
 Is the solution technologically feasible?
 Has a mechanism for system validation and verification been established?
 Is there consistency among all system elements?

Checklist: Requirements Analysis

 Is information domain analysis complete, consistent, and accurate?


 Is problem partitioning complete?
 Are external and internal interfaces properly defined?
 Does the data model properly reflect data objects, their attributes, and relationships?
 Are all requirements traceable to system level?
 Has prototyping been conducted for the user/customer?
 Is performance achievable within the constraints imposed by other system elements?
 Are requirements consistent with schedule, resources, and budget?
 Are validation criteria complete?

Checklist: Preliminary Design

 Are software requirements reflected in the software architecture?


 Is effective modularity achieved? Are modules functionally independent?


 Is the program architecture factored?
 Are interfaces defined for modules and external system elements?
 Is the data structure consistent with the information domain?
 Is the data structure consistent with software requirements?
 Has maintainability been considered?
 Have quality factors been explicitly assessed?

Checklist: Design

 Does the algorithm accomplish the desired function?


 Is the algorithm logically correct?
 Is the interface consistent with the architectural design?
 Is the logical complexity reasonable?
 Have error handling and "antibugging" been specified?
 Are local data structures properly defined?
 Are structured programming constructs used throughout?
 Is design detail amenable to implementation language?
 Which operating system or language-dependent features are used?
 Is compound or inverse logic used?
 Has maintainability been considered?

Checklist: Code

 Has the design properly been translated into code?


 Are there misspellings and typos?
 Does the code adhere to proper use of language conventions?
 Is there compliance with coding standards for language style, comments, prologues, ...?
 Are there incorrect or ambiguous comments?
 Are data types and data declarations proper?
 Are physical constants correct?
 Have all the items on the design walkthrough checklist been reapplied as required?

Checklist: Test Plan

 Have major test phases properly been identified and sequenced?


 Has traceability to validation criteria and requirements been established?


 Are major functions demonstrated early? (top-down)
 Is the test plan consistent with the overall project plan?
 Has a test schedule been explicitly defined?
 Are test resources and tools identified and available?
 Has a test record-keeping mechanism been established?
 Have test stubs been identified and has work to develop them been scheduled?
 Has stress testing for the software been specified?
 Has a regression testing mechanism been established?

Checklist: Test Procedure

 Have both white box and black box tests been specified?


 Have all independent logic paths been tested?
 Have test cases been identified and listed with their expected results?
 Is error handling being tested?
 Are boundary values being tested?
 Are timing and performance being tested?
 Has an acceptable variation from the expected results been specified?

Checklist: Maintenance

 Have side effects associated with the change been considered?


 Has the request for change been documented, evaluated, and approved?
 Has the change, once made, been documented and reported to all interested parties?
 Have appropriate walkthroughs been conducted?
 Has a final acceptance review been conducted to ensure that all software has been properly
updated, tested, and replaced?

Validation Technique-Software testing


Software testing has three main purposes: verification, validation, and defect finding.
♦ The verification process confirms that the software meets its technical specifications. A “specification” is a description of a
function in terms of a measurable output value given a specific input value under specific preconditions.
♦ The validation process confirms that the software meets the business requirements.
♦ A defect is a variance between the expected and actual result. The defect’s ultimate source may be traced to a fault
introduced in the specification, design, or development (coding) phases.


Software testing answers questions that development testing and code reviews can’t.
♦ Does it really work as expected?
♦ Does it meet the users’ requirements?
♦ Is it what the users expect?
♦ Do the users like it?
♦ Is it compatible with our other systems?
♦ How does it perform?
♦ How does it scale when more users are added?
♦ Which areas need more work?
♦ Is it ready for release?
What do we test?
♦ Business requirements
♦ Functional design requirements
♦ Technical design requirements
♦ Regulatory requirements
♦ Programmer code
♦ Systems administration standards and restrictions
♦ Corporate standards
♦ Professional or trade association best practices
♦ Hardware configuration
♦ Cultural issues and language differences
Testing Process
• Testing is the process of examining a software product to find errors.
• The basic unit of testing is the test case. A test case consists of a test case type, which is the aspect of the system
that the test case is supposed to exercise; test conditions, which consist of the input values for the test; the environmental state
of the system to be used in the test; and the expected behavior of the system given the inputs and environmental factors (a
minimal sketch of such a test case follows this list).
• It involves executing an implementation of the software with test data and examining the outputs of the
software and its operational behavior to check if it is performing as required. Testing is a dynamic technique of verification and
validation because it works with an executable representation of the system.
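As a minimal sketch of the test case structure described above (the field names and the withdrawal rule are illustrative assumptions, not part of any standard), a test case can be recorded as a small Python data structure:

from dataclasses import dataclass

@dataclass
class TestCase:
    # One test case: its type, input conditions, environmental state and expected behaviour.
    case_type: str      # aspect of the system the case exercises
    conditions: dict    # input values for the test
    environment: dict   # environmental state of the system for the test
    expected: object    # expected behaviour given those inputs and that environment

# Example: a hypothetical withdrawal-validation case for a banking function.
withdraw_over_balance = TestCase(
    case_type="functional - withdrawal validation",
    conditions={"balance": 100.0, "amount": 250.0},
    environment={"account_status": "active"},
    expected="withdrawal rejected",
)
print(withdraw_over_balance)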
Why testing rather than inspection:
i. Testing is the only possible V & V technique at the system level.
ii. Testing is necessary for reliability assessment, performance analysis, user interface validation and to check that the
software requirements are what the user really wants.
The testing levels


There is a five-stage testing process where system components are tested, the integrated system is tested, and finally the system
is tested with customers. Ideally, component defects are discovered early in the process and interface problems when the system is
integrated.

• Unit testing: Individual components are tested to ensure they operate correctly. Each component is tested
independently, without other system components (see the sketch after this list).
• Module testing: A module is a collection of dependent components such as object classes, an abstract data type or
some collection of procedures and functions. The module is tested independently of other modules.
• Sub-system testing: Involves testing a collection of modules which have been integrated into a subsystem. The subsystem
test process should concentrate on the detection of module interface errors by rigorously exercising these interfaces.
• System testing: The subsystems are integrated to make up the entire system.
This process is concerned with finding errors that result from unanticipated interactions between subsystems and from interface problems.
It is also concerned with validating that the system meets its functional and non-functional requirements and with testing the
emergent system properties.
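A minimal sketch of unit testing with Python's built-in unittest module; the apply_discount function and its rules are invented purely for illustration:

import unittest

def apply_discount(price, percent):
    """Component under test: return the price reduced by percent (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)

if __name__ == "__main__":
    unittest.main()

Running the file executes the three test cases independently of any other system component, which is the unit-testing level described above.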

Diagram : Testing phases in SE


 Integration testing
Once individual programs have been tested, they must be integrated to create a partial or complete system.
 Finally, acceptance testing can be conducted by the end-user, customer, or client to validate whether or
not to accept the product. Acceptance testing may be performed as part of the hand-off process between any two phases of
development.

Approaches to testing/ Testing methods


 Defect testing:
The goal of defect testing is to expose latent defects in a software system before the system is delivered. A successful defect test
is a test which causes the system to perform incorrectly and hence exposes a defect. This demonstrates the presence, not the
absence, of program faults.
Software defect testing methods are traditionally divided into black box testing and white box testing.
 Black box testing
Black box testing treats the software as a "black box", without any knowledge of its internal implementation. It focuses on the
functional requirements, hence it is sometimes called functional testing.
Its goal is to identify whether the input/output behavior of the system is consistent with the stated specification.
 White box testing
White box testing (also called structural testing) is when the tester has access to the internal data structures and algorithms,
including the code that implements these.
Its goal is to assess the adequacy of the system's logic, i.e. it focuses on the internal construction of the program (a sketch
contrasting black-box and white-box tests follows the grey-box note below). Statistical testing is an example of such a method.
Statistical testing is used to test the program's performance and reliability and to check how it works under operational
conditions. Tests are designed to reflect the actual user inputs and their frequencies.
 Grey box testing involves having access to internal data structures and algorithms for purposes of designing the test
cases, but testing at the user, or black-box level. Manipulating input data and formatting output do not qualify as grey box,
because the input and output are clearly outside of the "black-box" that we are calling the system under test. This distinction is
particularly important when conducting integration testing between two modules of code written by two different developers,
where only the interfaces are exposed for test.
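A hedged sketch contrasting the two approaches: the black-box asserts below are derived only from the stated specification of a hypothetical grade() function, while the white-box asserts are chosen after reading its code so that every branch and boundary is exercised at least once.

def grade(score):
    """Specification: 0-49 -> 'fail', 50-79 -> 'pass', 80-100 -> 'distinction'."""
    if score < 50:
        return "fail"
    elif score < 80:
        return "pass"
    else:
        return "distinction"

# Black-box (functional) tests: use only the specification, not the code.
assert grade(10) == "fail"
assert grade(65) == "pass"
assert grade(95) == "distinction"

# White-box (structural) tests: chosen by inspecting the if/elif/else structure
# so that each branch and each boundary value is executed.
assert grade(49) == "fail"
assert grade(50) == "pass"
assert grade(79) == "pass"
assert grade(80) == "distinction"
print("all defect tests passed")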
Other types of testing


Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software
regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously
working correctly stops working as intended.
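A rough sketch of a regression-testing mechanism, assuming the existing unittest cases are kept in a tests/ directory: the whole suite is simply re-run after every change, and any previously passing test that now fails signals a regression.

import unittest

def run_regression_suite():
    # Re-discover and re-run every saved test after a code change.
    # The "tests" directory name is an assumption, not a fixed convention.
    suite = unittest.defaultTestLoader.discover(start_dir="tests")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    raise SystemExit(0 if run_regression_suite() else 1)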
(Before shipping the final version of software, alpha and beta testing are often done additionally)
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the
developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the
software goes to beta testing. It is also called verification testing.
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience
outside of the programming team. The software is released to groups of people so that further testing can ensure the product
has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a
maximal number of future users. It is also called validation testing, which requires the system to perform correctly on given
acceptance test cases.
Finally, acceptance testing
Acceptance testing may mean two things:
 A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e.
before integration or regression.
 User Acceptance Testing (UAT)- Can be conducted by the end-user, customer, or client to validate whether
or not to accept the product.
Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per the requirements
or not. It is black-box testing geared to the functional requirements of an application.
Non-Functional testing
Special methods exist to test non-functional aspects of software:
 Performance testing checks to see if the software can handle large quantities of data or users. This is generally
referred to as software scalability. This activity of non-functional software testing is often referred to as endurance testing.
 Stability testing checks to see if the software can continuously function well over an acceptable period. This
activity of non-functional software testing is often referred to as load (or endurance) testing.
 Usability testing is needed to check if the user interface is easy to use and understand.
 Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.
 Stress testing
 Exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light. Systems
should not fail catastrophically: stress testing checks for unacceptable loss of service or data.
 Stress testing is particularly relevant to distributed systems that can exhibit severe degradation as a network becomes
overloaded.
 Component testing
- Component or unit testing is the process of testing individual components in isolation.


- It is a defect testing process.


- Components may be:
- Individual functions or methods within an object;
- Object classes with several attributes and methods;
- Composite components with defined interfaces used to access their functionality.
 Interface testing
- Objectives are to detect faults due to interface errors or invalid assumptions about interfaces.
- Particularly important for object-oriented development as objects are defined by their interfaces.
Interface errors
Interface misuse: a calling component calls another component and makes an error in its use of its interface, e.g. passing
parameters in the wrong order (a small sketch follows).
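A small sketch of the "parameters in the wrong order" error: the transfer() function and its argument order are hypothetical, and the interface test uses clearly distinguishable values so the misuse shows up immediately.

def transfer(amount, account_id):
    # Interface under test: debit `amount` from the account identified by `account_id`.
    return {"account": account_id, "debited": amount}

def pay_invoice(account_id, amount):
    # Interface misuse: the caller passes the two arguments in the wrong order.
    return transfer(account_id, amount)

result = pay_invoice("ACC-42", 100.0)
if result["account"] != "ACC-42":
    print("Interface misuse detected - arguments swapped:", result)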

 Load testing – It is performance testing to check system behavior under load: testing an application under heavy loads,
such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails (a rough sketch follows).
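A very rough load-testing sketch: a stand-in handle_request function is called with an increasing number of simulated users and the elapsed time is recorded to see how response time degrades. A real load test would drive the deployed system with a dedicated tool rather than code like this.

import time

def handle_request(payload):
    # Hypothetical stand-in for the operation under load.
    return sum(i * i for i in range(10_000)) + len(payload)

def measure(load):
    # Simulate `load` user requests and return the total time taken in seconds.
    start = time.perf_counter()
    for i in range(load):
        handle_request(f"user-{i}")
    return time.perf_counter() - start

for users in (10, 100, 1_000):
    elapsed = measure(users)
    print(f"{users:>5} requests: {elapsed:.3f}s total, {elapsed / users * 1000:.2f} ms each")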

THE TESTING PROCESS: THE V-MODEL OF SOFTWARE TESTING


The main process phases (requirements, specification, design and implementation) each have a corresponding verification
and validation testing phase. Implementations of modules are tested by unit testing, the system design is tested by integration
testing, the system specification is tested by system testing and, finally, acceptance testing verifies the requirements. The V-model
gets its name from the timing of the phases. Starting from the requirements, the system is developed one phase at a time until
the lowest phase, the implementation phase, is finished. At this stage testing begins, starting from unit testing and moving up
one test level at a time until the acceptance testing phase is completed.

There is a higher level layer which gives a business view to the whole process. This instance is frequently called Business Case
and its testing counterpart is the Release Testing. This is not necessarily a formal testing process but we find it very important
since it represents a statement not only about the quality of the product but also about the client's strategy and objectives.


The V-model

The unit tests and integration tests ensure that the system design is followed in the code. The system and acceptance tests
ensure that the system does what the customer wants it to do. The test levels are planned so that each level tests different
aspects of the program and so that the testing levels are independent of each other. Someone is responsible for each level. Each
testing level will begin as soon as there is something to test. The traditional V-model states that testing at a higher level is
begun only when the previous test level is completed.

Items Covered by a Test Plan

Component | Description | Purpose
Responsibilities | Specific people and their assignments | Assigns responsibilities and keeps everyone on track and focused
Assumptions | Code and systems status and availability | Avoids misunderstandings about schedules
Test | Testing scope, schedule, duration, and prioritization | Outlines the entire process and maps specific tests
Communication | Communications plan: who, what, when, how | Everyone knows what they need to know, when they need to know it
Risk Analysis | Critical items that will be tested | Provides focus by identifying areas that are critical for success
Defect Reporting | How defects will be logged and documented | Tells how to document a defect so that it can be reproduced, fixed, and retested
Environment | The technical environment, data, work area, and interfaces used in testing | Reduces or eliminates misunderstandings and sources of potential conflict
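The items in the table above could be drafted as a simple record when preparing a test plan; the field names below merely mirror the table and are not a formal template.

from dataclasses import dataclass

@dataclass
class TestPlan:
    responsibilities: dict    # person -> assignment
    assumptions: list         # code and systems status and availability
    test: str                 # scope, schedule, duration and prioritisation
    communication: str        # who, what, when, how
    risk_analysis: list       # critical items that will be tested
    defect_reporting: str     # how defects are logged and documented
    environment: str          # technical environment, data, work areas, interfaces

plan = TestPlan(
    responsibilities={"A. Tester": "functional tests", "B. Analyst": "user liaison"},
    assumptions=["build 1.2 available in the test environment by Monday"],
    test="system test of order entry; two weeks; high-risk areas first",
    communication="daily summary e-mail to the project manager",
    risk_analysis=["payment calculation", "stock update interface"],
    defect_reporting="log in the defect register with steps to reproduce",
    environment="test server holding a copy of last month's production data",
)
print(plan.risk_analysis)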

DOCUMENTATION.

Documentation is the creation and maintenance of significant information about the system in a clear and concise manner.

Documentation also represents a functional description of the program and it can serve as a set of specifications to the
programmer.

System documentation is a lifelong process in the system development life cycle. All stages of system development should be
documented in order to help during future modification of the program or system. However, we can generally say that the
documentation produced falls into two classes:

1. Process documentation: These documents record the process of development and maintenance. Plans, schedules, process
quality documents and organizational and project standards are process documentation.

2. Product documentation: This documentation describes the product that is being developed. System documentation describes
the product from the point of view of the engineers developing and maintaining the system; user documentation provides a
product description that is oriented towards system users.

Documentation can be either Internal or external.

Internal documentation consists of the written non-executable lines (comments) in the source code that help other programmers
to understand the code statements. It also includes the offline and online help facilities that the user consults while using the
system, especially on how to perform a particular task.
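A brief sketch of internal documentation: a docstring and comments written inside the source code itself so that later maintainers can follow it (the payroll rule shown is invented for illustration only).

def net_pay(gross, tax_rate=0.16):
    """Return take-home pay after deducting tax at `tax_rate`.

    This docstring and the comments below are internal documentation:
    they live in the source code, unlike the user manual, which is external.
    """
    # Guard against rates outside 0..1, which would silently corrupt payroll figures.
    if not 0 <= tax_rate <= 1:
        raise ValueError("tax_rate must be a fraction between 0 and 1")
    tax = gross * tax_rate          # tax deducted at a flat rate
    return round(gross - tax, 2)    # rounded to two decimal places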

External documentation refers to reference materials such as the user manuals.

i. User oriented documentation/User manual: This type enables the user to learn how to use
the system as quickly as possible and with little help from the program / system developers.

Some modern on-line systems include "help" facilities, which reduce user documentation, but do not make it completely
unnecessary.

In general, users require documentation which tells them:

 What the system does and how it works.

 How to provide input data to the system and how to control it.


 How to identify and correct errors.

 What their particular responsibilities are.

As described above, each user manual must be tailored to suit the system and the particular users but, in general, the contents will
include:

(a) An introduction giving a simple overview of the system.

(b) Running the system; when users run the system themselves, the user guide must include details of how to switch on the
equipment, call up the programs they use, and also how to end sessions. In some cases, instructions for taking back-up copies of
data, etc. will need to be included.

(c) Input requirements: with some users, data input forms will need to be completed and sent to the data input department. In
other cases, users will themselves input the data, either from individual source documents or in batches which have been
assembled in the user department.

(d) Output with examples of all the different types relevant to the particular users. Some of these will be screen displays whilst
others will be printed reports available either as
standard output or on request.

(e) Error messages: an explanation of all the error messages which might occur, together
with the appropriate action to be taken. This section should include details of the person to contact if problems are experienced.

(f) Logging procedures: each installation should set up standards for manual logging of any exceptional occurrences, with details
of the action taken.

(g) A glossary of terms used.

(h) Index: all except the shortest of manuals should have a comprehensive index, since
the experienced user is likely to refer to it only occasionally but wants to be able to find the specific item quickly.

ii. Operator oriented documentation


It is meant for computer operators such as the technical staff. It helps them to install and maintain the system.

iii. Programmer oriented documentation/program documentation:

It is a detailed documentation written for skilled programmers. It provides necessary technical information to help in future
modification of the system. Different computer installations have various levels of program documentation, but in essence they all
include the following items.

(a) A large part of the program documentation will consist of a copy of the original program specification, updated as necessary.

(b) There should be actual samples of all printed output as produced by the working program.

(c) The documentation must also include the latest program listing or compilation which, of course, should be completely free of
errors.

(d) Programs should record not only the individual ideas of their creators but also their identities, together with the
dates, names and details of any subsequent alterations.

(e) Finally, as a permanent reference to the degree of program testing undertaken, the documentation should include details of
test data submitted, along with examples of results produced.

iv. System documentation: Encompasses all information needed to define the proposed computer-based
system to a level where it can be programmed, tested and implemented.

For large systems that are developed to a customer’s specification, the system documentation should include:

 The requirements document and an associated rationale.

 A document describing the system architecture.

 For each program in the system, a description of the architecture of that program.

 For each component in the system, a description of its functionality and interfaces.

 Program source code listings.

These should be commented; the comments should explain complex sections of code and provide a rationale for the
coding method used. If meaningful names and a good, structured programming style are used, much of the code should
be self-documenting without the need for additional comments. This information is now normally maintained electronically
rather than on paper, with selected information printed on demand for readers.

 Validation documents describing how each program is validated and how the validation information relates to the
requirements.

v. Analytical documentation:
Consists of all records and reports produced when a system is initiated.

Qualities of good documentation:

i. Correctness: it should be accurate and correct in everything.


ii. Completeness: it should be exhaustive so that the system is completely documented.

iii. Conciseness: it should be brief and precise.
Uses of documentation

i. Communication tool: Documents, flow charts and descriptive materials on a system or a program enable the
analyst and programmers to communicate effectively and promote a clear understanding of the problem.
ii. Installation and troubleshooting: documentation facilitates installation of the system and the detection and
correction of malfunctions or error conditions. Proper installation demands an installation manual, and errors are corrected faster.
iii. System maintenance: Thorough system documentation facilitates the revision, changing or modification of the
existing system, making it easier to improve the system.
iv. Personnel training: Documentation aids in designing training programmes for operators and users.
v. Management tool: Documentation provides management with a picture of how the system operates and
background information needed when making intelligent decisions.
vi. Auditing: Complete documentation enables auditors to identify personnel, records, facts and data promptly and
accurately.

THE IMPLEMENTATION PROCESS

The purpose of implementation is to put the theoretical design into practice. It can involve the installation of a complete system or
the introduction of a small subsystem. A particular project was selected. Its objectives, requirements and constraints were defined
in the requirement specification. A designer, or a team of designers, has specified a suitable system, in the systems design
specification. Now, the designed system must be developed and implemented. So, we can say that the aim of this phase is:

“to implement a fully-documented operational system which meets the original


requirements according to the design given in the systems design specification. “
a) Implementation Planning

The successful implementation of any system is based upon the following points:

 A project control system monitoring the time, cost and quality of output.
 Managerial commitment and involvement at all levels.
 Analysts who are good communicators and have a thorough knowledge of the
organization’s operations and applications.
 The users' knowledge of and agreement with the system objectives.
 Recognition of user responsibilities in the system development.
 A computer manager capable of getting user support and of instilling confidence in
users.
 Most of all, sound planning beforehand is essential.


(b) Implementation Personnel

 As well as the specialist computing staff responsible for the implementation of a system, other personnel have an essential
role:

 The business manager and user group, including those involved in the prototyping, will
be brought in to make the final test.

 The technical manager will assist the users with the mechanics of actually running the
machine(s).

 Hardware representatives will be consulted over problems and for general advice.
Consultants will be available for specialist advice on larger projects.

 The administrative section will be advised of new personnel, job and responsibility changes and all the necessary
clerical backup; again this is applicable to larger projects.

Tasks in the Implementation phase

Implementation involves the following activities:

a) Creation of all the master files required in the system.


b) Preparation of user and data processing department operating instructions.
c) Commissioning of the new system.
d) Education and training of all staff that will use the system.
a) File Conversion/Creation

When a new system is to be implemented, it is likely that the master files either do not exist,
or, if they do, that they are not organized as required by the new system. Before the system
becomes operational, the master files must be created. This can be a major task, and it may
involve the production of a file-conversion system, with its own programs. This is expensive,
because the conversion will be a once-and-for-all operation, and the programs will be used
only once.

File creation imposes a heavy load on user departments, particularly if the information is coming from a manual system. Not only
do the users have to extract the required data from their manual records but, after it has been put on to the computer, they have
to check it for accuracy. If a system starts with inaccurate data, the confidence of the users will very quickly be undermined, so it
is most important that adequate time is allowed for this part of the work.

If the volume of data to be entered is large, the company may be incapable of handling the additional load. Also, if conversion is
from a manual system, there may be insufficient clerical staff available to fill in the conversion input forms. Sometimes, it is
possible to employ temporary staff but, often, the only solution is to use the services of a computer bureau which specialises in
file conversion and/or data entry. The service may include the clerical aspects, data preparation, and even the actual creation of the
files. The specification of the new file is given, and access to the data of the old file, after which the bureau does everything
necessary to prepare the new file.
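As a hedged sketch of a once-and-for-all file-conversion program: customer records are read from the old layout (assumed here to be a CSV export of the manual records) and written in the layout the new master file expects. The column names are assumptions for illustration only; after conversion the user department would still need to check the new file against the manual records, as noted above.

import csv

def convert_master_file(old_path, new_path):
    # One-off conversion of the old customer file into the new master-file layout.
    # Assumed old columns: name, addr, bal.
    # Assumed new columns: customer_name, address, opening_balance.
    with open(old_path, newline="") as src, open(new_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["customer_name", "address", "opening_balance"])
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "customer_name": row["name"].strip().title(),
                "address": row["addr"].strip(),
                "opening_balance": f"{float(row['bal'] or 0):.2f}",
            })

# convert_master_file("old_customers.csv", "new_customers.csv")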

The factors to consider at this point are:

• Whether the new system requires a new operating system and hardware.

• Whether you need to install new application software.

• Whether you need to create new database files for the new system

b) Education and Training

All staff must appreciate the objectives of the new system, and how it will operate, as well as the facilities it will provide; hence
need for detailed training and practice.

What levels of training are needed?

 Those who just need knowledge can attend a manufacturer's training school or attend
targeted in-house courses.

 Those requiring a skill can go on specialist training courses, which can be conducted
internally or by the product manufacturer.

Types of Training

o Full-time training is best carried out away from the normal place of work, to prevent
distractions.

o Generalized lectures giving background to the system should be given to all who are to
be involved.

o Detailed training about particular aspects should be given to small groups and
undertaken in situ, between normal work periods.
• Aims/Benefits of training
1. To reduce errors arising from learning through trial and error
2. To make the system more acceptable to users
3. To improve security by reducing accidental destruction of data.
4. To reduce cost of maintenance
5. To ensure efficiency in system operations when it goes live
6. To improve quality of operations and services to the users.
7. Less supervision and positive working environment.


8. Organisations with a good training reputation are able to attract good recruits; training also brings improved work
performance and increased morale and self-confidence.

User Involvement in Implementation

At the implementation stage, managers will formally accept the system. They will be closely involved in clerical procedures and in
staff training.

In operation of the system, the line management must know the new duties that their staff will be required to perform. They have
to make sure that:
(a) Input data is being prepared correctly, and on time.
(b) New reports are being properly used.
(c) Their staff are able to use and understand the system.

During the project development period, the user management will have a great deal more work to do in helping with the new
system.

c). Changeover Strategies

Importance of Successful Changeover

The changeover implies changes in working practices: from clerical to computerised, from centralised computing to distributed
computing; from one type of machine to another; and so on. Staff tend to resent change, and so to ease the way they must be kept
fully informed, and in a direct manner. Any individuals adversely affected must be told personally.

A perfectly sound system can be completely destroyed by poor changeover. To be successful, remember, changeover has to have
the support and involvement of managers and the co-operation of systems staff and users.

Thus, prior to changeover, management must verify that the system does actually satisfy defined information needs; that the
equipment, software and staff necessary for successful changeover are available; that control and audit procedures are in existence
to ensure system integrity; and that performance requirements have been established for the system's assessment in operation.

It is the analyst's responsibility to ensure that staff information is complete and accurate, the object being to obtain co-operation
and a smooth, trouble-free changeover.

There are two basic methods of changeover - direct and parallel - and some variations of these.

i. Direct Changeover

Using direct changeover, at a specified time the old system is switched off and the new switched on.

This is advantageous in that resources are spared - the method involves the immediate discontinuance of the old system. However,
the new system must have been thoroughly tested so as to minimise risks in initial operation. Should the new system meet with
unexpected problems - hardware, software, or design - then the old system may not be able to be retrieved.

As you will realise, this technique is potentially dangerous since it implies transfer of dependence from a current working system
to a new system which, although tested, has not been used in a real situation. However, there are several situations where the
technique is applicable or unavoidable:

 In very small systems it is often not worthwhile considering any other technique, owing
to the inherent simplicity of the system.

 In very large systems it is sometimes not feasible to maintain two systems simultaneously (as in parallel and pilot running) owing to the work involved.

 Where there is little similarity between the old and new systems, the simultaneous
running of both systems may be unhelpful.
ii. Parallel Changeover

In parallel changeover the old and new systems are run with the same data until there is
confidence in the new system, whereupon the old system is dropped.

Parallel changeover or parallel running of the old and the new systems simultaneously allows a comparison of output to be made
between them. Any shortcomings of the new system can be rectified, and continuous cross-checks made. This is the most
common method of changeover, but it is important to identify objectives, and a timescale must be established.

Parallel changeover is often used so that the old system may still be operated when there is a breakdown in the new system. If this
is the main reason then there must be a specific limit to the number of production cycles for which the parallel runs are to be
carried out. What has to be remembered in this particular context is that running in parallel means double the cost.

Another problem concerns the staff and other resources used to run the two systems together.

There may well need to be separate controls for the two systems, to be maintained and then reconciled. Where the reconciliation
is difficult, the period of parallel running may have to be prolonged. A delay such as this could create tension and strain for the
user department(s) because of the need to undertake two operations.

The objective should be to terminate the running of the old system as soon as is conveniently possible.

Generally you find that a database has to be created, or else existing files must be converted into a usable format, for the new
system. The creation of files takes up a good deal of effort and resources. After the construction of the database for the new
system is complete, there is the important question of maintenance. Any premature maintenance must be avoided, because
maintenance of files for both old and new systems can stretch staff to the fullest extent.

Some specialists argue that it is beneficial not to carry out a parallel run, as this imposes more rigorous discipline on programmers
and analysts to ensure that the system is viable. In any case, parallel running should not be used as a means of showing up inherent
faults in the new system - programs should be properly pre-tested with live data beforehand.
iii. Phased Changeover

The new system is introduced in phases or stages as each stage is implemented and operating correctly. The phases continue until
the whole system is in operation.

The method consists of a series of direct changes. The implementation of each phase can be controlled, and risk to the user
department is thus reduced considerably.

It has 2 possibilities:

• Where the organization is divided into divisions and the system is implemented phase-wise, i.e. one division at a
time (location conversion).

• Where the information system is divided into components and the components are installed in stages, one
component at a time (staged conversion).

iv. Pilot Running


Involves installing the new system but using it only in one part of the organization on experimental basis. The whole system is
implemented on a section of the organization for testing before it is completely implemented into all other phases.


Changeover Methods Compared

We will now give a brief list of the main advantages and disadvantages for each of the above changeover approaches.

(a) Direct Changeover

Advantages:

 This is the simplest method: stop one system, start another. It is usually only
undertaken over a weekend or holiday period.

 No extra work in running two systems together for a time.

Disadvantage:


 Very high-risk - if the new system is disastrously wrong, it is difficult to recreate the old system.

(b) Parallel Changeover

Advantages:

 This is a safer method as the old system is kept operational while the new system
is brought in.

 Much greater security

Disadvantages:

 Greater effort is needed to run the two systems and to compare the outputs.

 It may not be very easy to revert to the old system should things go wrong. The new system may handle the
information differently, making it awkward to compare outputs.

 The responsibilities of staff may well change between systems, leading to confusion.

 Knowing when to make the actual changeover. This is usually a compromise between too short a
time, keeping costs to a minimum, and too long a time, allowing for extensive testing.

(c) Phased Changeover

Advantage:

There is considerable control as only manageable chunks are being changed over at a time.

Disadvantages:

 The system may not easily be split into phases.

 The phases may well be different in the two systems.

 The interfaces between remaining old system phases and the new system
phases already changed over, are extremely difficult to control.

(d) Pilot running

This is often the preferred method.

Advantage:

Considerable control is retained and no risks are taken even if direct changeover is applied to each area.

Disadvantages:

 Time is needed to collect and collate the data to be used.

 The two systems may handle the data differently, making comparison of outputs
difficult.

Pilot and phased strategies are recommended for large systems while direct strategy is recommended for urgent and smaller
systems.

POST-IMPLEMENTATION REVIEWS

Project Review

Management will require a review to check that the system is up to requirements and to note lessons for future use. In an ideal
situation, two reviews would be made:

(a) At the implementation stage;

(b) Six months later (the post-implementation review).

The first is unlikely to be undertaken in full, as implementation of a system is an extremely busy time, and most DP staff feel that
all the tests made will compensate for this. However, if an early review is made, then the essential review three to six months
later has some guidelines to follow and data to compare with.

The review after a few months should be made by an independent consultant, who should assess whether the projected costs and
benefits are being realised. He should check if the system requirements are being achieved and he should identify the strengths and
weaknesses of the system.

The review consultant should be assisted in the review by an audit team who will undertake a parallel audit review of the system.
In addition, a user representative and a representative of the development staff should be available for consultation as required.

(a) Topics for Review

The following points should be reported on by the review:

 The manpower estimates, compared with the actual manpower effort and skills achieved.
 The amount of machine time used during testing.

 The overall costs, analyzed in detail.

 The delivery plan, with any differences explained.

 Productivity achieved since implementation.


 Program sizes and any lessons learned here.

(b) Scope of Review

The review consultant should examine:


 The documentation of the system, programs and operator and user procedures.
 The test packs developed for future maintenance.

 The test data, results and test logs already used and achieved.

 The errors found to date, the type of error, the department and program responsible, the cause and status of the
errors.

 The user's opinion and view of the system - problems and benefits, deficiencies and training needs, and input and
output.

 The design of the system and its operation; the interaction with other systems; the system's control functions and
audit trail

 Details of data capture procedures.

 A study of the operations department.

 Full cost/benefit analysis, to decide if the system justifies its costs.

 Any recommendations to be made - enhancements required, operational changes, any changes to documentation, data
integrity, testing facilities, controls, training, user procedures and personnel, and anything else that can be discovered.

The review consultant has a wide brief and is given plenty of time in which to carry out his review. He will
inevitably find defects, but hopefully will also identify strengths in the system.

QUALITY MANAGEMENT
Software Quality Management (SQM) is concerned with ensuring that the required level of quality is achieved in a software
product. It involves defining appropriate quality standards and procedures and ensuring that these are followed.
Quality management activities
 Quality assurance
o Establish organizational procedures and standards for quality, providing a framework that leads to high-quality
software.
 Quality planning
o Select applicable procedures and standards for a particular project from this framework and adapt them
for the specific software project.
 Quality control


o The definition and enactment of processes which ensure that the project quality procedures and standards
are followed by the software development team.
a) Quality assurance and standards
There are several types of standards:
1. Product standards: These are standards that apply to the software product being developed. They include
documentation standards and coding standards.
2. Process standards: These monitor the development process to ensure that standards are being followed.
They define the processes which should be followed during software development and may include definitions for specification,
design and validation processes, how reviews should be conducted, configuration management, and a description of the documents
which must be generated in the course of these processes.

3. Performance standards

Performance standards set baselines against which actual performance is measured. In this sense, performance standards are not
estimates of how long tasks will take, but are used to specify how long tasks should take. Their primary use is for comparison with
actual performance to pinpoint deviations that call for investigation and, possibly, action.

Operating Standards

The operating environment is, in many ways, a more difficult function than systems and programming for management control.
Operating standards will need to cover:

Physical security procedures


File security procedures
Work log procedures
Error log procedures
Definition of staff responsibilities (probably complete job specifications)
Benefits of Standards

(a) Saving Time

Elimination of indecision and the consequent economy in time are major benefits of standards. Good standards reduce the
amount of repetitive work and leave more time for the more challenging aspects of development work - for example, problem
analysis.

(b) Management Control

The management of all aspects of the department is greatly assisted by the use of standards.

All work is carried out to a prescribed quality that can be measured.


Monitoring of progress of work is easy, since all tasks follow a defined sequence.


Precise responsibilities can be allocated to staff.


Estimates of time-scales can be more accurately predicted.
Schedules of work can be prepared more easily.
Staffing levels can be determined.

(c) Improved System Design and Development

Standard procedures provide a checklist to ensure that important aspects of the development of a system are not ignored.

Adequate documentation is produced at all stages.

Standard design methods ensure that similar systems are designed in similar ways, thus aiding maintenance.

Programming standards ensure that reliable and well-controlled software is produced, and that standard problems are
solved in a standard way.

(d) Computer Operations

The standards in this area ensure that all computer and related operations are carried out correctly, reliably and fully.

(e) Other Advantages

Training

When a set of standards is in force, the training of new staff is greatly eased
since all aspects of the work have been defined - the actual content of a training programme is clear.

Reduction of Dependence on Individuals

In a data processing department there is a fairly high rate of turnover of staff, especially in the systems and programming areas. It
is essential that the absence of an individual does not have a catastrophic effect on the work of the department.
Problems with standards
 They may not be seen as relevant and up-to-date by software engineers.
 They often involve too much bureaucratic form filling.
 If they are unsupported by software tools, tedious manual work is often involved to maintain the
documentation associated with the standards.
Process and product quality is more complex for software because:
i. The application of individual skills and experience is particularly important in software development;
ii. External factors such as the novelty of an application or the need for an accelerated development
schedule may impair product quality.
Documentation standards


Documentation standards in a software project are particularly important, as documents are the only tangible
way of representing the software and the software process. Standardized documents have a consistent
appearance, structure and quality and should therefore be easier to read and understand.
These are types of documentation standards:
1. Document identification standards: As large projects typically produce thousands of documents, each document must be
uniquely identified. For formal documents, this identifier may be the formal identifier defined by the configuration manager.
For informal documents, the style of the document identifier should be defined by the project manager.
2. Document structure standards: There is an appropriate structure for each class of document produced during a software project.
Structure standards should define this organization. They should also specify the conventions used for page numbering, page
header and footer information, and section and subsection numbering.
3. Document presentation standards: These define a 'house style' for documents and contribute
significantly to document consistency. They include the definition of fonts and styles used in the document, the use of logos
and company names, the use of color to highlight document structure, etc.
4. Document update standards: As a document is changed to reflect changes in the system, a consistent way of indicating these
changes should be used. These might include the use of different colors of cover to indicate a new document version and the
use of change bars to indicate modified or deleted paragraphs.
Quality Planning

 A quality plan sets out the desired product qualities and how these are assessed and defines the most significant quality
attributes.
 It should set out which organisational standards should be applied and, where necessary, define new standards to be
used.
Quality plan structure

o Product introduction;
o Product plans;
o Process descriptions;
o Quality goals;
o Risks and risk management.
Quality plans should be short, succinct documents

Figure: software quality attributes include safety, understandability, portability, security, testability, usability, resilience,
modularity, efficiency, reliability, adaptability, reusability, robustness, complexity and learnability.

Quality control
This involves checking the software development process to ensure that procedures and standards are being
followed.
There are two approaches to quality control:
i. Quality reviews
ii. Automated software assessment and software measurement.
Quality reviews – The software, its documentation and the processes used to produce that software are reviewed by a group of
people who are responsible for checking that the project standards have been followed and that the software and documents
conform to these standards. Deviations from these standards are noted and brought to the attention of the project team. This is the
principal method of validating the quality of a process or of a product. There are different types of review with different
objectives.
Types of review

Review type | Principal purpose
Design or program inspections | To detect detailed errors in the requirements, design or code. A checklist of possible errors should drive the review.
Progress reviews | To provide information for management about the overall progress of the project. This is both a process and a product review and is concerned with costs, plans and schedules.
Quality reviews | To carry out a technical analysis of product components or documentation to find mismatches between the specification and the component design, code or documentation and to ensure that defined quality standards have been followed.

Software measurement and metrics


Software measurement is concerned with deriving a numeric value for an attribute of a software product or process. This
allows for objective comparisons between techniques and processes.

Software metrics: predictor and control metrics


Software metric is any type of measurement which relates to a software system, process or related documentation. Examples
are measures of the size of product in lines of code, the number of reported faults in delivered software product and the
number of person-days required to develop a system component.
Metrics may be either control metrics or predictor metrics.
 Control metrics are usually associated with the software process, e.g. the average effort and time required
to repair reported defects.
 Predictor metrics are associated with the software product. Examples are the average length of identifiers
in a program (the longer the identifiers, the more likely they are to be meaningful and hence the more understandable the
program), the length of code (usually, the larger the size of the code of a program component, the more complex and error-prone
that component is likely to be) and the number of attributes and operations associated with objects in a design. Both types of
metric, as shown in the diagram, may influence management decision-making.
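A rough sketch of collecting two of the predictor metrics mentioned above (size in lines of code and average identifier length) for a Python source file; what thresholds make such numbers meaningful is a local decision and is not shown here.

import keyword
import re

def simple_metrics(path):
    # Return non-blank, non-comment lines of code and the average identifier length.
    with open(path) as f:
        lines = f.readlines()
    code_lines = [l for l in lines if l.strip() and not l.lstrip().startswith("#")]
    identifiers = [
        name
        for line in code_lines
        for name in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", line)
        if not keyword.iskeyword(name)
    ]
    avg_len = sum(map(len, identifiers)) / len(identifiers) if identifiers else 0
    return {"loc": len(code_lines), "avg_identifier_length": round(avg_len, 1)}

# print(simple_metrics("some_module.py"))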

Importance of quality.

If a system is of such poor quality that it does not meet the user's requirements and contains errors, then the users will be
dissatisfied with the system, leading to:

i. If the user is not satisfied, he can insist on rework free of charge (depending on contractual
arrangements or on the method of funding maintenance). This could have disastrous financial implications for the developers:
human effort, time and other resources are diverted from the productive work of building new systems to reworking the
system to improve it.
ii. Because of the above-stated problem of concentrating on maintaining old systems instead of building new ones,
the efficiency of the business will deteriorate.


iii. If maintenance is regarded as an overhead to the user, then there will be ill will and a lack of confidence by the
user in the developers. This will create problems for any subsequent projects in hand, and demand for systems from that
developer will go down due to decreased user satisfaction.
iv. Equally, the customer incurs more maintenance costs if maintenance is regarded as an overhead to the user.
Implementing the steps necessary to ensure quality systems could incur greater development cost and delayed delivery dates.
However, the long-term benefits of this must be considered, such as reduced maintenance costs and increased user satisfaction.

Auditing and Audits

The term "audit" is often used, in a more general sense, to mean the independent examination of any completed work to ensure
that it carries out the tasks required of it with the correct level of control to ensure that errors are minimised. For example, the
coding of a program could be audited by a more senior programmer to ensure that standards had been adhered to and that the
coding was correct.

It is a legal requirement that the financial transactions and the systems used to record and process these transactions are subject to
audit, to ensure that they show "a true and fair view" of the activities of the organisation. It is also in the interests of the
organisation to make certain that it is conducting its business properly. For this reason, many organisations have their own internal
audit departments.

(a) Work of auditors

The work of internal auditors can be divided into two categories:

Installation auditing covering, in particular:

(i) security aspects, which will be discussed in the next study unit;

(ii) the evaluation of controls over the processing of data and, in particular personal data with
regard to the Data Protection Act;

(iii) production of standards, best practice, etc.

Audit Trail

Even if security procedures did not require it, auditors will ask that an "audit trail" be provided. This will be printed evidence,
showing the passage of transactions through the system, giving not merely totals but also information as to the way in which those
totals are obtained. The audit trail may make use of existing control procedures, and it will cover source documents; coded
documents (from source); control totals; prints of master files; error messages generated during processing; and the processing
log itself. In most cases, auditors will require their own copy of this printed information; it will not be sufficient for the designer
to lend a report on a "see and return" basis.


It must be expected that during the operation of some procedures, something will go wrong and the run will be abandoned. It
should be (but rarely is) common practice to have a processing log, in which the start and finish of every operation is recorded
and remarks added where justified, such as runs which have been stopped before completion. These logs must be regularly
examined by the auditor, to ensure that controls are adequate.

(c) Operational Audit

After a system has been running for some time, an operational audit may be carried out to see if the real system comes up to
what was hoped from it. For example, a stock control feasibility study may have shown that introduction of a particular system
should reduce stock holdings by 50% while, at the same time, reducing the number of stockouts by 60%. These results will not
be achieved in the first month's operation of
the new system. After it has been running for a few months and stock levels have settled down, an audit might be carried out to
see if the projected benefits have been obtained.

MANAGEMENT OF CHANGE

A good deal of research and survey results confirm that, in practice, human problems are a major cause of failure in the effective
development and management of computer systems for example:

 A technically feasible application may still collapse in operation, perhaps for reasons of ignorance, antagonism or sheer
apathy on the part of user staff.

 Line managers may look upon change as a threat or merely as unhelpful, and withdraw their co-operation
with the system.

 Other managers may simply ignore the new system and continue using the former
system, regarding the new system as a waste of paper.

 Computer specialists often tend to follow their own technical objectives and do not
consider the overall corporate interest.
Software systems are subject to continual change requests:
i. From users;
ii. From developers;
iii. From market forces.
Change management is concerned with keeping track of these changes and ensuring that they are implemented in the most
cost-effective way.

Reasons for Change


The demand for a configuration or engineering change can be generated either from within the company or externally from
the customer or a supplier in order to:
 Correct a usability, reliability or safety problem


 Fix a bug or product defect


 Improve performance and/or functionality
 Improve productivity
 Lower cost
 Incorporate new customer requirements
 Specify a new supplier or supplier part/material
 Enhance installation, service, or maintenance
 Respond to regulatory requirements
CONFIGURATION MANAGEMENT
Configuration management involves the development and application of procedures and standards to manage an evolving
software product. You need to manage evolving systems because, as they evolve, many different versions of the software are
created. There may be several versions under development and in use at the same time. You need to keep track of the changes
that have been implemented and how these changes have been included in the software.
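A hedged sketch of the record-keeping side of configuration management: a tiny registry noting which change requests went into which version of a component. A real project would rely on version-control and configuration-management tools rather than code like this; the names used are invented.

from dataclasses import dataclass, field

@dataclass
class ComponentHistory:
    # Tracks the released versions of one evolving software component.
    name: str
    versions: dict = field(default_factory=dict)   # version -> list of change-request ids

    def record_release(self, version, change_ids):
        self.versions[version] = list(change_ids)

    def changes_in(self, version):
        return self.versions.get(version, [])

history = ComponentHistory("order-entry")
history.record_release("1.0", [])
history.record_release("1.1", ["CR-014", "CR-019"])   # change requests included in 1.1
print(history.changes_in("1.1"))                       # ['CR-014', 'CR-019']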

SYSTEMS MAINTENANCE
This is the general process of changing a system after it has been delivered. The changes may be simple changes to correct coding
errors, more extensive changes to correct design errors, or significant enhancements to correct specification errors or
accommodate new requirements.
Types of maintenance
 Corrective maintenance- maintenance to repair software defects/ faults.
 Adaptive maintenance- maintenance to adapt the software to a different operating environment- such as hardware,
the platform OS or other support software systems.
 Perfective maintenance – To add or modify a system’s functionality especially when requirements change in response
to organizational or business change.
 Preventive maintenance –To ensure the system can withstand stress.
In practice, there isn’t a clear-cut distinction between these different types of maintenance.
The costs of system maintenance represent a large proportion of the budget of most organizations that use software systems. It
is usually cost-effective to invest effort when designing and implementing a system to reduce maintenance costs. Good
software engineering techniques all contribute to maintenance cost reduction.

Software maintenance processes

The six software maintenance processes are as follows:


1. The implementation process contains software preparation and transition activities, such as the conception and
creation of the maintenance plan, the preparation for handling problems identified during development, and the follow-up on
product configuration management.
2. The problem and modification analysis process, which is executed once the application has become the responsibility
of the maintenance group. The maintenance programmer must analyze each request, confirm it (by reproducing the situation)
and check its validity, investigate it and propose a solution, document the request and the solution proposal, and, finally,
obtain all the required authorizations to apply the modifications.
3. The process of implementing the modification itself.
4. The process of accepting the modification, by confirming the modified work with the individual who submitted the
request in order to make sure the modification provided a solution.
5. The migration process (platform migration, for example) is exceptional, and is not part of daily maintenance tasks. If
the software must be ported to another platform without any change in functionality, this process will be used and a
maintenance project team is likely to be assigned to this task.
6. Finally, the last maintenance process, also an event which does not occur on a daily basis, is the retirement of a piece
of software.

Enhancement or Development?

Every system will require enhancement. There is the choice of producing and developing an entirely new system.
But even then, changes may be required to existing programs, as the new system will have to incorporate the
previous system, and it will be sensible to use already prepared software.

Sooner or later, a choice must be made between enhancement (of the existing system) and development (of a new
system).
