
COURSE CODE: DIT 0301

COURSE TITLE: SOFTWARE ENGINEERING

COURSE OUTLINE:
1. Introduction: Basic Definitions: Software Engineering, System Engineering, Software
Process, Software Process Model, Software Engineering Methods, Information Systems
Manufacturing Techniques.
2. Software Development Process Trends: Process And Meta Processes, Process Models;
Waterfall, Incremental, Spiral, Evolutionary Development, Component Based Software
Engineering.
3. Software Requirements: Functional And Non Functional Requirements, User
Requirements, System Requirements, Interface Specifications, Software Requirement
Document.
4. Requirement Engineering: Feasibility Study, Requirements Elicitation And Analysis,
Requirements Validation And Requirement Management
5. Software Design – SW Construction, Design Concepts, Modularity, CASE Tools,
Aspects of Object Oriented Programming
6. CAT 1
7. Software Project Management: Management Activities: Proposal Writing, Project
Planning And Scheduling, Project Cost, Project Monitoring And Reviews, Personnel
Selection And Evaluation, Report Writing And Presentations. Project Planning: Project
Plan , Project Scheduling, Software Configuration, Software Quality Assurance, SQA
Plan, Process And Product Quality, Standards And Procedures, Quality Planning And
Quality Control
8. Risk Management: Steps In Risk Management; Risk Identification, Risk Analysis, Risk
Prevention; Limitations; Risk Planning; Eliminating Risk Occurrence, Ignoring Risk,
Risk Contingency; Risk Monitoring; Types Of Risks; Known, Unknown, And
Unforeseen
9. Software Maintenance: Types Of Maintenance, Factors Affecting Maintenance
10. Computer Security: Data Security, Network Security, Techniques: Password,
Biometrics, Firewalls, Physical Plant Protection, Key Cards, Cryptography, Encryption,
Digital Certificates, Policy
11. Professional Issues In IT: Copyright, Trademark, Legal Issues, Patent

Assessment
CATS - 30%
ASSIGNMENTS - 10%
FINAL EXAM - 60%

REFERENCE
Main Text
Sommerville, Ian. Software Engineering, 7th Ed. Pearson Education Ltd.

Others
1. Roger S. Pressman, Software Engineering: A Practitioner’s Approach, 6th Ed. McGraw-Hill, 2005.
2. Edward Yourdon, Decline and Fall of the American Programmer, Prentice Hall, Inc., 1993.
3. url: http://www.software-engi.com
INTRODUCTION TO SOFTWARE ENGINEERING
Software engineering is the application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software; that is, the application of engineering to
software. (IEEE Computer Society)
Software engineering is the process of solving customers' problems by the systematic
development and evolution of large, high-quality software systems within cost, time, and other
constraints. (Lethbridge)
Software engineering can be described in terms of:
Analysis - breaking apart a problem into pieces.
Synthesis - constructing a solution from available or new components.
Methods & tools - that enable software projects to be built predictably within prescribed
schedules & budgets, meeting the customer's requirements of functionality & reliability.

Software engineering is concerned with the theories, methods and tools which are needed to
develop high quality, complex software in a cost effective way on a predictable schedule.

Software engineering: The disciplined application of engineering, scientific, and mathematical
principles, methods, and tools to the economical production of quality software.
Software is abstract and intangible. It is not constrained by materials, governed by physical laws
or by manufacturing processes. There are no physical limitations on the potential of software.

A software product consists of developed programs and all associated documentation and
configuration data needed to make the programs operate correctly.

Software process: The set of activities, methods, and practices that are used in the production
and evolution of software.

Software process model: One specific embodiment of a software process architecture.

Software engineering process: The total set of software engineering activities needed to
transform a user's requirements into software.

Software process architecture: A framework within which project-specific software processes
are defined.

The Nature of Software


Software is flexible - Software is an executable specification of a computation / application
Software is expressive - All computable functions may be expressed in software. Complex event
driven systems may be expressed in software.
Software is huge - An operating system may consist of millions of lines of code.
Software is complex - Software has little of the regularity or recognizable components found in
other complex systems; there are exponentially many paths through the code, and changes in one
part of the code may have unintended consequences in other, equally remote sections of the code.
Software is cheap - Manufacturing cost is zero, development cost is everything. Thus, the first
copy is the engineering prototype, the production prototype and the finished product.
Software is never finished - The changing requirements and ease of modification permit the
maintenance phase to dominate a software product's life cycle, i.e., the maintenance phase
consists of ongoing design and implementation cycles.
Software is easily modified - It is natural to use an iterative development process that combines
requirements elicitation with design and implementation and uses the emerging implementation to
uncover errors in the design and in the requirements.
Software is communication - Communication with a machine but also communication between
the client, the software architect, the software engineer, and the coder. Software must be readable
in order to be evolvable.

THE NEED FOR SOFTWARE ENGINEERING

The need for software engineering arises from the high rate of change in user requirements and
in the environment in which the software operates.

 Large software - It is easier to build a wall than a house or building; likewise, as the
size of software becomes large, engineering has to step in to give it a scientific process.
 Scalability- If the software process were not based on scientific and engineering
concepts, it would be easier to re-create new software than to scale an existing one.
 Cost- As the hardware industry has shown, large-scale manufacturing has lowered the
price of computer and electronic hardware. But the cost of software remains high if a
proper process is not adopted.
 Dynamic Nature- The ever-growing and adapting nature of software depends heavily
upon the environment in which the user works. Because the nature of software is always
changing, new enhancements need to be made to the existing one. This is where software
engineering plays an important role.
 Quality Management- Better process of software development provides better and
quality software product.
CHARACTERISTICS OF GOOD SOFTWARE
A software product can be judged by what it offers and how well it can be used. The
software must satisfy requirements on the following grounds:
i. Operational: This tells us how well software works in operations. It can be measured
on: Budget, Usability, Efficiency, Correctness, Functionality, Dependability, Security
and Safety
ii. Transitional: This aspect is important when the software is moved from one platform
to another: Portability, Interoperability, Reusability and Adaptability
iii. Maintenance: This aspect describes how well the software has the capability to
maintain itself in the ever-changing environment: Modularity, Maintainability,
Flexibility and Scalability
In short, software engineering is a branch of computer science which uses well-defined
engineering concepts to produce efficient, durable, scalable, in-budget and on-time
software products.
SOFTWARE DEVELOPMENT LIFE CYCLE
LIFE CYCLE MODEL
A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle.
A life cycle model represents all the activities required to make a software product transit
through its life cycle phases.
Different software life cycle models
Many life cycle models have been proposed so far. Each of them has some advantages as well as
some disadvantages. A few important and commonly used life cycle models are as follows:
i. Classical Waterfall Model
ii. Iterative Waterfall Model
iii. Prototyping Model
iv. Evolutionary Model
v. Spiral Model

1. CLASSICAL WATERFALL MODEL


The classical waterfall model is intuitively the most obvious way to develop software. It is not a
practical model in the sense that it cannot be used in actual software development projects. Thus,
this model can be considered to be a theoretical way of developing software. But all other life
cycle models are essentially derived from the classical waterfall model. Classical waterfall
model divides the life cycle into the following phases:

Feasibility study - The main aim of feasibility study is to determine whether it would be
financially and technically feasible to develop the product.
Requirements analysis and specification: - The aim of the requirements analysis and
specification phase is to understand the exact requirements of the customer and to document
them properly. This phase consists of two distinct activities, namely
 Requirements gathering and analysis: The goal of the requirements gathering activity is
to collect all relevant information from the customer regarding the product to be
developed. This is done to clearly understand the customer requirements so that
incompleteness and inconsistencies are removed.
 Requirements specification: The requirements analysis activity begins by collecting all
relevant data regarding the product to be developed from the users of the product and
from the customer through interviews and discussions.
After all ambiguities, inconsistencies, and incompleteness have been resolved and all the
requirements properly understood, the requirements specification activity can start. During
this activity, the user requirements are systematically organized into a Software
Requirements Specification (SRS) document. The important components of this document
are functional requirements, the nonfunctional requirements, and the goals of
implementation.
Design: - The goal of the design phase is to transform the requirements specified in the SRS
document into a structure that is suitable for implementation in some programming language.
Two distinctly different approaches are available: the traditional design approach and the
object-oriented design approach.

 Traditional design approach - Traditional design consists of two different activities;
first, a structured analysis of the requirements specification is carried out, where the
detailed structure of the problem is examined. This is followed by a structured design
activity. During structured design, the results of structured analysis are transformed
into the software design.
 Object-oriented design approach -In this technique, various objects that occur
in the problem domain and the solution domain are first identified, and the
different relationships that exist among these objects are identified. The object
structure is further refined to obtain the detailed design.
Coding and unit testing:-The purpose of the coding phase (sometimes called the
implementation phase) of software development is to translate the software design into source
code. Each component of the design is implemented as a program module. The end-product of
this phase is a set of program modules that have been individually tested. During this phase, each
module is unit tested to determine the correct working of all the individual modules.
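As a minimal illustrative sketch (the function under test and the test cases are hypothetical, not
drawn from any particular project), unit testing a single module in Python might look like this:

    # A minimal sketch of unit testing one module; add() and the
    # test cases are hypothetical examples.
    import unittest

    def add(a, b):
        # The module (unit) under test.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()

Each design component would receive a similar set of tests, and a module leaves this phase only
when all of its tests pass.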
Integration and system testing: -Integration of different modules is undertaken once they have
been coded and unit tested. Integration is normally carried out incrementally over a number of
steps. During each integration step, the partially integrated system is tested and a set of
previously planned modules are added to it. Finally, when all the modules have been successfully
integrated and tested, system testing is carried out.
The goal of system testing is to ensure that the developed system conforms to its requirements
laid out in the SRS document. System testing usually consists of three different kinds of testing
activities:

 α-testing: It is the system testing performed by the development team.
 β-testing: It is the system testing performed by a friendly set of customers.
 Acceptance testing: It is the system testing performed by the customer himself after the
product delivery to determine whether to accept or reject the delivered product.
System testing is normally carried out in a planned manner according to the system test plan
document. The system test plan identifies all testing-related activities that must be performed,
specifies the schedule of testing, and allocates resources. It also lists all the test cases and the
expected outputs for each test case.
Maintenance: -Maintenance of a typical software product requires much more than the effort
necessary to develop the product itself. Maintenance involves performing any one or more of the
following three kinds of activities:
i. Correcting errors that were not discovered during the product development phase. This is
called corrective maintenance.
ii. Improving the implementation of the system, and enhancing the functionalities of the
system according to the customer’s requirements. This is called perfective maintenance.
iii. Porting the software to work in a new environment. For example, porting may be
required to get the software to work on a new computer platform or with a new operating
system. This is called adaptive maintenance.
Shortcomings of the classical waterfall model
1. The rigid sequential flow of the model is rarely encountered in real life. Iteration can
occur causing the sequence of steps to become muddled.
2. It is often difficult for the customer to provide a detailed specification of what is required
early in the process. Yet this model requires a definite specification as a necessary
building block for subsequent steps.
3. Much time can pass before any operational elements of the system are available for
customer evaluation. If a major error in implementation is made, it may not be uncovered
until much later.
2. ITERATIVE WATERFALL MODEL
To overcome the major shortcomings of the classical waterfall model, we come up with the
iterative waterfall model.

Here, we provide feedback paths for error correction as & when detected later in a phase.
The advantage of this model is that there is a working model of the system at a very early stage
of development which makes it easier to find functional or design flaws.
The disadvantage with this SDLC model is that it is applicable only to large and bulky software
development projects. This is because it is hard to break a small software system into further
small serviceable increments/modules.
3. PROTOTYPING
Prototyping moves the developer and customer toward a "quick" implementation.
1. Prototyping begins with requirements gathering.
2. Meetings between developer and customer are conducted to determine overall system
objectives and functional and performance requirements.
3. The developer then applies a set of tools to develop a quick design and build a working
model (the "prototype") of some element(s) of the system.
4. The customer or user "test drives" the prototype, evaluating its function and
recommending changes to better meet customer needs.
5. Iteration occurs as this process is repeated, and an acceptable model is derived. The
developer then moves to "productize" the prototype by applying many of the steps
described for the classic life cycle. In object-oriented programming, with a library of
reusable objects (data structures and associated procedures), the software engineer can
rapidly create prototypes and production programs.
The benefits of prototyping are:
1. A working model is provided to the customer/user early in the process, enabling early
assessment and bolstering confidence,
2. The developer gains experience and insight by building the model, thereby resulting in a
more solid implementation of "the real thing"
3. The prototype serves to clarify otherwise vague requirements, reducing ambiguity and
improving communication between developer and user.
But prototyping also has a set of inherent problems:
1. The user sees what appears to be a fully working system (in actuality, it is a partially
working model) and believes that the prototype (a model) can be easily transformed into
a production system. This is rarely the case.
2. The developer often makes technical compromises to build a "quick and dirty" model.
Sometimes these compromises are propagated into the production system, resulting in
implementation and maintenance problems.
3. Prototyping is applicable only to a limited class of problems. In general, a prototype is
valuable when heavy human-machine interaction occurs, when complex output is to be
produced or when new or untested algorithms are to be applied.
4. EVOLUTIONARY MODEL
It is also called successive versions model or incremental model. At first, a simple working
model is built. Subsequently it undergoes functional improvements & we keep on adding new
functions till the desired system is built.
Applications:
 Large projects where you can easily find modules for incremental implementation. Often
used when the customer wants to start using the core features rather than waiting for the
full software.
 Also used in object oriented software development because the system can be easily
partitioned into units in terms of objects.
Advantages:
 The user gets a chance to experiment with a partially developed system.
 Errors are reduced because the core modules get tested thoroughly.
Disadvantages:
 It is difficult to divide the problem into several versions that would be acceptable to the
customer and that can be incrementally implemented & delivered.
5. SPIRAL MODEL
The term “spiral” is used to describe the process that is followed as the development of the
system takes place. With each iteration around the spiral (beginning at the center and working
outward), progressively more complete versions of the system are built.
Risk assessment is included as a step in the development process as a means of evaluating each
version of the system to determine whether or not development should continue. If the customer
decides that any identified risks are too great, the project may be halted. The Spiral Model is
made up of the following steps:
Project Objectives - Similar to the system conception phase of the Waterfall Model. Objectives
are determined, possible obstacles are identified and alternative approaches are weighed.
Risk Assessment - Possible alternatives are examined by the developer, and associated
risks/problems are identified. Resolutions of the risks are evaluated and weighed in the
consideration of project continuation. Sometimes prototyping is used to clarify needs.
Engineering & Production - Detailed requirements are determined and the software piece is
developed.
Planning and Management - The customer is given an opportunity to analyze the results of the
version created in the Engineering step and to offer feedback to the developer.
Problems/Challenges associated with the Spiral Model

 Due to the relative newness of the Spiral Model, it is difficult to assess its strengths and
weaknesses
 However, the risk assessment component of the Spiral Model provides both developers
and customers with a measuring tool that earlier Process Models do not have.
 The measurement of risk is a feature that occurs every day in real-life situations but,
unfortunately, not as often in the system development industry.
 The practical nature of this tool helps to make the Spiral Model a more realistic Process
model than some of its predecessors.
REQUIREMENTS ANALYSIS AND SPECIFICATION
REQUIREMENTS ENGINEERING
This is the process of eliciting, analysing and recording requirements for software systems. Its
goal is to create and maintain a system requirement document. The process involves assessing
whether the system is useful to business, discovering requirements, converting them into some
standard form and checking that the requirements actually define the system.
The requirements engineering process
These are the activities that are involved in requirements engineering. They include:
1. Requirements elicitation
In requirements engineering, requirements elicitation is the practice of obtaining the
requirements of a system from users, customers and other stakeholders. The practice is also
sometimes referred to as requirements gathering/requirements discovery.
Requirements elicitation is non-trivial because you can never be sure you get all
requirements from the user and customer by just asking them what the system should do.
Requirements elicitation practices involve fact-finding techniques such as interviews,
questionnaires, user observation/ethnography, documentary review, etc.
2. Requirements Analysis and Negotiation
The purpose of this activity is to transform technical requirements into formal requirements
by ensuring that they express the needs of the customer. Analysis is an iterative activity. The
process steps will likely be repeated several times in consultation with the customers. The
goal of analysis is to locate places where requirements are unclear, incomplete, ambiguous or
contradictory. It groups related requirements and organizes them into coherent clusters. The
requirements are then prioritized and any conflicts that arise are resolved.
3. Requirements Validation
Concerned with demonstrating that the requirements define the system that the customer
really wants. Requirements errors are very costly and so validation is very important. Fixing
a requirements error after delivery may cost up to 100 times the cost of fixing an
implementation error. The requirements are checked against the following factors:
Validity – Does the system provide the functions which best support the customer’s needs?
Consistency – Are there any requirements conflicts?
Completeness – Are all functions required by the customer included?
Regular reviews should be held during the whole process; both the technical staff and the
customer should be involved in the reviews. Good communication between the technical
staff and the customer can resolve problems at an early stage. Before changing requirements,
check for traceability (origin) and adaptability (impact on other requirements) of the
requirements.

4. Requirements Management
The process of managing changing requirements during the requirements engineering
process. Requirements may change because:
 Technology may change
 User needs may change
 Business processes may change, etc.
5. Requirements Documentation
Requirements need to be documented and a standard format is used.
Requirements should state what the system should do and the design should describe how
it does this.
N.B.: Learn more about the IEEE standards for documentation, especially requirements
documents.

Challenges in getting requirements


 Stakeholders don’t know what they really want.
 Stakeholders express requirements in their own terms.
 Different stakeholders may have conflicting requirements.
 Organisational and political factors may influence the system requirements.
 The requirements change during the analysis process. New stakeholders may emerge and
the business environment change.

FACT FINDING
The formal process of using techniques such as interviews and questionnaires to collect
facts about systems, requirements and preferences.

When are fact finding techniques used?


There are many occasions for fact finding during the system development process.
However, fact finding is particularly crucial to the early stages of the life cycle including
system planning, system definition and requirements collection and analysis stages. It is
during these early stages that the developer learns about the terminology, problems,
opportunities, constraints, requirements and priorities of the enterprise and the users of
the system.

Interviews
This is the most commonly used, and normally the most useful, fact-finding technique.
Interviews are a fact finding technique whereby the systems analysts collect information
from individuals (and/or groups) through face-to-face interaction. There can be several
objectives to using interviewing, such as finding out facts, verifying facts, clarifying
facts, generating enthusiasm, getting end-users involved, identifying requirements and
gathering ideas and opinions. However, using the interviewing technique requires good
communication skills for dealing effectively with people who have different values,
priorities, opinions, motivations and personalities. As with other fact finding techniques,
interviewing is not always the best method for all situations.
There are two types of interviews:
i. Unstructured interviews: Conducted with only a general objective in mind
and with few, if any, specific questions. The interviewer counts on the
interviewee to provide a framework and direction to the interview. This type
of interview frequently loses focus and, for this reason, it often does not work
well for systems analysis and design.
ii. Structured interviews: The interviewer has a specific set of questions to ask
the interviewee. Depending on the interviewee’s responses, the interviewer
will direct additional questions to obtain clarification or expansion. Some of
the questions may be planned and others spontaneous.
Open-ended questions allow the interviewee to respond in any way he/she deems appropriate.
An example of an open-ended question is: ‘Why are you dissatisfied with the report on client
registration?’
Closed-ended questions restrict answers to either specific choices or short, direct answers. An
example of a closed-ended question might be: ‘Are you receiving the report on client registration
on time?’ or ‘Does the report on client registration contain accurate information?’ Both questions
only require a ‘Yes’ or ‘No’ response.
Ensuring a successful interview includes selecting appropriate individuals, preparing
extensively for the interview and conducting the interview in an efficient and effective
manner.

Advantages
- Interviews give the analyst an opportunity to motivate the interviewee to respond
freely and openly to questions.
- Interviews allow the systems analyst to probe for more feedback from the
interviewee.
- Interviews give the analyst an opportunity to observe the interviewee’s nonverbal
communication. A good analyst may be able to obtain information by observing the
interviewee’s body movements and facial expressions as well as by listening to verbal
replies to questions.
- Interviews permit the systems analyst to adapt or reword questions for each
individual.

Disadvantages
- Interviewing is a very time-consuming, and therefore costly, fact-finding approach.
- Success of interviews is highly dependent on the systems analyst’s human relations
skills.
- Interviewing may be impractical due to the location of interviews.
- Success can be dependent on the willingness of the interviewees to participate in
interviews.

Observation
Observation is one of the most effective fact-finding techniques for understanding a
system. With this technique, it is possible to either participate in, or watch, a person
perform activities to learn about the system. This technique is particularly useful when
the validity of data collected through other methods is in question or when the complexity
of certain aspects of the system prevents a clear explanation by the end users.
As with the other fact-finding techniques, successful observation requires preparation. To
ensure that the observation is successful, it is important to know as much about the
individuals and the activity to be observed as possible. For example, ‘When are the low,
normal and peak periods for the activity being observed?’ and ‘Will the individuals be
upset by having someone watch and record their actions?’

Advantages
- Data gathered by observation can be highly reliable. Sometimes observations are
conducted to check the validity of data obtained directly from individuals.
- The systems analyst is able to see exactly what is being done. Complex tasks are
sometimes difficult to clearly explain in words. Through observation, the systems analyst
can identify tasks that have been missed or inaccurately described by other fact-finding
techniques. Also, the analyst can obtain data describing the physical environment of the
task (e.g., physical layout, traffic, lighting, noise level).
- Observation is relatively inexpensive compared with other fact-finding techniques. Other
techniques usually require substantially more employee release time and copying
expenses.
- Observation allows the systems analyst to do work measurements.

Disadvantages
- Because people usually feel uncomfortable when being watched, they may unwittingly
perform differently when being observed.
- The work being observed may not involve the level of difficulty or volume normally
experienced during that time period.
- Some systems activities may take place at odd times causing a scheduling inconvenience
for the systems analyst.
- The tasks being observed are subject to various types of interruption.
- Some tasks may not always be performed in the manner in which they are observed by the
systems analyst. For example, the systems analyst might have observed how a company
filled several customer orders. However, the procedures observed may have been those
steps used to fill a number of regular customer orders. If any of those orders had been
special orders (e.g. an order for goods not normally kept in stock), the systems analyst
would have observed a different set of procedures being executed.
- If people have been performing tasks in a manner that violates standard operating
procedures, they may temporarily perform their jobs correctly while you are observing
them. In other words, people may let you see what they want you to see.

Questionnaires
Questionnaires are special–purpose documents that allow facts to be gathered from a
large number of people while maintaining some control over their responses. When
dealing with a large audience, no other fact-finding technique can tabulate the same facts
as efficiently.
There are two types of questions that can be asked in a questionnaire, namely free-format
and fixed-format.
Free-format questions offer greater freedom in providing answers. A question is asked
and the respondent records the answer in the space provided after the question. Examples
of free-format questions are: ‘What reports do you currently receive and how are they
used?’ and ‘Are there any problems with these reports? If so, please explain.’ The
problems with free-format questions are that the respondent’s answers may prove
difficult to tabulate and, in some cases, may not match the questions asked.

Fixed-format questions require specific responses from individuals. Given any question, the
respondent must choose from the available answers. This makes the results much easier
to tabulate. On the other hand, the respondent cannot provide additional information. An
example of a fixed-format question is: ‘The current format of the report on property
rentals is ideal and should not be changed.’ The respondent may be given the option to
answer ‘Yes’ or ‘No’ to this question, or be given the option to answer from a range of
responses including ‘Strongly Agree’, ‘Agree’, ‘No Opinion’, ‘Disagree’, and ‘Strongly
Disagree’.
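Because fixed-format responses come from a known set of choices, tabulation can even be
automated. The following minimal sketch (the response values are a hypothetical example) counts
responses using Python's standard library:

    # A minimal sketch of tabulating fixed-format questionnaire
    # responses; the response list is a hypothetical example.
    from collections import Counter

    responses = [
        "Agree", "Strongly Agree", "Agree", "No Opinion",
        "Disagree", "Agree", "Strongly Disagree", "Agree",
    ]

    tally = Counter(responses)
    total = len(responses)
    for answer, count in tally.most_common():
        print(f"{answer}: {count} ({count / total:.0%})")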

Advantages
- Most questionnaires can be answered quickly. People can complete and return
questionnaires at their convenience.
- Questionnaires provide a relatively inexpensive means for gathering data from a large
number of individuals.
- Questionnaires allow individuals to maintain anonymity. Therefore, individuals are more
likely to provide the real facts, rather than telling you what they think their boss would
want them to.
- Responses can be tabulated and analyzed quickly.

Disadvantages
- The number of respondents is often low.
- There’s no guarantee that an individual will answer or expand on all the questions.
- Questionnaires tend to be inflexible. There’s no opportunity for the systems analyst to
obtain voluntary information from individuals or to reword questions that may have been
misinterpreted.
- It is not possible for the systems analyst to observe and analyze the respondent’s body
language.
- There is no immediate opportunity to clarify a vague or incomplete answer to any
question.
- Good questionnaires are difficult to prepare.

Documentary Review
It involves perusing literature to gain a better understanding of the existing
system. The documents to be reviewed include: job descriptions, procedure
manuals, management reports, sales reports, etc.
When to Use Documentary Review
1) When a system analyst wants to have a quick overview of a system.
2) When the information required cannot be obtained using other techniques.
Merits
1) It is comparatively cheaper.
2) It is a faster means of information gathering especially if the documents are few.
Demerits
1) It may be time consuming and requires a lot of patience.
2) The relevant information may be missing.
3) The success depends on the experience and skills of the system analyst.
4) Most of the information available may be outdated.
5) The information available may not suit well with the requirements of the system
proposed.

Sampling

This is a process of systematically selecting representative elements of a population. When
these elements are examined closely, it is assumed that the analysis will reveal useful
information about the population as a whole.
In the language of sampling:  
-a population is the entire collection of people or things you are interested in;
-a census is a measurement of all the units in the population;
-a population parameter is a number that results from measuring all the units in the
population;
-a sampling frame is the specific data from which the sample is drawn, e.g., a telephone
book;
-a unit of analysis is the type of object of interest, e.g., arsons, fire departments,
firefighters;
-a sample is a subset of some of the units in the population;
-a statistic is a number that results from measuring all the units in the sample;
-statistics derived from samples are used to estimate population parameters.  
 
For example, to find out the average age of all motor vehicles in the state in 1997:

Population = all motor vehicles in the state in 1997
Sampling frame = all motor vehicles registered with the DMV on July 1, 1997
Design = probability sampling
Unit of analysis = motor vehicle
Sample = 300 motor vehicles
Data gathered = the age of each of the 300 motor vehicles selected in the sample
Statistic = the average age of the 300 motor vehicles in the sample
Parameter = the estimate of the average age of all motor vehicles in the state in 1997

Types of Samples:
Non-probability (non-random) samples:
These samples focus on volunteers, easily available units, or those that just happen to be
present when the research is done. Non-probability samples are useful for quick and
cheap studies, for case studies, for qualitative research, for pilot studies, and for
developing hypotheses for future research.
a. Convenience sample: Also called an "accidental" or "man-in-the-street"
sample. The researcher selects units that are convenient, close at
hand, easy to reach, etc.
b. Purposive sample: The researcher selects the units with some purpose in
mind, for example, students who live in dorms on campus, or experts on
urban development.
c. Quota sample: The researcher constructs quotas for different types of
units. For example, to interview a fixed number of shoppers at a mall, half
of whom are male and half of whom are female.
Other samples that are usually constructed with non-probability methods include library
research, participant observation, marketing research, consulting with experts, and
comparing organizations, nations, or governments.
Probability-based (random) samples:
a. Simple random sample: Each unit in the population is identified, and each unit has
an equal chance of being in the sample. The selection of each unit is independent of
the selection of every other unit. Selection of one unit does not affect the chances of
any other unit.
b. Systematic random sampling: A random starting point is selected, and then
every k-th unit in the population list is chosen, so that each unit has an equal
chance of being in the sample (see the sketch after this list).
c. Stratified random sampling: Each unit in the population is identified, and each unit
has a known, non-zero chance of being in the sample. This is used when the
researcher knows that the population has sub-groups (strata) that are of interest.
d. Cluster sampling: Cluster sampling views the units in a population not only as
members of the total population but also as members of naturally occurring
clusters within the population. For example, city residents are also residents of
neighborhoods, blocks, and housing structures.
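As a minimal sketch (the population and sample size are hypothetical examples), simple and
systematic random samples could be drawn as follows in Python:

    # A minimal sketch of simple vs. systematic random sampling;
    # the population and sample size are hypothetical examples.
    import random

    population = list(range(1, 1001))  # e.g., 1000 registered vehicles
    n = 30                             # desired sample size

    # Simple random sample: every unit has an equal, independent chance.
    simple_sample = random.sample(population, n)

    # Systematic random sample: a random start, then every k-th unit.
    k = len(population) // n
    start = random.randrange(k)
    systematic_sample = population[start::k][:n]

    print(len(simple_sample), len(systematic_sample))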
When to Use Sampling
1) When the target population is too large.
2) When the population has similar characteristics.
Merits
1) It is cheaper considering the target population.
2) It speeds up the data gathering process.
3) It is effective since one obtains accurate information, which is assumed to be
representative.
Demerits
1) One requires statistical knowledge to use the method.
2) The sample taken may not be representative of the entire population and may overlook
certain critical issues.
Research and survey/site visit
This involves thorough research of the applications and problems of the old system. The
analyst must review trade journals, periodicals and books containing relevant
information.
He can also attend professional meetings and seminars, and visit other companies which
have similar systems.
Existing computerized system
The user requirements of a new computerized system can also be collected from the
existing computer system. The way work is done is analyzed and improvements are
suggested. The areas looked at are:
i. File structures
ii. Transaction volumes
iii. Screen design
iv. User satisfaction
v. Causes of system crash e.t.c.

Software Requirements Specification (SRS) document

The important parts of the SRS document are:

i. Functional requirements of the system: The functional requirements part discusses the
functionalities required from the system. The system is considered to perform a set of
high-level functions. Each function of the system can be considered as a transformation
of a set of input data to the corresponding set of output data. The user can get some
meaningful piece of work done using a high-level function.
ii. Non-functional requirements of the system: Nonfunctional requirements deal with the
characteristics of the system which cannot be expressed as functions - such as the
maintainability of the system, portability of the system, usability of the system, etc.
iii. Goals of implementation: The goals of implementation part documents some general
suggestions regarding development. These suggestions guide trade-off among design
goals. The goals of implementation section might document issues such as revisions to
the system functionalities that may be required in the future, new devices to be supported
in the future, reusability issues, etc.

Properties of a good SRS document


The important properties of a good SRS document are the following:
a. Concise. The SRS document should be concise and at the same time unambiguous,
consistent, and complete. Verbose and irrelevant descriptions reduce readability and also
increase error possibilities.
b. Structured. It should be well-structured. A well-structured document is easy to
understand and modify. In practice, the SRS document undergoes several revisions to
cope with the customer requirements. Often, the customer requirements evolve over a
period of time. Therefore, in order to make the modifications to the SRS document easy,
it is important to make the document well-structured.
c. Black-box view. It should only specify what the system should do and refrain from
stating how to do it. This means that the SRS document should specify the external
behavior of the system and not discuss implementation issues. The SRS document
should view the system to be developed as a black box, and should specify the externally
visible behavior of the system. For this reason, the SRS document is also called the
black-box specification of a system.
d. Conceptual integrity. It should show conceptual integrity so that the reader can easily
understand it.
e. Response to undesired events. It should characterize acceptable responses to undesired
events. These are called system response to exceptional conditions.
f. Verifiable. All requirements of the system as documented in the SRS document should
be verifiable. This means that it should be possible to determine whether or not
requirements have been met in an implementation.

Problems without an SRS document


The important problems that an organization would face if it does not develop an SRS document
are as follows:
a. Without an SRS document, the system would not be implemented
according to customer needs.
b. Software developers would not know whether what they are developing is
exactly what the customer requires.
c. Without an SRS document, it will be very difficult for the maintenance
engineers to understand the functionality of the system.
d. It will be very difficult for user document writers to write the users’
manuals properly without understanding the SRS document.
Problems with an unstructured specification
 It would be very difficult to understand the document.
 It would be very difficult to modify the document.
 Conceptual integrity would not be shown in the document.
 The SRS document might be ambiguous and inconsistent.

FEASIBILITY STUDY
Depending on the results of the initial investigations, the survey is expanded to a more
detailed feasibility study.
i) A feasibility study is an activity undertaken to determine the possibility or probability of
either improving the existing system or developing a totally new system.
ii) A feasibility study is a test of a system proposal according to its workability, impact on the
organization, ability to meet user needs, and effective use of resources. It focuses on three
major questions:
- What are the user’s demonstrable needs and how does a candidate system meet them?
- What resources are available for given candidate systems? Is the problem worth solving?
- What are the likely impacts of the candidate system on the organization? How well does
it fit within the organization’s master computerization plan?

The objective of feasibility study is not to solve the problem but to acquire a sense of its
scope. During the study, the problem definition is crystallized and aspects of the problem
to be included in the system are determined. Consequently, costs and benefits are
estimated with greater accuracy at this stage. The result of this feasibility study is a
formal proposal. This is simply a report – a formal document detailing the nature and
scope of the proposed solution. The proposal summarizes what is known and what is
going to be done. The report is the medium by which you tell the management what the
problem is, what you have found its causes to be and what you have to offer in the way of
recommendations. It consists of:

i) Statement of the problem – a carefully worded statement of the problem that led to
analysis.
ii) Summary of findings and recommendations – a list of major findings and
recommendations of the study. Clearly describe the subject and scope. List the areas
included and excluded. It is ideal for the user who requires quick access to the results of
the analysis of the system under study. Conclusions are stated, followed by a list of the
study recommendations and a justification for them.

iii) Details of findings – an outline of the methods and procedures undertaken by the existing
system, followed by coverage of the objectives and procedures of the candidate system.
Included are also discussions of the output reports, file structures and costs and benefits
of the candidate system.

iv) Recommendations and conclusions – specific recommendations regarding the candidate


system, including personnel assignments, costs, project schedules and target dates.

After the report is reviewed by the management, it becomes a formal agreement that
paves the way for actual design and implementation. This is a crucial decision point in
the life cycle. Many projects die here, whereas the more promising ones continue through
implementation. Changes in the proposal are made in writing, depending on the
complexity, size and cost of the project. It is important to verify changes before
committing the project to design.

Types of Feasibility Studies


i. Technical Feasibility: Technical feasibility assesses whether the current
technical resources are sufficient for the new system.
ii. Economic Feasibility: Economic feasibility determines whether the time and
money are available to develop the system. It includes the purchase of new
equipment, hardware and software.

iii. Operational Feasibility: Operational feasibility determines if the human
resources are available to operate the system once it has been installed. Users who
do not want a new system may prevent it from becoming operationally feasible.
iv. Social Feasibility: Determines the impact of the system on the people who will
work with the system. This may include work schedules, separation of offices,
retraining, redeployment and even laying off workers.

A well-done feasibility study enables the firm to avoid six common mistakes often made
in project work. These are:
i) Lack of top management support – top management has to understand and support
subordinate managers in their effort to improve the firm’s operations. Feasibility studies
also get subordinate managers directly involved in exploring and designing the systems
they will have to live with in the future. Such involvement results in increased
conscientiousness, which in turn enables top management to have more confidence in
subordinates’ plans. The result will be top management support for such plans.
ii) Failure to clearly specify problems and objectives – the feasibility study can be directed
toward defining the problems and objectives involved in a project, after management has
given the group some understanding of what they would like to accomplish.
iii) Over optimism – a feasibility study can be conducted in an objective realistic manner to
prevent overoptimistic forecasts. The study should be conservative in its estimates of
improved operations, reduced costs, and so on, to ensure that all the firm’s future
surprises with a new system are happy ones.
iv) Estimation errors – it is easy to underestimate the time and money involved in the
following areas:
a) Impact on the company’s structure
b) Employee’s resistance to change
c) Difficulty of retraining personnel
d) System development and implementation
e) Computer program debugging and running
v) The crash project – many managers do not realize the magnitude of work involved in
developing new systems. Crash projects usually involve changing too quickly. A
feasibility study might determine that a present system with all its inadequacies is
superior to a crash project – assuming of course that the feasibility study itself is not run
as a crash project.
vi) The hardware approach – firms have been known to get a computer first and then decide
on how to use it. A feasibility study can identify, in advance, the uses to which the
computer will be put and can identify the best computers for the job before any
irreversible commitments are made.

SOFTWARE DESIGN
Software design is a process to transform user requirements into some suitable form, which helps
the programmer in software coding and implementation.
Software Design Levels
Software design yields three levels of results:
a. Architectural Design - The architectural design is the highest abstract version of the
system. It identifies the software as a system with many components interacting with
each other. At this level, the designers get the idea of the proposed solution domain.
b. High-level Design- The high-level design breaks the ‘single entity-multiple component’
concept of architectural design into less-abstracted view of sub-systems and modules and
depicts their interaction with each other. High-level design focuses on how the system
along with all of its components can be implemented in the form of modules. It recognizes the
modular structure of each sub-system and their relations and interactions with each other.
c. Detailed Design- Detailed design deals with the implementation part of what is seen as a
system and its sub-systems in the previous two designs. It is more detailed towards
modules and their implementations. It defines the logical structure of each module and its
interfaces to communicate with other modules.
Modularization
Modularization is a technique to divide a software system into multiple discrete and independent
modules, which are expected to be capable of carrying out task(s) independently. These modules
may work as basic constructs for the entire software. Designers tend to design modules such that
they can be executed and/or compiled separately and independently.
Modular design naturally follows the ‘divide and conquer’ problem-solving strategy, and there
are many other benefits attached to the modular design of software.
Advantages of modularization:
 Smaller components are easier to maintain
 Program can be divided based on functional aspects
 Desired level of abstraction can be brought in the program
 Components with high cohesion can be re-used again.
 Concurrent execution can be made possible
 It is desirable from a security aspect
Concurrency
In software design, concurrency is implemented by splitting the software into multiple
independent units of execution, like modules and executing them in parallel. In other words,
concurrency provides capability to the software to execute more than one part of code in parallel
to each other.
It is necessary for the programmers and designers to recognize those modules which can be
executed in parallel.
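As a minimal sketch (the task function and its inputs are hypothetical examples), independent
units can be run concurrently in Python with a thread pool:

    # A minimal sketch of executing independent units in parallel;
    # process_report and the report names are hypothetical examples.
    from concurrent.futures import ThreadPoolExecutor

    def process_report(name):
        # Stand-in for an independent, parallelizable unit of work.
        return f"{name} processed"

    reports = ["sales", "inventory", "payroll"]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(process_report, reports))
    print(results)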
Coupling and Cohesion
When a software program is modularized, its tasks are divided into several modules based on
some characteristics. There are measures by which the quality of a design of modules and their
interaction among them can be measured. These measures are called coupling and cohesion.
Cohesion
Cohesion is a measure that defines the degree of intra-dependability within elements of a
module. The greater the cohesion, the better is the program design.
There are seven types of cohesion, namely –
i. Co-incidental cohesion - It is unplanned and random cohesion, which might
be the result of breaking the program into smaller modules for the sake of
modularization. Because it is unplanned, it may cause confusion to the
programmers and is generally not accepted.
ii. Logical cohesion - When logically categorized elements are put together into
a module, it is called logical cohesion.
iii. Temporal cohesion - When elements of a module are organized such that they
are processed at a similar point in time, it is called temporal cohesion.
iv. Procedural cohesion - When elements of a module are grouped together,
which are executed sequentially in order to perform a task, it is called
procedural cohesion.
v. Communicational cohesion - When elements of a module are grouped
together, which are executed sequentially and work on the same data
(information), it is called communicational cohesion.
vi. Sequential cohesion - When elements of a module are grouped because the
output of one element serves as input to another and so on, it is called
sequential cohesion.
vii. Functional cohesion - It is considered to be the highest degree of cohesion,
and it is highly expected. Elements of a module in functional cohesion are
grouped because they all contribute to a single well-defined function. Such a
module can also be reused (see the sketch after this list).
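As an illustrative sketch of functional cohesion (the module, its functions and the tax rate are
hypothetical), every element below contributes to the single, well-defined task of pricing an
invoice:

    # A minimal sketch of functional cohesion: all elements of this
    # module serve one well-defined function, computing an invoice
    # total. The names and the tax rate are hypothetical examples.
    TAX_RATE = 0.16

    def line_total(quantity, unit_price):
        return quantity * unit_price

    def invoice_total(lines):
        subtotal = sum(line_total(q, p) for q, p in lines)
        return subtotal + subtotal * TAX_RATE

    print(invoice_total([(2, 100.0), (1, 50.0)]))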

Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a program.
It tells at what level the modules interfere and interact with each other. The lower the coupling,
the better the program.
There are five levels of coupling, namely -
i. Content coupling - When a module can directly access or modify or refer to
the content of another module, it is called content level coupling.
ii. Common coupling - When multiple modules have read and write access to
some global data, it is called common or global coupling.
iii. Control coupling - Two modules are called control-coupled if one of them
decides the function of the other module or changes its flow of execution.
iv. Stamp coupling - When multiple modules share a common data structure and
work on different parts of it, it is called stamp coupling.
v. Data coupling - Data coupling is when two modules interact with each other
by means of passing data (as parameters). If a module passes a data structure as a
parameter, then the receiving module should use all its components (see the
sketch after this list).
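As an illustrative sketch (all names are hypothetical), the difference between data coupling and
common coupling can be seen below; the data-coupled version receives everything it needs as
parameters, while the common-coupled version silently depends on shared global state:

    # A minimal sketch contrasting data coupling with common
    # coupling; all names are hypothetical examples.

    # Data coupling: modules interact only through passed data.
    def compute_discount(total, rate):
        return total * rate

    # Common coupling (best avoided): modules read shared globals.
    DISCOUNT_RATE = 0.1

    def compute_discount_global(total):
        return total * DISCOUNT_RATE

    print(compute_discount(200.0, 0.1))
    print(compute_discount_global(200.0))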

Software design strategies


There are multiple variants of software design:
i. Structured Design
Structured design is a conceptualization of the problem into several well-organized elements of
solution. It is basically concerned with the solution design. The benefit of structured design is
that it gives a better understanding of how the problem is being solved. Structured design also
makes it simpler for the designer to concentrate on the problem more accurately.
Structured design is mostly based on ‘divide and conquer’ strategy where a problem is broken
into several small problems and each small problem is individually solved until the whole
problem is solved.
These modules are arranged in hierarchy. They communicate with each other. A good structured
design always follows some rules for communication among multiple modules, namely -
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling arrangements.

ii. Function Oriented Design


In function-oriented design, the system is composed of many smaller sub-systems known as
functions. These functions are capable of performing significant tasks in the system. The system
is considered as the top view of all functions.
Function oriented design inherits some properties of structured design where divide and conquer
methodology is used.
This design mechanism divides the whole system into smaller functions, which provide a means
of abstraction by concealing the information and its operation. These functional modules can
share information among themselves by means of information passing and using information
available globally.

Design Process
 The whole system is seen in terms of how data flows in the system by means of a data
flow diagram.
 DFD depicts how functions change the data and state of the entire system.
 The entire system is logically broken down into smaller units known as functions on the
basis of their operation in the system.
 Each function is then described at large.

iii. Object Oriented Design


Object-oriented design works around the entities and their characteristics instead of the functions
involved in the software system. This design strategy focuses on the entities and their
characteristics. The whole concept of the software solution revolves around the engaged entities.
Let us see the important concepts of Object Oriented Design:
 Objects - All entities involved in the solution design are known as objects. For
example, person, banks, company and customers are treated as objects. Every
entity has some attributes associated with it and has some methods to perform on the
attributes.
 Classes - A class is a generalized description of an object. An object is an
instance of a class. A class defines all the attributes which an object can have and
the methods which define the functionality of the object.
In the solution design, attributes are stored as variables and functionalities are
defined by means of methods or procedures.
 Encapsulation - In OOD, the attributes (data variables) and methods (operations
on the data) are bundled together; this is called encapsulation. Encapsulation not
only bundles important information of an object together, but also restricts access
to the data and methods from the outside world. This is called information hiding.
 Inheritance - OOD allows similar classes to stack up in a hierarchical manner
where the lower or sub-classes can import, implement and re-use allowed
variables and methods from their immediate super classes. This property of OOD
is known as inheritance. This makes it easier to define a specific class and to
create generalized classes from specific ones.
 Polymorphism - OOD languages provide a mechanism where methods
performing similar tasks but vary in arguments, can be assigned same name. This
is called polymorphism, which allows a single interface performing tasks for
different types. Depending upon how the function is invoked, respective portion
of the code gets executed.
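The four concepts can be seen together in one small sketch (hypothetical Python; the bank-account example is invented for illustration):

    class Account:                              # class: generalized description
        def __init__(self, owner):
            self.owner = owner                  # attribute stored as a variable
            self._balance = 0                   # encapsulation: kept internal

        def deposit(self, amount):              # method operating on the attributes
            self._balance += amount

        def balance(self):
            return self._balance

    class SavingsAccount(Account):              # inheritance: re-uses Account
        def deposit(self, amount):              # polymorphism: same name/interface,
            super().deposit(int(amount * 1.05)) # different behaviour

    accounts = [Account("Ann"), SavingsAccount("Ben")]   # objects: instances
    for acc in accounts:
        acc.deposit(100)                        # one call, two behaviours
    print([a.balance() for a in accounts])      # [100, 105]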

Design Process
 A solution design is created from the requirements or a previously used system and/or a
system sequence diagram.
 Objects are identified and grouped into classes on the basis of similarity in attribute
characteristics.
 Class hierarchy and relation among them are defined.
 Application framework is defined.

Software Design Approaches


There are two generic approaches to software design:
i. Top down Design
Top-down design takes the whole software system as one entity and then decomposes it to
achieve more than one sub-system or component based on some characteristics. Each sub-system
or component is then treated as a system and decomposed further. This process keeps on running
until the lowest level of system in the top-down hierarchy is achieved.
Top-down design starts with a generalized model of system and keeps on defining the more
specific part of it. When all components are composed the whole system comes into existence.
Top-down design is more suitable when the software solution needs to be designed from scratch
and specific details are unknown.

ii. Bottom-up Design


The bottom-up design model starts with the most specific and basic components. It proceeds by
composing higher levels of components by using basic or lower-level components. It keeps
creating higher-level components until the desired system evolves as one single
component. With each higher level, the amount of abstraction increases.
Bottom-up strategy is more suitable when a system needs to be created from some existing
system, where the basic primitives can be used in the newer system.

Neither the top-down nor the bottom-up approach is practical on its own. Instead, a good
combination of both is used.

Software Analysis & Design Tools


Software analysis and design includes all activities which help the transformation of a
requirement specification into an implementation. Requirement specifications specify all functional
and non-functional expectations from the software. These requirement specifications come in the
shape of human-readable and understandable documents, with which a computer can do nothing directly.

1. Data Flow Diagram


A data flow diagram (DFD) is a graphical representation of the flow of data in an information
system. It is capable of depicting incoming data flow, outgoing data flow and stored data. The
DFD does not show the control or timing of how data flows through the system.

Types of DFD
Data Flow Diagrams are either Logical or Physical.
a. Logical DFD - This type of DFD concentrates on the system process and flow of data in
the system. For example in a Banking software system, how data is moved between
different entities.
b. Physical DFD - This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and close to the implementation.
DFD Components
DFD can represent Source, destination, storage and flow of data using the following set of
components –

 Entities - Entities are source and destination of information data. Entities are represented
by rectangles with their respective names.
 Process - Activities and action taken on the data are represented by Circle or Round-
edged rectangles.
 Data Storage - There are two variants of data storage - it can either be represented as a
rectangle with absence of both smaller sides or as an open-sided rectangle with only one
side missing.

 Data Flow - Movement of data is shown by pointed arrows. Data movement is shown
from the base of the arrow (its source) towards the head of the arrow (its destination).
Context Diagram: It represents the context in which the system is to exist, i.e. the external
entities that would interact with the system, the specific data items they would supply to
the system, and the data items they would receive from the system.

Shortcomings of a DFD model

i. DFDs leave ample scope to be imprecise - In the DFD model, the function performed by
a bubble is judged from its label.
ii. Control aspects are not defined by a DFD - For instance, the order in which inputs are
consumed and outputs are produced by a bubble is not specified. A DFD model does not
specify the order in which the different bubbles are executed. Representation of such
aspects is very important for modeling real-time systems.
iii. The method of carrying out decomposition to arrive at the successive levels and the
ultimate level to which decomposition is carried out are highly subjective and depend on
the choice and judgment of the analyst. Due to this reason, even for the same problem,
several alternative DFD representations are possible. Further, many times it is not
possible to say which DFD representation is superior or preferable to another one.
iv. The data flow diagramming technique does not provide any specific guidance as to how
exactly to decompose a given function into its sub-functions and we have to use
subjective judgment to carry out decomposition.

Reasons why large programs fail or are not completed on schedule


The root cause is the inability to make realistic program design schedules and to meet them.
Contributing reasons include: -
 Underestimation of time to gather requirements and define system functions
 Underestimation of time to produce a workable (cost and time) program design.
 Underestimation of time to test individual programs.
 Underestimation of time to integrate complete program into the system and complete
acceptance tests.
 Underestimation of time and effort needed to correct and retest program changes.
 Failure to provide time for restructuring program due to changes in requirements.
 Failure to keep documentation up-to-date.
 Underestimation of system time required to perform complex functions.
 Underestimation of program and data memory requirements.
 Tendency to set an end date for job completion and then to try to meet the schedule by
bringing more manpower to the job, splitting the job into program design blocks in advance
of having defined the overall system plan well enough to define the individual program
blocks and their appropriate interfaces.

CODING
Coding- The objective of the coding phase is to transform the design of a system into code in a
high level language and then to unit test this code. The programmers adhere to a standard and
well-defined style of coding, which they call their coding standard. The main advantages of
adhering to a standard style of coding are as follows:
 A coding standard gives a uniform appearance to the code written by different engineers
 It facilitates understanding of the code.
 It promotes good programming practices.
For implementing our design into a code, we require a good high level language. A
programming language should have the following features:

Characteristics of a Programming Language


 Readability: A good high-level language allows programs to be written in a way that
resembles a plain-English description of the underlying algorithms. If care is taken,
the coding may be done in a way that is essentially self-documenting.
 Portability: High-level languages, being essentially machine independent, make it
possible to develop portable software.
 Generality: Most high-level languages allow the writing of a wide variety of programs,
thus relieving the programmer of the need to become expert in many diverse languages.
 Brevity: Language should have the ability to implement the algorithm with less amount
of code. Programs expressed in high-level languages are often considerably shorter than
their low-level equivalents.
 Error checking: Being human, a programmer is likely to make many mistakes in the
development of a computer program. Many high-level languages enforce a great deal of
error checking both at compile-time and at run-time.
 Cost: The ultimate cost of a programming language is a function of many of its
characteristics.

 Familiar notation: A language should have familiar notation, so that it can be understood by
most programmers.
 Quick translation: It should admit quick translation.
 Efficiency: It should permit the generation of efficient object code.
 Modularity: It is desirable that programs can be developed in the language as a
collection of separately compiled modules, with appropriate mechanisms for ensuring
self-consistency between these modules.
 Widely available: Language should be widely available and it should be possible to
provide translators for all the major machines and for all the major operating systems.

TESTING

Definition 1
Testing involves actual execution of the program code using representative test data sets to
exercise the program; the outputs are examined to detect any deviation from the expected
output.

Definition 2
Testing is classified as a dynamic verification and validation activity.

Reviews can be applied to:


 Requirement specifications
 High level system designs
 Detailed designs
 Program code
 User documentation
 Operation of delivered system

Objectives of Testing
1. To demonstrate the operation of the software.
2. To detect errors in the software and therefore:
 Obtain a level of confidence,
 Produce measure of quality.

THE TESTING PROCESS


Except for small programs, systems should not be tested as a single unit; testing should proceed
in stages, carried out incrementally in conjunction with system implementation.
The most widely used testing process consists of 5 stages:
(a) Unit testing
(b) Module testing
(c) Sub-system testing
(d) System testing
(e) Acceptance (alpha) testing.

(A) UNIT TESTING


Unit testing is where individual components are tested independently to ensure they operate
correctly.
(B) MODULE TESTING
A module is a collection of dependent components e.g. an object class, an abstract data type
or collection of procedures and functions.
Module testing is where related components (modules) are tested without other system
modules.
(C) SUB-SYSTEM TESTING
Sub-systems are integrated to make up a system.
Sub-system testing aims at finding errors of unanticipated interactions between sub-systems
and system components. Sub-system testing also aims at validating that the system meets its
functional and non-functional requirements.

(D) ACCEPTANCE TESTING (ALPHA TESTING)


Acceptance testing is also known as alpha testing; it is the last stage of testing.
In this case the system is tested with real data (from client) and not simulated test data.
Acceptance testing:
 Reveals errors and omissions in systems requirements definition.
 Tests whether the system meets the users’ needs and whether the system performance is acceptable.
Acceptance testing is carried out till users /clients agree it’s an acceptable implementation of
the system.

N/B 1:Beta testing


Beta testing approach is used for software to be marketed.
It involves delivering it to a number of potential customers who agree to use it and report
problems to the developers.
After this feedback, it is modified and released again for another beta testing or general use.

N/B 2:
The five steps of testing are based on incremental system integration, i.e.
(unit testing – module testing – sub-system testing – system testing – acceptance testing). But
object-oriented development is different, and the levels are less clear/distinct:
 Operations and data combine to form objects – the units
 Integrated objects form classes – the equivalent of modules
Therefore class testing corresponds to cluster testing.

MODES OF TESTING
1. Black Box Functional Testing
This is based on specification alone, without reference to implementation details.
2. White Box (Structural or Glass-Box) Testing
This is based on inspection of the code structure (implementation details, low-level design)

Black Box Functional Testing


The test cases are derived from the specification of the module/system/sub-system under test.
The actual and expected results are compared.

Techniques
I. Equivalence partitioning
II. Boundary value analysis.

Equivalence partitioning
 Equivalence partitioning involves breaking down the input data into sets regarded as
“equivalent”.
 It relies on an assumption of “uniform behavior” within ranges of input values that are not
significantly different in terms of the specification.
E.g.
- A program must handle from 1 to 10,000 records.
- If it can handle 40 records and 9,000 records, chances are that it will work with 5,000
records.
- Therefore the chances of detecting a fault (if present) are equally good whichever test case
from 1 to 10,000 is selected.
- Therefore if the program works for any one test case it will probably work for any test case
in the range.
 The range 1-10,000 constitutes an equivalence class, i.e. a set of test cases such that any one
member of the class is as good a test case as any other.
 Therefore classes:
Equivalence class 1 – less than one
Equivalence class 2 –1 to 10,000
Equivalence class 3 – more than 10,000.
 Equivalence partitioning requires a test case from each class be carried out.

Boundary value analysis.


Test case 1: 0 records i.e. member of class 1, adjacent to the lower boundary.
Test case 2: 1 record i.e. the lower boundary value.
Test case 3: 2 records i.e. adjacent to the lower boundary.
Test case 4: 500 records i.e. member of class 2.
Test case 5: 9,999 records i.e. adjacent to the upper boundary.
Test case 6: 10,000 records i.e. the upper boundary value.
Test case 7: 10,001 records i.e. member of class 3, adjacent to the upper boundary.
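The two techniques can be combined mechanically, as in this sketch (Python; the accepts function merely stands in for the program under test and is an assumption for illustration):

    LOWER, UPPER = 1, 10_000

    def accepts(record_count):
        # Stand-in for the program under test: valid only within 1..10,000.
        return LOWER <= record_count <= UPPER

    # One representative test case per equivalence class ...
    cases = {0: False, 5_000: True, 10_001: False}
    # ... plus the boundary values and their neighbours.
    cases.update({LOWER: True, LOWER + 1: True,
                  UPPER - 1: True, UPPER: True})

    for value, expected in cases.items():
        assert accepts(value) == expected, value
    print("all equivalence and boundary cases passed")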

White box testing


Test cases are derived from examination of the program code, aiming at the following coverage measures:
 Statement coverage
 Branch coverage
 Multiple condition coverage
 Path coverage
Statement coverage
Statement coverage involves running a series of test cases that ensure every statement in the
code is executed at least once.
E.g. Pg. 6 (Comm 64 – St. h. book)

Branch coverage
Requires test data that causes each branch to have a true or false outcome
E.g. Pg. 6 (IBID)

Multiple condition coverage


Multiple condition coverage requires all possible combinations of true or false outcomes to
be tested.
E.g. Pg. 7 (IBID)
For n conditions there are 2^n possible combinations of outcomes, i.e. 2^n test cases.
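For instance, a decision combining two conditions needs 2^2 = 4 test cases, which can be enumerated mechanically (an illustrative Python sketch):

    from itertools import product

    # All 2**n combinations of outcomes for n = 2 conditions.
    for a, b in product([True, False], repeat=2):
        print("a =", a, ", b =", b, "-> decision =", a and b)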

Path coverage
Path coverage is concerned with testing all paths through a program (basis path testing).
Path coverage enables the logical complexity of a program to be measured.
It uses this measure to define a basis set of execution paths.
Flow graphs depict the logical flow through a program.

Example: a program that counts characters and lines in a file (pseudocode, with
flow-graph node numbers):

1  initialize
2  do
3    get character from file
4    if character <> new line then add 1 to character count
5    else add 1 to line count
6    end if
7  while not end of file
8  print character and line count

Test cases are created to guarantee that every statement is executed at least once.
Basis set of paths:
1-2-3-6-7-8
1-2-3-5-6-7-8
1-2-3-4-6-7-2
1-2-3-5-6-7-2
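A runnable Python rendering of the program above (a sketch that reads from a string rather than a file, to keep it self-contained), with inputs chosen so that every statement and both branches are exercised:

    def count_chars_and_lines(text):
        char_count = line_count = 0        # initialize
        for ch in text:                    # loop until end of input
            if ch != "\n":                 # the decision node of the flow graph
                char_count += 1            # branch 1: ordinary character
            else:
                line_count += 1            # branch 2: newline
        return char_count, line_count      # report the counts

    print(count_chars_and_lines(""))       # (0, 0)  loop body never entered
    print(count_chars_and_lines("ab"))     # (2, 0)  only branch 1 taken
    print(count_chars_and_lines("a\nb\n")) # (2, 2)  both branches taken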

TEST PLANNING
Test planning is concerned with setting out standards for the testing process rather than
describing product tests.
Test plans allow developers to get an overall picture of the system tests, and ensure that the
required hardware, software and resources are available to the testing team.
Components of a test plan:
 Testing process
This is a description of the major phases of the testing process.
 Requirement traceability
This is a plan to test all requirements individually.
 Testing schedule
This includes the overall testing schedule and resource allocation.
 Test recording procedures
This is the systematic recording of test results.
 Hardware and software requirements.
Here you set out the software tools required and hardware utilization.
 Constraints
This involves anticipation of hardships /drawbacks affecting testing e.g. staff shortage should
be anticipated here.

N/B
Test plan should be revised regularly.

TESTING STRATEGIES
This is the general approach to the testing process.
There are different strategies depending on the type of system to be tested and development
process used: -
 Top-down testing
This involves testing from most abstract component downwards.
 Bottom-up testing
This involves testing from fundamental components upwards.
 Thread testing
This is testing for systems with multiple processes where the processing of transactions
threads through these processes.
 Stress testing
This relies on stressing the system by going beyond the specified limits therefore testing on
how well it can cope with overload situations.
 Back to back testing
It is used to test versions of a system and compare the outputs.

N/B
Large systems are usually tested using a mixture of strategies.

Top-down testing
Tests the high levels of a system before testing its detailed components. The program is
represented as a single abstract component with sub-components represented by stubs.
Stubs have the same interface as the component, but very limited functionality.
After the top-level component (the system program) is tested, its sub-components (sub-systems)
are implemented and tested in the same way, and this continues down to the bottom components
(units).
If top-down testing is used:
- Unnoticed errors may be detected early (structural errors)
- Validation is done early in the process.
Disadvantages Of Using Top-down Testing
1. It is difficult to implement because:
Stubs are required to simulate lower levels of the system, and for complex components it is
impractical to produce a stub that simulates them correctly.
It may also require knowledge of internal representations (e.g. pointers).
2. Test output is difficult to observe. Some higher levels do not generate output and therefore
must be forced to do so (e.g. classes), so an artificial environment must be created to generate
test results.
N/B: It is therefore not appropriate for object-oriented systems, although individual units may be
tested this way.
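A minimal sketch of a stub (hypothetical Python; the report example is invented): the stub offers the same interface as the unwritten lower-level component but only canned behaviour, so the top level can be tested first:

    # Stub: same interface as the real report formatter, limited functionality.
    def format_report_stub(records):
        return "<report with " + str(len(records)) + " records>"

    # Top-level component under test; the lower level is simulated by the stub.
    def produce_report(records, format_report=format_report_stub):
        if not records:
            return "nothing to report"
        return format_report(records)

    print(produce_report([]))              # nothing to report
    print(produce_report([1, 2, 3]))       # <report with 3 records>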

Bottom-up testing
This is the opposite of top-down testing. This is testing modules at lower levels in the hierarchy,
then working up to the final level.
The advantages and disadvantages of bottom-up testing mirror those of top-down testing:
1. Architectural faults are unlikely to be discovered until much of the system has been tested.
2. It is appropriate for object oriented systems because individual objects can be tested using
their own test drivers, then integrated and collectively tested.

Thread testing (transaction flow testing - Beizer, 1990)


This is for testing real time systems.
It is an event-based approach where tests are based on the events, which trigger system actions.
It may be used after objects have been individually tested and integrated into sub-system.
Processing of each external event “threads” its way through the system processes or objects with
processing carried out at each stage
It involves identifying and executing each possible processing thread.
The system should be analyzed to identify as many threads as possible.
After each thread has been tested with a single event, processing of multiple events of same type
should be tested without events of any other type (multiple-input thread testing).
After multiple-input thread testing, the system is tested for its reactions to more than one class
of simultaneous events, i.e. multiple thread testing.

SOFTWARE MAINTENANCE
Definition 1 - Maintenance is the process of changing a system after it has been delivered and is
in use.
Simple - correcting coding errors
Extensive - correcting design errors.
Enhancement - correcting specification errors or accommodating new requirements.

Definition 2 - Maintenance is the evolution i.e. process of changing a system to maintain its
ability to survive.
The maintenance stage of system development involves
a) correcting errors discovered after other stages of system development
b) improving implementation of the system units
c) enhancing system services as new requirements are perceived
Information is fed back to all previous development phases and errors and omissions in original
software requirements are discovered, program and design errors found and need for new
software functionality identified.
TYPES OF SOFTWARE MAINTENANCE
The following are the different types of maintenance:
Corrective maintenance - This involves fixing discovered errors in software (coding errors,
design errors, requirement errors). Once the software is implemented and in full operation, it is
examined to see if it has met the objectives set out in the original specifications. Unforeseen
problems may need to be overcome, and this may involve returning to earlier stages in the system
development life cycle to take corrective action.
Adaptive maintenance - This is changing the software to operate in a different environment
(operating system, hardware); it doesn’t radically change the software’s functionality. After
running the software for some time, the original environment, e.g. the operating system and the
peripherals for which the software was developed, may change.
At this stage the software will be modified to accommodate the changes that have occurred
in its external environment. This could even call for a repeat of the system development life
cycle.
Perfective maintenance - Implementing new functional or non-functional system requirements,
generated by software customers as their organization or business changes. Also as the software
is used, the user will recognize additional functions that could provide benefits or enhance the
software if added to it
Preventive maintenance - Making changes on software to prevent possible problems or
difficulties (collapse, slow down, stalling, self-destructive e.g. Y2K).

Operation stage involves


 use of documentation to train users of system and its resource
 system configuration
 repairs and maintenance
 safety precautions
 data control
 Train user to get help on the system.

Maintenance cost (fixing bugs) is usually higher than the original development cost, due to: -
I. Programs being maintained may be old and not consistent with modern software engineering
techniques. They may be unstructured and optimized for efficiency rather than
understandability.
II. Changes made may introduce new faults, which trigger further change requests. This is
mainly since complexity of the system may make it difficult to assess the effects of a
change.
III. Changes made tend to degrade system structure, making it harder to understand and make
further changes (program becomes less cohesive.)
IV. Loss of program links to its associated documentation therefore its documentation is
unreliable therefore need for a new one.

SOFTWARE MAINTENANCE PROCESS


Maintenance process is triggered by
(a) A set of change requests from users, management or customers.
(b) Cost and impact of the changes are assessed; if acceptable,
(c) New release is planned involving maintenance elements (adaptive, corrective perfective..)

NB. Changes are implemented and validated and new versions of system released
Change request → Impact analysis → System release planning → Change implementation →
System release
(System release planning covers perfective, adaptive and corrective maintenance.)

Factors affecting maintenance


Module independence - Use of design methods that allow easy change through concepts such as
functional independence or object classes (where one can be maintained independently)
Quality of documentation - A program is easier to understand when supported by clear and
concise documentation.
Programming language and style - Use of a high level language and adopting a consistent style
throughout the code.
Program validation and testing - Comprehensive validation of system design and program
testing will reduce corrective maintenance.
Configuration management - Ensure that all system documentation is kept consistent
throughout various releases of system (documentation of new editions.)
Understanding of current system and staff availability - Original development staff may not
always be available. Undocumented code can be difficult to understand (team management).
Application domain - Clear and understood requirements.
Hardware stability – concerns for the equipment to be fault tolerant
Dependence of program on external environment

SOFTWARE PROJECT MANAGEMENT


Software Engineering Management Context
 Clients often do not appreciate the complexity inherent in software engineering,
particularly the impact of changed client requirements
 Software development is an iterative rather than linear process, thus need to maintain a
balance between creativity and discipline
 Management to have an underlying theory concerning Software products which are
intangible and cannot be easily tested
 Also the degree of Software complexity and the rapid pace of change in the underlying
technology

There is need to carry out management in the following areas


Organizational Management – consideration on Policy, Personnel, Communication,
Portfolio and Procurement management
Project Management – it begins with initiation and scope definition, Planning, Enactment,
Review, evaluation and Closure of the project
Software Engineering Measurement – It deals with issues concerning goals, selection of
software and its development, Collection of data, Software measurement models and
Organizational comparison.
Policy management – includes means of policy development, Policy dissemination and
enforcement, Development and deployment of standards
Personnel management - Hiring and retention, Training and motivation, mentoring for
career Development, enhancing Communication channels and media, Meeting procedures,
Written, Oral or negotiation presentations
Portfolio management – consideration of multiple clients and/or projects, Strategy
development and coordination, General investment management techniques, Project selection
and construction
Procurement management – involves Procurement planning and selection, Supplier and
contract coordination

Process Management
Aspects in Process Management
Initiation and scope definition – Concerned with determination and negotiation of
requirements, Feasibility analysis (technical, operational, financial, social/political), Process
for review and revision of requirements.
Planning - Process planning, Project planning, Determine deliverables, Effort, schedule and
cost estimation, Resource allocation, Risk management, Quality management, Plan
management.
Implementation - Enactment of plans, Implementation of measurement process, Monitor
process, Control process, Reporting.
Review and evaluation - Determining satisfaction of requirements, Reviewing and
evaluating performance,
Closure - Determining closure and Closure activities

MANAGING SYSTEM DEVELOPMENT PROJECT


Effective system project management focuses on the 4 P’s, i.e.
 People: recruiting, selection, performance management, training, compensation, career
development, organization and work design, and team/culture development
 Product: the product objectives and scope should be established first, alternative solutions
considered and technical parameters established; only by defining the product is it possible to
estimate cost and effectiveness and break the project down into manageable schedules
 Process: the framework activities from which a comprehensive plan for system development
can be established
 The framework activities are made up of tasks, milestones and work products, and the
project team adopts quality assurance points
 Umbrella activities such as quality assurance, system configuration management and
measurement overlay the process model
 Project: the planned and controlled undertaking carried out to achieve a goal / attain a
solution

PEOPLE
“Systems are not developed by individuals but by teams”.
Players in system development: -
 Senior Managers: define business issues that have significant influence on the project
 Project (technical) Manager: plan, motivate, organize and control practitioners who do the
development work
 Practitioners : deliver the technical skill necessary to engineer a product
 Customers : specify the requirement for the system to be engineered
 End-users : interact with the released system/ product
System Development Team Leaders
They should be
 Motivational: encourage team members
 Organizational : able to mould existing processes (or invent new) that will enable the initial
concept be translated to final product
 Innovative : able to generate new/ creative ideas/ solution
 Achiever : able to optimize productivity of team members
 Problem solver : able to diagnose technical and organizational issues that are relevant and
develop a solution
 Controller/ authoritative : able to take charge of the project therefore confident to control
 Understanding and flexible : able to understand others’ points of view, understand others’
reactions/signals, change position flexibly, and remain in control during high-stress
situations.

The team should be motivated: -


 Provided a conducive working environment
 Properly rewarded
 Issued with properly drafted and interpreted specification and tasks
 Should be secured

PRODUCT
A major challenge to the system development manager is producing quantitative estimates and an
organized plan in view of scattered requirements, the unavailability of solid information and
fluid (changing) requirements.
Therefore examine the product and the problem to be solved.

First management activity will determine system scope by looking at: -


a) Context : How does it fit into a larger system, product, or business context?
b) Information objectives : what are its inputs and output requirements
c) Function and performance : what functions does it perform in order to transform input into
output?
This leads to problem decomposition (also called problem partitioning or problem elaboration).

PROCESS
Generic phases that characterize the system development process are: definition, development
and support. An appropriate engineering model must be employed: -
a) Linear sequential (traditional/ waterfall) model
b) Prototyping
c) RAD model
d) Spiral model
e) Incremental model

PROJECT PLANNING
Managers are responsible for
a) Writing project proposal
b) Writing project costing
c) Project planning and scheduling
d) Project monitoring and reviewing
e) Personnel selection and evaluation
f) Report writing and presentations
Project planning is concerned with identifying the activities, milestones and deliverables
produced by a project.
- a plan must be drawn to guide the development towards the project goals
- system project estimation is activity concerned with estimating the
resources required to accomplish the project plan

Project manager must anticipate problems which might arise and prepare tentative solutions
to the problem
- plan is used as the driver for the project
- plan (the initial one) is not static but must be modified as the project
progresses and as more information becomes available

TYPES OF PLAN
a) Quality plan : describes quality procedures and standards that will be used in a project
b) Validation plan : describes the approach, resources and schedule used for system validation
c) Configuration management plan : describes the configuration management procedures and
structure to be used
d) Maintenance plan : predicts the maintenance requirements of the system, maintenance cost
and effort required
e) Staff development plan : describes how the skills and experience of the project team
members will be developed.

The planning process starts with an assessment of the constraints (required delivery date,
overall budget, staff available etc) affecting the project
This is carried out in conjunction with an estimation of project parameters such as structure,
size and distribution of functions
The program milestone and deliverables are then defined

A schedule (for the project) is drawn, analyzed and passed and subjected to later reviews

PROJECT SCHEDULING
It is the estimation of time and resources required to complete activities and organizing them
in a coherent sequence.
It also involves separating the work (project) into separate activities and judging the time
required to complete these activities, some of which are carried out in parallel
Schedules must: -
 Properly co-ordinate the parallel activities.
 Avoid situation where whole project is delayed for a critical task to be finished (critical
tasks are the jobs your team must complete to finish a project. The critical path is the
sequence of critical tasks, identifying which order to complete them and how long each
task will take. The length of the critical path tells you the timeline for completing your
project.)
 Schedules must include allowances for events (errors, delays) that can affect completion,
and must therefore be flexible.
 They must also estimate resources needed to complete each task (human effort, hardware,
software, finance (budget) etc)
NB: the key to estimation is to estimate as if nothing will go wrong, then increase the
estimate to cover anticipated problems, and finally add a further contingency factor to cover
unanticipated problems.
Project schedule is usually presented as a set of charts showing
 Work breakdown
 Activity dependency
 Staff allocation

Such charts include: -


 Activity bar charts
 Activity network chart
 Gantt charts (staff allocation Vs time chart)
Gantt Charts
 A Gantt chart is a horizontal bar chart that illustrates a project task against a calendar.
 The horizontal position of the bar shows the start and end of the activity, and the length of
the bar indicates its duration.
 For the work in progress the actual dates are shown on the horizontal axis
 A vertical line indicates a current or reporting date
 To reduce complexity a Gantt chart for a large project can have a master chart displaying
the major activity groups (where each activity represents several task) and is followed by
individual Gantt charts that show the tasks assigned to team members.
 The chart can be used to track and report progress as it presents a clear picture of project
status
 It clearly shows overlapping tasks- tasks that can be performed at that same time
 The bars can be shaded to clearly indicate percentage completion and project progress
 Popular due to its simplicity – it is easy to read, learn, prepare and use
 More effective than PERT/CPM charts when one is seeking to communicate schedules
 They do not show activity dependencies. One cannot determine from the Gantt chart the
impact on the entire project caused by single activity that falls behind schedule.
 The length of the bar only indicates the time span for completing an activity not the number
of people assigned or the person days required

PERT/CPM

Program Evaluation and Review Technique (PERT), (Also Critical Path Method – CPM) is a
graphical network model that depicts a project’s tasks and the relationships between those tasks.
The project is shown as a network diagram with the activities shown as vectors (arrows) and
events displayed as nodes.
 Shows all individual activities and dependencies
 It forms the basis for planning and provides management with the ability to plan for best
possible use of resources to achieve a given goal within time and cost limitations
 It provides visibility and allows management to control unique programs as opposed to
repetitive situations
 Helps management handle the uncertainties involved in programs by answering such
questions as how time delays in certain elements influence others as well as the project
completion. This provides management with a means for evaluating alternatives
 It provides a basic structure for reporting information
 Reveals interdependencies of activities
 Facilitates “what-if” exercises
 It allows one to perform scheduling risk analysis
 Allows a large amount of sophisticated data to be presented in a well organized diagram
from which both the contractor and the customer can make joint decisions
 Allows one to evaluate the effect of changes in the program
 More effective than Gantt charts when you want to study the relationships between tasks
 Requires intensive labour and time
 The complexity of the charts adds to implementation problems
 Has more data requirements thus is expensive to maintain
 Is utilized mainly in large and complex projects

 Gantt Charts and PERT/CPM are not mutually exclusive techniques; project managers often
use both methods. Neither handles the scheduling of personnel or the allocation of resources.

NETWORK ANALYSIS
Network analysis is a generic name for a family of related techniques developed to aid
management to plan and control projects. It provides planning and control information on time,
cost and resource aspects of a project. It is most suitable where the projects are complex, large or
restrictions exist.
The critical path method (CPM) is applied once a network is drawn, either as an activity-on-arrow
or an activity-on-node network. In network analysis a project is broken down into its constituent
activities, which are presented in diagrammatic form. In CPM one has to analyze the
project, draw the network, estimate the time and cost, locate the critical path, schedule the
project, monitor and control the progress of the project, and revise the plan.

Example: draw a network and find the critical path for the following project
Activity    Preceding Activity    Duration (days)
A - 4
B A 2
C B 10
D A 2
E D 5
F A 2
G F 4
H G 3
J C 6
K C, E 6
L H 3

Using the information in Table 1, assuming that the project team will work a standard working
week (5 working days in 1 week) and that all tasks will start as soon as possible:
(i)Draw the network diagram

(ii) Determine the critical path of the project


(iii) Calculate the planned duration of the project in weeks
(iv) Identify any non-critical tasks and the float (free slack) on each.
Tasks D and E are non-critical, each with 5 days (1 week) float.
Tasks F, G, H and L are non-critical, each with 6 days float.
(The critical path is A-B-C-J, with A-B-C-K equally critical; the planned duration is 22 working
days, i.e. about 4.4 weeks.)
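These figures can be checked with a short forward/backward-pass calculation, a Python sketch of the critical path method applied to the activity table above:

    # Each task: (duration in days, preceding activities).
    tasks = {
        "A": (4, []),  "B": (2, ["A"]), "C": (10, ["B"]),
        "D": (2, ["A"]), "E": (5, ["D"]), "F": (2, ["A"]),
        "G": (4, ["F"]), "H": (3, ["G"]), "J": (6, ["C"]),
        "K": (6, ["C", "E"]), "L": (3, ["H"]),
    }

    es, ef = {}, {}                            # forward pass: earliest start/finish
    for name, (dur, preds) in tasks.items():   # dict is already in topological order
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    end = max(ef.values())                     # planned duration: 22 days

    ls, lf = {}, {}                            # backward pass: latest start/finish
    for name in reversed(list(tasks)):
        succs = [s for s, (_, p) in tasks.items() if name in p]
        lf[name] = min((ls[s] for s in succs), default=end)
        ls[name] = lf[name] - tasks[name][0]

    for name in tasks:                         # float of 0 => on the critical path
        print(name, "float =", ls[name] - es[name], "days")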

ADVANTAGES OF PROJECT MANAGEMENT TOOLS


Easier visualization of relationships – a produced network diagram shows how the different
tasks and activities relate together making it easier to understand a project intuitively, improving
planning and enhancing communication details of the project plan to the interested parties
More effective planning – CPM forces the management to think the project through for it
requires careful detailed planning and high discipline that justifies its use
Better focusing on the problem areas – the technique enables the manager to pin-point likely
bottle-necks and problem areas before they can occur
Improve resource allocation – resources can be directed where most needed thus reducing costs
and speeding up completion of the project, e.g. overtime can be eliminated or confined to those
tasks where it will do most good
Strong alternative options – management can simulate the effect of alternative courses of action,
gauge the effect of problems in carrying out particular tasks, and make contingency plans
Management by exception – CPM identifies those actions whose timely completion is critical
to the overall timetable and enables the leeway on other actions to be calculated. This enables the
management to focus their attention on important areas of the project
Improve project monitoring – by comparing the actual performance of each task with the
expected the manager can immediately recognize when the problems are occurring, identify their
causes and take appropriate action in time to rescue the project.

Importance of project scheduling


 The project manager must know the duration of each activity, the order in which the
activities must be performed, the start and end times for each activity, and who will be
assigned each specific task.
 The scheduling allows for a balanced activity time estimate, sequences and personnel
assignments to achieve a workable schedule, which is essential for project success
 The schedule provides:
Effective use of estimating processes
Ease in project control
Ease in time or cost revisions
Allows for better communication of project tasks and deadlines

Criteria for defining project success

Completed:
i) Within the allocated time
ii) Within the budgeted cost
iii) At the proper performance or specification level
iv) With acceptance by the customer/user
v) With minimum or mutually agreed upon scope changes
vi) Without disturbing the main work flow of the organization
vii) Without changing the corporate culture
viii) Within the required quality and standards, such that you can use the customer’s name as a
reference

Potential benefits of project management


i) Identification of functional responsibilities to ensure that all activities are accounted
for regardless of personnel turnover
ii) Minimizing the need for continuous reporting
iii) Identification of time limits for scheduling
iv) Measurement of accomplishment against plans
v) Early identification of problems so that corrective action may follow
vi) Improved estimating capability for future planning
vii) Knowing when objectives cannot be met or will be exceeded

PROJECT ESTIMATION
System (software) cost and effort estimates can never be exact; too many variables (human,
technical, environmental, political) can affect system cost and the effort applied to development.
Project estimation strives to achieve reliable cost and effort estimates.
A number of options arise trying to achieve this: -
a) Delay estimation until late in the project (estimates done after the project)
b) Base estimates on similar projects that have already been completed
c) Use relatively simple decomposition techniques to generate project cost and effort estimates
d) Use one or more empirical models for system cost and effort estimation

The 1st option is not practical: estimates must be provided “upfront”.


The 2nd option only works for similar projects.
The 3rd and 4th options should be used together to check on one another. Decomposition
techniques take a “divide and conquer” approach to project estimation: the project is decomposed
(divided) into major functions and related activities, and cost and effort are estimated for each.

Empirical estimation models are based on experience (historical data) and take the form

d = f(vi)

where d = one of a number of estimated values (e.g. effort, cost, project duration etc.) and
vi = selected independent parameters (e.g. estimated LOC (lines of code) or FP (function points))

DECOMPOSITION TECHNIQUES

The accuracy of a system (software) project estimate is predicated on a number of things:
a) The degree to which the planner has properly estimated the size of the product to be built.
b) The ability to translate the size estimate into human effort, calendar time and money.
c) The degree to which the project plan reflects the abilities of the system development team.
d) The stability of product requirements and of the environment that supports the system
development effort.

Project estimate is as good as the estimate of the sizes of the work to be accomplished
Size is a quantifiable outcome of the system/ software project

Four different approaches to sizing problem are: -


a) “Fuzzy-logic” sizing: an approach that uses the approximate reasoning techniques that are the
cornerstone of fuzzy logic; it is qualitative
b) Function point sizing
c) Standard component sizing
d) Change sizing

PROBLEM BASED ESTIMATION

LOC and FP are used in project estimation in two ways:


a) As estimation variables used to “size” each element of the system software
b) As baseline metrics collected from past projects

EMPIRICAL ESTIMATION MODELS

An estimation model for computer software uses an empirically derived formula to predict effort
as a function of LOC or FP. A typical model takes the form

E = A + B x (ev)^C

where A, B and C are empirically derived constants, E is effort in person-months, and ev is the
estimation variable (LOC or FP).

COCOMO MODEL (Barry Boehm)

COCOMO is Constructive Cost Model


It has been revised into COCOMO II
It is a hierarchy of estimation models that addresses the following areas: -
a) Application composition model – used in the early development stages, when prototyping of
user interfaces, performance assessment and technology maturity assessment are carried out.
b) Early design stage model – used after requirements have been established and the basic system
architecture has been established.
c) Post-architecture stage model – used during construction of the system.

Like other models, the COCOMO II models require sizing information.
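COCOMO II requires locally calibrated data; purely as an illustration of the general shape of such models, the earlier basic COCOMO 81 equations (effort E = a x KLOC^b person-months, duration D = c x E^d months, with coefficients from Boehm, 1981) can be sketched in Python. Treat the numbers as historical illustration, not a ready-to-use estimator:

    # Basic COCOMO 81 coefficients (a, b, c, d) per project class.
    COEFFS = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode="organic"):
        a, b, c, d = COEFFS[mode]
        effort = a * kloc ** b          # person-months
        duration = c * effort ** d      # calendar months
        return effort, duration

    e, m = basic_cocomo(32, "organic")  # a hypothetical 32 KLOC in-house system
    print(round(e), "person-months over", round(m), "months")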

QUANTITATIVE MANAGEMENT AND ASSURANCE

It is the responsibility of quality managers to ensure that the required level of quality is achieved.

Definition – Quality management involves defining appropriate procedures and standards,
and checking that all engineers (developers) follow them.

It depends on developing a “quality culture”.


System quality is multi-dimensional – the product should meet specifications:
a) It depends on customer needs and wants, as well as the developers’ needs/requirements,
which may not be included in the specification
b) Some qualities are difficult to measure
c) Some specifications are incomplete
“Quality is hard to define, impossible to measure and easy to recognize.”
Definition – “Quality is continually satisfying customer requirements” (Smith 1987)
International Standards Organization (ISO) – The totality of features and characteristics of a
product or service that bear on the ability to satisfy specified or implied needs (ISO 1986)

Garvin’s view of quality (Garvin 1984) identifies five views of quality


a) The transcendent view – Quality is immeasurable but can be seen, sensed or felt and
appreciated, e.g. in art or music
b) Product-based view – Quality is measured by the attributes/ingredients of a product
c) User-based view – Quality is fitness for purpose, meeting needs as specified
d) Manufacturing-based view – Quality is conformance to the specification and the defined
process
e) Value-based view – the ability to provide the customer with the product/services they want
at a price they can afford.

SOFTWARE COST ESTIMATION


The dominant cost is the effort cost. This is the most difficult to estimate and control, and has the
most significant effect on overall costs. Software costing should be carried out objectively with
the aim of accurately predicting the cost to the contractor of developing the software. Software
cost estimation is a continuing activity which starts at the proposal stage and continues
throughout the lifetime of a project. Projects normally have a budget, and continual cost
estimation is necessary to ensure that spending is in line with the budget. Effort can be measured
in staff-hours or staff-months (Used to be known as man-hours or man-months). Boehm (1981)
discusses seven techniques of software cost estimation:

(1) Algorithmic cost modeling - A model is developed using historical cost information which
relates some software metric (usually its size) to the project cost. An estimate is made of that
metric and the model predicts the effort required.

(2) Expert judgement - One or more experts on the software development techniques to be used
and on the application domain are consulted. They each estimate the project cost and the final
cost estimate is arrived at by consensus.

(3) Estimation by analogy - This technique is applicable when other projects in the same
application domain have been completed. The cost of a new project is estimated by analogy with
these completed projects.

(4) Parkinson's Law - Parkinson's Law states that work expands to fill the time available. In
software costing, this means that the cost is determined by available resources rather than by
objective assessment. If the software has to be delivered in 12 months and 5 people are available,
the effort required is estimated to be 60 person-months.

(5) Pricing to win - The software cost is estimated to be whatever the customer has available to
spend on the project. The estimated effort depends on the customer's budget and not on the
software functionality.

"
(6) Top- down estimation - A cost estimate is established by considering the overall
functionality of the product and how that functionality is provided by interacting sub-functions.
Cost estimates are made on the basis of the logical function rather than the components
implementing that function.

(7) Bottom- up estimation - The cost of each component is estimated. All these costs are added
to produce a final cost estimate.

SOFTWARE VERIFICATION AND VALIDATION


Verification and Validation are concerned with assuring that a software system meets a user's
needs
Validation: validation shows that the program meets the customer’s needs. The software should
do what the user really requires. The designers are guided by the notion of whether they are
building the right product
Verification: Verification shows conformance with the specification. The software should conform
to its functional specification. The designers are guided by the notion of whether they are
building the product right
Static and dynamic verification
Static verification (software inspections) is concerned with the analysis of the static system
representation to discover problems within the software product, based on document and code
analysis

Dynamic verification concerns software testing, i.e. exercising and observing product
behaviour: the system is executed with test data and its operational behaviour is observed

Program testing – is done to reveal the presence of errors, NOT their absence. A successful test
is a test which discovers one or more errors. Testing is the only validation technique for
non-functional requirements

Verification and validation should establish confidence that the software is fit for the purpose
it is designed for. This does NOT mean completely free of defects; rather, it must be good enough
for its intended use, and the intended use determines the degree of confidence needed. This
depends on the system’s purpose, user expectations and the marketing environment

Software function – the level of confidence needed depends on how critical the
software is to an organisation

User expectations - Users may have low expectations of certain kinds of software

Marketing environment - Getting a product to the market early which may be more important
than finding defects in the program
SOFTWARE CONFIGURATION MANAGEMENT
Software configuration is a collection of the items that comprise all the information produced as
part of the software process. The output of the software process is information, and it includes
computer programs, documentation and data (both within the program and external to it).

Software Configuration Management


Definition 1
These are a set of activities that are developed to manage change throughout the life cycle of
computer software.
Changes are caused by the following:-
 New customer needs
 New business / market conditions and rules
 Budgetary or scheduling constraints etc
 Reorganization or restructuring of business for growth

Definition 2
The process which controls the changes made to a system, and manages the different versions of
the evolving software product. It involves development and application of procedures and
standards for managing an evolving system product. Procedures should be developed for
building systems and releasing them to customers.
Standards should be developed for recording and processing proposed system
changes and for identifying and storing different versions of the system.
Configuration managers (team) are responsible for controlling software changes. Controlled
systems are called baselines. They are the starting point for controlled evolution.

Software may exist in different configurations (versions).


 produced for different computers (hardware)
 produced for different operating system
 produced for different client-specific functions etc

Configuration managers are responsible for keeping track of difference between software
versions and ensuring new versions are derived in a controlled way.
Are also responsible for ensuring that new versions are released to the correct customers at the
appropriate time

Configuration management and associated documentation should be based on a set of standards,


which should be published in a configuration management handbook (or quality handbook), e.g.
IEEE Std 828-1983, the standard for software configuration management plans.

Main configuration management’s activities:


1. Configuration management planning (planning for product evolution)
2. Managing changes to the systems
3. Controlling versions and releases (of systems)
4. Building systems from other components

Benefits of effective configuration management

 Better communication among staff


 Better communication with the customer
 Better technical intelligence
 Reduced confusion for changes
 Screening of frivolous changes
 Provides a paper trail

Configuration Management and Planning


Configuration management takes control of systems after they have been developed; therefore
planning the process must start during development.
The plan should be developed as part of overall project planning process.
The plan should include:
(a) Definitions of what entities are to be managed and formal scheme for identifying these
entities.
(b) Statement of configuration management team.
(c) Configuration management policies for change and version control / management.
(d) Description of the tools to be used in configuration management and the process to be used.
(e) Definition of the configuration database which will be used to record configuration
information. (Recording and retrieval of project information.)
(f) Description of management of external information
(g) Auditing procedures.

Configuration database is used to record all relevant information relating to configuration to:
a) assist with assessing the impact of system changes
b) Provide management information about configuration management.
Configuration database defines/describes
 Customers who have taken delivery of a particular version
 Hardware and software operating system requirements to run a given version.
 The number of versions of system so far made and when they were made etc

CONFIGURATION MANAGEMENT TOOLS

Configuration Management (CM) is a procedural process, so it can be modelled and integrated
with a version management system.
Configuration Management processes are standardised and involve applying pre-defined
procedures so as to manage large amounts of data.
Configuration Management tools
Form editor - to support processing the change request forms
Workflow system - to define who does what and to automate information transfer
Change database - that manages change proposals and is linked to a VM system
Version and release identification - Systems assign identifiers automatically when a new
version is submitted to the system
Storage management - System stores the differences between versions rather than all the
version code
Change history recording - Record reasons for version creation
Independent development - Only one version at a time may be checked out for change, or
parallel working on different versions is supported
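As a sketch of the storage-management idea, Python's standard difflib can keep only the difference between two versions yet reconstruct either one on demand (the two tiny file versions are invented for illustration):

    import difflib

    v1 = ["print('hello')\n"]
    v2 = ["print('hello')\n", "print('world')\n"]

    delta = list(difflib.ndiff(v1, v2))    # store the delta, not both versions
    print("".join(delta))

    # Either full version is recoverable from the delta alone.
    assert list(difflib.restore(delta, 1)) == v1
    assert list(difflib.restore(delta, 2)) == v2
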
CASE TOOLS FOR CONFIGURATION MANAGEMENT

Computer Aided Software Engineering (CASE) tool support for CM is therefore essential; tools
are available ranging from stand-alone tools to integrated CM workbenches.
A CASE tool is a software package that supports construction and maintenance of a logical
system specification model.
CASE tools are designed to support the rules and interactions of models defined in a specific
methodology. They also permit software prototyping and code generation, and aim to automate the
document production process by automating analysis and design operations.

ADVANTAGES OF CASE TOOLS


 Make construction of the various analysis and design logical elements easy e.g. DFD,
ERM etc
 Integration of separate elements allowing software to do additional tasks e.g. rechecking
and notifying on defined data and programs
 Streamline the development of the analysis documentation allowing for use of graphics
and manipulation of the data dictionaries
 Allow for easy maintenance of specifications which in turn will be more reliably updated
 Enforce rigorous standards for all developers and projects making communication more
efficient
 Check specifications for errors, omissions and inconsistencies
 Provide everyone on the project team with easy access to the latest updates and project
specifications
 Encourage iterative refinements resulting in higher-quality systems that better meet the
needs of the users

DISADVANTAGES
 CASE products can be expensive
 CASE technology is not yet fully evolved so its software is often large and inflexible
 Products may not provide a fully integrated development environment
 There is usually a long learning period before the tools can be used effectively, i.e. no
immediate benefits are realized
 Analysts must have a mastery of the structured analysis and design techniques if they are to
exploit CASE tools
 Time and cost estimates may have to be inflated to allow for an extended learning period of
CASE tools

PROGRAM EVOLUTION DYNAMICS


Program evolution dynamics is the study of system change. Lehman’s Laws (Lehman and Belady, 1985)
describe system change.
The Laws are:
a) Law of continuing change - A program used in a real-world environment must change, or become
progressively less useful in that environment.
b) Law of increasing complexity - Program changes make its structure more complex; therefore
extra resources must be devoted to preserving and simplifying the structure.
c) Law of large program evolution - Program evolution is a self-regulating process.
d) Law of organizational stability - Over a program’s lifetime, its rate of development is
approximately constant and independent of the resources devoted to system development.
e) Law of conservation of familiarity - Over the system’s lifetime, the incremental change in each
release is approximately constant.

Quality Assurance
Quality management system
- relevant procedures and standards to be followed
- Quality Assurance assessments to be carried out

Definition – controls to ensure that:
- relevant procedures and standards are followed
- relevant deliverables are produced
Software quality assurance techniques:

The quality and reliability of software can be improved by using


 A standard development methodology
 Software metrics
 Thorough testing procedures
 Allocating resources to put more emphasis on the analysis and design stages of systems
development.

Standards specified and applied during development enforce the quality of products. The
specifications should include the Quality Assurance (QA) standards to be adopted, which should be
one of the recognized standards or a client-specified one, e.g.

Correctness - ensures the system operates correctly, provides value to its users and performs the
required functions; defects must therefore be fixed/corrected

Maintainability - the ease with which the system can be corrected if an error is encountered,
adapted if its environment changes, or enhanced if the user desires a change in requirements

Integrity - the measure of the system's ability to withstand attacks (accidental or intentional) on
its security, in terms of data processing, program performance and documentation

Usability - the measure of the user-friendliness of a system, measured in terms of the physical and
intellectual skills required to learn the system, the time required to become moderately efficient
in using it, the net increase in productivity when it is used by a moderately efficient user, and the
general user attitude towards the system.

System Quality can be looked at in two ways: -


Quality of design – the characteristics that designers specify for an item/product: the grade of
materials, tolerances and performance specifications
Quality of conformance - the degree to which the design specifications are followed during
development and construction (implementation)

QUALITY ASSURANCE
Since quality should be measurable, quality assurance needs to be put in place.
Quality Assurance consists of the auditing and reporting functions of management.
Quality Assurance must outline the standards to be adopted, i.e. either internationally recognized
standards or client-designed standards.
Quality Assurance must lay down the working procedures to be adopted during the project
lifetime, which include: -

 Design and Program reviews


 Program monitoring and reporting
 Quality Assurance related procedure
 Test procedure and Fault reporting
 Delivery and Liaison mechanisms
 Safety aspects and Resource usage

The Quality Assurance system should be managed independently of the development and production


departments, and clients should have the right to access the contractor's Quality Assurance System
and Plan.

Quality Assurance builds client confidence (increases acceptability) as well as the contractor's own
confidence, in knowing that they are building the right system and that it will be highly
acceptable.

Testing and error correction assure that the system will perform as expected without defects or
collapse, and also ensure accuracy and reliability.

POOR QUALITY SYSTEM


 High cost of maintenance and correcting errors (unnecessary maintenance)
 Low productivity due to poor performance
 Unreliability in terms of functionality
 Risk of injury from safety-critical systems (e.g. robots)
 Loss of business due to errors
 Clients' lack of confidence in the developers.

SOFTWARE QUALITY ISSUES

METRICS
A metric is a quantitative measure of the degree to which a system, component, or process
possesses a given attribute. Measurement occurs as a result of the collection of one or more
data points.
Software engineers collect measures and develop metrics so that indicators can be obtained.
An indicator is a metric or combination of metrics that provides insight into the software process,
project or product itself.
An indicator provides insight that enables project managers or software engineers to adjust the
process or project to make things better.
Metrics should be collected so that process and product indicators can be ascertained, enabling
software engineers and organizations to gain insight into the efficiency of an existing
process (i.e. paradigm, software engineering tasks, work products, and milestones).

Metrics are mainly applied to software productivity and quality. They are used to measure
software development "output" as a function of the effort and time applied, and to measure the
"fitness for use" of the product.
Software processes, products and resources are measured to characterize and gain
understanding of them, and to establish baselines for comparison with future assessments.
Software is also measured:
 To evaluate and determine status with respect to plans, and ensure we do not get off
track during the engineering life cycle/lifetime.
 To predict: by gaining an understanding of relationships among processes and products, the
values observed can be used to predict others. This helps in planning for future trends,
costs, time and quality.
 For projection and estimation of costs, useful in risk analysis and in making design-cost trade-offs.
 For improvement, after identifying problems, root causes, inefficiencies and
other opportunities for improving software quality and process performance.

Metrics help managers assess what works and what doesn’t.


Process metrics
Are collected across all projects over long periods of time. This provides indicators that lead
to long-term software process improvement.
Project indicators

Enable software managers to assess the status of an on-going project, track potential risks,
uncover problem areas before they become critical, adjust the workflow or tasks, and evaluate the
project team's ability to control the quality of software products.

Software Measurements
 Size-oriented metrics
 Function oriented metrics
 Extended function point metrics

Size oriented metrics


Size oriented metrics are derived by normalizing quality and productivity measures by
considering the size of the software produced.
This could be done by measuring,
1. Lines of code (LOC),
2. Effort (person-months),
3. The cost incurred in all activities over the production lifetime (analysis, design, coding,
testing, etc.),
4. Documentation size in pages,
5. Errors recorded before software release,
6. Defects after release,
7. And the total number of people involved in its development.
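
A minimal Python sketch of how such measures are normalized by size (KLOC = 1,000 lines of code is the conventional unit; all figures below are invented):

# Size-oriented metrics: normalize quality/productivity measures by the
# size of the software produced. All figures here are invented examples.
loc = 12_500                 # lines of code delivered
effort = 24                  # person-months
cost = 168_000               # total cost over the production lifetime
errors_before_release = 134
defects_after_release = 29

kloc = loc / 1000
print(f"Errors per KLOC:  {errors_before_release / kloc:.2f}")
print(f"Defects per KLOC: {defects_after_release / kloc:.2f}")
print(f"Cost per LOC:     {cost / loc:.2f}")
print(f"Productivity (LOC per person-month): {loc / effort:.0f}")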

Functional oriented metrics


Function-oriented metrics use a measure of the functionality of software as a normalization
value.
 Function points are used to measure functionality in these metrics.
 The function points are derived using an empirical relationship based on countable measures of
the software's information domain and an assessment of software complexity.
 The information domain values include the number of user inputs, outputs, inquiries, files and
external interfaces, each weighted by complexity.
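
As an illustrative sketch of this empirical relationship (the counts and the complexity-adjustment total below are invented; the weights are the commonly published "average" complexity weights):

# Function-point sketch: unadjusted count from information domain values,
# then adjustment by 14 complexity factors each rated 0-5 (sum_fi).
DOMAIN_WEIGHTS = {
    "inputs": 4, "outputs": 5, "inquiries": 4,
    "files": 10, "external_interfaces": 7,
}
counts = {"inputs": 24, "outputs": 16, "inquiries": 22,
          "files": 4, "external_interfaces": 2}

ufp = sum(counts[k] * DOMAIN_WEIGHTS[k] for k in counts)

sum_fi = 46                               # invented adjustment-factor total
fp = ufp * (0.65 + 0.01 * sum_fi)
print(f"Unadjusted FP = {ufp}, adjusted FP = {fp:.1f}")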

Extended function point metrics


Extended function point metrics accommodate function points as well as behavioral
(control) dimensions, and hence work with feature points.
Quality software metrics should encompass a number of attributes such as:
 Simplicity
 Be empirically and intuitively persuasive
 Consistent and objective
 Programming language independent
 Consistent in its use of units and dimensions
 Have an effective mechanism for quality feedback.

Every level of software engineering has appropriate metrics that can be applied. These
include:
 Metrics for analysis which include the function based metrics
 The bang metrics
 Metrics for specification quality
 Metrics for design model: architectural design metrics, component-level design metrics
(which tests coupling and cohesion)
 Interface design metrics
 The metrics for the source code
 Metrics for testing
 Metrics for maintenance

Software Metrics
A software metric is any type of measurement which relates to a software system, process or
related documentation,
e.g. size measured in lines of code, or the
Fog index (Gunning, 1952) = a measure of the readability of a product manual, etc.
Metrics fall into two classes:
 Control metrics
These provide information about the quality of the process, which in turn is related to product quality.
 Predictor metrics
Measurements of a product attribute that can be used to predict an associated product quality,
e.g. the Fog index predicts readability, and cyclomatic complexity predicts the maintainability of
software (sketched below).
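
A minimal sketch of one such predictor metric, McCabe's cyclomatic complexity, computed from a control-flow graph as V(G) = E - N + 2 (E = edges, N = nodes, for a single connected graph); the example graph is invented:

# Predictor-metric sketch: McCabe's cyclomatic complexity V(G) = E - N + 2
# for a single connected control-flow graph (E = edges, N = nodes).
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Invented control-flow graph of a function with one if/else and one loop.
cfg = [("entry", "if"), ("if", "then"), ("if", "else"),
       ("then", "loop"), ("else", "loop"),
       ("loop", "loop"),              # back edge of the loop
       ("loop", "exit")]
print(cyclomatic_complexity(cfg))    # 3 independent paths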

SOFTWARE QUALITY EVALUATION

This concerns identifying key issues or measures that should show where a program is deficient.
Managers must decide on the relative importance of:

 On-time delivery of the software product


 Efficient use of resources e.g. processing units, memory, peripheral devices etc
 Maintainable code issues e.g. comprehensibility, modifiability, portability etc

Problem areas cited in software production include

1. User demands for enhancements, extensions


2. Quality of system documentation
3. Competing demands on maintenance personnel time
4. Quality of original programs
5. Meeting scheduled commitments
6. Lack of user understanding of system
7. Availability of maintenance program personnel
8. Adequacy of system design specifications
9. Turnover of maintenance personnel
10. Unrealistic user expectations
11. Processing time of system
12. Forecasting personnel requirements
13. Skills of maintenance personnel
14. Changes to hardware and software
15. Budgetary pressures
16. Adherence to programming standards in maintenance
17. Data integrity
18. Motivation of maintenance personnel
19. Application failures
20. Maintenance programming productivity
21. Hardware and software reliability
22. Storage requirements
23. Management support of system
24. Lack of user interest in system

RISK MANAGEMENT

Risk
A problem or a threat that you would prefer not to have during project development,
because it can threaten the project, the software or the organization.

Risk Management
Involves anticipating risks that might affect the project schedule or the quality of the software
being developed, and taking actions to avoid those risks. The results of risk analysis should be
documented in the project plan, along with an analysis of the consequences of a risk occurring.
Importance of risk management
Because of the inherent uncertainties that most projects face, risk management makes it easier
to cope with problems and ensure that they do not lead to unacceptable budget or schedule
slippage.
Categories of Risks

i) Project risks - affect the project schedule or resources, e.g. loss of an
experienced designer.
ii) Product risks - affect the quality or performance of the software, e.g. a
purchased component not performing as expected.
iii) Business risks - affect the organization developing or procuring the software, e.g. a
competitor introducing a new product.

Causes of risks
- They stem from loosely defined requirements.
- Difficulty in estimating the time and resources required for software development.
- Dependence on individual skills.
- Requirements changes due to changes in customer needs.

Types of risks and possible risks


There are different types of risks:
i. Technology risks – risks arising from software and hardware technologies, e.g. the database
used in the system cannot process as many transactions per second as expected.
ii. People risks – associated with the people in the development team, e.g. it proves impossible
to recruit staff with the required skills.
iii. Organizational risks – derived from the organizational environment where the software is
being developed, e.g. organizational financial problems.
iv. Tools risks – derived from CASE tools and other software used to develop the system, e.g.
code generated by CASE tools is inefficient.
v. Requirements risks – derived from changes in customer requirements and the processes of
managing the requirements change, e.g. changes in requirements that require major
design rework are proposed.
vi. Estimation risks – derived from management estimates of system characteristics and the
resources required to build the system, e.g. the time required to develop the system is
underestimated.

Stages of risk management


The process of risk management involves four stages:
1. Risk identification – the possible project, product and business risks are
identified. This may be carried out as a team process using a brainstorming approach.
2. Risk analysis – the likelihood and consequences of these risks are assessed.
Consider each identified risk and make a judgment about its probability and seriousness.
Estimate the probability as very low (<10%), low (10-25%), moderate (25-50%),
high (50-75%) or very high (>75%). (A code sketch of this prioritization appears
after the diagram below.)
3. Risk planning – plans to address each risk, either by avoiding it or minimizing
its effects, are discussed and strategies to manage it are identified, i.e.
a. Avoidance strategy – following this means that the probability that the risk
will arise is reduced.
b. Minimization strategy – following this means that the impact of the risk will
be reduced.
c. Contingency plans - following these means that you are
prepared for the worst and have a strategy in place to deal with
it.
4. Risk monitoring – the risk is constantly assessed and the plans for risk
mitigation are revised as more information about the risk becomes available.

Stages of Risk Management Process (Diagram)

Risk identification --> Risk analysis --> Risk planning --> Risk monitoring
(list of potential risks) --> (prioritized risk list) --> (risk avoidance and contingency plans) --> (risk assessment)
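
A minimal sketch of how the analysis stage can produce the prioritized risk list: risks are ranked by exposure, taken here as probability x impact (a common convention, assumed rather than mandated above); all figures are invented:

# Risk analysis sketch: rank identified risks by exposure = probability x
# impact. The risks, probabilities and impacts below are invented examples.
risks = [
    # (description, probability, impact if it occurs, in person-days lost)
    ("Experienced designer leaves",              0.30, 40),
    ("Purchased component underperforms",        0.50, 25),
    ("Requirements change forces design rework", 0.60, 60),
]

prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for desc, prob, impact in prioritized:
    print(f"{desc:45s} exposure = {prob * impact:5.1f}")
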
SOFTWARE ENGINEERING DOCUMENTATION

Documentation is a very important aid to maintenance engineers.


Definition - It includes all documents describing the implementation of the system, from the
requirements specification to the final test plan.
The documents include:
 Requirement documents and an associated rationale
 System architecture documents
 Design description
 Program source code
 Validation documents on how validation is done
 Maintenance guide for possible /known problems.

Documentation should be
 Clear and non-ambiguous
 Structured and directive
 Readable and presentable
 Tool-assisted (case tools) in production (automation).

SYSTEM DOCUMENTATION
Items for documentation to be produced for a software product include:-
System Request – a written request that identifies deficiencies in the current system
and requests a change
Feasibility Report – indicates the economic, legal, technical and operational feasibility of
the proposed project
Preliminary Investigation Report – a report to management clearly specifying the
problems identified within the system; the further action to be taken is also recommended
System Requirements Report – specifies all the end-user and management
requirements, all the alternative plans, their costs and the recommendations to management
System Design Specification – contains the designs for the inputs, outputs, program files and
procedures
User Manual – guides the user in the implementation and installation of the information
system
Maintenance Report – a record of the maintenance tasks done
Software Code – the code written for the information system
Test Report – should contain test details, e.g. sample test data and results etc.
Tutorials - a brief demonstration and exercises to introduce the user to the workings of the
software product

SOFTWARE DOCUMENTATION
The typical items included in the software documentation are
Introduction – shows the organizing principles, abstracts for other sections and a notation
guide
Computer characteristics – a general description with particular attention to key attributes and
summarized features
Hardware interfaces – a concise description of information received or transmitted by the
computer
Software functions – shows what the software must do to meet requirements, in various
situations and in response to various events
Timing constraints – how often and how fast each function must be performed
Accuracy constraints – how close output values must be to ideal/expected values for them to be
acceptable
Response to undesired events – what the software must do in events such as a sensor going
down, invalid data etc.
Program sub-sets – what the program should do if it cannot do everything
Fundamental assumptions – the characteristics of the program that will stay the same, no
matter what changes are made
Changes – the types of changes that have been made or are expected
Sources – an annotated list of documentation and personnel, indicating the types of questions
each can answer
Glossary – defines the acronyms and technical terms with which most documentation is fraught

CHANGE MANAGEMENT.

The change management process involves technical change analysis, cost-benefit analysis and
change tracking.

Stages of change management:


(1) 1st Stage
The 1st stage in change management is to complete a change request form (CRF).
A change request form is a formal document that sets out the changes required to the system and
records recommendations regarding the change, the estimated costs of the change, and the dates
when the change was requested, approved, implemented and validated. It also has a section where
engineers outline how the change is to be implemented.
(2) 2nd Stage
This involves analysis of the requested change for validity. (If it is invalid, duplicated or
already considered, it is rejected.) Any rejected request should be returned to the person who
submitted it.
(3) 3rd Stage
For valid changes: an assessment and costing of the change is made, the impact of the change
on the rest of the system is assessed, and how the change will (technically) be implemented is
checked.
(4) 4th Stage
Submission to the Change Control Board (CCB), who decide whether or not the change should
be accepted (after considering cost, impact, etc.)
(5) 5th Stage
After approval by the CCB, the software is taken to the software maintenance team for
implementation, after which it is validated (tested) and then released (by the configuration
management team, not the maintenance team).
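
The five stages can be viewed as a simple state machine for each change request. The following Python sketch is illustrative only; the state names are invented to mirror the stages above:

# Sketch of the change-request lifecycle as a state machine; states and
# transitions mirror the five stages described above (names are invented).
ALLOWED = {
    "submitted":   {"rejected", "validated"},   # stage 2: validity check
    "validated":   {"assessed"},                # stage 3: costing and impact
    "assessed":    {"rejected", "approved"},    # stage 4: CCB decision
    "approved":    {"implemented"},             # stage 5: maintenance team
    "implemented": {"released"},                # validated, then released
}

def advance(state, new_state):
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"Illegal transition {state} -> {new_state}")
    return new_state

state = "submitted"
for step in ["validated", "assessed", "approved", "implemented", "released"]:
    state = advance(state, step)
print(state)   # released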

VERSION AND RELEASE MANAGEMENT.

Version and release management are the processes of identifying and keeping track of
versions and new releases of a system.
 They ensure that the right version is released at the right time.
 Some versions may be designed to operate on different hardware or software (operating
system) platforms even though their functions are the same.
System release - the version that is distributed to customers.
A release (is not only a set of programs but) includes: -
 Configuration files - defining how the release should be configured for installation.
 Data files – needed for successful system operation.
 Installation programs - to help install the system on the target hardware.
 Electronic and paper documentation - describing the system.
All this information must be made available to customers.
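
As a small illustration, a release manifest covering the components listed above might be sketched as follows; all file names and the version number are invented:

# Release manifest sketch: one record per release, grouping the components
# listed above. Every file name and the version number are illustrative.
release = {
    "version": "3.2.0",
    "programs": ["app.exe"],
    "configuration_files": ["install.cfg"],     # per-site configuration
    "data_files": ["errors.msg", "help.dat"],   # needed for operation
    "installation_programs": ["setup.exe"],
    "documentation": ["user_guide.pdf", "release_notes.txt"],
}
for component, contents in release.items():
    print(component, "->", contents)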

DATA AND COMPUTER SECURITY


These are methods that protect an organization's computing and network facilities and their
contents from loss or destruction. Computer networks and computer centers are subject to such
hazards as accidents, natural disasters, sabotage, vandalism, unauthorized use, industrial
espionage, and destruction and theft of resources. Therefore, various safeguards and control
procedures are necessary to protect the hardware, software, network and vital data resources of a
company. This is especially vital as more and more companies engage in electronic commerce on
the internet.
i) Network Security – Security of a network may be provided by specialized system
software packages known as system security monitors. These monitor the use of
computer systems and networks and protect them from unauthorized use, fraud, and
destruction. Such programs provide the security measures needed to allow only
authorized users to access the networks. For example, identification codes and
passwords are frequently used for this purpose. Security monitors also control the use
of the hardware, software and data resources of a computer system. For example, even
authorized users may be restricted to the use of certain devices, programs, and data
files. Additionally, security programs monitor the use of computer networks and
collect statistics on any attempts at improper use. They then produce reports to assist
in maintaining the security of the network.
ii) Encryption – Encryption of data has become an important way to protect data and
other computer network resources, especially on the internet, intranets and extranets.
Passwords, messages, files and other data can be transmitted in scrambled form and
unscrambled by computer systems for authorized users only. Encryption involves
using special mathematical algorithms, or keys, to transform digital data into a
scrambled code before they are transmitted, and to decode the data when they are
received. The most widely used encryption methods use a pair of public and
private keys unique to each individual. For example, email could be scrambled and
encoded using a unique public key for the recipient that is known to the sender. After
the email is transmitted, only the recipient's secret private key can unscramble the
message. (A minimal code sketch of this public/private key idea appears after this
list.)
iii) Firewalls – A network firewall is a "gatekeeper" computer system that protects a
company's intranets and other computer networks from intrusion by serving as a filter
and safe transfer point for access to and from the internet and other networks. It
screens all network traffic for proper passwords or other security codes and only
allows authorized transmissions in and out of the network. Firewalls have become an
essential component for organizations connecting to the internet, because of the
internet's vulnerability and lack of security. Firewalls can deter, but not completely
prevent, unauthorized access (hacking) into computer networks. In some cases a
firewall may allow access only from trusted locations on the internet to particular
computers inside the firewall, or it may allow only "safe" information to pass. For
example, a firewall may permit users to read email from remote locations but not to
run certain programs. In other cases, it is impossible to distinguish safe use of a
particular network service from unsafe use, and so all requests must be blocked. The
firewall may then provide substitutes for some network services (such as email or file
transfer) that perform most of the same functions but are not as vulnerable to
penetration.
iv) Physical protection controls – provide maximum security and protection for an
organization's computer and network resources. For example, computer centers and
end-user work areas are protected through such techniques as identification badges,
electronic door locks, burglar alarms, security police, closed-circuit TV and other
detection systems. Computer centers may be protected from disaster by such
safeguards as fire detection and extinguishing systems; fireproof storage vaults for
protection of files; emergency power systems; electromagnetic shielding; and
temperature, humidity and dust controls.
v) Biometric Controls – Are a fast-growing area of computer security. These are security
measures provided by computer devices that measure physical traits that make each
individual unique. This includes voice verification, fingerprints, hand geometry,
signature dynamics, keystroke analysis, retina scanning, face recognition, and genetic
pattern analysis. Biometric control devices use special-purpose sensors to measure
and digitize a biometric profile of an individual’s fingerprints, voice or other physical
trait. The digitized signal is processed and compared to a previously processed profile
of the individual stored on magnetic disk. If the profiles match, the individual is
allowed entry into a computer facility or given access to information system
resources.
vi) Computer Failure Controls - A variety of controls can prevent computer failure or
minimize its effects. Computer systems fail for several reasons – power failure,
electronic circuitry malfunctions, telecommunications network problems, hidden
programming errors, computer viruses, computer operator errors and electronic
vandalism. The information systems department typically takes steps to prevent
equipment failure and to minimize its detrimental effects. For example, computers are
available with automatic and remote maintenance capabilities. Programs of
preventive maintenance of hardware and management of software updates are
commonplace. Adequate electrical supply, air-conditioning, humidity control and fire
protection standards are a prerequisite. A back-up computer system capability can be
arranged with disaster recovery organizations. Major hardware or software changes
are usually carefully scheduled and approved to avoid problems. Finally, highly
trained data center personnel and the use of performance and security management
software help keep a company’s computer systems and networks working properly.
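
To illustrate the public/private key idea from item (ii) above, here is a minimal Python sketch; it assumes the third-party 'cryptography' package is installed (pip install cryptography), and the key size and padding choices are illustrative, not a recommendation:

# Public-key encryption sketch: the recipient's public key scrambles the
# message; only the matching private key can unscramble it.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Recipient generates a key pair; the public key is shared with senders.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender encodes the email body with the recipient's public key...
ciphertext = public_key.encrypt(b"confidential email body", oaep)

# ...and only the recipient's secret private key can decode it.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"confidential email body"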

PROFESSIONAL ISSUES IN SYSTEMS DEVELOPMENT


System development is a profession and belongs to the engineering discipline, which employs
scientific methods in solving problems and providing solutions to society.
A profession is an occupation (not merely mechanical) that requires some degree of learning - a
calling; collectively, a profession is the body of persons engaged in that calling.
The main professional task in system development is the management of the tasks, with the aim
of producing a system that meets users' needs, on time and within budget.
Therefore the main concerns of management are: planning, progress monitoring and quality
control.
There are a number of tasks carried out in an engineering organization, classified by
their function: -
Production - activities that directly contribute to creating the products and services the
organization sells
Quality management - activities necessary to ensure that the quality of products/services is
maintained at the agreed level
Research and development - finding ways of creating/improving products and the production
process
Sales and Marketing - selling products/services; involves activities such as advertising,
transporting, distribution etc.

INDIVIDUAL PROFESSIONAL RESPONSIBILITIES

Do not harm others – ethical behaviour is concerned both with helping clients satisfy their needs
and with not hurting them
Be competent – IT professionals must master the complex body of knowledge in their profession;
a challenging issue, because IT is a dynamic and rapidly evolving field. Wrong advice to the
client can be costly
Maintain independence and avoid conflicts of interest – in exercising their professional
duties, they should be free from the influence, guidance or control of other parties, e.g. vendors,
thus avoiding corruption and fraud
Match clients' expectations – it is unethical to misrepresent either your qualifications or your
ability to perform a certain job
Maintain fiduciary responsibility – IT professionals hold in trust the information provided to them
Safeguard client and source privacy – ensure the privacy of all private and personal information
and do not "leak" it
Protect records – safeguard the records they generate and keep on business transactions with
their clients
Safeguard intellectual property – they are trustees of information and software, and hence must
recognize that these are intellectual property that must be safeguarded
Provide quality information – the creator of information/products must disclose information
about the quality, and even the source, of information in a report or product record
Avoid selection bias – IT professionals routinely make selection decisions at various stages of
the information life cycle. They must avoid the bias of the prevailing point of view. Selection is
related to censorship
Be a steward of a client's assets, energy and attention - provide information at the right time, in
the right place and at the right cost
Manage gate-keeping and censorship, and obtain informed consent
Obtain confidential information properly and keep client confidentiality
Abide by laws, contracts, and license agreements; exercise professional judgement
