COURSE OUTLINE:
1. Introduction: Basic Definitions: Software Engineering, System Engineering, Software
Process, Software Process Model, Software Engineering Methods, Information Systems
Manufacturing Techniques.
2. Software Development Process Trends: Process And Meta Processes, Process Models;
Waterfall, Incremental, Spiral, Evolutionary Development, Component Based Software
Engineering.
3. Software Requirements: Functional And Non Functional Requirements, User
Requirements, System Requirements, Interface Specifications, Software Requirement
Document.
4. Requirement Engineering: Feasibility Study, Requirements Elicitation And Analysis,
Requirements Validation And Requirement Management
5. Software Design – SW Construction, Design Concepts, Modularity, CASE Tools,
Aspects of Object Oriented Programming
6. CAT 1
7. Software Project Management: Management Activities: Proposal Writing, Project
Planning And Scheduling, Project Cost, Project Monitoring And Reviews, Personnel
Selection And Evaluation, Report Writing And Presentations. Project Planning: Project
Plan , Project Scheduling, Software Configuration, Software Quality Assurance, SQA
Plan, Process And Product Quality, Standards And Procedures, Quality Planning And
Quality Control
8. Risk Management: Steps In Risk Management; Risk Identification, Risk Analysis, Risk
Prevention; Limitations; Risk Planning; Eliminating Risk Occurrence, Ignoring Risk,
Risk Contingency; Risk Monitoring; Types Of Risks; Known, Unknown, And
Unforeseen
9. Software Maintenance: Types Of Maintenance, Factors Affecting Maintenance
10. Computer Security: Data Security, Network Security, Techniques: Password,
Biometrics, Firewalls, Physical Plant Protection, Key Cards, Cryptography, Encryption,
Digital Certificates, Policy
11. Professional Issues In IT: Copyright, Trademark, Legal Issues, Patent
Assessment
CATS - 30%
ASSIGNMENTS - 10%
FINAL EXAM - 60%
REFERENCE
Main Text
Sommerville, Ian, Software Engineering, 7th Ed., Pearson Education Ltd.
Others
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 6th Ed., McGraw-Hill, 2005.
2. Edward Yourdon, Decline, Fall of the American Programmer, Prentice Hall, Inc. 1993.
3. url: http://www.software-engi.com
INTRODUCTION TO SOFTWARE ENGINEERING
Software engineering is the application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software; that is, the application of engineering to
software. (IEEE Computer Society)
Software engineering is the process of solving customers' problems by the systematic
development and evolution of large-high quality software systems within cost, time, and other
constraints (Lethbridge)
Software engineering can be described in terms of
Analysis - breaking apart a problem into pieces
Synthesis - constructing a solution from available or new components.
Methods & tools - that enable software projects to be built predictably within prescribed
schedules & budgets, meeting the customer's requirements of functionality & reliability.
Software engineering is concerned with the theories, methods and tools which are needed to
develop high quality, complex software in a cost effective way on a predictable schedule.
A software product consists of developed programs and all associated documentation and
configuration data needed to make the programs operate correctly.
Software process: The set of activities, methods, and practices that are used in the production
and evolution of software.
Software engineering process: The total set of software engineering activities needed to
transform a user's requirements into software.
Large software - It is easier to build a wall than a house or building; likewise, as the
size of software becomes large, engineering has to step in to give it a scientific process.
Scalability- If the software process were not based on scientific and engineering
concepts, it would be easier to re-create new software than to scale an existing one.
Cost- Skilled, large-scale manufacturing in the hardware industry has lowered the price
of computer and electronic hardware, but the cost of software remains high if a proper
process is not adopted.
Dynamic Nature- The ever-growing and adapting nature of software depends heavily
upon the environment in which the user works. Because the nature of software is always
changing, new enhancements need to be made to the existing system. This is where
software engineering plays a major role.
Quality Management- A better software development process provides a better quality
software product.
CHARACTERISTICS OF GOOD SOFTWARE
A software product can be judged by what it offers and how well it can be used. It
must be satisfactory on the following grounds:
i. Operational: This tells us how well software works in operations. It can be measured
on: Budget, Usability, Efficiency, Correctness, Functionality, Dependability, Security
and Safety
ii. Transitional: This aspect is important when the software is moved from one platform
to another: Portability, Interoperability, Reusability and Adaptability
iii. Maintenance: This aspect briefs about how well a software has the capabilities to
maintain itself in the ever-changing environment: Modularity, Maintainability,
Flexibility and Scalability
In short, software engineering is a branch of computer science which uses well-defined
engineering concepts to produce efficient, durable, scalable, in-budget and on-time
software products.
SOFTWARE DEVELOPMENT LIFE CYCLE
LIFE CYCLE MODEL
A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle.
A life cycle model represents all the activities required to make a software product transit
through its life cycle phases
Different software life cycle models
Many life cycle models have been proposed so far. Each of them has some advantages as well as
some disadvantages. A few important and commonly used life cycle models are as follows:
i. Classical Waterfall Model
ii. Iterative Waterfall Model
iii. Prototyping Model
iv. Evolutionary Model
v. Spiral Model
Feasibility study - The main aim of feasibility study is to determine whether it would be
financially and technically feasible to develop the product.
Requirements analysis and specification: - The aim of the requirements analysis and
specification phase is to understand the exact requirements of the customer and to document
them properly. This phase consists of two distinct activities, namely
Requirements gathering and analysis: The goal of the requirements gathering activity is
to collect all relevant information from the customer regarding the product to be
developed. This is done to clearly understand the customer requirements so that
incompleteness and inconsistencies are removed
Requirements specification: The requirements analysis activity is begun by collecting all
relevant data regarding the product to be developed from the users of the product and
from the customer through interviews and discussions.
After all ambiguities, inconsistencies, and incompleteness have been resolved and all the
requirements properly understood, the requirements specification activity can start. During
this activity, the user requirements are systematically organized into a Software
Requirements Specification (SRS) document. The important components of this document
are functional requirements, the nonfunctional requirements, and the goals of
implementation.
Design: - The goal of the design phase is to transform the requirements specified in the SRS
document into a structure that is suitable for implementation in some programming language.
Two distinctly different approaches are available: the traditional design approach and the
object-oriented design approach.
Here, we provide feedback paths for error correction as & when detected later in a phase.
The advantage of this model is that there is a working model of the system at a very early stage
of development which makes it easier to find functional or design flaws.
The disadvantage with this SDLC model is that it is applicable only to large and bulky software
development projects. This is because it is hard to break a small software system into further
small serviceable increments/modules.
3. PROTOTYPING
Prototyping moves the developer and customer toward a "quick" implementation.
1. Prototyping begins with requirements gathering.
2. Meetings between developer and customer are conducted to determine overall system
objectives and functional and performance requirements.
3. The developer then applies a set of tools to develop a quick design and build a working
model (the "prototype") of some element(s) of the system.
4. The customer or user "test drives" the prototype, evaluating its function and
recommending changes to better meet customer needs.
5. Iteration occurs as this process is repeated, and an acceptable model is derived. The
developer then moves to "productize" the prototype by applying many of the steps
described for the classic life cycle. In object-oriented programming, a library of
reusable objects (data structures and associated procedures) lets the software engineer
rapidly create prototypes and production programs.
The benefits of prototyping are:
1. A working model is provided to the customer/user early in the process, enabling early
assessment and bolstering confidence,
2. The developer gains experience and insight by building the model, thereby resulting in a
more solid implementation of "the real thing"
3. The prototype serves to clarify otherwise vague requirements, reducing ambiguity and
improving communication between developer and user.
But prototyping also has a set of inherent problems:
1. The user sees what appears to be a fully working system (in actuality, it is a partially
working model) and believes that the prototype (a model) can be easily transformed into
a production system. This is rarely the case.
2. The developer often makes technical compromises to build a "quick and dirty" model.
Sometimes these compromises are propagated into the production system, resulting in
implementation and maintenance problems.
3. Prototyping is applicable only to a limited class of problems. In general, a prototype is
valuable when heavy human-machine interaction occurs, when complex output is to be
produced or when new or untested algorithms are to be applied.
4. EVOLUTIONARY MODEL
It is also called successive versions model or incremental model. At first, a simple working
model is built. Subsequently it undergoes functional improvements & we keep on adding new
functions till the desired system is built.
Applications:
Large projects where you can easily find modules for incremental implementation. Often
used when the customer wants to start using the core features rather than waiting for the
full software.
Also used in object oriented software development because the system can be easily
partitioned into units in terms of objects.
Advantages:
Users get a chance to experiment with the partially developed system
Errors are reduced because the core modules get tested thoroughly.
Disadvantages:
It is difficult to divide the problem into several versions that would be acceptable to the
customer which can be incrementally implemented & delivered.
5. SPIRAL MODEL
The term “spiral” is used to describe the process that is followed as the development of the
system takes place. With each iteration around the spiral (beginning at the center and working
outward), progressively more complete versions of the system are built.
Risk assessment is included as a step in the development process as a means of evaluating each
version of the system to determine whether or not development should continue. If the customer
decides that any identified risks are too great, the project may be halted. The Spiral Model is
made up of the following steps:
Project Objectives - Similar to the system conception phase of the Waterfall Model. Objectives
are determined, possible obstacles are identified and alternative approaches are weighed.
Risk Assessment - Possible alternatives are examined by the developer, and associated
risks/problems are identified. Resolutions of the risks are evaluated and weighed in the
consideration of project continuation. Sometimes prototyping is used to clarify needs.
Engineering & Production - Detailed requirements are determined and the software piece is
developed.
Planning and Management - The customer is given an opportunity to analyze the results of the
version created in the Engineering step and to offer feedback to the developer.
Problems/Challenges associated with the Spiral Model
Due to the relative newness of the Spiral Model, it is difficult to assess its strengths and
weaknesses
However, the risk assessment component of the Spiral Model provides both developers
and customers with a measuring tool that earlier Process Models do not have.
The measurement of risk is a feature that occurs every day in real-life situations but,
unfortunately, not as often in the system development industry.
The practical nature of this tool helps to make the Spiral Model a more realistic Process
model than some of its predecessors.
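The iteration structure described above can be sketched as a loop in which each cycle begins with a risk check. This is only an illustrative sketch: the phase comments, the numeric risk scale and the 0.8 halt threshold are invented for the example, not part of the model's definition.

```python
# Hypothetical sketch of spiral-model iterations. The risk_of function
# and the 0.8 halt threshold are illustrative assumptions.

def spiral(iterations, risk_of):
    """Build progressively more complete versions until done or risk is too great."""
    completed = []
    for version in range(1, iterations + 1):
        # 1. Project objectives: determine goals, obstacles, alternatives
        # 2. Risk assessment: evaluate risks before committing to this cycle
        if risk_of(version) > 0.8:      # customer judges the risk too great
            return completed, "halted"
        # 3. Engineering & production: develop this version of the system
        # 4. Planning & management: customer reviews results, gives feedback
        completed.append(f"version-{version}")
    return completed, "delivered"

versions, status = spiral(3, risk_of=lambda v: 0.1 * v)
print(versions, status)
```

With a low-risk project all three cycles complete; a risk estimate above the threshold on any cycle halts the project with only the earlier versions built.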
REQUIREMENTS ANALYSIS AND SPECIFICATION
REQUIREMENTS ENGINEERING
This is the process of eliciting, analysing and recording requirements for software systems. Its
goal is to create and maintain a system requirement document. The process involves assessing
whether the system is useful to business, discovering requirements, converting them into some
standard form and checking that the requirements actually define the system.
The requirements engineering process
These are the activities that are involved in requirements engineering. They include:
1. Requirements elicitation
In requirements engineering, requirements elicitation is the practice of obtaining the
requirements of a system from users, customers and other stakeholders. The practice is also
sometimes referred to as requirements gathering/requirements discovery.
Requirements elicitation is non-trivial because you can never be sure you get all
requirements from the user and customer by just asking them what the system should do.
Requirements elicitation involves fact-finding techniques such as interviews,
questionnaires, user observation/ethnography, documentary review, etc.
2. Requirements Analysis and Negotiation
The purpose of this activity is to transform technical requirements into formal requirements
by ensuring that they express the needs of the customer. Analysis is an iterative activity. The
process steps will likely be repeated several times in consultation with the customers. The
goal of analysis is to locate places where requirements are unclear, incomplete, ambiguous or
contradictory. It groups related requirements and organizes them into coherent clusters. The
requirements are then prioritized and any conflicts that arise are resolved.
3. Requirements Validation
Concerned with demonstrating that the requirements define the system that the customer
really wants. Requirements errors are very costly and so validation is very important. Fixing
a requirements error after delivery may cost up to 100 times the cost of fixing an
implementation error. The requirements are checked against the following factors:
Validity – Does the system provide the functions which best support the customer's needs?
Consistency – Are there any requirements conflicts?
Completeness – Are all functions required by the customer included?
Regular reviews should be held during the whole process; both the technical staff and the
customer should be involved in the reviews. Good communication between the technical
staff and the customer can resolve problems at an early stage. Before changing requirements,
check for traceability (origin) and adaptability (impact on other requirements) of the
requirements.
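Parts of these checks can be mechanized. The sketch below, using an invented requirement-record format, flags incomplete (empty) and inconsistent (duplicated) requirements; it is an illustration of the idea, not a real validation tool.

```python
# Minimal sketch of automated requirements-validation checks.
# The record fields and the rules applied are assumptions for illustration.

requirements = [
    {"id": "R1", "text": "The system shall produce the client registration report."},
    {"id": "R2", "text": "The system shall produce the client registration report."},
    {"id": "R3", "text": ""},
]

def validate(reqs):
    """Return (id, problem) pairs for empty or duplicated requirements."""
    issues = []
    seen = {}
    for r in reqs:
        if not r["text"].strip():                 # completeness: nothing stated
            issues.append((r["id"], "empty requirement"))
        elif r["text"] in seen:                   # consistency: duplicate statement
            issues.append((r["id"], f"duplicates {seen[r['text']]}"))
        else:
            seen[r["text"]] = r["id"]
    return issues

print(validate(requirements))
```

Real validation still needs human review, since validity ("is this what the customer wants?") cannot be checked mechanically.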
4. Requirements Management
The process of managing changing requirements during the requirements engineering
process. Requirements may change because:
Technology may change
User needs may change
Business processes may change, etc.
5. Requirements Documentation
Requirements need to be documented and a standard format is used.
Requirements should state what the system should do and the design should describe how
it does this.
N.B: Learn more about the IEEE standard for documentation especially requirement
documents.
FACT FINDING
The formal process of using techniques such as interviews and questionnaires to collect
facts about systems, requirements and preferences.
Interviews
This is the most commonly used and normally most useful, fact-finding technique.
Interviews are a fact finding technique whereby the systems analysts collect information
from individuals (and/or groups) through face-to-face interaction. There can be several
objectives to using interviewing, such as finding out facts, verifying facts, clarifying
facts, generating enthusiasm, getting end-user involved, identifying requirements and
gathering ideas and opinions. However, using the interviewing technique requires good
communication skills for dealing effectively with people who have different values,
priorities, opinions, motivations and personalities. As with other fact finding techniques,
interviewing is not always the best method for all situations.
There are two types of interviews:
i. Unstructured interviews: Conducted with only a general objective in mind
and with few, if any, specific questions. The interviewer counts on the
interviewee to provide a framework and direction to the interview. This type
of interview frequently loses focus and, for this reason, it often does not work
well for systems analysis and design.
ii. Structured interviews: The interviewer has a specific set of questions to ask
the interviewee. Depending on the interviewee’s responses, the interviewer
will direct additional questions to obtain clarification or expansion. Some of
the questions may be planned and others spontaneous.
Open-ended questions allow the interviewee to respond in any way he/she seems appropriate.
An example of an open-ended question is: ‘Why are you dissatisfied with the report on client
registration?’
Closed-ended questions restrict answers to either specific choices or short, direct answers. An
example of a closed-ended question might be: ‘Are you receiving the report on client registration
on time?’ or ‘Does the report on client registration contain accurate information?’ Both questions
only require a ‘Yes’ or ‘No’ response.
Ensuring a successful interview includes selecting appropriate individuals, preparing
extensively for the interview and conducting the interview in an efficient and effective
manner.
Advantages
- Interviews give the analyst an opportunity to motivate the interviewee to respond
freely and openly to questions.
- Interviews allow the systems analyst to probe for more feedback from the
interviewee.
- Interviews give the analyst an opportunity to observe the interviewee’s nonverbal
communication. A good analyst may be able to obtain information by observing the
interviewee’s body movements and facial expressions as well as by listening to verbal
replies to questions.
- Interviews permit the systems analyst to adapt or reword questions for each
individual.
Disadvantages
- Interviewing is a very time-consuming, and therefore costly, fact-finding approach.
- Success of interviews is highly dependent on the systems analyst’s human relations
skills.
- Interviewing may be impractical due to the location of interviews.
- Success can be dependent on the willingness of the interviewees to participate in
interviews.
Observation
Observation is one of the most effective fact-finding techniques for understanding a
system. With this technique, it is possible to either participate in, or watch, a person
perform activities to learn about the system. This technique is particularly useful when
the validity of data collected through other methods is in question or when the complexity
of certain aspects of the system prevents a clear explanation by the end-users.
As with the other fact-finding techniques, successful observation requires preparation. To
ensure that the observation is successful, it is important to know as much about the
individuals and the activity to be observed as possible. For example, ‘When are the low,
normal and peak periods for the activity being observed?’ and ‘Will the individuals be
upset by having someone watch and record their actions?’
Advantages
- Data gathered by observation can be highly reliable. Sometimes observations are
conducted to check the validity of data obtained directly from individuals.
- The systems analyst is able to see exactly what is being done. Complex tasks are
sometimes difficult to clearly explain in words. Through observation, the systems analyst
can identify tasks that have been missed or inaccurately described by other fact-finding
techniques. Also, the analyst can obtain data describing the physical environment of the
task (e.g., physical layout, traffic, lighting, noise level).
- Observation is relatively inexpensive compared with other fact-finding techniques. Other
techniques usually require substantially more employee release time and copying
expenses.
- Observation allows the systems analyst to do work measurements.
Disadvantages
- Because people usually feel uncomfortable when being watched, they may unwittingly
perform differently when being observed.
- The work being observed may not involve the level of difficulty or volume normally
experienced during that time period.
- Some systems activities may take place at odd times causing a scheduling inconvenience
for the systems analyst.
- The tasks being observed are subject to various types of interruption.
- Some tasks may not always be performed in the manner in which they are observed by the
systems analyst. For example, the systems analyst might have observed how a company
filled several customer orders. However, the procedures observed may have been those
steps used to fill a number of regular customer orders. If any of those orders had been
special orders (e.g. an order for goods not normally kept in stock), the systems analyst
would have observed a different set of procedures being executed.
- If people have been performing tasks in a manner that violates standard operating
procedures, they may temporarily perform their jobs correctly while you are observing
them. In other words, people may let you see what they want you to see.
Questionnaires
Questionnaires are special-purpose documents that allow facts to be gathered from a
large number of people while maintaining some control over their responses. When
dealing with a large audience, no other fact-finding technique can tabulate the same facts
as efficiently.
There are two types of questions that can be asked in a questionnaire, namely free-format
and fixed-format.
Free-format questions offer greater freedom in providing answers. A question is asked
and the respondent records the answer in the space provided after the question. Examples
of free-format questions are: 'What reports do you currently receive and how are they
used?' and 'Are there any problems with these reports? If so, please explain.' The
problems with free-format questions are that the respondents' answers may prove
difficult to tabulate and, in some cases, may not match the questions asked.
Fixed-format questions require responses from individuals. Given any question, the
respondent must choose from the available answers. This makes the results much easier
to tabulate. On the other hand, the respondent cannot provide additional information. An
example of a fixed-format question is: 'The current format of the report on property
rentals is ideal and should not be changed.' The respondent may be given the option to
answer ‘Yes’ or ‘No’ to this question or be given the option to answer from a range of
responses including ‘Strongly Agree’, ‘Agree’, ‘No Opinion’, ‘Disagree’, and ‘Strongly
Disagree’.
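The ease of tabulating fixed-format answers can be illustrated with a short sketch; the response list here is invented for the example.

```python
# Sketch: tallying fixed-format (Likert-style) questionnaire responses.
# The responses list is illustrative data, not real survey results.
from collections import Counter

responses = ["Agree", "Strongly Agree", "Agree", "No Opinion", "Disagree", "Agree"]
tally = Counter(responses)           # counts each distinct answer

print(tally["Agree"], tally["Strongly Agree"])   # 3 1
```

A free-format answer, by contrast, would first have to be read and categorized by hand before any such count is possible.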
Advantages
- Most questionnaires can be answered quickly. People can complete and return
questionnaires at their convenience.
- Questionnaires provide a relatively inexpensive means for gathering data from a large
number of individuals.
- Questionnaires allow individuals to maintain anonymity. Therefore, individuals are more
likely to provide the real facts, rather than telling you what they think their boss would
want them to.
- Responses can be tabulated and analyzed quickly.
Disadvantages
- The number of respondents is often low.
- There’s no guarantee that an individual will answer or expand on all the questions.
- Questionnaires tend to be inflexible. There’s no opportunity for the systems analyst to
obtain voluntary information from individuals or to reword questions that may have been
misinterpreted.
- It is not possible for the systems analyst to observe and analyze the respondent’s body
language.
- There is no immediate opportunity to clarify a vague or incomplete answer to any
question.
- Good questionnaires are difficult to prepare.
Documentary Review
It involves perusing through literature to gain a better understanding of the existing
system. The documents to be reviewed will include: job descriptions, procedure
manuals, management reports, sales reports, etc.
When to Use Documentary Review
1) When a system analyst wants to have a quick overview of a system.
2) When the information required cannot be obtained using other techniques.
Merits
1) It is comparatively cheaper.
2) It is a faster means of information gathering especially if the documents are few.
Demerits
1) It may be time consuming and requires a lot of patience.
2) The relevant information may be missing.
3) The success depends on the experience and skills of the system analyst.
4) Most of the information available may be outdated.
5) The information available may not suit the requirements of the proposed
system.
Sampling
Types of Samples:
Non-probability (non-random) samples:
These samples focus on volunteers, easily available units, or those that just happen to be
present when the research is done. Non-probability samples are useful for quick and
cheap studies, for case studies, for qualitative research, for pilot studies, and for
developing hypotheses for future research.
a. Convenience sample: Also called an "accidental" or "man-in-the-street"
sample. The researcher selects units that are convenient, close at
hand, easy to reach, etc.
b. Purposive sample: The researcher selects the units with some purpose in
mind, for example, students who live in dorms on campus, or experts on
urban development.
c. Quota sample: The researcher constructs quotas for different types of
units. For example, to interview a fixed number of shoppers at a mall, half
of whom are male and half of whom are female.
Other samples that are usually constructed with non-probability methods include library
research, participant observation, marketing research, consulting with experts, and
comparing organizations, nations, or governments.
Probability-based (random) samples:
a. Simple random sample: Each unit in the population is identified, and each unit has
an equal chance of being in the sample. The selection of each unit is independent of
the selection of every other unit. Selection of one unit does not affect the chances of
any other unit.
b. Systematic random sampling: Each unit in the population is identified; after a
random starting point, every kth unit on the list is selected, so each unit has an
equal chance of being in the sample.
c. Stratified random sampling: Each unit in the population is identified, and each unit
has a known, non-zero chance of being in the sample. This is used when the
researcher knows that the population has sub-groups (strata) that are of interest.
d. Cluster sampling: Cluster sampling views the units in a population not only as
members of the total population but also as members of naturally occurring
clusters within the population. For example, city residents are also residents of
neighborhoods, blocks, and housing structures.
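The probability-based selections above can be sketched with Python's random module; the population, strata and sample sizes below are illustrative assumptions.

```python
# Sketch of simple random, systematic, and stratified selection.
import random

random.seed(42)                       # fixed seed so the example is reproducible
population = list(range(100))         # 100 identified units

# Simple random sample: each unit has an equal, independent chance
simple = random.sample(population, 10)

# Systematic sample: random start, then every k-th unit on the list
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sample: sample separately within each known sub-group (stratum)
strata = {"A": population[:50], "B": population[50:]}
stratified = [u for units in strata.values() for u in random.sample(units, 5)]

print(len(simple), len(systematic), len(stratified))   # 10 10 10
```

Cluster sampling would instead randomly pick whole clusters (e.g., neighborhoods) and then study the units inside the chosen clusters.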
When to Use Sampling
1) When the target population is too large.
2) When the population has similar characteristics.
Merits
1) It is cheaper considering the target population.
2) It speeds up the data gathering process.
3) It is effective since one obtains accurate information, which is assumed to be
representative.
Demerits
1) One requires statistical knowledge to use the method.
2) The sample taken may not be representative of the entire population and may overlook
certain critical issues.
Research and survey/site visit
This involves thorough research of the applications and problems of the old system. The
analyst must review trade journals, periodicals and books containing relevant
information.
The analyst can also attend professional meetings and seminars, and visit other
companies which have similar systems.
Existing computerized system
The user requirement of a new computerized system can also be collected from the
existing computer system. The way work is done is analyzed and improvement
suggested. The areas looked at are:
i. File structures
ii. Transaction volumes
iii. Screen design
iv. User satisfaction
v. Causes of system crashes, etc.
COMPONENTS OF THE SRS DOCUMENT
i. Functional requirements of the system: The functional requirements part discusses the
functionalities required from the system. The system is considered to perform a set of
high-level functions. Each function of the system can be considered as a transformation
of a set of input data to the corresponding set of output data. The user can get some
meaningful piece of work done using a high-level function.
ii. Non-functional requirements of the system: Nonfunctional requirements deal with the
characteristics of the system which cannot be expressed as functions - such as the
maintainability of the system, portability of the system, usability of the system, etc.
iii. Goals of implementation: The goals of implementation part documents some general
suggestions regarding development. These suggestions guide trade-off among design
goals. The goals of implementation section might document issues such as revisions to
the system functionalities that may be required in the future, new devices to be supported
in the future, reusability issues, etc.
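Viewing a high-level function as a transformation of a set of input data to output data can be illustrated with a small example; the invoice function below is hypothetical, not taken from these notes.

```python
# Sketch: a high-level function as an input-to-output transformation.
# "Compute invoice total" is an invented example of such a function.

def invoice_total(line_items):
    """Transform input data (quantity, unit price pairs) into output data (a total)."""
    return sum(qty * price for qty, price in line_items)

print(invoice_total([(2, 5.0), (1, 3.5)]))   # 13.5
```

A functional requirement would describe this transformation (inputs, outputs, processing); non-functional requirements would instead constrain qualities such as how fast or how portable the computation must be.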
FEASIBILITY STUDY
Depending on the results of the initial investigations, the survey is expanded to a more
detailed feasibility study.
i) A feasibility study is an activity undertaken to determine the possibility or probability of
either improving the existing system or developing a totally new system.
ii) A feasibility study is a test of a system proposal according to its workability, impact on
the organization, ability to meet user needs, and effective use of resources. It focuses on
three major questions:
- What are the user’s demonstrable needs and how does a candidate system meet them?
- What resources are available for given candidate systems? Is the problem worth solving?
- What are the likely impacts of the candidate system on the organization? How well does
it fit within the organization’s master computerization plan?
The objective of feasibility study is not to solve the problem but to acquire a sense of its
scope. During the study, the problem definition is crystallized and aspects of the problem
to be included in the system are determined. Consequently, costs and benefits are
estimated with greater accuracy at this stage. The result of this feasibility study is a
formal proposal. This is simply a report – a formal document detailing the nature and
scope of the proposed solution. The proposal summarizes what is known and what is
going to be done. The report is the medium by which you tell the management what the
problem is, what you have found its causes to be and what you have to offer in the way of
recommendations. It consists of:
i) Statement of the problem – a carefully worded statement of the problem that led to
analysis.
ii) Summary of findings and recommendations – a list of major findings and
recommendations of the study. Clearly describe the subject and scope. List the areas
included and excluded. It is ideal for the user who requires quick access to the results of
the analysis of the system under study. Conclusions are stated, followed by a list of the
study recommendations and a justification for them.
iii) Details of findings – an outline of the methods and procedures undertaken by the existing
system, followed by coverage of the objectives and procedures of the candidate system.
Included are also discussions of the output reports, file structures and costs and benefits
of the candidate system.
After the report is reviewed by the management, it becomes a formal agreement that
paves the way for actual design and implementation. This is a crucial decision point in
the life cycle. Many projects die here, whereas the more promising ones continue through
implementation. Changes in the proposal are made in writing, depending on the
complexity, size and cost of the project. It is important to verify changes before
committing the project to design.
A well-done feasibility study enables the firm to avoid six common mistakes often made
in project work. These are:
i) Lack of top management support – top management has to understand and support
subordinate managers in their effort to improve the firm’s operations. Feasibility studies
also get subordinate managers directly involved in exploring and designing the systems
they will have to live with in the future. Such involvement results in increased
conscientiousness, which in turn enables top management to have more confidence in
subordinates’ plans. The result will be top management support for such plans.
ii) Failure to clearly specify problems and objectives – the feasibility study can be directed
toward defining the problems and objectives involved in a project, after management has
given the group some understanding of what they would like to accomplish.
iii) Over-optimism - a feasibility study can be conducted in an objective, realistic manner to
prevent over-optimistic forecasts. The study should be conservative in its estimates of
improved operations, reduced costs, and so on, to ensure that all the firm's future
surprises with a new system are happy ones.
iv) Estimation errors – it is easy to underestimate the time and money involved in the
following areas:
a) Impact on the company’s structure
b) Employee’s resistance to change
c) Difficulty of retraining personnel
d) System development and implementation
e) Computer program debugging and running
v) The crash project - many managers do not realize the magnitude of work involved in
developing new systems. Crash projects usually involve attempting too much change too
quickly. A feasibility study might determine that a present system with all its inadequacies is
superior to a crash project - assuming, of course, that the feasibility study itself is not run
as a crash project.
vi) The hardware approach – firms have been known to get a computer first and then decide
on how to use it. A feasibility study can identify, in advance, the uses to which the
computer will be put and can identify the best computers for the job before any
irreversible commitments are made.
SOFTWARE DESIGN
Software design is a process to transform user requirements into some suitable form, which helps
the programmer in software coding and implementation.
Software Design Levels
Software design yields three levels of results:
a. Architectural Design - The architectural design is the highest abstract version of the
system. It identifies the software as a system with many components interacting with
each other. At this level, the designers get an idea of the proposed solution domain.
b. High-level Design - The high-level design breaks the 'single entity, multiple components'
concept of the architectural design into a less-abstracted view of sub-systems and modules and
depicts their interaction with each other. High-level design focuses on how the system,
along with all of its components, can be implemented in the form of modules. It recognizes the
modular structure of each sub-system and their relation and interaction with each other.
c. Detailed Design - Detailed design deals with the implementation part of what is seen as a
system and its sub-systems in the previous two designs. It is more detailed towards
modules and their implementations. It defines the logical structure of each module and its
interfaces to communicate with other modules.
Modularization
Modularization is a technique to divide a software system into multiple discrete and independent
modules, which are expected to be capable of carrying out task(s) independently. These modules
may work as basic constructs for the entire software. Designers tend to design modules such that
they can be executed and/or compiled separately and independently.
Modular design naturally follows the 'divide and conquer' problem-solving strategy, and
many other benefits come with the modular design of software.
Advantages of modularization:
Smaller components are easier to maintain
The program can be divided based on functional aspects
The desired level of abstraction can be brought into the program
Components with high cohesion can be re-used
Concurrent execution can be made possible
Desirable from a security aspect
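A minimal sketch of modularization in Python (the module names and the payroll example are illustrative assumptions, not from the text): each function acts as a discrete module that can be written and tested independently of the others.

```python
def read_records(raw):
    """Parsing module: turns raw text into (name, hours) pairs."""
    return [(name, int(hours)) for name, hours in
            (line.split(",") for line in raw.strip().splitlines())]

def compute_pay(records, rate=10):
    """Calculation module: depends only on its inputs, not on the parser."""
    return {name: hours * rate for name, hours in records}

def format_report(pay):
    """Reporting module: consumes the calculation module's output."""
    return "\n".join(f"{name}: {amount}" for name, amount in sorted(pay.items()))

# The modules are composed into a whole, yet each can be exercised alone.
raw = "ann,40\nbob,35"
print(format_report(compute_pay(read_records(raw))))
```

Because each module communicates only through parameters and return values, any one of them can be replaced or unit tested without touching the rest.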
Concurrency
In software design, concurrency is implemented by splitting the software into multiple
independent units of execution, such as modules, and executing them in parallel. In other words,
concurrency gives the software the capability to execute more than one part of its code in
parallel.
It is necessary for programmers and designers to identify those modules which can be
executed in parallel.
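A hedged sketch of the idea using Python's standard threading module (the two counting functions are illustrative "modules" invented for the example):

```python
import threading

results = {}

def count_words(text):
    """Independent unit of execution 1."""
    results["words"] = len(text.split())

def count_chars(text):
    """Independent unit of execution 2."""
    results["chars"] = len(text)

text = "software design supports concurrency"
threads = [threading.Thread(target=count_words, args=(text,)),
           threading.Thread(target=count_chars, args=(text,))]
for t in threads:
    t.start()          # both units now run in parallel
for t in threads:
    t.join()           # wait for both to finish before using results
print(results)
```

The two functions qualify for parallel execution because neither depends on the other's output, which is exactly the property a designer must look for when identifying concurrent modules.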
Coupling and Cohesion
When a software program is modularized, its tasks are divided into several modules based on
some characteristics. There are measures by which the quality of a design of modules, and
of the interaction among them, can be assessed. These measures are called coupling and cohesion.
Cohesion
Cohesion is a measure that defines the degree of intra-dependability within elements of a
module. The greater the cohesion, the better is the program design.
There are seven types of cohesion, namely –
i. Co-incidental cohesion - It is unplanned and random cohesion, which might
be the result of breaking the program into smaller modules for the sake of
modularization. Because it is unplanned, it may confuse programmers
and is generally not accepted.
ii. Logical cohesion - When logically categorized elements are put together into
a module, it is called logical cohesion.
iii. Temporal cohesion - When elements of a module are organized such that they
are processed at a similar point in time, it is called temporal cohesion.
iv. Procedural cohesion - When elements of a module are grouped together
which are executed sequentially in order to perform a task, it is called
procedural cohesion.
v. Communicational cohesion - When elements of a module are grouped
together which are executed sequentially and work on the same data
(information), it is called communicational cohesion.
vi. Sequential cohesion - When elements of a module are grouped because the
output of one element serves as input to another and so on, it is called
sequential cohesion.
vii. Functional cohesion - It is considered the highest degree of cohesion,
and it is highly desirable. Elements of a module with functional cohesion are
grouped because they all contribute to a single well-defined function. Such a
module can also be reused.
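A small Python sketch contrasting the two extremes of the scale above (the function names are illustrative, not from the text): one functionally cohesive module versus one with only co-incidental cohesion.

```python
import math

def distance(p, q):
    """Functional cohesion: every statement contributes to one
    well-defined function - the distance between two points."""
    dx = p[0] - q[0]
    dy = p[1] - q[1]
    return math.hypot(dx, dy)

def misc_utilities(text, n):
    """Co-incidental cohesion: unrelated elements grouped only for
    convenience - string handling and arithmetic share no purpose."""
    upper = text.upper()
    factorial = math.factorial(n)
    return upper, factorial

print(distance((0, 0), (3, 4)))   # the cohesive module is easy to reuse
```

Note how `distance` could be dropped into any other program unchanged, while `misc_utilities` forces a caller to drag along functionality it probably does not want.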
Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a program.
It tells at what level the modules interfere and interact with each other. The lower the coupling,
the better the program.
There are five levels of coupling, namely -
i. Content coupling - When a module can directly access, modify or refer to
the content of another module, it is called content-level coupling.
ii. Common coupling - When multiple modules have read and write access to
some global data, it is called common or global coupling.
iii. Control coupling - Two modules are called control-coupled if one of them
decides the function of the other module or changes its flow of execution.
iv. Stamp coupling - When multiple modules share a common data structure and
work on different parts of it, it is called stamp coupling.
v. Data coupling - Data coupling is when two modules interact with each other
by means of passing data (as parameters). Ideally, if a module passes a data
structure as a parameter, the receiving module should use all of its components.
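Two of the levels above can be contrasted in a short Python sketch (the banking names are illustrative assumptions): common coupling through shared global data versus the preferred data coupling through parameters.

```python
# Common (global) coupling: the function reads and writes shared
# global data, creating a hidden dependency for every other module
# that touches `balance`.
balance = 100

def deposit_global(amount):
    global balance
    balance += amount

# Data coupling (lower, better): the modules interact only by passing
# data as parameters and returning results.
def deposit(balance, amount):
    return balance + amount

deposit_global(50)
print(balance)              # global state changed as a side effect
print(deposit(100, 50))     # no hidden dependency on shared data
```

The data-coupled version is easier to test and reuse because its behaviour is fully determined by its arguments.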
Design Process (Function-Oriented Design)
The whole system is seen in terms of how data flows through it, by means of data flow
diagrams.
A DFD depicts how functions change the data and state of the entire system.
The entire system is logically broken down into smaller units, known as functions, on the
basis of their operation in the system.
Each function is then described in detail.
Design Process (Object-Oriented Design)
A solution design is created from the requirements, a previously used system and/or a system
sequence diagram.
Objects are identified and grouped into classes on the basis of similarity in their attribute
characteristics.
The class hierarchy and the relations among classes are defined.
An application framework is defined.
Neither the top-down nor the bottom-up approach is practical on its own. Instead, a good
combination of both is used.
Types of DFD
Data Flow Diagrams are either Logical or Physical.
a. Logical DFD - This type of DFD concentrates on the system process and the flow of data in
the system. For example, in a banking software system, it shows how data moves between
different entities.
b. Physical DFD - This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and close to the implementation.
DFD Components
DFD can represent Source, destination, storage and flow of data using the following set of
components –
Entities - Entities are source and destination of information data. Entities are represented
by rectangles with their respective names.
Process - Activities and actions taken on the data are represented by circles or rounded
rectangles.
Data Storage - There are two variants of data storage notation - it can be represented either
as a rectangle with both smaller sides absent (two parallel lines) or as an open-sided
rectangle with only one side missing.
Data Flow - Movement of data is shown by pointed arrows. Data movement is shown
from the base of arrow as its source towards head of the arrow as destination.
Context Diagram: It represents the context in which the system is to exist, i.e. the external
entities who would interact with the system, the specific data items they would be supplying
to the system and the data items they would be receiving from the system.
Shortcomings of DFDs:
i. DFDs leave ample scope to be imprecise - In the DFD model, the function performed by
a bubble is judged from its label.
ii. Control aspects are not defined by a DFD - For instance, the order in which inputs are
consumed and outputs are produced by a bubble is not specified. A DFD model does not
specify the order in which the different bubbles are executed. Representation of such
aspects is very important for modeling real-time systems.
iii. The method of carrying out decomposition to arrive at the successive levels and the
ultimate level to which decomposition is carried out are highly subjective and depend on
the choice and judgment of the analyst. Due to this reason, even for the same problem,
several alternative DFD representations are possible. Further, many times it is not
possible to say which DFD representation is superior or preferable to another one.
iv. The data flow diagramming technique does not provide any specific guidance as to how
exactly to decompose a given function into its sub-functions and we have to use
subjective judgment to carry out decomposition.
CODING
Coding - The objective of the coding phase is to transform the design of a system into code in a
high-level language and then to unit test this code. Programmers adhere to a standard and well-
defined style of coding, which they call their coding standard. The main advantages of adhering
to a standard style of coding are as follows:
A coding standard gives a uniform appearance to the code written by different engineers
It facilitates understanding of the code.
Promotes good programming practices.
To implement the design as code, we require a good high-level language. A
programming language should have the following features:
TESTING
Definition 1
Testing involves the actual execution of program code using representative test data sets to
exercise the program; the outputs are examined to detect any deviation from the expected
output.
Definition 2
Testing is classified as a dynamic verification and validation activity.
Objectives of Testing
1. To demonstrate the operation of the software.
2. To detect errors in the software and therefore:
Obtain a level of confidence,
Produce a measure of quality.
N/B:
The five steps of testing are based on incremental system integration, i.e.
(unit testing – module testing – sub-system testing – system testing – acceptance testing). But
object-oriented development is different, and the levels are less distinct:
Operations and data form objects – the units.
Objects integrate to form classes – the equivalent of modules.
Therefore class testing is the equivalent of cluster testing.
MODES OF TESTING
1. Black Box (Functional) Testing
This is based on the specification alone, without reference to implementation details.
2. White Box (Structural or Glass Box) Testing
This is based on inspection of the code structure (implementation details, low-level design)
Techniques
I. Equivalence partitioning
II. Boundary value analysis.
Equivalence partitioning
Equivalence partitioning involves breaking down the input data into sets regarded as
"equivalent".
It relies on an assumption of uniform behavior within ranges of input values that are not
significantly different in terms of the specification.
E.g.
-A program must handle from 1 to 10,000 records.
-If it can handle 40 records and 9,000 records, chances are that it will work with 5,000
records.
-Therefore the chances of detecting a fault (if present) are equally good whichever test case
from 1-10,000 is selected.
-Therefore if the program works for any one test case it will probably work for any test case
in the range.
The range 1-10,000 constitutes an equivalence class i.e. A set of test cases such that any one
member of the class is as good a test case as any other.
Therefore classes:
Equivalence class 1 – less than one
Equivalence class 2 –1 to 10,000
Equivalence class 3 – more than 10,000.
Equivalence partitioning requires that a test case from each class be carried out.
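The three classes above can be exercised in a short Python sketch (`handles` is a hypothetical program under test invented for the example, not part of the text):

```python
# Hypothetical program under test: accepts 1 to 10,000 records.
def handles(record_count):
    return 1 <= record_count <= 10_000

# One representative test case per equivalence class:
assert handles(0) is False        # class 1: less than one
assert handles(5_000) is True     # class 2: 1 to 10,000
assert handles(10_001) is False   # class 3: more than 10,000

# Boundary value analysis adds test cases at the edges of each class:
for boundary, expected in [(1, True), (10_000, True)]:
    assert handles(boundary) is expected
print("all partitions and boundaries pass")
```

Note how boundary value analysis complements partitioning: faults tend to cluster at the edges of a range, so the values 1 and 10,000 are tested explicitly rather than trusting a single mid-range representative.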
Branch coverage
This requires test data that causes each branch to take both its true and its false outcome
E.g. Pg. 6 (IBID)
Path coverage
Path coverage is concerned with testing all paths through a program (basis path testing).
Path coverage enables the logical complexity of a program to be measured.
It uses this measure to define a basis set of execution paths
Flow graphs depict logical flow through a program
Example - a loop that counts characters and lines in a file (node numbers from the flow graph):
1: initialize
2: do: get character from file
3: if character <> newline then
4, 5: add 1 to character count
6, 7: else add 1 to line count
8: end if
while not end of file
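The character/line counting loop can be rendered as a runnable Python sketch (the function name and the use of a string in place of a file are illustrative assumptions):

```python
def count(text):
    """Count characters and lines; end of the string stands in for EOF."""
    char_count = 0            # node 1: initialize
    line_count = 0
    for ch in text:           # node 2: get character from file
        if ch != "\n":        # node 3: decision
            char_count += 1   # nodes 4-5: add 1 to character count
        else:
            line_count += 1   # nodes 6-7: add 1 to line count
        # node 8: end if; loop back while not end of file
    return char_count, line_count

print(count("ab\ncd\n"))
```

Each numbered node is one vertex of the flow graph; the `if`/`else` contributes the extra edge that raises the program's cyclomatic complexity and hence the number of basis paths to test.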
TEST PLANNING
Test planning is the setting out of standards for the testing process rather than describing
product tests.
Test plans allow developers to get an overall picture of the system tests, as well as ensuring
that the required hardware, software and resources are available to the testing team.
Components of a test plan:
Testing process
This is a description of the major phases of the testing process.
Requirement traceability
This is a plan to test all requirements individually.
Testing schedule
This includes the overall testing schedule and resource allocation.
Test recording procedures
This is the systematic recording of test results.
Hardware and software requirements.
Here you set out the software tools required and hardware utilization.
Constraints
This involves anticipating hardships/drawbacks affecting testing, e.g. staff shortages should
be anticipated here.
N/B
Test plan should be revised regularly.
TESTING STRATEGIES
This is the general approach to the testing process.
There are different strategies depending on the type of system to be tested and development
process used: -
Top-down testing
This involves testing from the most abstract component downwards.
Bottom-up testing
This involves testing from the fundamental components upwards.
Thread testing
This is testing for systems with multiple processes, where the processing of a transaction
threads its way through these processes.
Stress testing
This relies on stressing the system by going beyond its specified limits, thereby testing
how well it copes with overload situations.
Back-to-back testing
This is used to test different versions of a system and compare their outputs.
N/B
Large systems are usually tested using a mixture of strategies.
Top-down testing
Tests the high levels of a system before testing its detailed components. The program is
represented as a single abstract component, with sub-components represented by stubs.
Stubs have the same interface as the component, but limited functionality.
After the top-level component (the system program) is tested, its sub-components (sub-systems)
are implemented and tested in the same way, continuing down to the bottom components
(units).
If top-down testing is used:
- Unnoticed errors may be detected early (structural errors)
- Validation is done early in the process.
Disadvantages Of Using Top-down Testing
1. It is difficult to implement because:
Stubs are required to simulate the lower levels of the system; for complex components it is
impractical to produce a stub that simulates them correctly.
It may require knowledge of internal representations (e.g. pointers).
2. Test output is difficult to observe. Some higher levels (e.g. classes) do not generate output
and must therefore be forced to do so, by creating an artificial environment to generate test
results.
N/B: It is therefore not appropriate for object-oriented systems, although individual systems
may be tested.
Bottom-up testing
This is the opposite of top-down testing: modules at the lower levels of the hierarchy are
tested first, then work proceeds up to the final level.
The advantages of bottom-up testing are the disadvantages of top-down testing, and vice versa:
1. Architectural faults are unlikely to be discovered until much of the system has been tested.
2. It is appropriate for object-oriented systems because individual objects can be tested using
their own test drivers, then integrated and collectively tested.
SOFTWARE MAINTENANCE
Definition 1 - Maintenance is the process of changing a system after it has been delivered and is
in use.
Simple - correcting coding errors
Extensive - correcting design errors
Enhancement - correcting specification errors or accommodating new requirements
Definition 2 - Maintenance is the evolution i.e. process of changing a system to maintain its
ability to survive.
The maintenance stage of system development involves
a) correcting errors discovered after other stages of system development
b) improving implementation of the system units
c) enhancing system services as new requirements are perceived
Information is fed back to all previous development phases and errors and omissions in original
software requirements are discovered, program and design errors found and need for new
software functionality identified.
TYPES OF SOFTWARE MAINTENANCE
The following are the different types of maintenance:
Corrective maintenance - This involves fixing discovered errors in the software (coding errors,
design errors, requirement errors). Once the software is implemented and in full operation, it is
examined to see whether it has met the objectives set out in the original specifications.
Unforeseen problems may need to be overcome and may involve returning to earlier stages of
the system development life cycle to take corrective action
Adaptive maintenance - This is changing the software to operate in a different environment
(operating system, hardware); it does not radically change the software's functionality. After
running the software for some time, the original environment, e.g. the operating system and
peripherals for which the software was developed, may change.
At this stage the software will be modified to accommodate the changes that will have occurred
in its external environment. This could even call for a repeat of the system development life
cycle yet again
Perfective maintenance - Implementing new functional or non-functional system requirements
generated by software customers as their organization or business changes. Also, as the software
is used, the user will recognize additional functions that could provide benefits or enhance the
software if added to it
Preventive maintenance - Making changes on software to prevent possible problems or
difficulties (collapse, slow down, stalling, self-destructive e.g. Y2K).
Maintenance cost (fixing bugs) is usually higher than the original development cost because: -
I. The program being maintained may be old and not consistent with modern software
engineering techniques. It may be unstructured and optimized for efficiency rather than
understandability.
II. Changes made may introduce new faults, which trigger further change requests. This is
mainly because the complexity of the system may make it difficult to assess the effects of a
change.
III. Changes made tend to degrade the system structure, making it harder to understand and
make further changes (the program becomes less cohesive).
IV. The program may lose its links to its associated documentation, making the documentation
unreliable and creating the need for new documentation.
NB. Changes are implemented and validated, and new versions of the system are released.
The maintenance process: Change Request → Impact Analysis → System Release Planning
(perfective, adaptive or corrective maintenance) → Change Implementation → System Release.
Process Management
Aspects in Process Management
Initiation and scope definition – Concerned with determination and negotiation of
requirements, Feasibility analysis (technical, operational, financial, social/political), Process
for review and revision of requirements.
Planning - Process planning, Project planning, Determine deliverables, Effort, schedule and
cost estimation, Resource allocation, Risk management, Quality management, Plan
management.
Implementation - Enactment of plans, Implementation of measurement process, Monitor
process, Control process, Reporting.
Review and evaluation - Determining satisfaction of requirements, Reviewing and
evaluating performance,
Closure - Determining closure and Closure activities
PEOPLE
“Systems are not developed by individuals but by teams”.
Players in system development: -
Senior Managers: define business issues that have significant influence on the project
Project (technical) Manager: plans, motivates, organizes and controls the practitioners who do
the development work
Practitioners : deliver the technical skill necessary to engineer a product
Customers : specify the requirement for the system to be engineered
End-users : interact with the released system/ product
System Development Team Leaders
They should be:
Motivators: able to encourage team members
Organizational: able to mould existing processes (or invent new ones) that will enable the
initial concept to be translated into the final product
Innovative: able to generate new/creative ideas and solutions
Achievers: able to optimize the productivity of team members
Problem solvers: able to diagnose the technical and organizational issues that are relevant and
develop solutions
Controllers/authoritative: able to take charge of the project, and confident enough to control it
Understanding and flexible: able to understand others' points of view, read others'
reactions/signals, change position flexibly, and remain in control during high-stress
situations.
PRODUCT
A major challenge for the system development manager is to produce quantitative estimates
and an organized plan in view of scattered requirements, the unavailability of solid
information and fluid (changing) requirements.
The manager must therefore examine the product and the problem to be solved
PROCESS
Generic phases that characterize system development process are; definition, development
and support. Appropriate engineering model must be employed: -
a) Linear sequential (traditional/ waterfall) model
b) Prototyping
c) RAD model
d) Spiral model
e) Incremental model
PROJECT PLANNING
Managers are responsible for
a) Writing project proposal
b) Writing project costing
c) Project planning and scheduling
d) Project monitoring and reviewing
e) Personnel selection and evaluation
f) Report writing and presentations
Project planning is concerned with identifying the activities, milestones and deliverables
produced by a project.
- a plan must be drawn up to guide development towards the project goals
- system project estimation is the activity concerned with estimating the
resources required to accomplish the project plan
The project manager must anticipate problems which might arise and prepare tentative
solutions to those problems
- the plan is used as the driver for the project
- the (initial) plan is not static but must be modified as the project
progresses and more information becomes available
TYPES OF PLAN
a) Quality plan : describes quality procedures and standards that will be used in a project
b) Validation plan : describes the approach, resources and schedule used for system validation
c) Configuration management plan : describes the configuration management procedures and
structure to be used
d) Maintenance plan : predicts the maintenance requirements of the system, maintenance cost
and effort required
e) Staff development plan : describes how the skills and experience of the project team
members will be developed.
The planning process starts with an assessment of the constraints (required delivery date,
overall budget, staff available, etc.) affecting the project
This is carried out in conjunction with an estimation of project parameters such as structure,
size and distribution of functions
The project milestones and deliverables are then defined
A schedule for the project is drawn up, analyzed, approved and subjected to later reviews
PROJECT SCHEDULING
It is the estimation of time and resources required to complete activities and organizing them
in a coherent sequence.
It also involves separating the work (project) into separate activities and judging the time
required to complete these activities, some of which are carried out in parallel
Schedules must: -
Properly co-ordinate the parallel activities.
Avoid situations where the whole project is delayed while a critical task is finished (critical
tasks are the jobs your team must complete to finish a project. The critical path is the
sequence of critical tasks, identifying in which order to complete them and how long each
task will take. The length of the critical path tells you the timeline for completing your
project.)
Include allowances for problems that can cause delays in completion, and therefore be
flexible.
They must also estimate the resources needed to complete each task (human effort, hardware,
software, finance (budget), etc.)
NB: the key to estimation is to estimate as if nothing will go wrong, then increase the
estimate to cover anticipated problems. Also add a further contingency factor to cover
unanticipated problems.
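The note above amounts to simple compounding arithmetic; a hedged sketch with illustrative percentages (the figures are assumptions, not from the text):

```python
# Start from a "nothing goes wrong" estimate, then add an allowance for
# anticipated problems and a further contingency for unanticipated ones.
base_estimate = 100          # person-days, assuming no problems
anticipated = 0.30           # +30% for problems you can foresee (assumed)
contingency = 0.20           # +20% further contingency (assumed)

estimate = base_estimate * (1 + anticipated) * (1 + contingency)
print(estimate)   # 156.0 person-days
```

The two factors multiply rather than add, so the final figure is slightly more pessimistic than summing the percentages would suggest.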
Project schedule is usually presented as a set of charts showing
Work breakdown
Activity dependency
Staff allocation
PERT/CPM
Program Evaluation and Review Technique (PERT), (Also Critical Path Method – CPM) is a
graphical network model that depicts a project’s tasks and the relationships between those tasks.
The project is shown as a network diagram with the activities shown as arrows and events
displayed as nodes.
Shows all individual activities and dependencies
It forms the basis for planning and provides management with the ability to plan for best
possible use of resources to achieve a given goal within time and cost limitations
It provides visibility and allows management to control unique programs as opposed to
repetitive situations
Helps management handle the uncertainties involved in programs by answering such
questions as how time delays in certain elements influence others as well as the project
completion. This provides management with a means for evaluating alternatives
It provides a basic structure for reporting information
Reveals interdependencies of activities
Facilitates what-if exercises
It allows one to perform scheduling risk analysis
Allows a large amount of sophisticated data to be presented in a well organized diagram
from which both the contractor and the customer can make joint decisions
Allows one to evaluate the effect of changes in the program
More effective than Gantt charts when you want to study the relationships between tasks
However, PERT/CPM:
Requires intensive labour and time
Has complex charts, which adds to implementation problems
Has more data requirements and is thus expensive to maintain
Is utilized mainly in large and complex projects
Gantt charts and PERT/CPM are not mutually exclusive techniques; project managers often
use both methods. Neither handles the scheduling of personnel or the allocation of resources
NETWORK ANALYSIS
Network analysis is a generic name for a family of related techniques developed to aid
management to plan and control projects. It provides planning and control information on time,
cost and resource aspects of a project. It is most suitable where the projects are complex, large or
restrictions exist.
The critical path method is applied by drawing a network, either an activity-on-arrow or an
activity-on-node network. In network analysis a project is broken down into its constituent
activities, which are presented in diagrammatic form. In CPM one has to analyze the
project, draw the network, estimate the time and cost, locate the critical path, schedule the
project, monitor and control the progress of the project, and revise the plan.
Example: draw a network and find the critical path for the following project
Activity   Preceding Activity   Duration
A          -                    4
B          A                    2
C          B                    10
D          A                    2
E          D                    5
F          A                    2
G          F                    4
H          G                    3
J          C                    6
K          C, E                 6
L          H                    3
Using the information in Table 1, assuming that the project team will work a standard working
week (5 working days in 1 week) and that all tasks will start as soon as possible:
(i) Draw the network diagram
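One way to check an answer to the exercise is a forward pass over the activity table: compute each activity's earliest finish time, and the project duration is the largest of these, i.e. the length of the critical path. A hedged Python sketch (the data structure is an illustrative encoding of the table; a full CPM answer would also do the backward pass to find slack):

```python
activities = {                       # name: (predecessors, duration)
    "A": ([], 4),  "B": (["A"], 2),  "C": (["B"], 10),
    "D": (["A"], 2), "E": (["D"], 5), "F": (["A"], 2),
    "G": (["F"], 4), "H": (["G"], 3), "J": (["C"], 6),
    "K": (["C", "E"], 6), "L": (["H"], 3),
}

finish = {}
def earliest_finish(name):
    """Earliest finish = max of predecessors' finishes + own duration."""
    if name not in finish:
        preds, duration = activities[name]
        start = max((earliest_finish(p) for p in preds), default=0)
        finish[name] = start + duration
    return finish[name]

duration = max(earliest_finish(a) for a in activities)
print(duration)   # project length along the critical path
```

For this table the forward pass gives a project duration of 22 days, reached along A-B-C-J and A-B-C-K, so both are critical paths.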
A project is considered successful when it is completed:
i) Within the allocated time
ii) Within the budgeted cost
iii) At the proper performance or specification level
iv) With acceptance by the customer/user
v) With minimum or mutually agreed upon scope changes
vi) Without disturbing the main work flow of the organization
vii) Without changing the corporate culture
viii) Within the required quality and standards thus you can use the customer’s name as a
reference
PROJECT ESTIMATION
System (software) cost and effort estimates can never be exact; too many variables (human,
technical, environmental, political) can affect system cost and the effort applied to development
Project estimation strives to achieve reliable cost and effort estimates
A number of options arise in trying to achieve this: -
a) Delay estimation until late in the project (estimates done after the project)
b) Base estimates on similar projects that have already been completed
c) Use relatively simple decomposition techniques to generate project cost and effort estimates
d) Use one or more empirical models for system cost and effort estimation
Empirical estimation models: based on experience (historical data), they take the form
d = f(vi)
where d = one of a number of estimated values (e.g. effort, cost, project duration etc) and
vi = selected independent parameters (e.g. estimated *LOC or *FP)
DECOMPOSITION TECHNIQUES
The accuracy of a system (software) project estimate is predicated on a number of things:
a) The degree to which the planner has properly estimated the size of the product to be built.
b) The ability to translate the size estimate into human effort, calendar time and money.
c) The degree to which the project plan reflects the abilities of the system development team.
d) The stability of the product requirements and of the environment that supports the system
development effort.
A project estimate is only as good as the estimate of the size of the work to be accomplished.
Size is a quantifiable outcome of the system/software project.
An estimation model for computer software uses an empirically derived formula to predict effort as a function of LOC or FP, typically of the form
E = A + B × (ev)^C
where A, B and C are empirically derived constants, E is effort, and ev is the estimation variable (LOC or FP).
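As an illustration of such a model, the formula can be evaluated directly. The constants A, B and C below are invented placeholders, since real values must be derived by regression over an organization's historical project data:

```python
# Illustrative empirical effort model E = A + B * (ev)**C.
# A, B and C are made-up placeholder constants; a real model would
# calibrate them against historical project data.
A, B, C = 5.2, 0.4, 0.91   # hypothetical calibration constants
ev = 33.2                  # estimation variable, e.g. estimated KLOC

effort = A + B * (ev ** C)  # estimated effort, person-months
print(round(effort, 1))
```

A larger estimation variable always yields a larger effort estimate here, which is what an empirical model of this form is intended to capture.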
(1) Algorithmic cost modeling - A model is developed using historical cost information which
relates some software metric (usually its size) to the project cost. An estimate is made of that
metric and the model predicts the effort required.
(2) Expert judgement - One or more experts on the software development techniques to be used
and on the application domain are consulted. They each estimate the project cost and the final
cost estimate is arrived at by consensus.
(3) Estimation by analogy - This technique is applicable when other projects in the same
application domain have been completed. The cost of a new project is estimated by analogy with
these completed projects.
(4) Parkinson's Law - Parkinson's Law states that work expands to fill the time available. In
software costing, this means that the cost is determined by available resources rather than by
objective assessment. If the software has to be delivered in 12 months and 5 people are available,
the effort required is estimated to be 60 person-months.
(5) Pricing to win - The software cost is estimated to be whatever the customer has available to
spend on the project. The estimated effort depends on the customer's budget and not on the
software functionality.
(6) Top-down estimation - A cost estimate is established by considering the overall
functionality of the product and how that functionality is provided by interacting sub-functions.
Cost estimates are made on the basis of the logical function rather than the components
implementing that function.
(7) Bottom-up estimation - The cost of each component is estimated. All these costs are added
to produce a final cost estimate.
Dynamic verification - concerns software testing: exercising and observing product behaviour. The system is executed with test data and its operational behaviour is observed.
Program testing - is done to reveal the presence of errors, NOT their absence. A successful test is a test which discovers one or more errors. Testing is the only validation technique for non-functional requirements.
Verification and validation should establish confidence that the software is fit for the purpose it is designed for. This does NOT mean it is completely free of defects; rather, it must be good enough for its intended use, which determines the degree of confidence needed. This depends on the system's purpose, user expectations and the marketing environment:
Software function - the level of confidence needed depends on how critical the software is to an organisation
User expectations - users may have low expectations of certain kinds of software
Marketing environment - getting a product to the market early may be more important than finding all the defects in the program
SOFTWARE CONFIGURATION MANAGEMENT
A software configuration is the collection of items that comprise all information produced as part of the software process. The output of the software process is information, and includes computer programs, documentation and data (both within the program and external to it).
Definition 2
The process which controls the changes made to a system, and manages the different versions of the evolving software product. It involves the development and application of procedures and standards for managing an evolving system product. Procedures should be developed for building systems and releasing them to customers.
Standards should be developed for recording and processing proposed system changes, and for identifying and storing different versions of the system.
Configuration managers (team) are responsible for controlling software changes. Controlled
systems are called baselines. They are the starting point for controlled evolution.
Configuration managers are responsible for keeping track of difference between software
versions and ensuring new versions are derived in a controlled way.
They are also responsible for ensuring that new versions are released to the correct customers at the appropriate time.
A configuration database is used to record all relevant information relating to configurations, to:
a) Assist with assessing the impact of system changes
b) Provide management information about configuration management
The configuration database defines/describes:
Customers who have taken delivery of a particular version
Hardware and operating system requirements to run a given version
The number of versions of the system made so far, and when they were made, etc.
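One way to picture such a configuration-database record (the field names and values are illustrative assumptions, not a prescribed schema):

```python
# Hypothetical configuration-database record, capturing the items the
# notes say the database should describe for each system version.
version_record = {
    "version": "2.3",
    "release_date": "2006-05-10",            # when this version was made
    "customers": ["Acme Ltd", "Beta Corp"],  # who has taken delivery
    "required_os": "Windows XP SP2",         # platform needed to run it
    "required_hardware": "512 MB RAM",
    "changes": ["CR-101", "CR-115"],         # change requests included
}

# Impact assessment: which customers does a change to this version affect?
affected = version_record["customers"] if "CR-115" in version_record["changes"] else []
print(affected)
```

Queries like this are what let configuration managers assess the impact of a proposed change before approving it.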
Computer Aided Software Engineering (CASE) tool support for CM is therefore essential; tools are available ranging from stand-alone tools to integrated CM workbenches.
A CASE tool is a software package that supports the construction and maintenance of a logical system specification model.
CASE tools are designed to support the rules and interactions of models defined in a specific methodology.
They also permit software prototyping and code generation.
They aim to automate the document production process by ensuring automation of analysis and design operations.
DISADVANTAGES
CASE products can be expensive
CASE technology is not yet fully evolved so its software is often large and inflexible
Products may not provide a fully integrated development environment
There is usually a long learning period before the tools can be used effectively, i.e. benefits are not realized immediately
Analysts must have a mastery of the structured analysis and design techniques if they are to
exploit CASE tools
Time and cost estimates may have to be inflated to allow for an extended learning period of
CASE tools
Quality Assurance
A Quality Management System defines:
- the relevant procedures and standards to be followed
- the Quality Assurance assessments to be carried out
Correctness - ensures the system operates correctly, provides value to its users and performs the required functions; defects must therefore be fixed/corrected
Maintainability - is the ease with which a system can be corrected if an error is encountered, adapted if its environment changes, or enhanced if the user desires a change in requirements
Integrity - is the measure of the system's ability to withstand attacks (accidental or intentional) on its security in terms of data processing, program performance and documentation
Usability - is the measure of the user-friendliness of a system, measured in terms of the physical and intellectual skills required to learn the system, the time required to become moderately efficient in using it, the net increase in productivity when used by a moderately efficient user, and the general user attitude towards the system.
QUALITY ASSURANCE
Since quality should be measurable, quality assurance needs to be put in place.
Quality Assurance consists of the auditing and reporting functions of management.
Quality Assurance must outline the standards to be adopted, i.e. either internationally recognized standards or client-designed standards.
Quality Assurance must lay down the working procedures to be adopted during the project lifetime.
Quality Assurance builds the client's confidence (increasing acceptability), as well as the contractor's own confidence in knowing that they are building the right system and that it will be highly acceptable.
Testing and error correction assure that the system will perform as expected without defects or collapse, and also ensure accuracy and reliability.
METRICS
A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. Measurement occurs as a result of the collection of one or more data points.
Software engineers collect measures and develop metrics so that indicators can be obtained.
An indicator is a metric or combination of metrics that provides insight into the software process, a project, or the product itself.
An indicator provides insight that enables project managers or software engineers to adjust the process or project to make things better.
Metrics should be collected so that process and product indicators can be ascertained, enabling software engineers and organizations to gain insight into the efficiency of an existing process (i.e. paradigm, software engineering tasks, work products, and milestones).
Metrics are mainly applied to software productivity and quality: they are used to measure software development "output" as a function of the effort and time applied, and to measure the "fitness for use" of the product.
Software process and product measurement
Software processes, products and resources are measured to characterize and gain understanding of them, and to establish baselines for comparison with future assessments.
Software is also measured:
To evaluate and determine status with respect to plans, and to ensure we do not get off track during the engineering life cycle.
To predict: understanding the relationships among processes and products means that observed values can be used to predict others. This helps in planning for future trends, costs, time and quality.
For projection and estimation of costs, useful in risk analysis and in making design-cost trade-offs.
For improvement: after identifying problems, root causes, inefficiencies and other opportunities, measurement supports improving software quality and process performance.
To enable software managers to assess the status of an on-going project, track potential risks, uncover problem areas before they become critical, adjust the workflow or tasks, and evaluate the project team's ability to control the quality of software products.
Software Measurements
Size-oriented metrics
Function oriented metrics
Extended function point metrics
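Size-oriented metrics, for example, normalise quality and cost data by program size (KLOC); a minimal sketch with invented project figures:

```python
# Size-oriented metrics sketch: quality and cost data normalised by
# program size in KLOC. All project figures below are invented.
loc = 12_100      # lines of code delivered
effort = 24       # person-months expended
errors = 134      # errors found before release
cost = 168_000    # total development cost

kloc = loc / 1000
errors_per_kloc = errors / kloc   # a common quality indicator
loc_per_pm = loc / effort         # productivity: LOC per person-month
cost_per_kloc = cost / kloc       # cost normalised by size

print(round(errors_per_kloc, 1), round(loc_per_pm), round(cost_per_kloc))
```

Function-oriented metrics work the same way but normalise by function points (FP) rather than LOC, which makes them independent of the programming language used.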
Every level of software engineering has appropriate metrics that can be applied. These
include:
Metrics for analysis which include the function based metrics
The bang metrics
Metrics for specification quality
Metrics for design model: architectural design metrics, component-level design metrics
(which tests coupling and cohesion)
Interface design metrics
The metrics for the source code
Metrics for testing
Metrics for maintenance
Software Metrics
A software metric is any type of measurement which relates to a software system, process or related documentation.
E.g. size measured in lines of code, or the Fog index (Gunning 1962), a measure of the readability of a product manual, etc.
Metrics fall into two classes:
Control metrics
These provide information about process quality, which in turn is related to product quality.
Predictor metrics
Measurements of a product attribute that can be used to predict an associated product quality.
E.g. the Fog index predicts readability; cyclomatic complexity predicts the maintainability of software.
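As a sketch of a predictor metric, the Gunning fog formula can be computed directly. The standard form is 0.4 × (average sentence length + percentage of complex words); the sample figures below are invented, and complex words (3+ syllables) are passed in as a count to avoid syllable-counting heuristics:

```python
# Gunning fog index sketch: estimates the years of formal education a
# reader needs to understand a text on first reading.
def fog_index(words: int, sentences: int, complex_words: int) -> float:
    # 0.4 * (average sentence length + percentage of complex words)
    return 0.4 * (words / sentences + 100 * complex_words / words)

# Invented sample figures for one page of a product manual:
print(round(fog_index(words=250, sentences=14, complex_words=30), 1))
```

A higher index predicts a less readable manual, which is exactly how a predictor metric is used: the measured attribute stands in for a quality we care about.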
Measurement concerns identifying the key issues or measures that should show where a program is deficient. Managers must decide on their relative importance.
RISK MANAGEMENT
Risk
A problem or threat that you would prefer not to have during project development, because it can threaten the project, the software or the organization.
Risk Management
Involves anticipating risks that might affect the project schedule or the quality of the software being developed, and taking actions to avoid those risks. The results of risk analysis should be documented in the project plan, along with an analysis of the consequences of a risk occurring.
Importance of risk management
Because of the inherent uncertainties that most projects face, risk management makes it easier to cope with problems and to ensure that they do not lead to unacceptable budget or schedule slippage.
Categories of Risks
Causes of risks:
- They stem from loosely defined requirements.
- Difficulty in estimating the time and resources required for software development.
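Anticipated risks are commonly recorded in a risk register and ranked by exposure (probability × impact); a minimal sketch, with invented entries and an assumed 1-10 impact scale:

```python
# Minimal risk-register sketch: each anticipated risk gets an estimated
# probability and impact so the highest-exposure risks can be tackled
# first. All entries and figures are illustrative.
risks = [
    # (description,                   probability, impact on scale 1-10)
    ("Loosely defined requirements",  0.6,         8),
    ("Effort estimate too low",       0.4,         6),
    ("Key staff leave mid-project",   0.2,         9),
]

# Rank by exposure = probability * impact, a common prioritisation rule.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for description, p, impact in ranked:
    print(f"{description}: exposure {p * impact:.1f}")
```

The ranked output is what would be documented in the project plan, together with the planned avoidance or contingency action for each entry.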
Documentation should be
Clear and non-ambiguous
Structured and directive
Readable and presentable
Tool-assisted (CASE tools) in production (automation).
SYSTEM DOCUMENTATION
Items for documentation to be produced for a software product include:-
System Request – this is a written request that identifies deficiencies in the current system and requests a change
Feasibility Report – this indicates the economic, legal, technical and operational feasibility of
the proposed project
Preliminary Investigation Report – this is a report to the management clearly specifying the problems identified within the system and recommending further action to be taken
System Requirements Report – this specifies the entire end-user and management requirements, all the alternative plans, their costs and the recommendations to the management
System Design Specification – it contains the designs for the inputs, outputs, program files and
procedures
User Manual – it guides the user in the implementation and installation of the information
system
Maintenance Report – a record of the maintenance tasks done
Software Code – this refers to the code written for the information system
Test Report – this should contain test details e.g. sample test data and results etc
Tutorials - a brief demonstration and exercise to introduce the user to the working of the
software product
SOFTWARE DOCUMENTATION
The typical items included in the software documentation are
Introduction – shows the organization’s principles, abstracts for other sections and notation
guide
Computer characteristics – a general description with particular attention to key attributes and
summarized features
Hardware interfaces – a concise description of information received or transmitted by the
computer
Software functions – shows what the software must do to meet requirements, in various
situations and in response to various events
Timing constraints – how often and how fast each function must be performed
Accuracy constraints – how close output values must be to ideal or expected values for them to be acceptable
Response to undesired events – what the software must do when undesired events occur, e.g. a sensor goes down, invalid data is received, etc.
Program sub-sets – what the program should do if it cannot do everything
Fundamental assumptions – the characteristics of the program that will stay the same, no
matter what changes are made
Changes – the type of changes that have been made or are expected
Sources – annotated list of documentation and personnel, indicating the types of questions each
can answer
Glossary – defines the acronyms and technical terms with which most documentation is fraught
CHANGE MANAGEMENT.
Change management process involves technical change analysis, cost-benefit analysis and
change tracking
Version and release management are the processes of identifying and keeping track of the versions and new releases of a system.
Ensures the right version is released at the right time.
Some versions may be designed to operate in different hardware or software (operating
system) platforms though their functions are the same.
System release - the version that is distributed to customers.
A release (isn’t only a set of programs but) includes: -
Configuration files - defining how the release should be configured for installations.
Data files – needed for successful system operation.
Installation programs - to help install the system on target hardware
Electronic and paper documentation - describing the system.
All information must be availed to customers.
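A release manifest can make these items explicit; the layout and file names below are purely illustrative:

```python
# Hypothetical release manifest listing the items the notes say a
# system release includes, beyond the programs themselves.
release = {
    "version": "2.3",
    "programs": ["app.exe"],
    "configuration_files": ["install.cfg"],  # configuring per installation
    "data_files": ["lookup.dat"],            # needed for system operation
    "installation_program": "setup.exe",     # installs on target hardware
    "documentation": ["manual.pdf", "readme.txt"],
}

# Completeness check before the release is shipped to customers:
required = {"configuration_files", "data_files",
            "installation_program", "documentation"}
complete = required <= release.keys()
print("release manifest complete" if complete else "items missing")
```

Checking the manifest before shipping is one way of ensuring that all the information a customer needs is actually made available to them.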
PROFESSIONAL ETHICS
Do not harm others – ethical behaviour is concerned with both helping clients satisfy their needs and not hurting them
Be competent – IT Professionals must master the complex body of knowledge in their profession; a challenging task because IT is a dynamic and rapidly evolving field. Wrong advice to the client can be costly
Maintain independence and avoid conflicts of interest – in exercising their professional duties, they should be free from the influence, guidance or control of other parties, e.g. vendors, and thus avoid corruption and fraud
Match clients’ expectations – it is unethical to misrepresent either your qualifications or ability
to perform a certain job
Maintain fiduciary responsibility – IT Professionals must hold in trust information provided to them
Safeguard client and source privacy – ensure the privacy of all private and personal information and do not 'leak' it
Protect records – safeguard records they generate and keep on business transactions with their
clients
Safeguard intellectual property – they are trustees of information and software and hence must
recognize that these are intellectual property that must be safeguarded
Provide quality information – the creator of information / products must disclose information
about the quality and even the source of information in a report or product record
Avoid selection bias – IT Professionals routinely make selection decisions at various stages of the information life cycle. They must avoid the bias of a prevailing point of view; selection is related to censorship
Be a steward of a client's assets, energy and attention – provide information at the right time, in the right place and at the right cost
Manage gate-keeping and censorship and obtain informed consent
Handle confidential information responsibly and keep client confidentiality
Abide by laws, contracts, and license agreements; exercise professional judgement