
SYSTEM DESIGN

Design

The most challenging phase of the system life cycle is system design. The term design describes a final system and the process by which it is developed. It refers to the technical specifications that will be applied in implementing the candidate system. It also includes the construction of programs and program testing.

System design is a solution, a "how to" approach to the creation of a new system. This important phase is composed of several steps. It provides the understanding and procedural details necessary for implementing the system recommended in the feasibility study. Emphasis is on translating the performance requirements into design specifications.

The first step is to determine how the output is to be produced and in what format. Samples of the output and input are also presented. Second, input data and master files (database) have to be designed to meet the requirements of the proposed output. The operational (processing) phases are handled through program construction and testing, including a list of programs needed to meet the system's objectives and to complete documentation. Finally, details related to justification of the system and an estimate of the impact of the candidate system on the user and the organization are documented and evaluated by management as a step toward implementation. The basic steps in designing are:

 Output Design

 Input Design

 Data Design

 Process Design

Output Design
Computer output is the most important and direct source of information to the user. Efficient, intelligible output design should improve the system's relationships with the user and help in decision making. A major form of output is hard copy from the printer.

Output from the computer system is required to communicate the results of processing to users. Designing computer output should proceed in an organized, well-thought-out manner. The right output must be developed while ensuring that each output element is designed so that the user will find the system easy to use effectively. The term output applies to any information produced by an information system, whether printed or displayed. While designing computer output, the steps described below were followed.

In addition to deciding on the output device, the systems analyst must consider the print format and the editing for the final printout. The task of output preparation is critical, requiring skill and the ability to align user requirements with the capabilities of the system in operation. The design considerations we followed while designing output are listed below, with a small illustrative sketch after the list:

 Name or title.

 Space and arrangement

 Headers and footers.
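As a minimal sketch (assumed Python, with illustrative contents rather than the project's actual reports), a printed report can honor the title, arrangement, and header/footer considerations like this:

# Sketch of a printed report honoring the considerations above: a name/title,
# column headers, aligned arrangement, and a footer. Contents are illustrative.
ROWS = [("E001", "Asha", 8), ("E002", "Ravi", 7)]   # hypothetical time-sheet rows

def print_report(rows):
    print("DAILY TIME SHEET REPORT".center(34))      # name or title
    print(f"{'Code':<6}{'Name':<20}{'Hours':>8}")    # header row
    print("-" * 34)
    for code, name, hours in rows:                   # space and arrangement
        print(f"{code:<6}{name:<20}{hours:>8}")
    print("-" * 34)
    print(f"{'Page 1':>34}")                         # footer

print_report(ROWS)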

In online applications, the layout sheet for displayed output is similar to the layout chart used for designing input; in these cases, the output forms are similar to the input forms. Output forms for other types of applications, such as reports used to make decisions, must be designed carefully. The following layout describes our report design.

[Report layout figure]

Input Design

Inaccurate input data are the most common cause of errors in data processing. Errors entered by data entry operators can be controlled by input design. Input design is the process of converting user-originated inputs to a computer-based format. In the system design phase, the expanded data flow diagram identifies logical data flows, data stores, sources and destinations. A system flowchart specifies master files (database), transaction files and computer programs.

Input Media:

In this project, earlier stages identified the data that is input to the transactions. The next step is to decide what media should be used for the input. Since this is an online data entry project, we need computer-based online forms as the media for input entry. There are three approaches to data entry with forms: menus, formatted forms, and prompts. We adopted the formatted-form approach for entering data. A formatted form is a preprinted form or a template that requests the user to enter data in appropriate locations; it is a fill-in-the-blanks type. The form is flashed on the screen as a unit. The cursor is usually positioned at the first blank. After the user responds by filling in the appropriate information, the cursor automatically moves to the next blank, and so on until the form is completed.
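The following is a minimal sketch (assumed Python, with hypothetical field names, not the project's actual form definition) of how such a fill-in-the-blanks entry loop behaves:

# Illustrative sketch of a formatted ("fill-in-the-blanks") form: the form is
# treated as a unit and the "cursor" advances from one blank to the next.
FIELDS = ["EmployeeCode", "FirstName", "LastName", "PhoneNo"]  # hypothetical

def formatted_form_entry():
    """Prompt for each field in order, like a cursor advancing blank by blank."""
    record = {}
    for name in FIELDS:
        while True:
            value = input(f"{name}: ").strip()
            if value:                 # require a response before moving on
                record[name] = value
                break                 # cursor advances to the next blank
    return record

if __name__ == "__main__":
    print(formatted_form_entry())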

Form Types:

There are three types of forms, classified by what they do in the system: action forms, which perform some action such as storing, modifying, and deleting data; memory forms, which perform extraction and display operations on existing historical data; and report forms, which generate decision-support data from existing records. We used reports as output forms. As input media we used both action and memory forms in combination.

Form Layout:

When a form is designed, a list is prepared of all the items to be included on the form and the maximum space to be reserved. The form user should check the list to make sure it has the required details:

 Title

 Data Zoning
 Rules and Captions

Design Considerations:

In designing these forms we took care of several attributes, which are mentioned below:

 Identification and wording.

- Form titles and labels.

 Maximum readability and use.

- Legible, intelligible, uncomplicated, and well spaced.

 Physical factors.

- Composition, color, layout.

 Order of data items.

- Logical sequence, data relation.

 Ease of data entry.

- Field positions.

 Size and arrangement.

- Size, storing, filing, and space for signs.

 Use of instructions.

- Online help for data entry, status info.

The following diagram describes the sample form layout we used to design forms in our project.

[Form layout figure: Form Title, Data Zone, Record Navigation, Action Commands, Online Help]
Database Design

Database design is the process of developing database structures to hold data to cater to user requirements. The final design must satisfy user needs in terms of completeness, integrity, performance and other factors. For a large enterprise, the database design can turn out to be an extremely complex task, leaving a lot to the skill and experience of the designer. A number of tools and techniques, including computer-assisted techniques, are available to facilitate database design.

The primary input to the database design process is the organization's statement of requirements. Poor definition of these requirements is a major cause of poor database design, resulting in databases of limited scope and utility which are unable to adapt to changes.
The major step in database design is to identify the entities and relationships that reflect the organization's data naturally. The objective of this step is to specify the conceptual structure of the data, and it is often referred to as data modeling.

There are several methodologies to model the data logically. We adopted ER modeling as our data modeling technique. The ER model is a technique for the analysis and logical modeling of a system's data requirements. It uses three basic concepts: entities, attributes and relations.

Entity:

An entity is a distinguishable object. Entities are classified into regular entities and weak entities. A weak entity is an entity that is existence-dependent on some other entity, i.e. it does not exist if that other entity does not exist. A regular entity is one that is not weak. The graphical notation of an entity is shown below.

[Notation: Regular Entity, Weak Entity]

Attribute:

Entities have properties known as attributes. All entities of a given type have certain kinds of properties in common, and each kind of property draws its values from a corresponding value set. Properties can be of various types: simple or composite, key, single- or multi-valued, missing, and base or derived. Attributes are graphically represented as shown below.

[Notation: Attribute]

Relation:

A relationship defines an association among entities. The entities involved in a given relationship are said to be participants in that relationship. The number of participants in a given relationship is called the degree of that relationship. An ER relationship can be one-to-one, one-to-many, or many-to-many. The cardinality of a relationship refers to the number of occurrences of the entities in that relationship. The graphical notation of a relationship is shown below.

[Notation: Relationship]

In our project we identified the entities, the attributes of those entities, and the relationships between those entities from the data collected in the analysis phase. These are listed below:

Entities:

 Administrator
 Department
 Designation
 Employee
 Client
 Project
 Project assigned
 Daily time sheet
 Candidate
 Appointment
 Post interview

Attributes:

Administrator: adminid, firstname, lastname, username, adminpwd, addeddate

Department: DepartmentID, DepartmentCode, DepartmentName, CurrentStrength, MaxNoEmp, MinSal, MaxSal, MinAge, MaxAge

Designation: DesignationID, DesignationCode, DesignationName

Employee: EmployeeID, DepartmentID, DesignationID, EmployeeCode, FirstName, LastName, Sex, DateOfBirth, UserName, Pwd, Address, EmailID, PhoneNo, EmployeeImage, Nationality

Client: ClientID, ClientCode, ClientName, EmailID, Country, AddedDate

Project: ProjectID, ProjectCode, ProjectName, ClientID, ProjectHour, StartDate, EndDate, AddedDate, Status

Project assigned: AssignmentID, EmployeeID, ProjectID

Daily time sheet: DailyTimeSheetID, EmployeeID, ProjectID, EntryDate, Hours, Minutes

Candidate: CandidateID, CandidateCode, ResumeDate, FirstName, LastName, DOB, Sex, Address, PhoneNo, EmailID, Department, Designation, College, Stream, Percentage, Location, Nationality, Hobby, Skill, Experience, Achivements

Appointment: AppointmentID, CandidateID, AppointmentNo, IssueDate, Authority, InterviewDate, ProbationPeriod, JoiningDate, Salary

Post interview: PostInterviewID, CandidateID, PostInterviewCode, InterviewLevel, Description, PostAppFor, OfferedSalary, Negotiated, SelectionStatus, InterviewerCode, InterviewerName, InterviewerDesignation

Relationships:
 Administrator controls Departments

 Administrator controls Employees

 Administrator deals with Clients

 Administrator selects Candidates

 Employee has Designation

 Employee works in Department

 Employee is assigned Projects

 Client deals with Projects

 Project deals with Project assigned

 Project deals with Daily time sheet

 Candidate faces interview in Post Interview

 If a candidate is selected in Post Interview, an Appointment is made
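As a minimal sketch (assumed Python classes, not project code) of how an entity, its key attribute, and one of the one-to-many relationships above can be modeled, take "Employee works in Department":

# Sketch: the "Employee works in Department" relationship as Python dataclasses.
# One Department (regular entity) relates to many Employees; attribute names
# follow the lists above, but the classes themselves are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Department:                     # regular entity
    department_id: int                # key attribute
    department_name: str
    employees: List["Employee"] = field(default_factory=list)

@dataclass
class Employee:                       # regular entity
    employee_id: int                  # key attribute
    first_name: str
    department_id: int                # participation in "works in Department"

dept = Department(1, "Engineering")
emp = Employee(10, "Asha", department_id=dept.department_id)
dept.employees.append(emp)            # cardinality: one department, many employees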


ER Diagrams:
ER diagrams represent the ER model in graphical notation, which makes the relationships easier to understand.


[ER diagram 1: Administrator (adminid, firstname, lastname, username, adminpwd, addeddate) controls Department and Employee and deals with Client; Employee works in Department, has Designation, and is assigned Projects; Project assigned and Daily time sheet connect Employees to Projects. Entity attributes are as listed above.]

[ER diagram 2: Administrator selects Candidate; Candidate faces interview in Post Interview; if selected, an Appointment is made. Entity attributes are as listed above.]
Data Dictionary:
A data dictionary is a catalogue, a repository, of the elements in a system. As the name suggests, these elements center around data and the way they are structured to meet user requirements and organizational needs. In a data dictionary you will find a list of all the elements composing the data flow through a system.

Table Name: Administrator


Description: This data structure describes the administrator details
Contents:
Name Type Null Key
adminid bigint Not Null Primary Key
firstname nvarchar(50)
lastname nvarchar(50)
username nvarchar(50)
adminpwd nvarchar(50)
addeddate datetime
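As an illustration of how a data-dictionary entry maps to a concrete table definition, here is a sketch using Python's built-in sqlite3 module. The production system uses SQL Server types (bigint, nvarchar, datetime), so the types here are portable stand-ins and the statement is not the project's actual DDL:

# Sketch: creating the Administrator table from its data-dictionary entry.
# sqlite3 (Python standard library) is used only for illustration; the
# project's own database is SQL Server, so names and types may differ.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Administrator (
        adminid   INTEGER PRIMARY KEY,  -- bigint, Not Null, Primary Key
        firstname TEXT,                 -- nvarchar(50)
        lastname  TEXT,                 -- nvarchar(50)
        username  TEXT,                 -- nvarchar(50)
        adminpwd  TEXT,                 -- nvarchar(50)
        addeddate TEXT                  -- datetime
    )
""")
conn.execute(
    "INSERT INTO Administrator VALUES (?, ?, ?, ?, ?, ?)",
    (1, "Jane", "Doe", "jdoe", "secret", "2024-01-01"),
)
print(conn.execute("SELECT * FROM Administrator").fetchall())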

Table Name: HRMStblAppointment


Description: This data structure defines the appointment details.
Contents:
Name Type Null Key
AppointmentID bigint Not Null Primary Key
CandidateID int
AppointmentNo nvarchar(50)
IssueDate datetime
Authority nvarchar(50)
InterviewDate datetime
ProbationPeriod bigint
JoiningDate datetime
Salary bigint

Table Name: HRMStblAssignment


Description: This data structure defines the assignment details.
Contents:
Name Type Null Key
AssignmentID bigint Not Null Primary Key
EmployeeID int
ProjectID int

Table Name: HRMStblCandidate.


Description: This data structure defines the candidate details.
Contents:
Name Type Null Key
CandidateID bigint Not Null Primary Key
CandidateCode nvarchar(50)
ResumeDate datetime
FirstName nvarchar(50)
LastName nvarchar(50)
DOB datetime
Sex nvarchar(50)
Address nvarchar(50)
PhoneNo nvarchar(50)
EmailID nvarchar(50)
Department bigint
Designation bigint
College nvarchar(50)
Stream nvarchar(50)
Percentage nvarchar(50)
Location nvarchar(50)
Nationality nvarchar(50)
Hobby nvarchar(50)
Skill nvarchar(50)
Experience nvarchar(50)

Table Name: HRMStblClient.


Description: This data structure defines the client details.
Contents:
Name Type Null Key
ClientID bigint Not Null Primary Key
ClientCode bigint
ClientName nvarchar(50)
EmailID nvarchar(50)
Country nvarchar(50)
AddedDate datetime

Table Name: HRMStblDailyTimeSheet.


Description: This data structure defines the daily time sheet.
Contents:
Name Type Null Key
DailyTimeSheetID bigint Not Null Primary Key
EmployeeID bigint
ProjectID bigint
EntryDate datetime
Hours int
Minutes int

Table Name: HRMStblDepartment.


Description: This data structure defines the department details.
Contents:
Name Type Null Key
DepartmentID bigint Not Null Primary Key
DepartmentCode nvarchar(50)
DepartmentName nvarchar(50)
CurrentStrength bigint
MaxNoEmp bigint
MinSal bigint
MaxSal bigint
MinAge bigint
MaxAge bigint

Table Name: HRMStblDesignation.


Description: This data structure defines the designation details.
Contents:
Name Type & Size Null Key
DesignationID bigint Not Null Primary Key
DesignationCode nvarchar(50)
DesignationName nvarchar(50)
Table Name: HRMStblEmployee.
Description: This data structure defines the employee details.
Contents:
Name Type & Size Null Key
EmployeeID bigint Not Null Primary Key
DepartmentID bigint
DesignationID bigint
EmployeeCode nvarchar(50)
FirstName nvarchar(50)
LastName nvarchar(50)
Sex nvarchar(50)
DateOfBirth datetime
UserName nvarchar(50)
Pwd nvarchar(50)
Address nvarchar(50)
EmailID nvarchar(50)
PhoneNo nvarchar(50)
EmployeeImage nvarchar(50)
Nationality nvarchar(50)

Table Name: HRMStblFreshers.


Description: This data structure defines the freshers' details.
Contents:
Name Type & Size Null Key
FreshersID bigint Not Null Primary Key
FreshersCode nvarchar(50)
FirstName nvarchar(50)
LastName nvarchar(50)
Sex nvarchar(50)
Address nvarchar(50)
PhoneNo nvarchar(50)
EmailID nvarchar(50)
JoiningDate datetime
RecruitedOn datetime
DateOfBirth datetime
College nvarchar(50)
Stream nvarchar(50)
Percentage nvarchar(50)
Location nvarchar(50)
GradeOnInterview nvarchar(50)
Nationality nvarchar(50)

Table Name: HRMStblLeave.


Description: This data structure defines the leave details.
Contents:
Name Type & Size Null Key
LeaveID bigint Not Null Primary Key
LeaveCode nvarchar(50)
LeaveName nvarchar(50)
LeaveNumber bigint

Table Name: HRMStblLeaveApplication.


Description: This data structure defines the leave application details.
Contents:
Name Type & Size Null Key
LeaveApplicationID bigint Not Null Primary Key
LeaveID bigint
StartDate datetime
EndDate datetime
Reason nvarchar(MAX)
EmployeeID bigint
NotApproved bit
Approved bit
applyDate datetime
approveDate datetime

Table Name: HRMStblPostInterview.


Description: This data structure defines the post interview details.
Contents:
Name Type & Size Null Key
PostInterviewID bigint Not Null Primary Key
CandidateID bigint
PostInterviewCode nvarchar(50)
InterviewLevel nvarchar(50)
Description nvarchar(50)
PostAppFor nvarchar(50)
OfferedSalary nvarchar(50)
Negotiated nvarchar(50)
SelectionStatus nvarchar(50)
InterviewerCode nvarchar(50)
InterviewerName nvarchar(50)
InterviewerDesignation nvarchar(50)

Table Name: HRMStblProject.


Description: This data structure defines the project details.
Contents:
Name Type & Size Null Key
ProjectID bigint Not Null Primary Key
ProjectCode nvarchar(50)
ProjectName nvarchar(50)
ClientID bigint
ProjectHour bigint
StartDate datetime
EndDate datetime
AddedDate datetime
Status nvarchar(50)

Normalization:
Normalization is the process of refining the data model built by the ER diagram. The normalization technique logically groups the data over a number of tables with minimum redundancy of data. The entities or tables resulting from normalization contain data items, with relationships being represented by replication of key data items.

The goal of relational database design is to generate a set of relation schemes that allow us to store information with minimum redundancy of data and allow us to retrieve information easily and efficiently. The approach followed is to design schemas that are in an appropriate form, one of the so-called normal forms.


The first step towards normalization is to convert the ER model into tables or relations. The next step is to examine the database for redundancy and, if necessary, change it to a non-redundant form. This non-redundant model is then converted into a database definition, which achieves the objective of the database design phase. We defined the database from the above ER model by normalizing it to third normal form. We will show the definitions of those database tables later, in the physical database design phase.
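For example, the fact that the Employee table carries only DepartmentID, rather than repeating department attributes, is exactly what third normal form demands. A sketch of the idea, again using Python's sqlite3 purely to illustrate the SQL Server schema above:

# Sketch of the 3NF decomposition: employee rows reference a department by
# key instead of repeating department attributes. sqlite3 is used only to
# illustrate the project's SQL Server schema; types are portable stand-ins.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE HRMStblDepartment (
        DepartmentID   INTEGER PRIMARY KEY,
        DepartmentName TEXT
    );
    CREATE TABLE HRMStblEmployee (
        EmployeeID   INTEGER PRIMARY KEY,
        FirstName    TEXT,
        DepartmentID INTEGER REFERENCES HRMStblDepartment(DepartmentID)
    );
""")
# DepartmentName lives in exactly one place; employees carry only the key.
conn.execute("INSERT INTO HRMStblDepartment VALUES (1, 'Engineering')")
conn.execute("INSERT INTO HRMStblEmployee VALUES (10, 'Asha', 1)")
print(conn.execute("""
    SELECT e.FirstName, d.DepartmentName
    FROM HRMStblEmployee e JOIN HRMStblDepartment d USING (DepartmentID)
""").fetchall())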

Process Design
Structured design is a data-flow-based methodology. The approach begins with a system specification that identifies inputs and outputs and describes the functional aspects of the system. The next step is the definition of the modules and their relationships to one another in a form called a structure chart, using a data dictionary, DFDs, and other structured tools.

Structured design partitions a program into small, independent modules. They are arranged in a hierarchy that approximates a model of the business area and is organized in a top-down manner.

Data Flow Diagrams:


A data flow diagram is a graphic tool used to describe and analyze the movement of data through a system, manual or automated, including the processes, stores of data, and delays in the system. Data flow diagrams are the central tools and the basis from which other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system.


Context Level DFD:

[Context-level DFD: the Administrator sends request data to the HRM System and receives report data; the Employee applies for leave and receives leave approval; the system exchanges data with a data store.]

Level Zero DFD:

[Level-zero DFD: process 1.0 (Administration) serves the Administrator and process 2.0 serves the Employee; both exchange data with a shared data store.]
Level one DFD for Administrator:

[Level-one DFD for Administrator: processes 1.1 Department, 1.2 Designation, 1.3 Employee Management, 1.4 Leave Management, 1.5 Project Management, and 1.6 Recruitment exchange department, designation, employee, fresher, leave, project, client, assignment, candidate, appointment, and interview data with the Administrator, using the HRMStblDepartment, HRMStblDesignation, HRMStblEmployee, HRMStblFreshers, HRMStblLeave, HRMStblProject, HRMStblClient, HRMStblAssignment, HRMStblCandidate, HRMStblAppointment, and HRMStblPostInterview stores.]

Level one DFD for Employee:

[Level-one DFD for Employee: process 2.1 Leave Management handles leave applications and approvals through HRMStblLeaveApplication; process 2.2 Time Management records time sheets and time status through HRMStblDailyTimeSheet.]
Modules:
Well-structured designs improve the maintainability of a system. A structured system is one that is developed from the top down and modular, that is, broken down into manageable components. In this project we modularized the system so that the modules have minimal effect on each other.

– Administrative Module deals with departments, employees, clients and almost all the purposes of the company, such as leave management, payroll management and even project management. It is the main part of this software.

– Profile Management deals with the creation of profiles for new employees along with updating of existing profiles. An employee can be searched on the basis of id and name.

– Employee Recruitment deals with registration of new applicants, short-listing of applicants for technical rounds and then HR rounds, and at last producing the final list of selected candidates.

– Leave Management deals with allocation of leaves to employees, considering various factors, along with leave application and leave approval.

– Time Management deals with the daily attendance of employees in the organization, with entry and updating of attendance.

– Payroll Management is our final module, which deals with employee agreements and calculation of salary.

Functional Flowchart:
A system consists of many different activities or processes, and one process may contain several individual processes. We often show these relations in process charts.

[Functional flowchart: the Human Resource Management System branches into Administrator and Employee functions. The Administrator branch covers Master Management (Department, Designation, Employee), Employee Management, Leave Management, Project Management (Project, Project Assignment), and Recruitment (Candidates, Post Interview, Appointment); the Employee branch covers Time Management and Leave Management (Leave Status, Leave Application).]
Use–Case Diagram:
A use-case diagram shows how a normal user will interact with the software. The diagram for this software is given below:


Search Reset Add new Search Reset Add new

Department Master Project Assignment


Search Search
Reset

Add new
Designation Master Project Status
Reset
Fresher

Employee Master Administrator Leave Master


Search Reset

Search
Employee Search Add new
Reset
Add new
Search
Reset

Search Client Master Leave Approval Reset

Reset
Add new
Add new

Project Master Log Out

Search Reset Add new


Implementations
How the Project Is Implemented
A crucial phase in the system life cycle is the successful implementation of the new system design. The change to a web-based application took place in a phased manner. First the system was used to enter, validate and store the different types of data in the database used by the system. The static data were also entered in the directory files. The comparison testified to the reliability, speed and accuracy of the web-based system.

The types of implementation encountered are:

1. Implementation of a web-based application to replace a windows-based application. The problems encountered are converting files, training users, creating accurate files, and verifying printouts for integrity.

2. Implementation of a modified application to replace an existing one using the same computer. This type of conversion is relatively easy to handle, provided there are no major changes in the files.
The proposed system is an implementation from a computer-based system to a web-based system; more clearly, the user can access the system on the web efficiently, while the previous system was windows-based. Conversion means changing from one system to another. The objective is to put the tested system into operation while holding costs, risks, and personnel irritation to a minimum. Conversion should be exciting, because it is the last step before the candidate system begins to show results. Unfortunately, the results of conversion have been chaotic and traumatic for many firms. Unforeseen difficulties crop up as the system breaks down, data files are damaged, and tempers grow short. The training package is frequently not complete and people are left trying to figure out what to do. Much of this stems from poor planning. Let us examine the steps that precede conversion.
 Creating Test files.
 Training the operating staff.
 Installing terminals and hardware.
Creating Test files:
The best method for gaining control of the conversion is to use well-planned test files for testing all new programs. Before live data are processed, test files are created on the old system, copied over to the new system, and used for the initial test of each program. The test file offers the following:
 Predictable results
 Previously determined output results to check with a sampling of different types of
records.
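A minimal sketch of this conversion check (the program under test and the expected results are hypothetical stand-ins):

# Illustrative sketch of conversion testing: the same test records are fed to
# the new system and compared with output previously determined on the old
# system. process_record is a hypothetical stand-in for a new-system program.

def process_record(record: dict) -> dict:
    """Hypothetical new-system program under test."""
    return {"EmployeeID": record["EmployeeID"], "Hours": record["Hours"]}

TEST_FILE = [{"EmployeeID": 10, "Hours": 8}]   # created on the old system
EXPECTED  = [{"EmployeeID": 10, "Hours": 8}]   # old system's known output

actual = [process_record(r) for r in TEST_FILE]
assert actual == EXPECTED, f"conversion mismatch: {actual}"
print("test file produced the predictable results")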

User Training
It focuses on two factors:
 User capabilities
 Nature of the system being installed
Users may range from naive to sophisticated. Naive users fear exposure to a new system. Therefore, formal user training is required, with training aids such as:
 User manual
 User-friendly screen
 Data dictionary
 Proper flow of system
Post Implementation
Operational systems are quickly taken for granted. Every system requires periodic evaluation after implementation. A post-implementation review measures the system's performance against predefined requirements. Unlike system testing, which determines where the system fails so that necessary adjustments can be made, a post-implementation review determines how well the system continues to meet performance specifications. It comes after the fact, once the design and conversion are complete. It also provides information to determine whether major redesign is necessary. A post-implementation review is an evaluation of a system in terms of the extent to which the system accomplishes stated objectives and whether actual project costs exceed initial estimates. It is usually a review of major problems that need correcting and those that surfaced during the implementation phase. The primary responsibility for initiating the review lies with the user.

Testing
Introduction
Software testing is the process of executing a program or system with the intent of finding errors. It also involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software is not unlike other physical processes where inputs are received and outputs are produced; where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways, and detecting all of the different failure modes for software is generally infeasible. Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until upgrades or obsolescence. So once the software is shipped, the design defects, or bugs, will be buried in and remain latent until activation. Software bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable, and humans have only limited ability to manage complexity.

It is also true that for any complex system, design defects can never be completely ruled out. Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and other digital systems are not continuous, testing boundary values alone is not sufficient to guarantee correctness. All the possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program to add just two 32-bit integer inputs (yielding 2^64 distinct test cases) would take hundreds of millions of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond this example. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects and human interactions are all possible input parameters under consideration.

In an iterative software development approach, testing activities are done in all phases of the life cycle; however, the emphasis on testing activities varies in different phases. This procedure explains the focus of testing in the inception, elaboration, construction and transition phases. In the inception phase most of the requirements capturing is done and the test plan is developed. In the elaboration phase most of the design is developed, and test cases are written. The construction phase mainly focuses on development of components and units, and unit testing is the focus in this phase. The transition phase is about deploying the software in the user community, and most of the system testing and acceptance testing is done in this phase.

Purpose
The main purposes of this procedure are:
 To carry out comprehensive testing of the system/product and its individual components in
order to ensure that the developed system/product conforms to the user requirements/design.
 To verify the proper integration of all components of the software.
 To verify that all requirements have been correctly implemented.
 To identify and ensure defects are addressed prior to the deployment of the software.

Test Planning

The initial test plan addresses system test planning; over the elaboration, construction and transition phases this plan is updated to cater to the other testing requirements of those phases, such as unit and integration testing.

The test plan must contain the following:

 Scope of testing
 Methodology to be used for testing
 Types of tests to be carried out
 Resource & system requirements
 A tentative Test Schedule
 Identification of various forms to be used to record test cases and test results

Testing is usually performed for the following purposes:
To improve quality

Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects. The imperfection of human nature makes it almost impossible to make a moderately complex program correct the first time. Finding the problems and getting them fixed is the purpose of debugging in the programming phase.

For Verification & Validation (V&V)

As the topic Verification and Validation indicates, another important purpose of testing is verification and validation (V&V). Testing can serve as a metric and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations or it does not. We can also compare the quality of different products under the same specification, based on results from the same tests.
Testing Methods Used For Project
There is a plethora of testing methods and testing techniques, serving multiple purposes in different life-cycle phases. Classified by purpose, software testing can be divided into correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing can be classified into requirements-phase testing, design-phase testing, program-phase testing, evaluation of test results, installation-phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as unit testing, component testing, integration testing, and system testing.
Correctness testing
Correctness is the minimum requirement of software and the essential purpose of testing. Correctness testing needs some type of oracle to tell the right behavior from the wrong one. The tester may or may not know the inside details of the software module under test, e.g. control flow or data flow; therefore, either a white-box or a black-box point of view can be taken in testing software. We must note that the black-box and white-box ideas are not limited to correctness testing only.
Black-box testing
The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. It is also termed data-driven, input/output-driven or requirements-based testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing, a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification; no implementation details of the code are considered.

It is obvious that the more we have covered in the input space, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally, we would be tempted to test the input space exhaustively. But as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want; they usually can tell whether a prototype is, or is not, what they want only after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software.

The research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of the input space. Partitioning is one of the common techniques. If we have partitioned the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing partitions the input domain into regions, and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting representative values in each domain. Boundary values are of special interest: experience shows that test cases exploring boundary conditions have a higher payoff than test cases that do not. Boundary value analysis requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered. Good partitioning requires knowledge of the software structure. A good testing plan will not only contain black-box testing, but also white-box approaches, and combinations of the two.
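As a concrete sketch of partitioning and boundary value analysis (the percentage field and its 0 to 100 range are assumptions for illustration, not a requirement taken from this project):

# Equivalence partitioning plus boundary value analysis for a hypothetical
# validator whose specification says a percentage must lie in [0, 100].

def is_valid_percentage(value: float) -> bool:
    return 0 <= value <= 100

# One representative per partition, plus the boundary values themselves.
cases = {
    -5:  False,   # partition: below the valid range
    0:   True,    # lower boundary
    50:  True,    # partition: inside the valid range
    100: True,    # upper boundary
    101: False,   # partition: above the valid range
}
for value, expected in cases.items():
    assert is_valid_percentage(value) == expected, value
print("all black-box cases pass")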
White-box testing
Contrary to black-box testing, in white-box testing the software is viewed as a white box, or glass box, because the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as the programming language, logic, and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing or design-based testing.

There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple-condition coverage). Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software onto a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code, code that is of no use or never gets executed, which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use.

The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified into black-box testing or white-box testing. This is also true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad: it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.

We may be reluctant to consider random testing a testing technique. The test case selection is simple and straightforward: cases are randomly chosen. Studies indicate that random testing is more cost-effective for many programs. Some very subtle errors can be discovered with low cost, and it is also not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
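A small sketch of the branch-coverage idea on a hypothetical function: three inputs are enough to execute every branch at least once.

# White-box sketch: test data chosen so every branch of a small (hypothetical)
# function executes at least once (branch coverage).

def classify_hours(hours: int) -> str:
    if hours < 0:
        raise ValueError("negative hours")   # exceptional branch
    if hours > 8:                            # branch: true
        return "overtime"
    return "regular"                         # branch: false

assert classify_hours(9) == "overtime"
assert classify_hours(8) == "regular"
try:
    classify_hours(-1)
except ValueError:
    pass                                     # exceptional branch covered
print("every branch executed at least once")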
Performance testing
Not all software systems have explicit performance specifications, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. "Performance bugs" sometimes refer to design problems in software that cause the system performance to degrade. Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes resource usage, throughput, stimulus-response time, and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth, CPU cycles, disk space, disk access operations, and memory usage. The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, and so on. The typical method of performance testing is using a benchmark: a program, workload or trace designed to be representative of the typical system usage.
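A minimal benchmarking sketch using Python's standard timeit module; the workload is an illustrative stand-in, not a measurement of this system:

# Performance-testing sketch: timeit (Python standard library) as a minimal
# benchmark of a representative operation.
import timeit

def representative_workload():
    # Stand-in for a typical system operation, e.g. assembling a report page.
    return sorted(range(1000), reverse=True)

elapsed = timeit.timeit(representative_workload, number=1000)
print(f"1000 runs took {elapsed:.3f}s ({elapsed / 1000 * 1e6:.1f} us per run)")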
Reliability testing
Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method to measure software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be used to analyze the data to estimate the present reliability and predict future reliability. Based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it. The risk of using software can also be assessed based on reliability information. Some researchers advocate that the primary goal of testing should be to measure the dependability of the tested software. There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways.

Robustness testing and stress testing are variants of reliability testing based on this simple criterion. The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. Robustness testing differs from correctness testing in the sense that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, so robustness testing can be made more portable and scalable than correctness testing. This research has drawn more and more interest recently, most of it using commercial operating systems as targets. Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised at or beyond the specified limits. Typical stresses include resource exhaustion, bursts of activity, and sustained high loads.
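One common estimation model (chosen here for illustration; the text does not name a specific model) is the exponential reliability model. A sketch with hypothetical failure data:

# Reliability-estimation sketch: fit the simplest exponential model to failure
# data gathered under an operational profile. The counts are hypothetical.
import math

failures = 4              # failures observed during testing
exposure_hours = 2000.0   # total operating time observed

failure_rate = failures / exposure_hours      # lambda, failures per hour

def reliability(t_hours: float) -> float:
    """Probability of failure-free operation for t_hours (exponential model)."""
    return math.exp(-failure_rate * t_hours)

print(f"R(24h) = {reliability(24):.3f}")      # about 0.953 for the data above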
Security testing
Software quality, reliability and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe. Many critical software applications and services have integrated security measures against malicious attacks.
Maintenance

Definition

Maintenance is a very important task and is often poorly managed. The time spent and the effort required in maintaining software and keeping it operational take about 40% to 70% of the total cost of the life cycle.

"Software maintenance is the activity that includes error corrections, enhancements of capabilities, deletion of obsolete capabilities and optimization." Basically, any work done to change the software after it is in operation is considered to be maintenance. Its purpose is to preserve the value of the software.

Categories

Corrective Maintenance

It means modifications made to the software to correct the defects. Defects can result from design
errors, logic errors, coding errors, data processing errors and system performance errors.

Adaptive Maintenance

It includes modifying the software to match changes in the ever-changing environment. Environment refers to the totality of all conditions and influences which act from outside upon the software, e.g. business rules, government policies, work patterns and software/hardware operating platforms.

Perfective Maintenance

It means improving processing efficiency or performance, or restructuring the software to improve changeability.

Process

The process of maintenance for a given piece of software can be divided into four stages as follows:

 Program understanding: It consists of analyzing the program in order to understand it. The ease of understanding the program is primarily affected by the complexity and documentation of the program.
 Generating a particular maintenance proposal: The ease of generating the maintenance proposal is primarily affected by the extensibility of the program.
 Accounting for the ripple effect: If any change is made to any part of the system, it may affect other parts as well. Thus, there is a kind of ripple effect from the location of the modification to the other parts of the software. The primary feature affecting the ripple effect is stability.
 Modified program testing: The modified program must be retested to check that the enhancement works and that reliability is preserved.

Models

The models available for the maintenance of software are:
 Quick-Fix Model
 Iterative Enhancement Model
 Reuse Oriented Model
 Boehm’s Model

For our software, we are going to use Boehm's Model.

Boehm’s Model

This model is based on a closed loop of activities and involves economic principles, as these help in improving maintenance productivity. The basic motive in this model is that "the whole process of maintenance is driven or initiated by decision making done by management, who study the objectives against the constraints present."

[Boehm's model: management decisions produce proposed changes; approved changes go to change implementation, yielding new versions of the software; results obtained from the software in use are evaluated and feed back into management decisions.]
COST INCURRED OR COST MODEL USED

COCOMO was first published in 1981 in Barry J. Boehm's book Software Engineering Economics as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace, where Barry Boehm was Director of Software Research and Technology in 1981. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of software development, which was the prevalent software development process in 1981. References to this model typically call it COCOMO 81.

In 1997 COCOMO II was developed, and it was finally published in 2001 in the book Software Cost Estimation with COCOMO II. COCOMO II is the successor of COCOMO 81 and is better suited for estimating modern software development projects. It provides more support for modern software development processes and an updated project database. The need for the new model came as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components. This section refers to COCOMO 81.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited due to its lack of factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases.

Basic COCOMO

Basic COCOMO is a static, single-valued model that computes software development effort (and cost) as a function of program size expressed in estimated lines of code. COCOMO applies to three classes of software projects:

 Organic projects - are relatively small, simple software projects in which small teams with
good application experience work to a set of less than rigid requirements.
 Semi-detached projects - are intermediate (in size and complexity) software projects in
which teams with mixed experience levels must meet a mix of rigid and less than rigid
requirements.
 Embedded projects - are software projects that must be developed within a set of tight
hardware, software, and operational constraints.

The basic COCOMO equations take the form

E = a_b (KLOC)^(b_b)
D = c_b (E)^(d_b)
P = E / D

where E is the effort applied in person-months, D is the development time in chronological months, KLOC is the estimated number of delivered lines of code for the project (expressed in thousands), and P is the number of people required. The coefficients a_b, b_b, c_b and d_b are given in the following table.
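The standard Basic COCOMO 81 coefficients are:

Software project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

As a worked illustration (the 30 KLOC size is a hypothetical figure, not an estimate for this project), a short Python sketch applying the organic-mode coefficients:

# Basic COCOMO, organic mode: effort, duration, and staffing from size.
A_B, B_B, C_B, D_B = 2.4, 1.05, 2.5, 0.38   # organic-mode coefficients

kloc = 30.0                                 # hypothetical project size
effort = A_B * kloc ** B_B                  # E, person-months (about 85.4)
duration = C_B * effort ** D_B              # D, months (about 13.5)
people = effort / duration                  # P (about 6.3)

print(f"E = {effort:.1f} PM, D = {duration:.1f} months, P = {people:.1f}")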
