CURRICULUM VITAE
KHALSA COLLEGE
01-SEP-2008 TO 28-FEB-2009 Lecturer
AMRITSAR
Publication
I have published Solved Question Papers and Practical Copy on my subject namely
Seminar Attended
Certificate of Originality
This is to certify that the project entitled Web site of Shri Amritsar, submitted to Indira
Gandhi National Open University in partial fulfillment of the requirement for the award of
the degree of Master of Computer Applications (MCA), is an authentic and original work
carried out by Mrs. Shiwani Sharma, with enrolment no. 141261187, under my guidance.
The matter embodied in this project is genuine work done by the student and has not been
submitted either to this University or to any other University/Institute for the fulfillment of
the requirements of any course of study.
…………………………….. ……………………………
Signature of the Student: Signature of the guide
Date……………………. Date……………………
Name and Address of the Student Name, Designation and Address
of the Guide
…………………………………… ………………………………….
…………………………………… ………………………………….
…………………………………… ………………………………….
………………………………….... …………………………………
Enrolment No……………………
Introduction to Amritsar Website
We live in the century of information and technology, where at every step of life we need
information to do anything. In the real world there are many ways to get the required
information and data: we can get information from newspapers, books and television, and
data from government and private enquiry offices and documents, but all of these ways
require a great deal of time, money and effort. Our website will provide the required
information to tourists.
The AMRITSAR Website is a search website. It is basically a guide site and an attractive
destination for tourists. The website is divided into different modules that give a glance at
the “HOLY CITY”.
It covers information related to different colleges, hospitals, hotels, malls, temples and
schools. The website will provide all the information that a user may require. An important
key feature of our website is that it covers all the developments in the city, like the metro,
new malls and guest rooms.
As not everybody can afford expensive hotel rooms, we provide a list of all the guest rooms
available in Amritsar, through which tourists can easily book their rooms. I have also added
a facility for tourists to book the guest rooms.
Amritsar, literally a pool of nectar, derives its name from “AMRIT SAROVAR”, the holy tank
that surrounds the fabulous Golden Temple. First-time visitors to Amritsar could be forgiven
for the impression that Amritsar is like any other small town in northern India.
The project is divided into several modules, and there are two types of users: administrators
and guests. The various modules of this website are described below:
Admin section: the administrator is provided with a username and password with which
he/she can access the system. The administrator has the right to maintain the database and
to handle user deletion. The admin section includes:
login
edit
view
change password
logout
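The admin flow above (login, change password, logout) can be sketched as follows. This is a minimal illustration only; the credential table, function names, and hashing scheme are assumptions, not the project's actual code.

```python
import hashlib

# Minimal sketch of the admin login / change-password flow.
# The credential table and names are illustrative assumptions.
ADMIN_USERS = {"admin": hashlib.sha256(b"secret").hexdigest()}

def login(username, password):
    """Return True when the username/password pair matches a stored admin."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return ADMIN_USERS.get(username) == digest

def change_password(username, old_password, new_password):
    """Change the password only after re-authenticating the admin."""
    if not login(username, old_password):
        return False
    ADMIN_USERS[username] = hashlib.sha256(new_password.encode()).hexdigest()
    return True
```

A real site would keep these credentials in the database the administrator maintains, rather than in memory.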
User modules: as users are the main visitors of the site, the following facilities are available
through this module. From this module a user can gather information related to various
sectors like health, education, historical places, tourism, etc.
The user modules include:
Home
Cultures
List of religious places
Educational institutes
Hotels
List of malls
List of restaurant
List of hospitals
Cab Booking
Hotel booking
Guide booking
Objectives
The objective of this website is to present the details of the holy city of Amritsar: various places to
visit in Amritsar, lists of colleges, hotels and hospitals, weather in the city, important persons of
Amritsar and their contact numbers, various malls, and historical places. I would like to use databases
to facilitate smooth adding, deleting and modification of the important details at the admin level,
such as the list of hotels and their contact numbers, season-wise room rents, and activities of the city,
on the part of both the operators and the administrators involved.
The various objectives of the website can be presented from the views of the different people involved
with it. The main people involved in the website are:
1. Administrator
2. Public
Administrator’s view: the administrator needs to know and control the following information at all times:
4) About the city.
System Analysis
System analysis means the breaking down of an entire system by studying the various operations
performed and their relationships within the system; it is an examination of a business activity with
a view to identifying problem areas and recommending alternative solutions.
EXISTING SYSTEM
In the existing system, several activities on the website of Amritsar were done manually, like
submitting complaints and applying for birth and death certificates. It was not fully automated,
so there were many chances for common errors to occur, which led to losses for the Municipal
Corporation in terms of money. So, after conducting the feasibility study, I decided to computerize
the manual working of the Amritsar Website management system and to upgrade the existing
automated systems.
DRAWBACKS OF EXISTING SYSTEM
Lack of security: data is not secured, as any member can access it, and in the manual system,
if the record books get damaged the records may not be recovered.
Common errors: if data is entered wrongly, there is a chance for the error to be
repeated again in future.
Lack of in-place updating: to update anything in a record, the whole record has to be
entered again, so there is no easy updating facility available.
Fast access to the database.
Fewer errors.
More storage capacity.
User-friendly interface.
Better and more efficient service to members.
The systems development life cycle (SDLC) describes a set of steps that produces a new
computer information system. The SDLC is a problem-solving process. Each step in the
process delineates a number of activities, such as entering customer records, details about
the sale and purchase of cars and spare parts, and employee payroll. Performing these
activities in the order prescribed by the SDLC will bring about a solution to the business
situation. The SDLC process consists of the following phases:
1. Preliminary investigation: the problem is defined and investigated.
2. Requirements definition: the specifics of the current system as well as the
requirements of the proposed new system are studied and defined.
3. Systems design: a general design is developed with the purpose of planning for the
construction of the new system.
4. Systems development: the new system is created.
5. System installation: the current operation is converted to run on the new system.
6. Systems evaluation and monitoring: the newly operational system is evaluated and
monitored for the purpose of enhancing its performance and adding value to its
functions.
Looping back from a later phase to an earlier one may occur if the need arises.
Each phase has a distinct set of unique development activities. Some of these activities may
span more than one phase. The management activity tends to be similar among all phases.
The SDLC is not standardized and may be unique to a given organization. In other words, the
names and number of phases may differ from one SDLC to the next. However, the SDLC
discussed here is, to a large extent, representative of what is typically adopted by
organizations.
At each phase certain activities are performed; the results of these activities are documented
in a report identified with that phase. Management reviews the results of the phase and
determines if the project is to proceed to the next phase.
The first two phases of the SDLC process constitute the systems-analysis function of a
business situation. The following discussion will concentrate on phase one (Preliminary
Investigation) and phase two (Requirements Definition) of the outlined SDLC process.
Identification of Need
The major problem with the Website of Amritsar City was that it was not a fully
automated system: there was no option to keep previous records, and no option for cab
booking or guide booking. By creating extra forms with a database connection, we can keep
a record of all the visitors who visit Amritsar City, known as Shri Guru Ram Das Nagri. So
we can develop a website rich with record management features. Another need in the website
of Amritsar city was that there was no option to change the desired information; in case the
public does not like a record and wants to change it, we should have an option to change it
and give excellent service to the public in every aspect. Solving these problems helps to
manage the website more easily.
PRELIMINARY INVESTIGATION
The first phase of the systems development life cycle is preliminary investigation. Due to
limited resources an organization can undertake only those projects that are critical to its
mission, goals, and objectives. Therefore, the goal of preliminary investigation is simply to
identify and select a project for development from among all the projects that are under
consideration. Organizations may differ in how they identify and select projects for
development. Some organizations have a formal planning process that is carried out by a
steering committee or a task force made up of senior managers. Such a committee or task
force identifies and assesses possible computer information systems projects that the
organization should consider for development. Other organizations operate in an ad hoc
fashion to identify and select potential projects. Regardless of the method used and after all
potential projects have been identified, only those projects with the greatest promise for the
well-being of the organization, given available resources, are selected for development.
The objective of the systems-investigation phase is to answer the following questions: What
is the business problem? Is it a problem or an opportunity? What are the major causes of the
problem? Can the problem be solved by improving the current information system? Is a new
information system needed? Is this a feasible information system solution to this problem?
The preliminary-investigation phase sets the stage for gathering information about the current
problem and the existing information system. This information is then used in studying the
feasibility of possible information systems solutions.
It is important to note that the source of the project has a great deal to do with its scope and
content. For example, a project that is proposed by top management usually has a broad
strategic focus. A steering committee proposal might have a focus that covers a cross-
function of the organization. Projects advanced by an individual, a group of individuals, or a
department may have a narrower focus.
A variety of criteria can be used within an organization for classifying and ranking potential
projects. For planning purposes, the systems analyst—with the assistance of the stakeholders
of the proposed project—collects information about the project. This information has a broad
range and focuses on understanding the project size, costs, and potential benefits. This
information is then analyzed and summarized in a document that is then used in conjunction
with documents about other projects in order to review and compare all possible projects.
Each of these possible projects is assessed using multiple criteria to determine feasibility.
FEASIBILITY STUDY
A feasibility study is conducted to select the best system that meets performance
requirements. This entails an identification, description and evaluation of the candidate
systems, and the selection of the best system for the job.
Before the commencement of the development of any application and the carrying out of its
requirement analysis, there are some points which are to be considered for its feasibility
analysis; the important points are:
Economic Feasibility:
It is important that the system to be built in place of the existing system, or the modifications
made to it, must not incur costs so high that its feasibility becomes a distant dream. It
should be built keeping in view that it should cover most of the user needs with minimum
expenditure. If the application that I am going to develop requires a lot of financial support,
then it is possible that I may have to drop the project.
Technical Feasibility:
When I develop any project, it should be made clear what type of software and hardware
support will be required, and whether it will be available. It can happen that at some moment
during the development of my project I may require some sort of software which is not
available to us, and then one gets into a dilemma. So before starting any project I must check
its technical feasibility.
Operational Feasibility:
By operationally feasible I want to make clear whether, when the requirements of the user
are put into my application, it gives us accurate results and meets the standards that have
been specified in the requirements specifications.
REQUIREMENTS DEFINITION
This phase is an in-depth analysis of the stakeholders' information needs. This leads to
defining the requirements of the computer information system. These requirements are then
incorporated into the design phase. Many of the activities performed in the requirements
definition phase are an extension of those used in the preliminary investigation phase. The
main goal of the analyst is to identify what should be done, not how to do it. The following is
a discussion of the activities involved in requirements definition.
INFORMATION NEED.
Analysis of the information needs of the stakeholders is an important first step in determining
the requirements of the new system. It is essential that the analyst understands the
environment in which the new system will operate. Understanding the environment means
knowing enough about the management of the organization, its structure, its people, its
business, and the current information systems to ensure that the new system will be
appropriate.
output activities that form the user's interface. In addition, the volume and timing of such
activities may be documented.
Walk-through starts with a description of the project. From this point, the analysts delineate a
set of well-defined goals, objectives, and benefits of the computer information system.
Following that, the budgets and staffing requirements are articulated and the plans are shared
with the committee. Specific, planned tasks are compared to actual accomplishments, and
deviations, if any, are noted and accounted for. The plans for asset protection and business
control are reviewed with the committee members. Finally, the analysts seek the committee's
approval of the objectives, plans, timetable, and budget for the next phase—systems design.
Project Planning:
Project planning is the most important part of website management; it relates to the use of
schedules such as Gantt charts to plan and subsequently report progress within the project
environment. Initially, the project scope is defined and the appropriate methods for
completing the project are determined. Following this step, the durations for the various tasks
necessary to complete the work are listed and grouped into a work breakdown structure.
Project planning is often used to organize different areas of a project, including project plans,
workloads and the management of teams and individuals. The logical dependencies between
tasks are defined using an activity network diagram that enables identification of the critical
path. Project planning is inherently uncertain as it must be done before the project is actually
started. Therefore the duration of the tasks is often estimated through a weighted average of
optimistic, normal, and pessimistic cases. The critical chain method adds "buffers" in the
planning to anticipate potential delays in project execution. Float or slack time in the
schedule can be calculated using project management software. Then the necessary resources
can be estimated and costs for each activity can be allocated to each resource, giving the total
project cost. At this stage, the project schedule may be optimized to achieve the appropriate
balance between resource usage and project duration to comply with the project objectives.
Once established and agreed, the project schedule becomes what is known as the baseline
schedule. Progress will be measured against the baseline schedule throughout the life of the
project. Analyzing progress compared to the baseline schedule is known as earned value
management.
The inputs of the project planning phase include the project charter and the concept
proposal. The outputs of the project planning phase include the project requirements, the
project schedule, and the project management plan.
Project planning can be done manually. However, when managing several projects, it is
usually easier and faster to use project management software.
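The weighted average of optimistic, normal, and pessimistic cases mentioned above is commonly computed with the three-point PERT formula. A small sketch; the task names and day values are illustrative assumptions, not this project's actual schedule:

```python
# Three-point (PERT) duration estimate: a weighted average of the
# optimistic, normal (most likely) and pessimistic durations of a task.
def pert_estimate(optimistic, normal, pessimistic):
    """Classic PERT weighted average: (O + 4M + P) / 6."""
    return (optimistic + 4 * normal + pessimistic) / 6

# Illustrative tasks with (optimistic, normal, pessimistic) days.
tasks = {
    "Requirements gathering": (20, 25, 36),
    "Design": (15, 25, 29),
    "Coding": (20, 25, 42),
}

for name, (o, m, p) in tasks.items():
    print(f"{name}: expected {pert_estimate(o, m, p):.1f} days")
```

Summing such estimates along each dependency chain is what identifies the critical path.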
REQUIREMENT SPECIFICATION
As we have decided to develop a new system, it is now time to determine the requirements
for the new system. As the user is the most important part of any system, it is necessary to
find the users' requirements, so as to develop a user-friendly system rather than a
developer-friendly one.
The outputs required by the user that must be included in the proposed system are as
follows:
1. The work for the particular user must be personalized.
2. Details of the various departments.
3. The public must be provided with an easy way to switch from one application to
another at a time.
Interviews
Record Reviews
Interviews: user interviews were conducted to retrieve qualitative information. These
interviews, which were unstructured, provided an opportunity to gather information from
respondents who had been involved in the process for a long time.
These interviews provided information such as:
Activities involved in the process of reservation processing, involving fares, other
services, flight information, flight schedule information and the airbus; various reports
generated using the existing system.
Type and frequency of forms and reports.
Limitations of the existing system.
Record Reviews: to gather details about the Airlines Reservation, many kinds of records and
reports were reviewed. This study covered:
Standard Operating Procedure.
Forms and reports generated by existing manual system.
Document flow (Input / Output) of the system.
PLANNING AND SCHEDULING
PERT CHART
PERT stands for Program Evaluation and Review Technique. A PERT chart is a network of
boxes and arrows. The boxes in the PERT chart can be decorated with starting and ending
dates for activities.
[PERT chart figure. Tasks shown: Requirements Gathering, SRS Creation, Low Level Design, Procedural Design, Coding; durations shown: days 1-25, 25-70, 35-60, 50-85, 50-95, 75-100.]
Gantt Chart
Pert Chart
[Flow chart figure. Steps shown: Search hotels → View hotels → Select hotel → Book hotel → View details → Add details → Confirmation → Booking completed.]
System Requirements Specification (SRS):
The SRS is the starting point of the software development activity. As systems grew more
complex, it became evident that the goals of the entire system could not be easily
comprehended. Hence the need for the requirements analysis phase arose. Specifying
requirements necessarily involves capturing what some people have in their minds. The SRS
is the means of translating the ideas in the minds of customers (the input) into a formal
document (the output of the SRS phase).
[Diagram of SRS components: Functional Requirement, External Interfaces.]
Functional Requirements
Functional requirements specify which outputs should be produced from the given inputs.
They describe the relationship between the inputs and outputs of the system. For each
functional requirement, a detailed description of all the data inputs, their units of measure,
and the range of valid input must be specified. All the operations to be performed on the
input data to obtain the output should be specified. This includes specifying the validity
checks on the input and output data, the parameters affected by the operations, and the
equations or other logical operations that must be used to transform the inputs into
corresponding outputs. The functional requirements must clearly state what the system
should do in abnormal situations, like invalid input or an error during computation.
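For example, a functional requirement of this kind names each input, its valid range, and the behaviour on invalid input. A hedged sketch; the field names and ranges are illustrative assumptions, not the project's actual rules:

```python
# Sketch of input validation for a functional requirement:
# each input has a stated valid range, and invalid input is rejected explicitly.
def validate_booking(guests, nights):
    """Validate a hotel-booking request; raise ValueError on invalid input."""
    if not 1 <= guests <= 10:
        raise ValueError("guests must be between 1 and 10")
    if not 1 <= nights <= 30:
        raise ValueError("nights must be between 1 and 30")
    return {"guests": guests, "nights": nights}
```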
Performance Requirements
This part of the SRS specifies the performance constraints on the software system. All the
requirements relating to the performance characteristics of the system must be clearly specified.
There are two types of performance requirements.
Static Requirement:
Static Requirements are those that do not impose constraints on the execution characteristics of
the system.
Dynamic Requirement:
Dynamic requirements specify constraints on the execution behavior of the system.
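A dynamic requirement could, for instance, bound the response time of an operation. A minimal sketch; the two-second budget is an assumed figure, not one stated in this project:

```python
import time

# Checking a dynamic performance requirement: a constraint on execution
# behavior, e.g. "an operation must complete within 2 seconds".
def within_budget(operation, budget_seconds=2.0):
    """Run the operation and report whether it met the time budget."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result, elapsed <= budget_seconds

result, met = within_budget(lambda: sum(range(1000)))
```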
External Interface Requirements
All the possible interactions between the people, hardware and other software should be clearly
specified. For hardware interface requirements, the SRS should specify the logical characteristics
of each interface between the software product and hardware component.
A paradigm is a model of a process. It defines the flow of activities that occur as the process
progresses from start to end. In the context of software engineering, a paradigm provides a framework
that identifies major activities (sometimes called phases), detailed work tasks, milestones and
deliverables. A number of different paradigms can be applied during a software project. These will be
discussed in this ESE module.
Questions:
1. Does a written project plan (it can be a few pages or an in-depth document) exist before the
project begins?
2. Is there a predictable set of tasks (other than coding) that will be performed on every project?
3. Do practitioners (e.g., programmers, engineers) apply a predictable set of methods as the software
project proceeds?
4. Do your customers understand your approach to software development or maintenance and their
role within your approach?
5. Does everyone have a clear understanding of the milestones that represent progress on a software
project?
6. Do practitioners understand what deliverables to produce and what the content of these
deliverables should be?
If you answered yes to all of the above questions, you probably have a defined software process and
have established a software engineering paradigm. Describe the major activities below and then
compare them to the paradigms discussed in the video portion of this module.
Management and technical tasks are defined for each of the task regions. To accommodate the need
for an adaptive process (e.g., one that adapts itself to the characteristics of the project at hand), the
evolutionary model should define a number of task sets. Each task set contains software engineering
tasks, milestones, and deliverables that have been chosen to meet the needs of different types of
projects.
Each task set must provide enough discipline to achieve high software quality. But at the same time, it
must not burden the project team with unnecessary work. Although any number of task sets can be
suggested, the following are typical:
Casual. The process model does not apply to the project, but selected tasks may be applied informally
and basic principles of software engineering must still be followed.
Disciplined. The process model will be applied for the project with a degree of discipline that will
ensure high quality and good application maintainability.
Rigorous. All process model tasks, documents, and milestones will be applied to the project. High
quality, good documentation, and long maintainability are paramount.
Quick reaction. The process model will be applied for the project, but because of extremely tight
time constraints, only those tasks essential to maintaining good quality will be applied. When
necessary, "back-filling" (e.g., developing a complete set of documentation) will be accomplished
after the application is delivered to the customer.
DATA FLOW DIAGRAM
Symbols:
DFDs are on separate pages.
State Diagram
The state diagram in the Unified Modeling Language is essentially a Harel state chart with
standardized notation which can describe many systems, from computer programs to business
processes. In UML 2 the name has been changed to State Machine Diagram. The following
are the basic notational elements that can be used to make up a diagram:
[State diagram. States and transitions shown: Initial Stage → Add Hotels & Guide Info → Book Hotels → Proceed → Book Guide → Payment Info → Pay → Paid → Final State, with Revise and Quit transitions.]
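The states above can be sketched as a small transition table; the state and event names below are reconstructed assumptions, not the exact labels of the project's diagram:

```python
# Transition table for the booking state machine suggested by the diagram.
# (state, event) -> next state; names are illustrative.
TRANSITIONS = {
    ("initial", "add_info"): "info_added",
    ("info_added", "book_hotel"): "booked",
    ("info_added", "revise"): "initial",
    ("booked", "pay"): "paid",
    ("booked", "quit"): "final",
    ("paid", "finish"): "final",
}

def step(state, event):
    """Return the next state, raising KeyError on an illegal transition."""
    return TRANSITIONS[(state, event)]
```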
Sequence Diagram
A Sequence diagram is an interaction diagram that shows how processes operate with one
another and in what order. It is a construct of a Message Sequence Chart. A sequence
diagram shows object interactions arranged in time sequence. It depicts the objects and
classes involved in the scenario and the sequence of messages exchanged between the objects
needed to carry out the functionality of the scenario. Sequence diagrams are typically
associated with use case realizations in the Logical View of the system under development.
Sequence diagrams are sometimes called event diagrams or event scenarios.
A sequence diagram shows, as parallel vertical lines (lifelines), different processes or objects
that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in
the order in which they occur. This allows the specification of simple runtime scenarios in a
graphical manner.
If the lifeline is that of an object, it demonstrates a role. Leaving the instance name blank can
represent anonymous and unnamed instances. Messages, written with horizontal arrows with
the message name written above them, display interaction. Solid arrow heads represent
synchronous calls, open arrow heads represent asynchronous messages, and dashed lines
represent reply messages. If a caller sends a synchronous message, it must wait until the
message is done, such as invoking a subroutine. If a caller sends an asynchronous message, it
can continue processing and doesn’t have to wait for a response. Asynchronous calls are
present in multithreaded applications and in message-oriented middleware. Activation boxes,
or method-call boxes, are opaque rectangles drawn on top of lifelines to represent that
processes are being performed in response to the message.
Objects calling methods on themselves use messages and add new activation boxes on top of
any others to indicate a further level of processing.
When an object is destroyed (removed from memory), an X is drawn on top of the lifeline,
and the dashed line ceases to be drawn below it (this is not the case in the first example
though). It should be the result of a message, either from the object itself, or another.
A message sent from outside the diagram can be represented by a message originating from a
filled-in circle (found message in UML) or from a border of the sequence diagram (gate in
UML). UML has introduced significant improvements to the capabilities of sequence
diagrams. Most of these improvements are based on the idea of interaction fragments which
represent smaller pieces of an enclosing interaction. Multiple interaction fragments are
combined to create a variety of combined fragments, which are then used to model
interactions that include parallelism, conditional branches, and optional interactions.
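The synchronous/asynchronous distinction described above can be illustrated in code, using a thread to stand in for an asynchronous message (names are illustrative):

```python
import threading

results = []

def handle_message(msg):
    # The receiver's processing of a message.
    results.append(f"handled {msg}")

# Synchronous call: the caller waits until the subroutine returns.
handle_message("sync")

# Asynchronous message: the caller continues without waiting for a reply.
worker = threading.Thread(target=handle_message, args=("async",))
worker.start()
# ... the caller keeps processing here ...
worker.join()  # joined only at the end, for a clean exit
```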
Sequence diagrams are on separate pages.
Class Diagram
The class diagram is the main building block of object-oriented modeling. It is used both for general
conceptual modeling of the systematics of the application, and for detailed modeling, translating the
models into programming code. Class diagrams can also be used for data modeling. The classes in a
class diagram represent the main objects and interactions in the application, and the classes to be
programmed.
In the diagram, classes are represented with boxes which contain three parts:
The top part contains the name of the class. It is printed in bold and centered, and the
first letter is capitalized.
The middle part contains the attributes of the class. They are left-aligned and the first
letter is lowercase.
The bottom part contains the methods the class can execute. They are also left-aligned
and the first letter is lowercase.
In the design of a system, a number of classes are identified and grouped together in a class
diagram which helps to determine the static relations between those objects. With detailed
modeling, the classes of the conceptual design are often split into a number of subclasses.
In order to further describe the behavior of systems, these class diagrams can be
complemented by a state diagram or UML state machine.
Generalization (general relationship)
[Figure caption: class diagram showing dependency between "Car" class and "Wheel" class. An even clearer example would be "Car depends on Wheel", because Car already aggregates (and not just uses) Wheel.]
Dependency is a weaker form of bond which indicates that one class depends on another
because it uses it at some point in time. One class depends on another if the independent class
is a parameter variable or local variable of a method of the dependent class. This is different
from an association, where an attribute of the dependent class is an instance of the
independent class. Sometimes the relationship between two classes is very weak. They are
not implemented with member variables at all. Rather, they might be implemented as member
function arguments.
Multiplicity
This association relationship indicates that (at least) one of the two related classes makes
reference to the other. This relationship is usually described as "A has a B" (a mother cat has
kittens; kittens have a mother cat).
The UML representation of an association is a line with an optional arrowhead indicating the
role of the object(s) in the relationship, and an optional notation at each end indicating the
multiplicity of instances of that entity (the number of objects that participate in the
association).
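The difference between association and dependency described above can be shown with two small classes, reusing the Car/Wheel example from the caption plus an assumed Mechanic class:

```python
class Wheel:
    def __init__(self, size):
        self.size = size

class Car:
    def __init__(self):
        # Association/aggregation: Wheel instances are member variables of Car.
        # Multiplicity: one Car has four Wheels.
        self.wheels = [Wheel(17) for _ in range(4)]

class Mechanic:
    def inflate(self, wheel):
        # Dependency: Wheel appears only as a method parameter,
        # never as a member variable of Mechanic.
        return f"inflated {wheel.size}-inch wheel"
```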
[Class diagram figure. Classes shown: List of Hospitals, List of Colleges in Amritsar, Various Places to Visit, Weather of Amritsar, Online Booking for Guide and Cabs, System Database, Admin Login, Submit Complaints, Downloads, Home, Booking for Cabs/Guide.]
CRC Model
CRC Model: a Class Responsibility Collaborator (CRC) model (Beck & Cunningham 1989;
Wilkinson 1995; Ambler 1995) is a collection of standard index cards that have been divided
into three sections, as depicted in Figure 1. A class represents a collection of similar objects,
a responsibility is something that a class knows or does, and a collaborator is another class
that a class interacts with to fulfill its responsibilities. Figure 2 presents an example of two
hand-drawn CRC cards.
Class-responsibility-collaboration (CRC) cards are a brainstorming tool used in the design of
object-oriented software. They were originally proposed by Ward Cunningham and Kent
Beck as a teaching tool, but are also popular among expert designers and recommended by
extreme programming supporters. Martin Fowler has described CRC cards as a viable
alternative to UML sequence diagrams for designing the dynamics of object interaction and
collaboration.
CRC cards are usually created from index cards. Members of a brainstorming session will
write up one CRC card for each relevant class/object of their design. The card is partitioned
into three areas: the class name, its responsibilities, and its collaborators.
Using a small card keeps the complexity of the design at a minimum. It focuses the designer
on the essentials of the class and prevents her/him from getting into its details and inner
workings at a time when such detail is probably counter-productive. It also forces the
designer to refrain from giving the class too many responsibilities. Because the cards are
portable, they can easily be laid out on a table and re-arranged while discussing a design with
other people.
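A CRC card's three sections map naturally onto a small data structure. The example card below is an assumed illustration for this project, not taken from its actual design:

```python
from dataclasses import dataclass, field

# A CRC card: the class name, what it knows or does (responsibilities),
# and the classes it interacts with (collaborators).
@dataclass
class CRCCard:
    name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

booking_card = CRCCard(
    name="HotelBooking",
    responsibilities=["know guest details", "reserve a room"],
    collaborators=["Hotel", "Payment"],
)
```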
the behavior of the system: the scenarios that the system goes through in response to stimuli
from an actor. They are drawn as ellipses.
Each Use Case is documented by a description of the scenario. The description can be written
in textual form or in a step-by-step format. Each Use Case can also be defined by other
properties, such as the pre- and post-conditions of the scenario – conditions that exist before
the scenario begins, and conditions that exist after the scenario completes.
SYSTEM DESIGN
The Software Requirements Specification refines the function and performance allocated as part of software engineering by establishing a complete information description, a detailed functional and behavioral description, an indication of performance requirements and design constraints, appropriate validation criteria, and other data pertinent to the requirements.
System Requirements Specification (SRS):
The SRS is the starting point of the software development activity. As systems grew more complex, it became evident that the goals of the entire system could not be easily comprehended. Hence the need for a requirements analysis phase arose. Specifying requirements necessarily involves capturing what people have in their minds. The SRS is the means of translating the ideas in the minds of the customers (the input) into a formal document (the output of the SRS phase).
[Figure: structure of the SRS - functional requirements, external interfaces]
Functional Requirements
Functional requirements specify what outputs should be produced from the given inputs. They describe the relationship between the inputs and outputs of the system. For each functional requirement, a detailed description of all the data inputs, their units of measure, and the range of valid inputs must be specified. All the operations to be performed on the input data to obtain the output should be specified. This includes specifying the validity checks on the input and output data, the parameters affected by the operations, and the equations or other logical operations that must be used to transform the inputs into the corresponding outputs. The functional requirements must also clearly state what the system should do in abnormal situations such as invalid input or an error during computation.
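Such a requirement can be made concrete in code. As a hypothetical illustration (not code from this project), a functional requirement for an age input might name the unit of measure (years), an assumed valid range, and the behaviour on invalid input:

```java
// Hypothetical functional requirement for an age input field:
// the unit of measure is years, the valid range is 1-120 (assumed bounds),
// and invalid input must produce a clear error rather than bad data.
public class AgeValidator {
    static final int MIN_AGE = 1;
    static final int MAX_AGE = 120;

    static int parseAge(String input) {
        final int age;
        try {
            age = Integer.parseInt(input.trim());
        } catch (NumberFormatException e) {
            // Abnormal situation: non-numeric input.
            throw new IllegalArgumentException("Age must be a number");
        }
        if (age < MIN_AGE || age > MAX_AGE) {
            // Abnormal situation: out-of-range input.
            throw new IllegalArgumentException(
                "Age must be between " + MIN_AGE + " and " + MAX_AGE);
        }
        return age;
    }

    public static void main(String[] args) {
        System.out.println(parseAge("34"));   // valid input prints 34
    }
}
```

The point is that the requirement, not the programmer, decides the range and the error behaviour; the code merely enforces what the SRS states.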
Performance Requirements
This part of SRS specifies the performance constraints on the software system. All the
requirements relating to the performance characteristics of the system must be clearly specified.
There are two types of performance requirements: static requirements (e.g. the number of terminals and users to be supported) and dynamic requirements (e.g. response time and throughput).
Modularization details
Besides reduction in cost (due to less customization and shorter learning time) and flexibility in design, modularity offers other benefits such as augmentation (adding a new solution by merely plugging in a new module) and exclusion. Examples of modular systems
are cars, computers, process systems, solar panels and wind turbines, elevators and modular
buildings. Earlier examples include looms, railroad signaling systems, telephone exchanges,
pipe organs and electric power distribution systems. Computers use modularity to overcome
changing customer demands and to make the manufacturing process more adaptive to change
(see modular programming). Modular design is an attempt to combine the advantages of
standardization (high volume normally equals low manufacturing costs) with those of
customization. A downside to modularity (and this depends on the extent of modularity) is
that low quality modular systems are not optimized for performance. This is usually due to
the cost of putting up interfaces between modules.
Integrity Constraints
Before one can start to implement the database tables, one must define the integrity
constraints. Integrity means that the data is correct and consistent: the data in a database must be right and in good condition. There are domain integrity, entity integrity, referential integrity, and foreign key integrity constraints.
Domain Integrity
Domain integrity means the definition of a valid set of values for an attribute. You define
- data type,
- length or size
- is null value allowed
- is the value unique or not
for an attribute.
You may also define the default value, the range (values in between) and/or specific values
for the attribute. Some DBMS allow you to define the output format and/or input mask for
the attribute.
These definitions ensure that a specific attribute will have a right and proper value in the
database.
Rule 1. You can't delete any of the rows in the Car Type table that are visible in the picture, since all the car types are in use in the Car table.
Rule 2. You can't change any of the model_ids in the Car Type table, since all the model_ids are in use in the Car table.
Rule 3. The values that you can enter in the model_id field in the Car table must be in the
model_id field in the Car Type table.
Rule 4. The model_id field in the Car table can have a null value, which means that the car type of that car is not known.
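Rules 1-4 describe the behaviour of a foreign key constraint. As a sketch (table and column names assumed from the rules above; exact syntax varies by DBMS), the Car / Car Type relationship could be declared in SQL as:

```sql
CREATE TABLE CarType (
    model_id   INT         NOT NULL PRIMARY KEY,  -- entity integrity: unique, non-null key
    model_name VARCHAR(40) NOT NULL               -- domain integrity: type, length, NOT NULL
);

CREATE TABLE Car (
    car_id   INT NOT NULL PRIMARY KEY,
    model_id INT NULL,                      -- Rule 4: NULL means the car type is not known
    FOREIGN KEY (model_id)
        REFERENCES CarType (model_id)       -- Rule 3: the value must exist in CarType
        ON DELETE NO ACTION                 -- Rule 1: cannot delete a car type that is in use
        ON UPDATE NO ACTION                 -- Rule 2: cannot change a model_id that is in use
);
```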
Database Design and System Design
DATABASE
Table Name: - Admin_Table
Name varchar(20)
Location varchar(100)
Type varchar(20)
Contact_no varchar(20)
Email varchar(20)
Website varchar(40)
Table Name: feedback
name varchar(20)
Address varchar(20)
Age varchar(10)
Gender varchar(20)
Nationality varchar(20)
Contactno varchar(20)
Feedback varchar(200)
Table Name: - guideinfo
Name varchar(20)
Mobile_no varchar(20)
Name varchar(20)
Address varchar(20)
Mobile_no varchar(20)
Email varchar(20)
Nationality
Table Name: - guide_info
name nchar(20)
Language nchar(20)
Gender nchar(20)
Mobile_no nchar(20)
Name Varchar2(20)
Address Varchar2(20)
Contact_no Varchar2(20)
Nationality Varchar2(20)
Source Varchar2(20)
Destination Varchar2(20)
Date_of_booking Varchar2(20)
Location Varchar2(20)
Contact_no Varchar2(20)
website Varchar2(100)
Database Planning
The database planning includes the activities that allow the stages of the database system
development lifecycle to be realized as efficiently and effectively as possible. This phase
must be integrated with the overall Information System strategy of the organization.
The very first step in database planning is to define the mission statement and objectives for the database system, that is, the major aims of the database system:
- the purpose of the database system
- the supported tasks of the database system
- the resources of the database system
Systems Definition
In the systems definition phase, the scope and boundaries of the database application are
described. This description includes:
- links with the other information systems of the organization
- what the planned system is going to do now and in the future
- who the users are now and in the future.
The major user views are also described, i.e. what is required of the database system from the perspectives of particular job roles or enterprise application areas.
Database Design
The database design phase is divided into three steps:
- conceptual database design
- logical database design
- physical database design
In the conceptual database design phase, a model of the data, independent of all physical considerations, is constructed. The model is based on the requirements specification of the system. In the logical database design phase, a model of the data based on a specific data model, but independent of a particular database management system, is constructed. This is based on the target data model for the database, e.g. the relational data model. In the physical database design phase, the description of the implementation of the database on secondary storage is created. The base relations, indexes, integrity constraints, security, etc. are defined using the SQL language.
Application Design
In the application design phase, the design of the user interface and the application programs
that use and process the database are defined and designed.
Prototyping
The purpose of a prototype is to allow users to identify the features of the system by trying them out on the computer. There are horizontal and vertical prototypes. A horizontal prototype has many features (e.g. user interfaces), but they are not working. A vertical prototype has very few features, but they are working. See the following picture.
Implementation
During the implementation phase, the physical realizations of the database and application
designs are to be done. This is the programming phase of the systems development.
Testing
Before the new system goes live, it should be thoroughly tested. The goal of testing is to find errors! The goal is not to prove that the software works well.
Operational Maintenance
The operational maintenance is the process of monitoring and maintaining the database
system. Monitoring means that the performance of the system is observed. If the performance
of the system falls below an acceptable level, tuning or reorganization of the database may be
required.
Design Goals
The database design course is quite often separated from the systems analysis and design course. Why, you may wonder? In a database design course you can concentrate on database and data design issues. You don't have to think about programming or user interface issues as much as you must in a systems analysis and design course. Very often it is assumed that you can apply the database design issues in the systems analysis course.
Design Goals
There are many goals for the design of a database. Here are some of them listed:
- The database is comprehensive: it includes all the needed data and connections.
- The database is understandable: there is a clear structure which leads to easy, flexible and
fast reading and updating of the data.
- The database is expandable: it is possible to change the structure of the database with a
minimum change to the existing software.
- The database can be used in many organizations: the database can be adapted to different
kinds of environments and customers without the need to change the database structure.
- The integrity of the data: the data must be correct and consistent.
Conceptual Design
Once all the requirements have been collected and analyzed, the next step is to create a
conceptual schema for the database, using a high level conceptual data model. This phase is
called conceptual design. The result of this phase is an Entity-Relationship (ER) diagram or
UML class diagram. It is a high-level data model of the specific application area. It describes
how different entities (objects, items) are related to each other. It also describes what
attributes (features) each entity has. It includes the definitions of all the concepts (entities,
attributes) of the application area. During or after the conceptual schema design, the basic data model operations can be used to specify the high-level user operations identified during the functional analysis. This also serves to confirm that the conceptual schema meets all the identified functional requirements.
Logical Design
The result of the logical design phase (or data model mapping phase) is a set of relation
schemas. The ER diagram or class diagram is the basis for these relation schemas.
Creating the relation schemas is quite a mechanical operation: there are rules for how the ER model or class diagram is transformed into relation schemas. The relation schemas are the basis for the table definitions. In this phase (if not done in the previous phase) the primary keys and foreign keys are defined.
Normalization
Normalization is the last part of the logical design. The goal of normalization is to eliminate
redundancy and potential update anomalies. Redundancy means that the same data is saved more than once in a database. An update anomaly is a consequence of redundancy: if a piece of data is saved in more than one place, the same data must be updated in more than one place. Normalization is a technique by which one can modify the relation schema to reduce the redundancy. Each normalization phase adds more relations (tables) to the database.
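As a small illustration (an invented example, not a table definition from this project): if every booking row repeated the guide's name and contact number, that data would be redundant, and normalization would split it into its own relation:

```sql
-- Before: guide details repeated in every booking row (redundant, and an
-- update anomaly waiting to happen if a guide's contact number changes).
-- Booking(booking_id, guide_name, guide_contact_no, date_of_booking)

-- After: guide details stored once; bookings reference them by key.
CREATE TABLE Guide (
    guide_id   INT PRIMARY KEY,
    name       VARCHAR(20),
    contact_no VARCHAR(20)
);

CREATE TABLE Booking (
    booking_id      INT PRIMARY KEY,
    guide_id        INT REFERENCES Guide (guide_id),
    date_of_booking DATE
);
```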
Physical Design
The goal of the last phase of database design, physical design, is to implement the database.
At this phase one must know which database management system (DBMS) will be used; for example, different DBMSs support different data types and use different names for them. The SQL clauses to create the database are written. The indexes, the integrity constraints (rules) and the users' access rights are defined.
An object contains encapsulated data and procedures grouped together to represent an entity.
The 'object interface' defines how the object can be interacted with. An object-oriented
program is described by the interaction of these objects. Object-oriented design is the
discipline of defining the objects and their interactions to solve a problem that was identified
and documented during object-oriented analysis. What follows is a description of the class-based subset of object-oriented design, which does not include object prototype-based approaches, where objects are not typically obtained by instantiating classes but by cloning other (prototype) objects. Object-oriented design is a method of design encompassing the
process of object-oriented decomposition and a notation for depicting logical and physical as
well as state and dynamic models of the system under design.
It is possible to develop the relational data model and the object-oriented design artifacts in parallel, and the growth of one artifact can stimulate the refinement of the other artifacts.
Object-oriented concepts
The five basic concepts of object-oriented design are the implementation level features that
are built into the programming language. These features are often referred to by these
common names:
Object/Class: A tight coupling or association of data structures with the methods or
functions that act on the data. This is called a class, or object (an object is created
based on a class). Each object serves a separate function. It is defined by its
properties, what it is and what it can do. An object can be part of a class, which is a
set of objects that are similar.
Information hiding: The ability to protect some components of the object from
external entities. This is realized by language keywords to enable a variable to be
declared as private or protected to the owning class.
Inheritance: The ability for a class to extend or override functionality of another class.
The so-called subclass has a whole section that is derived (inherited) from the
superclass and then it has its own set of functions and data.
Interface (object-oriented programming): The ability to defer the implementation of a
method. The ability to define the functions or methods signatures without
implementing them.
Polymorphism (specifically, Subtyping): The ability to replace an object with its
subobjects. The ability of an object-variable to contain, not only that object, but also
all of its subobjects.
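The five concepts above can be sketched in a few lines of Java; the class names (Printable, Account, SavingsAccount) are invented for this illustration:

```java
// Interface: a method signature deferred to implementing classes.
interface Printable {
    String describe();
}

// Class: data and the methods that act on it, coupled together.
class Account implements Printable {
    private double balance;                  // information hiding: private, encapsulated data
    Account(double opening) { balance = opening; }
    double getBalance() { return balance; }
    void deposit(double amount) { balance += amount; }
    public String describe() { return "Account, balance " + balance; }
}

// Inheritance: SavingsAccount derives state and behaviour from Account.
class SavingsAccount extends Account {
    private final double rate;
    SavingsAccount(double opening, double rate) { super(opening); this.rate = rate; }
    void addInterest() { deposit(getBalance() * rate); }   // reuses inherited deposit()
}

public class OoDemo {
    public static void main(String[] args) {
        // Polymorphism (subtyping): an Account variable holding a SavingsAccount object.
        Account acc = new SavingsAccount(100.0, 0.05);
        ((SavingsAccount) acc).addInterest();   // balance is now 105.0
        System.out.println(acc.describe());
    }
}
```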
Designing concepts
Defining objects, creating class diagram from conceptual diagram: Usually map entity
to class.
Identifying attributes.
Use design patterns (if applicable): A design pattern is not a finished design; it is a description of a solution to a common problem, in a context. The main advantage of using a design pattern is that it can be reused in multiple applications. It can also be
thought of as a template for how to solve a problem that can be used in many different
situations and/or applications. Object-oriented design patterns typically show
relationships and interactions between classes or objects, without specifying the final
application classes or objects that are involved.
Define application framework (if applicable): Application framework is a term
usually used to refer to a set of libraries or classes that are used to implement the
standard structure of an application for a specific operating system. By bundling a
large amount of reusable code into a framework, much time is saved for the
developer, since he/she is saved the task of rewriting large amounts of standard code
for each new application that is developed.
Identify persistent objects/data (if applicable): Identify objects that have to last longer
than a single runtime of the application. If a relational database is used, design the
object relation mapping.
Identify and define remote objects (if applicable).
Output (deliverables) of object-oriented design
Sequence diagram: Extend the system sequence diagram to add specific objects that
handle the system events.
A sequence diagram shows, as parallel vertical lines, different processes or objects
that live simultaneously, and, as horizontal arrows, the messages exchanged between
them, in the order in which they occur.
Class diagram: A class diagram is a type of static structure UML diagram that
describes the structure of a system by showing the system's classes, their attributes,
and the relationships between the classes. The messages and classes identified through the development of the sequence diagrams can serve as input to the automatic generation of the global class diagram of the system.
Test Cases
To fully determine the effectiveness and overall usability of an application UI, it must be
tested. Testing exposes how easy or difficult the UI is to use for the broadest possible
audience. The time that it takes to test an application is well worth it.
This topic focuses on three primary testing scenarios: general usability, accessibility, and
automation.
By observing user interaction with the product and listening to user feedback, important
features that may be difficult to find and use are identified. Based on these results,
adjustments can be made to the UI as required. It is almost impossible to build a useful
product without some level of usability testing as the results provide the basis for making
better decisions about the product and improving the overall user experience.
Usability testing provides significant payback only when it is well integrated into the entire
project lifecycle. A single usability study can identify issues, but without follow-up tests it is
difficult to determine if the solutions have solved those problems or introduced new ones.
If you are a software product vendor, testing real users of your product means you are
evaluating the design. Based on how you have designed the application, can users
complete the tasks they need to do? Testing real users doing real tasks can also point out if the UI guidelines you are following are working within the context of your product, and when consistency helps or hinders the ability of a user to do their work.
If you are a software product purchaser, you can do usability testing to evaluate a
product for purchase. For example, your company might consider buying a product
for their twenty thousand employees. Before the company spends its money, it wants
to make sure that the product in question will really help employees do their jobs
better. Usability testing can also be useful to see if the proposed application follows
published UI style guidelines (internal or external). It's best to use UI guidelines as an
auxiliary, rather than primary, source of information for making purchase decisions.
Accessibility Testing
Accessibility testing encompasses two areas of a UI design: support for users with disabilities
and programmatic access by automated test frameworks.
Ensuring that an application is accessible to users with disabilities involves testing for:
Compliance - Does the application comply with various legal requirements regarding
accessibility?
Effectiveness - Can users with disabilities use the application?
Usefulness - Does the application expose adequate functionality for users with
disabilities?
Satisfaction - How is the application perceived by users with disabilities?
Unit Testing
Ideally, each test case is independent from the others. Substitutes such as method stubs, mock
objects, fakes, and test harnesses can be used to assist testing a module in isolation. Unit tests
are typically written and run by software developers to ensure that code meets its design and
behaves as intended.
The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct. A unit test provides a strict, written contract that the piece of code must
satisfy. As a result, it affords several benefits.
Finds problems early
Unit testing finds problems early in the development cycle. This includes both bugs in the
programmer's implementation and flaws or missing parts of the specification for the unit. The
process of writing a thorough set of tests forces the author to think through inputs, outputs,
and error conditions, and thus more crisply define the unit's desired behavior. The cost of
finding a bug before coding begins or when the code is first written is considerably lower
than the cost of detecting, identifying, and correcting the bug later; bugs may also cause
problems for the end-users of the software. Some argue that code that is impossible or
difficult to test is poorly written, thus unit testing can force developers to structure functions
and objects in better ways.
Facilitates change
Unit testing allows the programmer to refactor code or upgrade system libraries at a later
date, and make sure the module still works correctly (e.g., in regression testing). The
procedure is to write test cases for all functions and methods so that whenever a change
causes a fault, it can be quickly identified. Unit tests detect changes which may break a
design contract.
Simplifies integration
Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up
testing style approach. By testing the parts of a program first and then testing the sum of its
parts, integration testing becomes much easier.
Documentation
Unit testing provides a sort of living documentation of the system. Developers looking to
learn what functionality is provided by a unit, and how to use it, can look at the unit tests to
gain a basic understanding of the unit's interface (API).
Unit test cases embody characteristics that are critical to the success of the unit. These
characteristics can indicate appropriate/inappropriate use of a unit as well as negative
behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these
critical characteristics, although many software development environments do not rely solely
upon code to document the product in development.
Design
When software is developed using a test-driven approach, the combination of writing the unit
test to specify the interface plus the refactoring activities performed after the test is passing,
may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behaviour. The following Java example will help illustrate this point.
Here is a set of test cases that specify a number of elements of the implementation. First, that
there must be an interface called Adder, and an implementing class with a zero-argument
constructor called AdderImpl. It goes on to assert that the Adder interface should have a
method called add, with two integer parameters, which returns another integer. It also
specifies the behaviour of this method for a small range of values over a number of test
methods.
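The Java code itself is not reproduced in this text. The following is a reconstruction of what such test cases could look like, using the names given above (Adder, AdderImpl, add) and plain assertions in place of a test framework:

```java
// The interface and implementing class that the tests specify.
interface Adder {
    int add(int a, int b);
}

class AdderImpl implements Adder {
    public AdderImpl() { }                       // the zero-argument constructor the text requires
    public int add(int a, int b) { return a + b; }
}

public class AdderTest {
    // Plain-Java check standing in for a test framework's assertion.
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("failed: " + name);
    }

    public static void main(String[] args) {
        Adder adder = new AdderImpl();
        // Behaviour of add() over a small range of values.
        check(adder.add(0, 0) == 0, "zeros");
        check(adder.add(1, 1) == 2, "small positive values");
        check(adder.add(-1, -2) == -3, "negative values");
        check(adder.add(1234, 988) == 2222, "larger values");
        System.out.println("All Adder tests passed");
    }
}
```

Note how the tests fix the design: the interface name, the constructor, the method signature, and the expected behaviour are all pinned down before any further implementation work.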
System tests are the third phase in the Testing Lifecycle. System Tests are the test of the
end-user functionality. System Tests verify the correct functioning of all the required features
as given in the specification document.
Since prior testing phases have tested the internal logic of the application, System Tests
should not repeat detailed, exhaustive testing. Instead System Tests verify that all
subsystems are cooperating successfully to yield the final desired features.
System tests are usually "black box" tests since we are testing the application without seeing
the source code. Create the test cases following the guidelines in the textbook, the black box
techniques studied in class, as well as your own experience or intuition about verifying
program correctness. Number each test case and write it in HTML (or Wiki) format.
Prepare the test cases according to these directions: System Test Case Format
The QA manager is responsible for creating the Test Matrix, which is a grid with
Requirement Numbers on one axis and Test Case Numbers on the other. It shows which test
cases cover which requirements. Use this Test Matrix Template which shows which test
cases were written by which team member and has a link to the test cases. (Alternate HTML
Test Matrix Template)
Contents of a system test plan: The contents of a software system test plan may vary from
organization to organization or project to project. It depends how we have created the
software test strategy, project plan and master test plan of the project. However, the basic
contents of a software system test plan should be:
- Scope
- Goals & Objective
- Area of focus (Critical areas)
- Deliverables
- System testing strategy
- Schedule
- Entry and exit criteria
- Suspension & resumption criteria for software testing
- Test Environment
- Assumptions
- Staffing and Training Plan
- Roles and Responsibilities
- Glossary
How to write system test cases: The system test cases are written in a similar way as functional test cases. However, while creating system test cases, the following two points need to be kept in mind:
- System test cases must cover the use cases and scenarios.
- They must validate all types of requirements: technical, UI, functional, non-functional, performance, etc. This spans GUI software testing, usability testing, performance testing, compatibility testing, error handling testing, load testing, volume testing, stress testing, user help testing, security testing, scalability testing, capacity testing, sanity testing, smoke testing, exploratory testing, ad hoc testing, regression testing, reliability testing, installation testing, idempotency testing, maintenance testing, recovery and failover testing, and accessibility testing.
The format of system test cases contains:
Test Case ID - a unique number
Test Suite Name
Access rights for different users
Managing the complexities of security administration is one of the growing concerns in any
enterprise, especially those open to e-commerce and those with large networks. In such
demanding times, the availability of Security Management is considered predominant –
affecting all sectors of an enterprise.
The foundation of any security management is a model with role-based access control,
enabling all the required functionality and authentication for a security system.
Zoho CRM provides a set of security features that defines permission to the data as well as
the features of Zoho CRM. Administrators control these security options in the organization's
account.
The role-based security ensures that data is accessible to users based on the organization's hierarchy. Profiles, on the other hand, ensure that users have permission to only the relevant features in CRM, such as the various modules and data administration tools. There are also Groups, which allow you to extend data-level access to other users with a similar job profile.
Manage Users
Manage all the users in your Zoho CRM account, deactivate users who are no longer part of
the company account.
Manage Profiles
Create profiles that define the access permissions for the users. Set module-level and feature-
level permissions for different profiles.
Manage Roles
Create roles for the users in your account such as CEO, Sales Manager, Marketing Manager
etc
Code for: AddGuesthouses
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
}
}
Code for:AddGuide
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
Code for:Addhospitals
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
Code for: Addhotel
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
public partial class ADMIN_Default2 : System.Web.UI.Page
{
SqlConnection con = new SqlConnection();
protected void Page_Load(object sender, EventArgs e)
{
con.ConnectionString=ConfigurationManager.ConnectionStrings["conn"].ConnectionString;
con.Open();
}
protected void Button1_Click1(object sender, EventArgs e)
{
string str1 = "insert into HOTELINFO values(@NAME,@LOCATION,@TYPEOFHOTEL,@CONTACTNO,@EMAIL,@WEBSITE)";
SqlCommand cmd1 = new SqlCommand(str1, con);
cmd1.Parameters.AddWithValue("@NAME", TextBox1.Text);
cmd1.Parameters.AddWithValue("@LOCATION", TextBox2.Text);
cmd1.Parameters.AddWithValue("@TYPEOFHOTEL", TextBox3.Text);
cmd1.Parameters.AddWithValue("@CONTACTNO", TextBox4.Text);
cmd1.Parameters.AddWithValue("@EMAIL", TextBox5.Text);
cmd1.Parameters.AddWithValue("@WEBSITE", TextBox6.Text);
cmd1.ExecuteNonQuery();
Response.Write("Data saved");
TextBox1.Text = " ";
TextBox2.Text = " ";
TextBox3.Text = " ";
TextBox4.Text = " ";
TextBox5.Text = " ";
TextBox6.Text = " ";
}
protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
}
}
CODING FOR:CAB BOOKING
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
public partial class Default2 : System.Web.UI.Page
{
SqlConnection con = new SqlConnection();
string s;
SqlCommand cmd;
SqlDataReader dr;
protected void Page_Load(object sender, EventArgs e)
{
con.ConnectionString = ConfigurationManager.ConnectionStrings["conn"].ConnectionString;
con.Open();
//if (!IsPostBack)
autogen();
}
protected void Button1_Click1(object sender, EventArgs e)
{
string str = "insert into CAB values(@NAME,@ADDRESS,@CONTACTNO,@SOURCE,@DESTINATION,@NATIONALITY,@CID,@DATE)";
SqlCommand cmd = new SqlCommand(str, con);
cmd.Parameters.AddWithValue("@CID", s);
cmd.Parameters.AddWithValue("@NAME", TextBox1.Text);
cmd.Parameters.AddWithValue("@ADDRESS", TextBox2.Text);
cmd.Parameters.AddWithValue("@CONTACTNO", TextBox3.Text);
cmd.Parameters.AddWithValue("@SOURCE", TextBox5.Text);
cmd.Parameters.AddWithValue("@DESTINATION", TextBox6.Text);
cmd.Parameters.AddWithValue("@NATIONALITY", TextBox4.Text);
cmd.Parameters.AddWithValue("@DATE", TextBox7.Text);
cmd.ExecuteNonQuery();
Label1.Text = " Kindly Note your Id For further reference " + s;
Session["NAME"] = TextBox1.Text;
Session["CID"] = s;
Session["DATE"] = TextBox7.Text;
TextBox1.Text = " ";
TextBox2.Text = " ";
TextBox3.Text = " ";
TextBox4.Text = " ";
TextBox5.Text = " ";
TextBox6.Text = " ";
TextBox7.Text = " ";
//Response.Redirect("cabbooking.aspx");
}
public void autogen()
{
// Build the next cab id ("C01", "C02", ...) from the highest numeric part of the existing cid values.
cmd = new SqlCommand("select max(convert(int, substring(cid, 2, len(cid) - 1))) from cab", con);
int i = 0;
dr = cmd.ExecuteReader();
if (dr.Read() == true)
{
if (dr[0].Equals(DBNull.Value) == true)
{
i = 1;
}
else
{
i = Convert.ToInt32(dr[0]) + 1;
}
dr.Close();
s = "C" + i.ToString("00");
}
}
protected void Button2_Click(object sender, EventArgs e)
{
Response.Redirect("cabbooking.aspx");
}
}
CODING OF :GUIDE BOOKING
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
public partial class Guide : System.Web.UI.Page
{
SqlConnection con = new SqlConnection();
}
else
{
Response.Write("Data Not Found");
}
}
}
CODING FOR :CAB BOOKING CONFIRMED
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
// Class name inferred from the page title; the declaration was lost in the listing.
public partial class Cabbookingconfirmed : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection();
    protected void Page_Load(object sender, EventArgs e)
    {
        con.ConnectionString = ConfigurationManager.ConnectionStrings["Conn"].ConnectionString;
        con.Open();
    }
protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
    string str1 = "select Drivername, Contactno from Cabbooked where Cabname = @cabname";
    SqlDataAdapter adp = new SqlDataAdapter(str1, con);
    adp.SelectCommand.Parameters.AddWithValue("@cabname", DropDownList1.SelectedValue);
    DataTable dt1 = new DataTable();
    adp.Fill(dt1);
if (dt1.Rows.Count > 0)
{
TextBox3.Text = dt1.Rows[0]["Drivername"].ToString();
TextBox4.Text = dt1.Rows[0]["Contactno"].ToString();
}
else
{
Response.Write("Data Not Found");
}
}
}
Code for: Login
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
public partial class Login : System.Web.UI.Page
{
SqlConnection con = new SqlConnection();
protected void Page_Load(object sender, EventArgs e)
{
con.ConnectionString =
ConfigurationManager.ConnectionStrings["Conn"].ConnectionString;
con.Open();
}
protected void Button1_Click(object sender, EventArgs e)
{
    string str = "Select * from Reg where Username = @user";
    SqlDataAdapter adp = new SqlDataAdapter(str, con);
    adp.SelectCommand.Parameters.AddWithValue("@user", TextBox1.Text);
    DataTable dt = new DataTable();
    adp.Fill(dt);
    if (dt.Rows.Count > 0)
{
Session["fname"] = dt.Rows[0][1].ToString();
Response.Redirect("Welcome.aspx");
}
else
{
Response.Write("<script>alert('invalid Username')</script>");
}
}
}
Code for: Feedback
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
public partial class Feedback : System.Web.UI.Page
{
SqlConnection con = new SqlConnection();
protected void Page_Load(object sender, EventArgs e)
{
con.ConnectionString =
ConfigurationManager.ConnectionStrings["conn"].ConnectionString;
con.Open();
}
protected void Button1_Click(object sender, EventArgs e)
{
    string str = "insert into Feedbackform values (@Name, @Address, @Age, @Gender, @Nationality, @Contactno, @Feedback)";
    SqlCommand cmd = new SqlCommand(str, con);
cmd.Parameters.AddWithValue("@Name", TextBox1.Text);
cmd.Parameters.AddWithValue("@Address", TextBox2.Text);
cmd.Parameters.AddWithValue("@Age", TextBox3.Text);
if (RadioButton1.Checked == true)
{
cmd.Parameters.AddWithValue("@Gender", RadioButton1.Text);
}
    else
{
cmd.Parameters.AddWithValue("@Gender", RadioButton2.Text);
}
cmd.Parameters.AddWithValue("@Nationality", TextBox5.Text);
cmd.Parameters.AddWithValue("@Contactno", TextBox6.Text);
cmd.Parameters.AddWithValue("@Feedback", TextBox7.Text);
cmd.ExecuteNonQuery();
Response.Write("Data saved");
TextBox1.Text = "";
TextBox2.Text = "";
TextBox3.Text = "";
TextBox5.Text = "";
TextBox6.Text = "";
TextBox7.Text = "";
    }
}
Code for: Registration
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
Standardization of the coding
A number of benefits result from standardizing the coding practices used across the project:
Code Efficiency :- Code efficiency is a broad term used to depict the reliability, speed
and programming methodology used in developing codes for an application. Code efficiency
is directly linked with algorithmic efficiency and the speed of runtime execution for software.
It is the key element in ensuring high performance. The goal of code efficiency is to reduce
resource consumption and completion time as much as possible with minimum risk to the
business or operating environment. The software product quality can be assessed and
evaluated with the help of the efficiency of the code used. Code efficiency plays a significant
role in applications in a high-execution-speed environment where performance and scalability
are paramount. One of the recommended best practices in coding is to ensure good code
efficiency. Well-developed programming codes should be able to handle complex algorithms.
Recommendations for code efficiency include:
To remove unnecessary code or code that goes to redundant processing
To make use of optimal memory and nonvolatile storage
To ensure the best speed or run time for completing the algorithm
To make use of reusable components wherever possible
To make use of error and exception handling at all layers of software, such as the user
interface, logic and data flow
To create programming code that ensures data integrity and consistency
To develop programming code that's compliant with the design logic and flow
To make use of coding practices applicable to the related software
To optimize the use of data access and data management practices
To use the best keywords, data types and variables, and other available programming
concepts to implement the related algorithm
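The recommendations above are abstract, so a small illustrative sketch may help. It is not taken from the project's code (and is written in Java rather than C# so it stays self-contained); all names are hypothetical. It shows the first recommendation, removing code that goes to redundant processing, by hoisting a loop-invariant computation out of the loop:

```java
public class EfficiencyDemo {
    // Inefficient: recomputes the same sum on every iteration of the outer loop.
    public static long scaledTotalSlow(int[] values) {
        long result = 0;
        for (int v : values) {
            long total = 0;
            for (int w : values) total += w;   // redundant inner pass
            result += v * total;
        }
        return result;
    }

    // Efficient: the invariant sum is computed once, before the loop.
    public static long scaledTotalFast(int[] values) {
        long total = 0;
        for (int w : values) total += w;       // computed once
        long result = 0;
        for (int v : values) result += v * total;
        return result;
    }
}
```

Both methods return the same value, but the second does linear rather than quadratic work, which is the kind of gain the recommendations aim at.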
Error handling - Error handling refers to the anticipation, detection, and resolution of
programming, application, and communications errors. Specialized programs, called error
handlers, are available for some applications. The best programs of this type forestall errors if
possible, recover from them when they occur without terminating the application, or (if all
else fails) gracefully terminate an affected application and save the error information to a log
file. In programming, a development error is one that can be prevented. Such an error can
occur in syntax or logic. Syntax errors, which are typographical mistakes or improper use of
special characters, are handled by rigorous proofreading. Logic errors, also called bugs, occur
when executed code does not produce the expected or desired result. Logic errors are best
handled by meticulous program debugging. This can be an ongoing process that involves, in
addition to the traditional debugging routine, beta testing prior to official release and
customer feedback after official release. A run-time error takes place during the execution of
a program, and usually happens because of adverse system parameters or invalid input data.
An example is the lack of sufficient memory to run an application or a memory conflict with
another program. On the Internet, run-time errors can result from electrical noise, various
forms of malware or an exceptionally heavy demand on a server. Run-time errors can be
resolved, or their impact minimized, by the use of error handler programs, by vigilance on the
part of network and server administrators, and by reasonable security countermeasures on
the part of Internet users.
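The anticipate/detect/recover cycle described above can be sketched in a few lines of Java. This is a minimal illustration, not code from the project: on invalid input (one of the run-time error sources noted above) the handler recovers with a fallback value instead of terminating, and saves the error information for a log. The method and parameter names are hypothetical.

```java
public class ErrorHandlerDemo {
    // Parses user input; on a NumberFormatException it records the error
    // and returns a fallback value, so the application keeps running.
    public static int parseOrDefault(String input, int fallback, StringBuilder log) {
        try {
            return Integer.parseInt(input.trim());
        } catch (NumberFormatException e) {
            log.append("invalid input: ").append(input).append('\n'); // save error info
            return fallback;                                          // graceful recovery
        }
    }
}
```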
Parameter Passing:- Association of actual and formal parameters upon function call.
Six relatively common methods are:
call-by-value
call-by-reference
call-by-value-result
call-by-result
call-by-name
call-by-need
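Two of these conventions can be contrasted in a short Java sketch (illustrative only; names are hypothetical). Java passes every argument by value, so a primitive argument is copied and the caller is unaffected by assignments in the callee; but because the copied value of an object argument is a reference, the callee can still mutate the caller's object, which approximates call-by-reference for the object's contents:

```java
public class ParamDemo {
    // Call-by-value: the callee gets a copy, so the caller's variable is unchanged.
    static void incrementPrimitive(int n) { n = n + 1; }

    // Reference-like behavior: the copied reference points at the caller's
    // array, so the mutation is visible to the caller.
    static void incrementCell(int[] cell) { cell[0] += 1; }
}
```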
Validations check
Data can be validated either manually or automatically (computer-based). The latter is
preferred to take advantage of the power and speed of computers, although some manual
review will always be required. Validation software may be purchased from some data logger
vendors, created in-house using popular spreadsheet programs, or adapted from other utility
environmental monitoring projects. An advantage of using spreadsheet programs is that they
can also be used to process data and generate reports. These programs require an ASCII file
format for imported data; the data logger's data management software will make this
conversion if binary data transfer is used. There are essentially two parts to data validation,
data screening and data verification.
Data Screening: The first part uses a series of validation routines or algorithms to screen all
the data for suspect (questionable and erroneous) values. A suspect value deserves scrutiny
but is not necessarily erroneous. For example, an unusually high hourly wind speed caused by
a locally severe thunderstorm may appear on an otherwise average windy day. The result of
this part is a data validation report (a printout) that lists the suspect values and which
validation routine each value failed.
Data Verification: The second part requires a case-by-case decision on what to do with the
suspect values: retain them as valid, reject them as invalid, or replace them with redundant,
valid values (if available). This part is where personal judgment by a qualified person familiar
with the monitoring equipment and local meteorology is needed. Before proceeding to the
following sections, you should first understand the limitations of data validation. There are
many possible causes of erroneous data: faulty or damaged sensors, loose wire connections,
broken wires, damaged mounting hardware, data logger malfunctions, static discharges,
sensor calibration drift, and icing conditions, among others. The goal of data validation is to
detect as many significant errors from as many causes as possible. Catching all the subtle
ones is impossible. For example, a disconnected wire can be easily detected by a long string
of zero (or random) values, but a loose wire that becomes disconnected intermittently may
only partly reduce the recorded value yet keep it within reasonable limits. Therefore, slight
deviations in the data can escape detection (although the use of redundant sensors can reduce
this possibility). Properly exercising the other quality assurance components of the
monitoring program will also reduce the chances of data problems. To preserve the original
raw data, make a copy of the original raw data set and apply the validation steps to the copy.
The next two subsections describe two types of validation routines, recommend specific
validation criteria for each measurement parameter, and discuss the treatment of suspect and
missing data.
A. Validation Routines
Validation routines are designed to screen each measured parameter for suspect values before
they are incorporated into the archived database and used for site analysis. They can be
grouped into two main categories, general system checks and measured parameter checks.
1. General System Checks: Two simple tests evaluate the completeness of the collected data.
Data Records: The number of data fields must equal the expected number of measured
parameters for each record.
Time Sequence: Are there any missing sequential data values? This test should focus on the
time and date stamp of each data record.
2. Measured Parameter Checks: These tests represent the heart of the data validation process
and normally consist of range tests, relational tests, and trend tests.
Range Tests: These are the simplest and most commonly used validation tests. The measured
data are compared to allowable upper and lower limiting values.
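A range test of this kind reduces to a two-line predicate. The sketch below is illustrative; the actual limits would come from the monitoring program, so the wind-speed bounds in the test are assumptions.

```java
public class RangeTest {
    // Flags a measured value as suspect when it falls outside the
    // allowable lower/upper limits for its parameter.
    public static boolean isSuspect(double value, double lower, double upper) {
        return value < lower || value > upper;
    }
}
```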
Data validation is intended to provide certain well-defined guarantees for fitness, accuracy,
and consistency for any of various kinds of user input into an application or automated
system. Data validation rules can be defined and designed using any of various
methodologies, and be deployed in any of various contexts.
Data validation rules may be defined, designed and deployed, for example:
Definition and design contexts:
Deployment contexts:
as part of a user-interface
as a set of programs or business-logic routines in a programming language
as a set of stored-procedures in a database management system
Code and cross-reference validation
Code and cross-reference validation includes tests for data type validation, combined with
one or more operations to verify that the user-supplied data is consistent with one or more
external rules, requirements, or validity constraints relevant to a particular organization,
context or set of underlying assumptions. These additional validity constraints may involve
cross-referencing supplied data with a known look-up table or directory information service
such as LDAP.
Validation Methods:- Allowed character checks Checks that ascertain that only
expected characters are present in a field. For example a numeric field may only allow the
digits 0-9, the decimal point and perhaps a minus sign or commas. A text field such as a
personal name might disallow characters such as < and >, as they could be evidence of a
markup-based security attack. An e-mail address might require at least one @ sign and
various other structural details. Regular expressions are effective ways of implementing such
checks. (See also data type checks below)
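As the paragraph notes, regular expressions are an effective way of implementing allowed-character checks. A minimal Java sketch follows; the exact patterns are assumptions for illustration, not rules taken from this document.

```java
import java.util.regex.Pattern;

public class CharacterChecks {
    // Numeric field: optional minus sign, digits, optional decimal part.
    private static final Pattern NUMERIC = Pattern.compile("-?\\d+(\\.\\d+)?");
    // Personal name: letters, spaces, hyphens, apostrophes; rejects < and >,
    // which could be evidence of a markup-based attack.
    private static final Pattern NAME = Pattern.compile("[A-Za-z][A-Za-z '\\-]*");

    public static boolean isNumeric(String s) { return NUMERIC.matcher(s).matches(); }
    public static boolean isName(String s) { return NAME.matcher(s).matches(); }
}
```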
Batch totals Checks for missing records. Numerical fields may be added together for all
records in a batch. The batch total is entered and the computer checks that the total is correct,
e.g., add the 'Total Cost' field of a number of transactions together.
Cardinality check Checks that a record has a valid number of related records. For example, if
a Contact record is classified as a Customer, it must have at least one associated Order
(Cardinality > 0). If no Order exists for a "customer" record, then the record must either be
changed to "seed" or the Order must be created. This type of rule can be complicated by
additional conditions. For example, if a Contact record in a Payroll database is marked
"former employee", then the record must not have any associated salary payments after the
date on which the employee left the organization (Cardinality = 0).
Check digits Used for numerical data. An extra digit is added to a number which is
calculated from the digits. The computer checks this calculation when data are entered.
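The check-digit idea can be made concrete with the well-known Luhn algorithm (the scheme used on payment card numbers). The document does not name a particular scheme, so this choice is an assumption for illustration.

```java
public class CheckDigit {
    // Computes the Luhn check digit to append to a string of digits:
    // double every second digit from the right, subtract 9 from any
    // result over 9, sum everything, and take the complement mod 10.
    public static int luhn(String digits) {
        int sum = 0;
        boolean doubleIt = true; // rightmost digit gets doubled
        for (int i = digits.length() - 1; i >= 0; i--) {
            int d = digits.charAt(i) - '0';
            if (doubleIt) {
                d *= 2;
                if (d > 9) d -= 9;
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        return (10 - (sum % 10)) % 10;
    }
}
```

On data entry, the computer recomputes this digit and rejects the value if it does not match the digit supplied.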
Consistency checks Checks fields to ensure data in these fields corresponds, e.g., If Title =
"Mr.", then Gender = "M".
Control totals This is a total done on one or more numeric fields which appears in every
record. This is a meaningful total, e.g., add the total payment for a number of Customers.
Cross-system consistency checks Compares data in different systems to ensure it is
consistent, e.g., The address for the customer with the same id is the same in both systems.
The data may be represented differently in different systems and may need to be transformed
to a common format to be compared, e.g., one system may store customer name in a single
Name field as 'Doe, John Q', while another in three different fields: First_Name (John),
Last_Name (Doe) and Middle_Name (Quality); to compare the two, the validation engine
would have to transform data from the second system to match the data from the first, for
example, using SQL: Last_Name || ', ' || First_Name || substr(Middle_Name, 1, 1) would
convert the data from the second system to look like the data from the first 'Doe, John Q'
Data type checks Checks the data type of the input and give an error message if the input
data does not match with the chosen data type, e.g., In an input box accepting numeric data, if
the letter 'O' was typed instead of the number zero, an error message would appear.
File existence check Checks that a file with a specified name exists. This check is essential
for programs that use file handling.
Format or picture check Checks that the data is in a specified format (template), e.g., dates
have to be in the format DD/MM/YYYY. Regular expressions should be considered for this
type of validation.
Hash totals This is just a batch total done on one or more numeric fields which appears in
every record. This is a meaningless total, e.g., add the Telephone Numbers together for a
number of Customers.
Limit check Unlike range checks, data are checked for one limit only, upper OR lower, e.g.,
data should not be greater than 2 (<=2).
Logic check Checks that an input does not yield a logical error, e.g., an input value should
not be 0 when it will divide some other number somewhere in a program.
Presence check Checks that important data is actually present and have not been missed out,
e.g., customers may be required to have their telephone numbers listed.
Range check Checks that the data is within a specified range of values, e.g., the month of a
person's date of birth should lie between 1 and 12.
Spelling and grammar check Looks for spelling and grammatical errors.
Uniqueness check Checks that each value is unique. This can be applied to several fields (i.e.
Address, First Name, Last Name).
Table look up check A table look up check takes the entered data item and compares it to a
valid list of entries that are stored in a database table.
Enforcement Action
Enforcement action typically rejects the data entry request and requires the input actor to
make a change that brings the data into compliance. This is most suitable for interactive use,
where a real person is sitting at the computer making entries. It also works well for batch
upload, where a file input may be rejected and a set of messages sent back to the input source
explaining why the data was rejected. Another form of enforcement action involves automatically
changing the data and saving a conformant version instead of the original version. This is
most suitable for cosmetic change. For example, converting an [all-caps] entry to a [Pascal
case] entry does not need user input. An inappropriate use of automatic enforcement would
be in situations where the enforcement leads to loss of business information. For example,
saving a truncated comment if the length is longer than expected. This is not typically a good
thing since it may result in loss of significant data.
Advisory Action
Advisory actions typically allow data to be entered unchanged but send a message to the
source actor indicating the validation issues that were encountered. This is most suitable for
non-interactive systems, for systems where the change is not business critical, for cleansing
steps of existing data and for verification steps of an entry process.
Verification Action
Verification actions are special cases of advisory actions. In this case, the source actor is
asked to verify that this data is what they would really want to enter, in the light of a
suggestion to the contrary. Here, the check step suggests an alternative (e.g.: a check of your
mailing address returns a different way of formatting that address or suggests a different
address altogether). You would want in this case, to give the user the option of accepting the
recommendation or keeping their version. This is not a strict validation process, by design
and is useful for capturing addresses to a new location or to a location that is not yet
supported by the validation databases.
Validation and Security
Failures or omissions in data validation can lead to data corruption or a security vulnerability.
Data validation checks that data are valid, sensible, reasonable, and secure before they are
processed.
Testing
Testing is the process of evaluating a system or its component(s) with the intent to find
whether it satisfies the specified requirements or not. Testing means executing a system in
order to identify any gaps, errors, or missing requirements contrary to the actual requirements.
This section gives a basic understanding of software testing, its types, methods, levels, and
other related terminology. Software testing is an investigation conducted to
provide stakeholders with information about the quality of the product or service under
test. Software testing can also provide an objective, independent view of the software to
allow the business to appreciate and understand the risks of software implementation. Test
techniques include the process of executing a program or application with the intent of
finding software bugs (errors or other defects).
As the number of possible tests for even simple software components is practically infinite,
all software testing uses some strategy to select tests that are feasible for the available time
and resources. As a result, software testing typically (but not exclusively) attempts to execute
a program or application with the intent of finding software bugs (errors or other defects).
The job of testing is an iterative process as when one bug is fixed, it can illuminate other,
deeper bugs, or can even create new ones.
Software testing can provide objective, independent information about the quality of software
and risk of its failure to users and/or sponsors. Software testing can be conducted as soon as
executable software (even if partially complete) exists. The overall approach to software
development often determines when and how testing is conducted. For example, in a phased
process, most testing occurs after system requirements have been defined and then
implemented in testable programs. In contrast, under an Agile approach, requirements,
programming, and testing are often done concurrently.
Software testing is a process of executing a program or application with the intent of finding
the software bugs.
Testing Type Specific Test Plans: Plans for major types of testing like Performance
Test Plan and Security Test Plan.
Features to be tested:
List the features of the software/product to be tested.
Provide references to the Requirements and/or Design specifications of the features to
be tested
Approach:
Mention the overall approach to testing.
Specify the testing levels [if it’s a Master Test Plan], the testing types, and the testing
methods [Manual/Automated; White Box/Black Box/Gray Box]
Make the plan concise. Avoid redundancy and superfluousness. If you think you do
not need a section that has been mentioned in the template above, go ahead and delete
that section in your test plan.
Be specific. For example, when you specify an operating system as a property of a
test environment, mention the OS Edition/Version as well, not just the OS Name.
Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
Have the test plan reviewed a number of times prior to base lining it or sending it for
approval. The quality of your test plan speaks volumes about the quality of the testing
you or your team is going to perform.
Update the plan as and when necessary. An outdated and unused document stinks
and is worse than not having the document in the first place.
Syntax error: These errors occur when code breaks the rules of the language, such as a Visual
Basic Sub statement without a closing End Sub, or a forgotten closing curly brace (}) in C#.
These errors are the easiest to locate. The language compiler or integrated development
environment (IDE) will alert you to them and will not allow you to compile your program
until you correct them.
Semantic error: These errors occur in code that is correct according to the rules of the
compiler, but that causes unexpected problems such as crashes or hanging on execution. A
good example is code that executes in a loop but never exits the loop, either because the loop
depends on a variable whose value was expected to be something different than it actually
was, or because the programmer forgot to increment the loop counter. Another category of
errors in this area includes requesting a field from a dataset when there is no way to tell if the
field actually exists at compile time. These bugs are harder to detect and are one type of
runtime error.
Logic error: Like semantic errors, logic errors are runtime errors; that is, they occur while the
program is running. But unlike semantic errors, logic errors do not cause the application to
crash or hang. A logic error results in unexpected values or output. This can be the result of
something as simple as a mistyped variable name that happens to match another declared
variable in the program. This type of error can be extremely difficult to track down and
eliminate.
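The mistyped-variable case described above can be shown in a small Java sketch (illustrative only; all names are hypothetical). The buggy version compiles cleanly because the typo matches another declared variable, so it produces wrong output rather than a crash:

```java
public class LogicErrorDemo {
    static int total = 0; // an unrelated field the typo accidentally matches

    // Buggy: the programmer meant the local 'sum' but typed 'total'.
    // It compiles, but the method always returns 0.
    public static int sumBuggy(int[] xs) {
        int sum = 0;
        for (int x : xs) total += x;  // logic error: wrong variable
        return sum;
    }

    // Corrected version.
    public static int sumFixed(int[] xs) {
        int sum = 0;
        for (int x : xs) sum += x;
        return sum;
    }
}
```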
Threats
Attacks
Technical failures and defects
Human errors
Organizational weaknesses
Force majeure
Since the beginning of computerization, there have always been threats to information
systems and IT solutions in general but in recent years, the nature of cyberspace threats has
changed. The widespread popularity of information technology has promoted the growth of
network/cyber-attacks.
Security measures
Data security means protecting data, such as a database, from destructive forces and from
the unwanted actions of unauthorized users
Disk encryption
Disk encryption refers to encryption technology that encrypts data on a hard disk drive. Disk
encryption typically takes form in either software (see disk encryption software) or hardware
(see disk encryption hardware). Disk encryption is often referred to as on-the-fly encryption
(OTFE) or transparent encryption.
Software-based security solutions encrypt the data to protect it from theft. However, a
malicious program or a hacker could corrupt the data in order to make it unrecoverable,
making the system unusable. Hardware-based security solutions can prevent read and write
access to data and hence offer very strong protection against tampering and unauthorized
access.
Hardware based security or assisted computer security offers an alternative to software-only
computer security. Security tokens such as those using PKCS#11 may be more secure due to
the physical access required in order to be compromised. Access is enabled only when the
token is connected and the correct PIN is entered (see two-factor authentication). However,
dongles can be used by anyone who gains physical access to them. Newer technologies in
hardware-based security solve this problem by offering foolproof security for data.
Working of hardware-based security: A hardware device allows a user to log in, log out and
set different privilege levels by doing manual actions. The device uses biometric technology
to prevent malicious users from logging in, logging out, and changing privilege levels. The
current state of a user of the device is read by controllers in peripheral devices such as hard
disks. Illegal access by a malicious user or a malicious program is interrupted based on the
current state of a user by hard disk and DVD controllers making illegal access to data
impossible. Hardware-based access control is more secure than protection provided by the
operating systems as operating systems are vulnerable to malicious attacks by viruses and
hackers. The data on hard disks can be corrupted after a malicious access is obtained. With
hardware-based protection, software cannot manipulate the user privilege levels. It is
impossible for a hacker or malicious programs to gain access to secure data protected by
hardware or perform unauthorized privileged operations. This assumption is broken only if
the hardware itself is malicious or contains a backdoor. The hardware protects the operating
system image and file system privileges from being tampered with. Therefore, a completely secure
system can be created using a combination of hardware-based security and secure system
administration policies.
Backups :- Backups are used to ensure data which is lost can be recovered from another
source. It is considered essential to keep a backup of any data in most industries and the
process is recommended for any files of importance to a user.
Data masking :-
Data masking of structured data is the process of obscuring (masking) specific data within a
database table or cell to ensure that data security is maintained and sensitive information is
not exposed to unauthorized personnel. This may include masking the data from users (for
example so banking customer representatives can only see the last 4 digits of a customer’s
national identity number), developers (who need real production data to test new software
releases but should not be able to see sensitive financial data), outsourcing vendors, etc.
Data erasure
These algorithms were originally performed manually but now are almost universally
computerized. They may be standardized (available in published texts or purchased
commercially) or proprietary, depending on the type of business, product, or project in
question. Simple models may use standard spreadsheet products.
Models typically function through the input of parameters that describe the attributes of the
product or project in question, and possibly physical resource requirements. The model then
provides as output various resource requirements in cost and time. Some models concentrate
only on estimating project costs (often a single monetary value). Little attention has been
given to the development of models for estimating the amount of resources needed for the
different elements that comprise a project.
Cost modeling practitioners often have the titles of cost estimators, cost engineers, or
parametric analysts.
PUBLISHED TECHNIQUES
We will look at three basic researched methodologies for a priori software cost estimation:
lines of code, functions, and objects. For each we will describe the methodology used, with
its accompanying advantages and disadvantages. We must note that, thus far, all researched
models have approached cost estimation through estimation of effort (generally man-months)
involved in the project.
LINES OF CODE
This general approach is actually subdivided into two different areas: SLOC (Source Lines of
Code) and SDI (Source Delivered Instructions). The difference between the two is that the
first, SLOC, takes into account all the housekeeping which must be done by the developer,
such as headers and embedded comments. The second, SDI, only takes into account the
number of executable lines of code.
The best known technique using LOC (Lines of Code) is the COCOMO (Constructive Cost
Model), developed by Boehm. This model, along with other SLOC/SDI based models, uses
not only the LOC, but also other factors such as product attributes, hardware limitations,
personnel, and development environment. These different factors lead to one or more
"adjustment" factors which adjust the direct evaluation of the effort needed. In COCOMO's
case, there are fourteen such factors derived by Boehm. This model shows a linear relation
between the LOC and the cost.
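The core of the model can be sketched as code. Basic COCOMO expresses effort (in person-months) as a power law of size in thousands of lines of code; the coefficients below are the published basic-model values for "organic" projects, and the fourteen adjustment factors mentioned above are omitted for brevity, so treat this as an assumption-laden sketch rather than the full model.

```java
public class Cocomo {
    // Basic COCOMO, organic mode: effort = a * KLOC^b person-months.
    public static double effortPersonMonths(double kloc) {
        double a = 2.4, b = 1.05; // published organic-mode coefficients
        return a * Math.pow(kloc, b);
    }
}
```

With b so close to 1, the estimate is nearly linear in LOC, which is the relation the paragraph above describes.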
Another model for this category (LOC) is the Putnam Estimation Model. This model includes
more variables, and is non-linear in nature. The estimation is affected not only by the SDI,
but also by the software development environment and desired development time.
FUNCTIONS
Cost estimation based on expected functionality of the system was first proposed by Albrecht
in 1979, and has since been researched by several people. This cost estimation relies on
function points, and requires the identification of all occurrences of five unique function
types: External Inputs, External Outputs, Logical Internal Files, External Interfaces, and
Queries. The sum of all occurrences is called RAW-FUNCTION-COUNTS (FC). This value
must be modified by a weighted rating of Complexity Factors, giving a Technical Complexity
Factor (TCF). The Function Points are equivalent to FC*TCF for any given project.
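The FP = FC * TCF relation above can be sketched as code. The average complexity weights for the five function types and the standard TCF formula (0.65 + 0.01 times the sum of fourteen complexity ratings) follow Albrecht's published method; the specific numbers are assumptions drawn from that method, not values given in this document.

```java
public class FunctionPoints {
    // Average-complexity weights for the five function types, in the order:
    // external inputs, external outputs, logical internal files,
    // external interfaces, queries.
    static final int[] WEIGHTS = {4, 5, 10, 7, 4};

    // counts: occurrences of each function type (same order as WEIGHTS)
    // complexityRatings: fourteen ratings, each 0..5
    public static double compute(int[] counts, int[] complexityRatings) {
        double fc = 0; // RAW-FUNCTION-COUNTS, weighted
        for (int i = 0; i < counts.length; i++) fc += counts[i] * WEIGHTS[i];
        int degree = 0;
        for (int r : complexityRatings) degree += r;
        double tcf = 0.65 + 0.01 * degree; // Technical Complexity Factor
        return fc * tcf;
    }
}
```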
This technique has been evaluated by several authors, and some attempts have been made at
refining the model. These estimations have proven "more successful" than the original model
at estimating cost a priori.
Overall, the function-points models appear to more accurately predict the effort needed for a
specific project than LOC-based models.
OBJECTS
Cost estimation based on objects has recently been introduced, given the ascendancy of
Object-Oriented Programming (OOP) and Object-Oriented CASE tools. The basic idea is
similar to function-based cost estimation, yet, as the name implies, it counts objects rather
than functions.
Research until now has been very limited, and has not shown any improvement in reliability
over function-based methods.
TRENDS
What are current trends in software cost estimation? What changes in systems development
affect software cost estimation? We will examine the major changes which have been taking
place in recent times.
USE OF SLOC/SDI
In the past few years, the practitioners' trend has been to move away from SLOC and SDI, and
to work based on function points. The reasoning for this is that function points are more
"independent" (they are less dependent on the language and the programming environment)
than SLOC and SDI.
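The language-dependence of SLOC can be made concrete: the same functional size translates into very different line counts depending on the implementation language. The ratios below are invented for illustration, in the spirit of published SLOC-per-function-point tables, and should not be read as authoritative figures.

```python
# Why SLOC depends on the language while functional size does not:
# one functional size, four very different line-count estimates.
# Ratios are illustrative assumptions, not measured values.
SLOC_PER_FP = {"assembly": 320, "C": 128, "COBOL": 105, "Visual Basic": 32}

functional_size = 180  # project size in function points

for lang, ratio in sorted(SLOC_PER_FP.items(), key=lambda kv: kv[1]):
    print(f"{lang:12s} ~{functional_size * ratio:,} SLOC")
```

A SLOC-based model calibrated on COBOL projects would therefore misestimate a Visual Basic project by roughly a factor of three, whereas the function-point size is the same in both cases.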
PROTOTYPING
In recent years prototyping has become a major component of many systems development
efforts. Boehm and Papaccio's spiral development model is in essence a prototyping model in
which a system is developed in phases, with requirements specifications, cost to completion,
and risk evaluated at each step.
In the last few years, CASE tools and program generators have developed to the point that
some companies are no longer "programming" in the traditional sense of the word. They are
in essence just doing an in-depth analysis which, when complete, gives them a working
system. Along the way, they may generate the system many times to test it, using the
generated system as a prototype development platform.
Today, most major systems developers and consultants have a methodology to determine the
a priori cost of a software development project. These methodologies are proprietary, so we
can only be aware of their externals. Each cost estimation methodology is linked to a specific
systems analysis and design methodology, and the estimate is based on the use of that
analysis methodology and the experience of the firm.
Given the differing methodologies and current trends in software development, what research
can and should be done? To see this, let us look at the overall situation, with an evaluation of
the problems and advantages of each cost-estimation methodology.
It is apparent that there is room, and even desire, for improved metrics. It is clear that there is
no perfect method of a priori cost estimation, but there are methods which may be acceptable.
In order to evaluate the three methods outlined, we must fully understand the problems each
presents.
OVERALL PROBLEMS
It is clear that at the current time no well-known model is available to practitioners who
desire to put one into practice. At the same time, we can see that companies such as
Andersen Consulting offer cost estimation tools to their customers and are highly
"successful" at what they do.
From my experience and that of practitioners who have attempted cost estimation, we note
that cost estimation is a very difficult task, much subject to the variability of human beings.
We must realize that in psychological research, any model which can explain even 50% of the
variance in behavior is highly regarded. Should we accept, then, that human behavior is a
large factor in the software development process, and therefore in cost estimation?
Where are successful models being built? In organizations which have a large number of
application development projects and a very structured methodology for software
development. I have been unable to find any published cost estimation methodology that has
been shown to explain more than 70% of the variance across different organizations.
FUTURE RESEARCH
Both in the research that has been done and in practice, no cost estimation principle is highly
predictive independent of a given methodology. It is therefore necessary to study a given cost
estimation technique in relation to a given development methodology, in an attempt to
develop an empirical model with higher explanatory power than current models.
In the paper by Banker, Kauffman, and Kumar, it was made clear that not only must the cost
estimation technique be stable, but the development tools must be stable as well. It is very
difficult to develop a model whose validity depends on where an organization stands in the
cycle of development techniques.
There is currently an ongoing project by Software Productivity Research, Inc. to gather a set
of over 10,000 varied projects using function point analysis [DREG89, p. 145]. This project,
if completed, promises to be the first major empirical study of cost estimation across multiple
development platforms and multiple development techniques. In the bibliographic search
conducted, no conclusions from this study have yet been reported.
While software cost prediction models are still in relative infancy, each manager must
nonetheless be able to prepare a budget for the project. Of the techniques presented in this
paper, function point analysis is the most robust. This is not to say that it must be used to the
exclusion of other techniques, but it is the technique for which the largest body of empirical
research has been conducted.
Object points is a promising technique in object-oriented CASE environments, but much
remains to be studied, and SLOC models are becoming outdated as methodologies change.
Is there a "best" technique? Yes: whatever works in the given environment. With careful
calibration for a given environment, it is possible for the manager to develop a cost
estimation model which closely fits that environment. This takes effort and much time, but
can be financially rewarding, as well as providing peace of mind for the manager.
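Such local calibration can be as simple as fitting a power-law effort model, effort = a × size^b, to an organization's own completed projects with a log-linear least-squares fit. The historical data points below are invented for illustration; the fitting technique itself is standard.

```python
# Minimal sketch of calibrating a local effort model effort = a * size**b
# from an organization's own project history, via log-linear least squares.
# The (function points, person-months) pairs are invented for illustration.
import math

history = [(120, 14), (250, 35), (400, 62), (80, 9)]

xs = [math.log(fp) for fp, _ in history]
ys = [math.log(pm) for _, pm in history]
n = len(history)
mx, my = sum(xs) / n, sum(ys) / n

# Slope and intercept of the least-squares line in log-log space.
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def estimate_effort(fp):
    """Predicted person-months for a project of the given functional size."""
    return a * fp ** b

print(f"effort ~ {a:.3f} * FP^{b:.2f}")
```

The resulting model reflects the organization's own productivity rather than industry averages, which is precisely the "careful calibration for a given environment" argued for above; the fit must be redone whenever tools or methodology change.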
Sample Layouts of the Project

FORM 1: LOGIN
FORM 2: ADD HOTEL
FORM 3: FEEDBACK FORM
FORM 4: ADD GUIDE
FORM 5: GUIDE BOOKING
FORM 6: GUIDE INFORMATION
FORM 7: CAB BOOKING
FORM 8: CAB INFORMATION
FORM 9: ADD COLLEGES
FORM 10: ADD HOSPITALS
FORM 11: CUSTOMER INFORMATION FOR CAB BOOKING
FORM 12: GUEST ROOMS
FORM 13: VISITOR REGISTRATION

REPORTS

REPORT 1: CAB BOOKING
REPORT 2: CAB BOOKED
REPORT 3: GUIDE BOOKING
REPORT 4: GUIDE BOOKED
REPORT 5: LIST OF GUEST ROOMS
REPORT 7: LIST OF COLLEGES
REPORT 8: LIST OF HOTELS
BIBLIOGRAPHY
Websites
1. www.searchvb.com
2. www.vbguru.com
3. www.google.com