Table of contents

Title

Chapter 1
Introduction/Objective

Chapter 2
System Analysis

2.1 Identification of Need
2.2 Preliminary Investigation
2.3 Feasibility Study
2.4 Project Planning
2.5 Project Scheduling
2.6 Software Requirements Specification
2.7 Software Engineering Paradigm applied.
2.8 Use Case Diagrams, ER Diagrams


Chapter 3
System Design

3.1 Modularisation details.
3.2 Data Integrity & Constraints.
3.3 Database Design
3.4 User Interface Design

Chapter 4
Coding

4.1 Complete Project Coding
4.2 Comments & Description
4.3 Standardization of the coding/ code efficiency
4.4 Error Handling
4.5 Parameters calling/passing
4.6 Validation checks.



Chapter 5
TESTING

5.1 Testing Techniques & Strategies.
5.2 Debugging & Code improvement


Chapter 6
System Security Measures

6.1 Database/Data Security
6.2 Creation of User Profiles & Access Rights


Chapter 7
Cost Estimation of Project



Chapter 8
Reports (Layouts)

Chapter 9
Future Scope & Further Enhancement of the Project


BIBLIOGRAPHY
GLOSSARY














Chapter 1
Introduction
________________________________________________________________________



The purpose of this project is to develop an Online Doctor Finder System that provides
customers/patients with the facility to search for a doctor and get an appointment online. The
system will provide all of its facilities to customers whose authentication credentials [user id and
password] match, including viewing account information, performing transfers, changing their
address, retrieving a forgotten password, performing transactions, and viewing appointments.
The system should also support an online enrolment facility for new customers.
The administrator should have the ability to perform various operations such as entering all
hospital details for the customer and providing a facility for the user to search easily. When
customers want to take an appointment they have to register first, and the admin then verifies
their status after checking all the details. The administrator also has the privilege to close a
customer's account on the request of the customer. The customer should be able to access
his/her account from anywhere just by entering the correct user id and password.






















Chapter 2
System Analysis
________________________________________________________________________


2.1 Identification of Need

Need to locate a provider quickly? Our online Doctor Finder (provider search) gives you
flexibility in a simple format. Be sure to check your criteria for the provider search webpage
most appropriate for your plan. This online Doctor Finder helps you find a perfect match for
your medical needs. Doctor Finder provides you with basic professional information on virtually
every licensed physician.
While it is our goal to provide the most up-to-date information, our provider network is
constantly developing. Always verify that the provider you choose is participating in our
network before you receive services.

Schedule appointments 24 hours a day, 7 days a week:

Whether it's 2:00 AM and your office is closed or it's 2:00 PM and your phones are busy, be
there for your patients and fill your schedules, too.


Turn your website traffic into real appointments:

Potential patients are visiting your site right now—and leaving. In a matter of minutes Doctor
Finder can allow these visitors to book appointments with you instantly.


You receive the appointment details!

Patients provide their reason for visit on the website, so your practice always runs smoothly. We send
several appointment reminders to make sure your patients show up on time. Patients can even book
appointments directly from your personal website after becoming a member of the website. They can
send queries to the doctor and feedback to the Admin. A visitor can also contact us by filling in a
simple form.




Search Hint:

The optimal way to search for a physician by name is to search by Last Name only and the State.
You may also want to perform a "sounds-like" search if you are unsure of the exact spelling of a
name or city, or if your search did not return the desired results. This option is available beneath
the name and address fields on the "Search by Physician Name" page.
The optimal way to search for a physician by specialty is to select a Specialty and State. If your
search result is larger than the predetermined limit, you will be asked to modify the search by
adding City and/or Zip Code.

Occasionally, a physician is difficult to locate because:
- The physician has moved to a different state and the AMA has not yet received the new
address;
- A small number of physicians have requested a "no contact" designation on their AMA
records (no contact records are managed like an unlisted phone number and are not
released);
- Physicians without active licenses do not appear in Doctor Finder;
- The physician's name may have a space in it, like "Mc Donald" (use of the space is
required);
- Doctor Finder uses the primary medical specialty designated by the physician (your
physician may practice more than one medical specialty).











2.2 Preliminary Investigation:

In this process, the development team visits the customer and studies their system. They
investigate the need for possible software automation in the given system. By the end of
Preliminary Investigation, the team furnishes a document that holds the different specific
recommendations for the candidate system. It also includes the personnel assignments, costs,
project schedule, and target dates. Main Tasks of the preliminary investigation phase are:
- Investigate the present system and identify the functions to be performed
- Identify the objectives of the new system. In general, an information system benefits a
business by increasing efficiency, improving effectiveness, or providing a competitive
advantage
- Identify problems and suggest a few solutions
- Identify constraints, i.e. the limitations placed on the project, usually relating to time,
money and resources
- Evaluate feasibility - whether the proposed system promises sufficient benefit to invest
the additional resources necessary to establish the user requirements in greater detail
To conclude the preliminary examination, the systems analyst writes a brief report to
management in which the following are listed:

- The problem that triggered the initial investigation
- The time taken by the investigation and the persons consulted
- The true nature and scope of the problem
- The recommended solutions for the problem and a cost estimate for each solution
The analyst should then arrange a meeting with management to discuss the report and
other matters if need be. The end result, or deliverable, from the Preliminary Investigation phase
is either a willingness to proceed further or the decision to abandon the project.











2.3 Feasibility Study

It is a test of the system proposal according to its workability, impact on the application
area, ability to meet user needs, and effective use of resources. It focuses on four major questions:

1. What are the user's demonstrable needs and how does a candidate system meet them?
2. What resources are available for the given candidate system? Is the problem worth solving?
3. What are the likely impacts of the candidate system on the application area?
4. How well does it fit within the application area?

These questions revolve around investigation and evaluation of the problem,
identification and description of candidate systems, specification of the performance and cost of
each system, and final selection of the best system. The objective of the feasibility study is not to
solve the problem but to acquire a sense of its scope. During the analysis, the problem definition is
crystallized and the aspects of the problem to be included in the system are determined. Feasibility
analysis serves as a decision phase, answering two questions: is there a new and better way to do
the job that will benefit the user, and what are the costs and savings of each alternative? Three key
considerations are involved in feasibility analysis: economic, technical, and behavioural.


2.3.1 Economic Feasibility

Economic analysis is the most frequently used method for evaluating the effectiveness of
a candidate system. More commonly known as cost benefit analysis, the procedure is to
determine the benefits and savings that are expected from a candidate system and compare them
with costs. If benefits outweigh costs, then the decision is made to design and implement the
system. The benefits and savings that are expected from a candidate system are mainly in terms
of time. When a user is directly able to handle a project through the interfaces provided by the
system, without the burden of coding for every kind of modification, a lot of time and human
effort is saved.
There was a need to estimate the cost of the resources (manpower and computing
systems) needed for the development of the system. A full cost estimation of the resources was
done prior to the project kick-off.
There were procurement costs, consultation costs, purchase of equipment, installation
costs and management costs involved with the development of the new proposed system. In
addition, there are start-up costs, but no new costs for operating system software,
communications equipment installation, recruitment of new personnel, or disruption to the
rest of the system. There is further no need to purchase special application software or carry out
software modifications, training and data collection, and only a meager documentation
preparation cost is involved. Lastly, there is no system maintenance, depreciation or rental cost
involved with the new system.

2.3.2 Technical Feasibility

Technical feasibility centers on the existing computer system (hardware, software, etc.)
and to what extent it can support the proposed addition. This phase involves financial
considerations to accommodate technical enhancements. If the budget is a serious constraint,
then the project is judged not feasible.
Technical feasibility is one of the most difficult areas to assess at this stage of
systems engineering. If the right assumptions are made, anything might seem possible. The
considerations normally associated with technical feasibility include:

1) Development Risk:

Can the system element be designed so that necessary function and performance
are achieved within the constraints uncovered during the analysis of the present system?

The new system proposes to bring significant changes to the present system to
make it more efficient. The proposed system meets all the constraint and performance
requirements identified for it to be successful.

2) Resource availability:

Are skilled staff available for the development of the new proposed system? Are
any other necessary resources available to build the system?
The participants working on the proposal are seniors who have sufficient
knowledge and the learning skills required for the development of the new system.
There is also no special need for other resources, and the system can very well be
developed using the computing and non-computing resources available within the
present system.

3) Technology:

Has the relevant technology progressed to a state that will support the system?

Technology in the form of different works done in the related field is already
available in the commercial world and has already been used successfully in many
areas. Therefore, there is no need for any special technology to be developed.
The new system is fully capable of meeting the performance, reliability,
maintainability and predictability requirements.
Social and legal feasibility encompasses a broad range of concerns that
include contracts, liability, infringement etc. Since the system is being developed by the
students of the institute themselves, there are no such concerns involved with the
development of the proposed system.
The degree to which alternatives are considered is often limited by cost and time
constraints. However, variations should be considered which could provide alternative
solutions to the defined problem. Alternative systems that could provide all the
functionality of the desired system are not available, and hence the present solution is
itself the most complete solution to the defined problem.
The proposed system has a feasibility of around 95% for implementation, and the
existing computer system (hardware, software, etc.) fully supports it.

2.3.3 Behavioural Feasibility

People are inherently resistant to change, and computers have been known to facilitate
change. An estimate should be made of how strong a reaction the user is likely to have towards
the development of the system. The introduction of the candidate system will not require any
special effort to educate, sell and train users on new ways of using the system. As far as
performance is concerned, the candidate system will help attain accuracy with the least response
time and a minimum of programmer's effort through its user-friendly interface.

2.4 Project Planning

Planning begins with process decomposition. The project schedule provides a road map
for a software project manager. Using a schedule as a guide, the project manager can track and
control each step in the software engineering process.















2.4.1 Project Tracking


S. No. | Work Task | Description | Timeline (weeks)
1 | Requirements Specification | Complete specification of the system, including the framing of policy etc. | 1-2
2 | Database creation | List of tables and the attributes of each of them. | 2-4
3 | High-level and Detailed Design | High-level design: E-R diagram, DFD, use case diagram, class diagram etc. Detailed design: pseudo code or an algorithm for each activity. | 4-7
4 | Implementation of the front end of the system | Implementation of the login screen, a screen giving the various options for each login, and screens for each of the options. | 7-10
5 | Integrating the front end with the database | Screens connected to the database, updating the database as required. | 10-11
6 | Integration Testing | The system is thoroughly tested by running all the test cases written for the system. | 11-12
7 | Final Review | Issues found during the previous milestone are fixed and the system is ready for the final review. | 12-14
















2.5 Project Scheduling





























[Project schedule chart, process vs. time (16 weeks): Requirement Analysis, weeks 1-2;
Database Creation, weeks 2-4; Detailed Design, weeks 4-7; Implementation, weeks 7-10;
Integration, weeks 10-11; Testing & Final Review, weeks 11-14.]

2.6 Software Requirement Specification

2.6.1 An Introduction to ASP.Net
ASP.Net is a web development platform which provides a programming model, a comprehensive
software infrastructure and the various services required to build robust web applications for PCs as
well as mobile devices.
ASP.Net works on top of the HTTP protocol and uses HTTP commands and policies to set up
two-way browser-to-server communication and cooperation.
ASP.Net is a part of the Microsoft .Net platform. ASP.Net applications are compiled codes, written
using the extensible and reusable components or objects present in the .Net framework. These codes
can use the entire hierarchy of classes in the .Net framework.
ASP.Net application code can be written in any of the following languages:
- C#
- Visual Basic .Net
- JScript
- J#
ASP.Net is used to produce interactive, data-driven web applications over the internet. It provides a
large number of controls, like text boxes, buttons and labels, for assembling, configuring and
manipulating code to create HTML pages.

ASP.Net Web Forms Model:
ASP.Net web forms extend the event-driven model of interaction to web applications. The browser
submits a web form to the web server, and the server returns a full markup or HTML page in
response.
All client-side user activities are forwarded to the server for stateful processing. The server
processes the output of the client actions and triggers the reactions.
Now, HTTP is a stateless protocol. The ASP.Net framework helps in storing the information
regarding the state of the application, which consists of:
- Page state
- Session state
The page state is the state of the client, i.e., the content of the various input fields in the web form.
The session state is the collective information obtained from the various pages the user visited and
worked with, i.e., the overall session state. To clarify the concept, let us take the example of a
shopping cart.
A user adds items to a shopping cart. Items are selected from one page, say the items page, and the
total collected items and price are shown on a different page, say the cart page. HTTP alone cannot
keep track of all the information coming from various pages. The ASP.Net session state and the
server-side infrastructure keep track of the information collected globally over a session.
The ASP.Net runtime carries the page state to and from the server across page requests and
incorporates the state of the server-side components in hidden fields. This way the server becomes
aware of the overall application state and operates in a two-tiered, connected way.
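As a rough illustration of this model, the following sketch (the page and control names are
hypothetical, not part of this project) keeps a cart item count in ASP.Net session state so that it
survives across page requests:

// Minimal session state sketch; ItemsPage and btnAddToCart are assumed names.
using System;
using System.Web.UI;

public partial class ItemsPage : Page
{
    protected void btnAddToCart_Click(object sender, EventArgs e)
    {
        // Session state is null on the first visit, so default to zero.
        int count = (Session["CartCount"] == null) ? 0 : (int)Session["CartCount"];

        // The runtime keeps this value for the whole user session,
        // so the cart page can read Session["CartCount"] on a later request.
        Session["CartCount"] = count + 1;
    }
}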

ASP.Net Component Model:
The ASP.Net component model provides the various building blocks of ASP.Net pages. Basically it
is an object model, which describes:
- Server-side counterparts of almost all HTML elements or tags, like <form> and <input>.
- Server controls, which help in developing a complex user interface, for example the Calendar
control or the GridView control.
ASP.Net is a technology, which works on the .Net framework that contains all web-related functionalities.
The .Net framework is made of an object-oriented hierarchy. An ASP.Net web application is made of
pages. When a user requests an ASP.Net page, the IIS delegates the processing of the page to the ASP.Net
runtime system.
The ASP.Net runtime transforms the .aspx page into an instance of a class, which inherits from the base
class Page of the .Net framework. Therefore, each ASP.Net page is an object and all its components i.e.,
the server-side controls are also objects.
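As a minimal sketch of this idea (file name assumed for illustration), the code-behind class of an
.aspx page inherits from System.Web.UI.Page, so each requested page is an object:

// Hypothetical code-behind (Default.aspx.cs); the ASP.Net runtime compiles
// Default.aspx into an instance of this class and runs its event handlers.
using System;
using System.Web.UI;

public partial class _Default : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The page instance itself is an object created per request by the runtime.
        Response.Write("Rendered at " + DateTime.Now);
    }
}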
Components of .Net Framework 3.5
Before going on to Visual Studio .Net, let us look at the various components of the .Net
framework 3.5. The following list describes the components of the .Net framework 3.5 and the job
each performs:
Components and their Description
(1) Common Language Runtime or CLR
It performs memory management, exception handling, debugging, security checking,
thread execution, code execution, code safety verification and compilation. Code that is
directly managed by the CLR is called managed code. When managed code is compiled, the
compiler converts the source code into a CPU-independent intermediate language (IL)
code. A just-in-time (JIT) compiler then compiles the IL code into native code, which is
CPU-specific.

(2) .Net Framework Class Library

It contains a huge library of reusable types (classes, interfaces, structures and enumerated values),
which are collectively called types.
(3) Common Language Specification
It contains the specifications for the .Net supported languages and implementation of language
integration.
(4) Common Type System
It provides guidelines for declaring, using and managing types at runtime, and cross-language
communication.
Metadata and Assemblies
Metadata is the binary information describing the program, which is either stored in a portable
executable (PE) file or in memory. An assembly is a logical unit consisting of the assembly
manifest, type metadata, IL code and a set of resources such as image files.
(5) Windows Forms
This contains the graphical representation of any window displayed in the application.
(6) ASP.Net and ASP.Net AJAX
ASP.Net is the web development model and AJAX is an extension of ASP.Net for developing
and implementing AJAX functionality. ASP.Net AJAX contains the components that allow the
developer to update data on a website without a complete reload of the page.
(7) ADO.Net
It is the technology used for working with data and databases. It provides access to data
sources like SQL Server, OLE DB, XML etc. ADO.Net allows connections to data sources
for retrieving, manipulating and updating data.
(8) Windows Workflow Foundation (WF)
It helps in building workflow based applications in Windows. It contains activities, workflow
runtime, workflow designer and a rules engine.
(9) Windows Presentation Foundation
It provides a separation between the user interface and the business logic. It helps in developing
visually stunning interfaces using documents, media, two and three dimensional graphics,
animations and more.
(10) Windows Communication Foundation (WCF)
It is the technology used for building and running connected systems.
(11) Windows Card Space
It provides safety of accessing resources and sharing personal information on the internet.
(12) LINQ
It imparts data-querying capabilities to .Net languages using a syntax similar to the
traditional query language SQL, as sketched below.
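The following minimal example (the data is made up for illustration) queries an in-memory
collection with SQL-like syntax:

// Minimal LINQ example over an in-memory list.
using System;
using System.Collections.Generic;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        List<string> cities = new List<string> { "Delhi", "Mumbai", "Patna", "Pune" };

        // Select the cities starting with 'P', ordered alphabetically.
        var result = from c in cities
                     where c.StartsWith("P")
                     orderby c
                     select c;

        foreach (string city in result)
            Console.WriteLine(city);   // prints Patna, then Pune
    }
}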


An architecture of .Net Framework


ASP source code runs on the ASP web server. The server dynamically generates the HTML
and sends the HTML output to the client's web browser.


2.6.2 Why use ASP?

Microsoft ASP.NET is more than just the next generation of Active Server Pages (ASP).
It provides an entirely new programming model for creating network applications that take
advantage of the Internet.
- Enhanced Reliability
- Easy Deployment
- New Application Models
- Developer Productivity
- Improved Performance and Scalability


2.6.3 The Advantages of ASP
ASP has a number of advantages over many of its alternatives. Here are a few of them.
1. ASP.NET drastically reduces the amount of code required to build large applications.
2. With built-in Windows authentication and per-application configuration, your applications are
safe and secure.
3. It provides better performance by taking advantage of early binding, just-in-time compilation,
native optimization, and caching services right out of the box.
4. The ASP.NET framework is complemented by a rich toolbox and designer in the Visual
Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls,
and automatic deployment are just a few of the features this powerful tool provides.
5. It provides simplicity, as ASP.NET makes it easy to perform common tasks, from simple form
submission and client authentication to deployment and site configuration.
6. Since the source code and HTML are together, ASP.NET pages are easy to write and maintain.
Also, the source code is executed on the server, which gives the web pages a lot of power and
flexibility.
7. All processes are closely monitored and managed by the ASP.NET runtime, so that if a
process dies, a new process can be created in its place, which helps keep your application
constantly available to handle requests.
8. It is a purely server-side technology, so ASP.NET code executes on the server before it is sent
to the browser.
9. Being language-independent, it allows you to choose the language that best applies to your
application, or to partition your application across many languages.
10. ASP.NET makes for easy deployment. There is no need to register components because the
configuration information is built in.
11. The web server continuously monitors the pages, components and applications running on it.
If it notices any memory leaks, infinite loops or other illegal activities, it immediately destroys
those activities and restarts itself.
12. It works easily with ADO.NET using data-binding and page-formatting features. Applications
run faster and can cater to large volumes of users without performance problems.
In short, ASP.NET, the next-generation version of Microsoft's ASP, is a programming framework
used to create enterprise-class web sites, web applications, and technologies. Applications
developed with ASP.NET are accessible on a global basis, leading to efficient information
management. Whether you are building a small business web site or a large corporate web
application distributed across multiple networks, ASP.NET will provide all the features you
could possibly need, and at an affordable cost: FREE!



2.6.4 An Introduction to RDBMS

A Relational Database Management System (RDBMS) is an information system that
presents information as rows contained in a collection of tables, each table possessing a set of
one or more columns.
Nowadays, the relational database is at the core of the information systems of many
organizations, both public and private, large and small. Informix, Sybase and SQL Server are
RDBMSs with worldwide acceptance. Oracle is one of the powerful RDBMS products that
provide efficient and effective solutions for database management.




2.6.5 The Features of SQL Server

- Scalability and Performance:
Realize the scale and performance you've always wanted. Get the tools
and features necessary to optimize performance, scale up individual servers, and
scale out for very large databases.

- High Availability:
SQL Server 2008 Always On provides flexible design choices for selecting an
appropriate high availability and disaster recovery solution for your application. SQL
Server Always On was developed for applications that require high uptime and need
protection against failures within a data center (high availability), as well as adequate
redundancy against data center failures.

- Virtualization Support:
Microsoft provides technical support for SQL Server 2005 and later versions for
the following supported hardware virtualization environments:

o Windows Server 2008 and later versions with Hyper-V
o Microsoft Hyper-V Server 2008 and later versions
o Configurations that are validated through the Server Virtualization Validation
Program (SVVP).

- Replication:
Replication is a set of technologies for copying and distributing data and database
objects from one database to another and then synchronizing between databases to
maintain consistency. Using replication, you can distribute data to different locations and
to remote or mobile users over local and wide area networks, dial-up connections,
wireless connections, and the Internet.

- Enterprise Security:
o SQL Server delivers the most secure database among leading database vendors.
o SQL Server solutions provide everything you need to adhere to security and
compliance policies out of the box. This includes the most up-to-date encryption
technologies, built on our Trustworthy Computing initiatives.


- Management Tools:
SQL Server Management Studio is an integrated environment for
accessing, configuring, managing, administering, and developing all components
of SQL Server. SQL Server Management Studio combines a broad group of
graphical tools with a number of rich script editors to provide access to SQL
Server to developers and administrators of all skill levels.

SQL Server Management Studio combines the features of Enterprise
Manager, Query Analyzer, and Analysis Manager, included in previous releases
of SQL Server, into a single environment. In addition, SQL Server Management
Studio works with all components of SQL Server such as Reporting Services and
Integration Services. Developers get a familiar experience, and database
administrators get a single comprehensive utility that combines easy-to-use
graphical tools with rich scripting capabilities.

- Development Tools
- Programmability
- Spatial and Location Services
- Complex Event Processing (StreamInsight)
- Integration Services
- Integration Services - Advanced Adapters
- Integration Services - Advanced Transforms
- Data Warehouse
- Analysis Services
- Analysis Services - Advanced Analytic Functions
- Data Mining
- Reporting
- Business Intelligence Clients
- Master Data Services




The minimum system requirements are listed below.

Table 1: Hardware and Software Requirements

Processor: Intel Core i3
RAM: 256 MB or more
Operating System: Windows 2008 Server, Windows XP, 2007
Database: SQL Server 2005
Hard Disk space: 50 MB
Web Server: ASP Server
Web Browser: Internet Explorer 5.0 or higher, Google Chrome
Software: Visual Basic .Net 2010
2.7 Software Engineering Paradigm Applied

Conceptual Model
The first consideration in any project development is to define the project's life-cycle
model. The software life-cycle encompasses all the activities required to define, develop, test,
deliver, operate and maintain a software product. Different models emphasize different aspects
of the life cycle, and no single model is appropriate for all types of software. It is important to
define a life-cycle model for each product because the model provides a basis for categorizing
and controlling the various activities required to develop and maintain a software product. A
life-cycle model enhances project manageability, resource allocation, cost control, and product
quality. There are many life-cycle models, such as:

i. The Waterfall Model
ii. The Prototyping Model
iii. Spiral Model
















The Waterfall Model:




The model used in the development of this project is the Waterfall model, for reasons
such as:
- The model is more controlled and systematic.
- All the requirements are identified at the time of initiating the project.


2.8 Use Case Diagrams, ER Diagrams

Use Case Diagram:





Data Flow Diagram:


ER Diagram:








Chapter 3

System Design
__________________________________________________________________

3.1 Modularisation details.

There are three categories of users who can use the application:
1. Customer (login user)
2. Admin (super user)
3. Doctor (login doctor)
The whole application can be divided into the following modules:
1) Admin Module
- Manage Profile:
Admin manages the profiles of users as well as doctors who have registered themselves,
after checking all the details of the user/doctor. Here managing a profile means that the
admin activates or authorises the current user/doctor to become a member of this website.
After becoming a member of the website, a user or doctor is able to make transactions.
- Change password:
Admin can also change their own password.
- View all doctors and search for a doctor:
Admin has full authority to view all details regarding doctors, and can also search for a
doctor by entering some basic details (e.g. city, state, specialty, name).
- View all users and search for a user:
Admin has full authority to view all users who have registered on this website, and can
also search for any particular user in the database (e.g. by active status, city, state, name).
- Feedback from users and doctors:
A user/doctor can send feedback to the admin, and the admin can reply to the user/doctor
by e-mail or mobile number.
- Contact us details:
On this page all details about the website are given, so everyone can contact us easily by
call, message or e-mail. Even visitors or guests can contact us.
- Add doctor details.
- Verify a user account before the user can log in (activate/deactivate account); a sketch of
this step follows the module lists below.
- Verify a doctor account before the doctor can log in (block doctor details option).
- Submit news and update or delete news.
- Logout.
2) Doctor Module
- Login.
- General profile update.
- Make profile: education, hospital, degree, profile photo, degree snapshot.
- View user queries and solve them.
- Inbox (view all messages sent by users).
- Send feedback.
3) User Module
- Profile update.
- Change password.
- Send feedback.
- Search for a doctor (e.g. by name, city, state, specialty, hospital).
- Send a query to a doctor after searching for a disease specialist.
- View query results.
4) Visitor Module
- View current news.
- Search for a doctor by name, city, state, specialty, hospital.
- About us.
- Contact us.
- Services.
- Number of registered users on the website.
- A doctor list shown on the left side.
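As a rough sketch of the admin verification step mentioned in the Admin module (the Login
table and its Verify_Status and U_Email_Id columns appear in the Chapter 4 code; the method
name is hypothetical), activation could be implemented as:

// Hypothetical helper: admin activates a registered user so that login succeeds.
// Class1 is the shared connection class shown in Chapter 4.
// Requires: using System.Data.SqlClient;
public void ActivateUser(string emailId)
{
    Class1 obj = new Class1();
    SqlCommand cmd = new SqlCommand(
        "update Login set Verify_Status='1' where U_Email_Id=@Email", obj.get_con());
    cmd.Parameters.AddWithValue("@Email", emailId);
    obj.open_con();
    cmd.ExecuteNonQuery();   // the login code in Chapter 4 checks Verify_Status == "1"
    obj.close_con();
}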


















3.3 Database Design

Registration Table for Doctor:









Doctor Education:


Registration Table for User:



Login Table(For all Users & Admin):



Appointment:





Forum:



Feedback:


News:

Chapter 4
Coding

Main Class for all connection:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Data;
using System.Data.SqlClient;

public class Class1
{
// Shared connection for the whole project. The @ verbatim string keeps the
// backslash in the server name from being read as an escape sequence.
SqlConnection con = new SqlConnection(@"server=.\sqlexpress; database=DoctorFinder; Integrated Security=true");
public void open_con()
{
con.Open();
}
public void close_con()
{
con.Close();
}
public SqlConnection get_con()
{
return con;
}
}
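A brief usage sketch (the Login table and U_Name column appear elsewhere in this project; the
query itself is illustrative): other pages obtain the shared connection through get_con():

// Hypothetical usage of Class1 from any page in the project
// (requires: using System.Data; using System.Data.SqlClient;).
Class1 obj = new Class1();
SqlDataAdapter ad = new SqlDataAdapter("select U_Name from Login", obj.get_con());
DataTable dt = new DataTable();
ad.Fill(dt);   // SqlDataAdapter opens and closes the shared connection by itself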

















Code for Login:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.SqlClient;
using System.Data;
using System.Web.Configuration;
using System.Net.Mail;
using System.Net;

public partial class Login : System.Web.UI.Page
{
static string sq = "";
static string an = "";
Class1 obj = new Class1();
protected void lnkbtn_Click(object sender, EventArgs e)
{

MultiView1.ActiveViewIndex = 0;
}
protected void btnLogin_Click(object sender, EventArgs e)
{
try
{
string usr = txtEmailId.Text;
string pwrd = txtPassword.Text;
SqlCommand cmd = new SqlCommand("sp_login", obj.get_con());
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("U_Email_Id", usr);
SqlDataAdapter ad = new SqlDataAdapter(cmd);
DataTable dt = new DataTable();
ad.Fill(dt);
if (dt.Rows.Count == 0)
{
lbl_error_message.Text = "InValid UserName";
}

else if (dt.Rows[0][0].ToString().Trim() != pwrd)
{
lbl_error_message.Text = "InValid Password";
}
else
{
if (dt.Rows[0][1].ToString().Trim() == "Admin" && dt.Rows[0][2].ToString().Trim() == "1")
{
string uname = dt.Rows[0][3].ToString();
string email = dt.Rows[0][4].ToString();

Session["email"] = email;
Session["usr"] = uname;
Response.Redirect("~/Admin/Home.aspx");
}
else if (dt.Rows[0][1].ToString().Trim() == "Doctor" && dt.Rows[0][2].ToString().Trim() ==
"1")
{
Session["User"] = dt.Rows[0][5].ToString();
string uname = dt.Rows[0][3].ToString();
string email = dt.Rows[0][4].ToString();

Session["email"] = email;
Session["usr"] = uname;
Response.Redirect("~/Doctor/Home.aspx");
}
else if (dt.Rows[0][1].ToString().Trim() == "User" && dt.Rows[0][2].ToString().Trim() == "1")
{
string uname = dt.Rows[0][3].ToString();
string email = dt.Rows[0][4].ToString();

Session["email"] = email;
Session["usr"] = uname;
Response.Redirect("~/User/Home.aspx");
}

}
}
catch (Exception ex)
{
lbl_error_message.Text = ex.Message;
}
}

protected void txtforgotpass_TextChanged(object sender, EventArgs e)
{
if (txtforgotpass.Text == "")
{
Label3.Visible = false;
}
else
{
Label3.Visible = true;
}
// Parameterized query prevents SQL injection; also guard against an unknown e-mail id.
SqlDataAdapter da = new SqlDataAdapter("select SQuestion, U_Password, Answer from Login where U_Email_Id=@Email", obj.get_con());
da.SelectCommand.Parameters.AddWithValue("@Email", txtforgotpass.Text);
DataTable dt = new DataTable();
da.Fill(dt);
if (dt.Rows.Count > 0)
{
TextBox1.Text = dt.Rows[0][0].ToString(); // show the stored security question
an = dt.Rows[0][2].ToString(); // remember the expected answer
}
}
protected void Button1_Click(object sender, EventArgs e)
{
if (txt_Answer.Text == an)
{
lbl_Message.Text = "";
try
{
// Look up the stored password with a parameterized query.
SqlDataAdapter ad1 = new SqlDataAdapter("select U_Password from Login where U_Email_Id=@Email", obj.get_con());
ad1.SelectCommand.Parameters.AddWithValue("@Email", txtforgotpass.Text);
DataTable dt1 = new DataTable();
ad1.Fill(dt1);

var fromAddress = new MailAddress("go2vks@gmail.com", "From Name");
var toAddress = new MailAddress(txtforgotpass.Text, "To Name");
const string fromPassword = "9934697942";
string subject = "Retrieve Password";
string body = "Your Current Password is:- " + dt1.Rows[0][0].ToString();

var smtp = new SmtpClient
{
Host = "smtp.gmail.com",
Port = 587,
EnableSsl = true,
DeliveryMethod = SmtpDeliveryMethod.Network,
Credentials = new NetworkCredential(fromAddress.Address, fromPassword)
};
using (var message = new MailMessage(fromAddress, toAddress)
{
Subject = subject,
Body = body
})
{
smtp.Send(message);

}
lbl_Message.Text = "Mail send to your e-mail id.";
}
catch (Exception ex)
{
lbl_Message.Text = "Could not send the e-mail - error: " + ex.Message;
}
}
else
{
lbl_Message.Text = "please give correct answer...";
}
}
}


Code for Registration:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.SqlClient;
using System.Data;

public partial class Reg_Form : System.Web.UI.Page
{
Class1 obj = new Class1();
protected void Page_Load(object sender, EventArgs e)
{
try
{
if (IsPostBack == false)
{

SqlDataAdapter ad = new SqlDataAdapter("select * from Country", obj.get_con());
DataTable dt = new DataTable();
ad.Fill(dt);
ddlCountry.DataSource = dt;
ddlCountry.DataTextField = "CName";
ddlCountry.DataValueField = "CId";
ddlCountry.DataBind();
ddlCountry.Items.Insert(0, "select");
SqlDataAdapter da = new SqlDataAdapter("select * from Security_Question",
obj.get_con());
DataTable dt1 = new DataTable();
da.Fill(dt1);
ddlSQuestion.DataSource = dt1;
ddlSQuestion.DataTextField = "SQuestion";
ddlSQuestion.DataBind();
}
}
catch (Exception ex)
{
lbl_Message.Text = ex.Message;
}
}
protected void btnSubmit_Click(object sender, EventArgs e)
{
try
{
string gender = "";
if (RbtnMale.Checked)
{
gender = "male";
}
else if (RbtnFemale.Checked)
{
gender = "Female";
}

string path = "~\images\Users";
string img = path + FileUpload1.PostedFile.FileName;
Server.MapPath(path + "\" + FileUpload1.FileName);
FileUpload1.PostedFile.SaveAs(Server.MapPath(path + FileUpload1.FileName));
//FileUpload1.PostedFile.SaveAs(img);
SqlCommand cmd = new SqlCommand("sp_insert_user", obj.get_con());
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("@UserName", txtname.Text);
cmd.Parameters.AddWithValue("@FathersName", txtFname.Text);
cmd.Parameters.AddWithValue("@DOB", txt_Dob.Text);
cmd.Parameters.AddWithValue("@EmailId", txtEmail.Text);
cmd.Parameters.AddWithValue("@Address", txtaddress.Text);
cmd.Parameters.AddWithValue("@Country", ddlCountry.SelectedItem.ToString());
cmd.Parameters.AddWithValue("@State", ddlstate.SelectedItem.ToString());
cmd.Parameters.AddWithValue("@City", ddlCity.SelectedItem.ToString());
cmd.Parameters.AddWithValue("@PinCode", txtPinCode.Text);
cmd.Parameters.AddWithValue("@Gender", gender);
cmd.Parameters.AddWithValue("@Photo", img);
cmd.Parameters.AddWithValue("@Password", txtpass.Text);
string user1 = "User";
cmd.Parameters.AddWithValue("@UserType", user1);
cmd.Parameters.AddWithValue("@SQuestion", ddlSQuestion.SelectedItem.ToString());
cmd.Parameters.AddWithValue("@Answer", txtAns.Text);

obj.open_con();
cmd.ExecuteNonQuery();
obj.close_con();
lbl_Message.Text = "Submitted";
//Login Insert

SqlCommand cmd1 = new SqlCommand("sp_insert_login", obj.get_con());
cmd1.CommandType = CommandType.StoredProcedure;
cmd1.Parameters.AddWithValue("@U_Name", txtname.Text);
cmd1.Parameters.AddWithValue("@U_Password", txtpass.Text);
cmd1.Parameters.AddWithValue("@SQuestion",
ddlSQuestion.SelectedItem.ToString());
cmd1.Parameters.AddWithValue("@Answer", txtAns.Text);
cmd1.Parameters.AddWithValue("@U_Email_Id", txtEmail.Text);
cmd1.Parameters.AddWithValue("@User_Type", user1);
cmd1.Parameters.AddWithValue("@Verify_Status", "0");
obj.open_con();
cmd1.ExecuteNonQuery();
obj.close_con();
}
catch (Exception ex)
{
lbl_Message.Text = ex.Message;
}
}
protected void ddlCountry_SelectedIndexChanged(object sender, EventArgs e)
{
try
{
// Load the states of the selected country with a parameterized query.
SqlDataAdapter ad = new SqlDataAdapter("select SId, SName from State where CId=@CId", obj.get_con());
ad.SelectCommand.Parameters.AddWithValue("@CId", Convert.ToInt32(ddlCountry.SelectedValue));
DataTable dt = new DataTable();
ad.Fill(dt);
ddlstate.DataSource = dt;
ddlstate.DataTextField = "SName";
ddlstate.DataValueField = "SId";
ddlstate.DataBind();
ddlstate.Items.Insert(0, "select");
}
catch (Exception ex)
{
lbl_Message.Text = ex.Message;
}
}
protected void ddlstate_SelectedIndexChanged(object sender, EventArgs e)
{
try
{
SqlCommand CMD = new SqlCommand("SELECT CityId, CityName FROM City
WHERE (SId = @SId);", obj.get_con());
CMD.Parameters.AddWithValue("@SId", Convert.ToInt32(ddlstate.SelectedValue));
SqlDataAdapter ad = new SqlDataAdapter(CMD);

DataTable dt = new DataTable();
ad.Fill(dt);
ddlCity.DataSource = dt;
ddlCity.DataTextField = "CityName";
ddlCity.DataValueField = "CityId";
ddlCity.DataBind();
ddlCity.Items.Insert(0, "select");
}
catch (Exception ex)
{
lbl_Message.Text = ex.Message;
}
}
}



















Chapter 5
Testing


5.1 Testing Techniques & Strategies.
Testing is vital to the success of any system. Testing is done at different stages within the
development phase. System testing makes a logical assumption that if all parts of the system are
correct, the goals will be achieved successfully. Inadequate testing or no testing leads to errors
that may come up after a long time when correction would be extremely difficult. Another
objective of testing is its utility as a user-oriented vehicle before implementation. The testing of
the system was done on both artificial and live data.

5.1.1. Test Strategy
The purpose of the Project Test Strategy is to document the scope and methods that will
be used to plan, execute, and manage the testing performed within the project. The purpose of
the testing is to ensure that, based on the solutions designed, the system operates successfully.

5.1.2 Unit Testing
Unit testing focuses verification efforts on the smallest unit of software design, the
software component or module. Using the component-level design description as a guide,
important control paths are tested to uncover errors within the boundary of the module. The unit
test is white-box oriented, and the steps can be conducted in parallel for multiple components.
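As an illustration only (MSTest is assumed as the framework; the helper under test is
hypothetical, not part of this project), a unit test for a small validation routine could look like
this:

// Hypothetical MSTest unit test for a small validation helper.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ValidatorTests
{
    // Helper under test: accepts only pin codes of exactly 6 digits.
    static bool IsValidPinCode(string pin)
    {
        long n;
        return !string.IsNullOrEmpty(pin) && pin.Length == 6 && long.TryParse(pin, out n);
    }

    [TestMethod]
    public void ValidPinIsAccepted()
    {
        Assert.IsTrue(IsValidPinCode("800001"));
    }

    [TestMethod]
    public void ShortPinIsRejected()
    {
        Assert.IsFalse(IsValidPinCode("800"));
    }
}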

5.1.3 Integration Testing
Integration testing is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with interfacing. The objective is
to take unit tested components and build a program structure that has been dictated by design.
Integration testing was conducted by testing the different modules: client-server
programs were tested to verify that correct data is passed, the retransmission module was tested
to verify that it gives proper times, and the protocol system was tested to verify that it sends
acknowledgements and, if one is not received, retransmits the packets. The interfaces were
tested thoroughly so that no unpredictable event occurs on pressing any button.

5.1.4 Validation Testing
At the culmination of integration testing, software is completely assembled as a package,
interfacing errors have been uncovered and corrected and a final series of software tests –
validation testing may begin. Validation can be defined as successful when software functions in
a manner that can be reasonably expected by the customer. Software validation is achieved
through a series of black-box testing that demonstrate conformity with requirements. After each
validation test case has been conducted, either the function or performance characteristics
conform to specification and are accepted or a deviation from specification is uncovered and a
deficiency list is created.
In this case, testing was done from the user's perspective. Everything was integrated and
it was made sure that data passes properly from one class to another. The protocol works
properly with respect to client and server, the retransmission class gives proper times, and data
is shown properly. All the things are in place and the desired output comes after giving proper
input. It was also made sure that proper errors are generated if wrong inputs are given.

5.1.5 White Box Testing
It focuses on the program control structure. Here all statements in program have been
executed at least once during testing and that all logical conditions have been exercised.

5.1.6 System Testing
System testing is done when the entire system has been fully integrated. The purpose of
the system testing is to test how the different modules interact with each other and whether the
system provides the functionality that was expected. It consists of the following steps.
- Program Testing
- System Testing
- System Documentation
- User Acceptance Testing

5.1.7 Regression Testing
It is retesting of a previously tested program following modification to ensure that faults
have not been introduced or uncovered as a result of the changes made, and that the modified
system still meets its requirements. It is performed whenever the software or its environment is
changed.

5.1.8 Functional Testing
Functional Testing is performed to test whether a component or the product as a whole
will function as planned and as actually used when sold.

5.1.9 Black Box Testing
This is designed to uncover the errors in functional requirements without regard to the
internal workings of a program. This testing focuses on the information domain of the software,
deriving test case by partitioning the input and output domain of a programming - a manner that
provides thorough test coverage.

5.1.10 Equivalence Partitioning
In the equivalence partitioning method we check the output for different inputs. We
checked the system by entering different inputs, verifying whether it works correctly for all of
them; one representative input per class is enough, as the sketch below shows.
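For instance (a hypothetical sketch; the valid range is assumed), an age input field partitions its
inputs into a few equivalence classes, and one representative value per class is enough to test:

// Hypothetical validation helper and its equivalence classes (valid range assumed 1-120):
//   class 1: valid values      -> test with "35"
//   class 2: below the range   -> test with "0"
//   class 3: above the range   -> test with "150"
//   class 4: non-numeric input -> test with "abc"
class AgeValidator
{
    public static bool IsValidAge(string input)
    {
        int age;
        return int.TryParse(input, out age) && age >= 1 && age <= 120;
    }
}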











5.1.11 Boundary Value Analysis
In boundary value analysis we check the values at the boundaries, such as at the 0th row
or at the end of the rows. Sometimes the code uses an array from position 1 while it actually
takes values from the 0th position, so the system fails at the boundary values. We tested the
application for all boundary values to check whether it works correctly; for example, see the
sketch below.
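For example (array and values hypothetical), the off-by-one error described above is exactly
what boundary-value cases catch:

// Hypothetical boundary-value cases: exercise the first and last positions of an array.
int[] rows = new int[10];          // valid indexes are 0..9
rows[0] = 1;                       // lower boundary: position 0, not 1
rows[rows.Length - 1] = 1;         // upper boundary: the last valid index
// rows[10] = 1;                   // one past the boundary: throws IndexOutOfRangeException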


No. | Requirement | Essential or Desirable | Description of the Requirement | Remarks
RS1 | The system should have a login | Essential | A login box should appear when the system is invoked. | The logins are assigned by the Admin when the user opens an account.
RS2 | The system should have help screens | Essential | Help about the various features of the system should be provided in sufficient detail in a Q&A format. | The policies (like commission charged for various operations etc.) should also be part of the help.
RS3 | The system should 'lock' the login id if a wrong password is entered 3 times in a row | Essential | After 2 false attempts the user should be given a warning, and at the 3rd false attempt the login id should be locked. | This is a must in order to prevent fraudulent users from logging into the system.
RS4 | The user should have the facility to change his passwords | Desirable | The login password and the transaction password should be different. |


5.1.12 Resource Management

5.1.12.1 Roles and Responsibilities

Sr. No. | Roles | Responsibilities
1 | QA Engineer | Prepare and update test cases; testing of builds; logging defects; verifying bug fixes; prepare test results for each build; prepare a defect summary report for each build
2 | QA Lead | Prepare the test plan; review test cases; verify/suggest changes in the test strategy; communicate changes on build dates and verifications
3 | QC Manager | Review the test plan; oversee testing activities



5.1.13 Test Schedule
Since the project deliverables are dynamic, the test schedules are dynamic as well.

5.1.14 Assumptions
- All functional requirements are properly defined and meet users' needs.
- The developers perform adequate unit testing and code review before sending modules
to QA.
- The developers fix all the defects identified during unit testing prior to system testing;
otherwise, the defects should be mentioned in the release notes.
- The application will be delivered on the expected delivery date according to the
schedule. Delivery and downtime delays shall cause adjustments to the test schedule
and can become a risk for on-time product delivery.
- The QA team should be involved in initial project discussions and should have a working
knowledge of the proposed production system prior to beginning integration and
system testing.
- Change control procedures are followed.
- The number of test cases has a direct impact on the amount of time it takes to
execute the test plan.
- During the test process, all required interfaces are available and accessible in the QA
environment.
- Testing occurs on the most current version of the build in the QA environment.
- All incidents identified during testing are documented by QA, and the priority and
severity are assigned based upon previously defined guidelines.
- The Project Manager is responsible for the timely resolution of all defects.
- Defect resolution does not impede testing.
- Communication between all groups on the project is paramount to its success;
therefore QA should be involved in all relevant project communication.
- Sufficient time is incorporated into the schedule not only for testing, but also for unit
testing by developers, test planning, verification of defect fixes, and regression testing
by QA.

5.1.15 Defect Classification
The following are the defect priorities defined according to their precedence:
Urgent: The defect must be resolved immediately for the next build drop, as testing
cannot proceed further.
High: The defect must be resolved as soon as possible because it is impairing
development and/or testing activities. System use will be severely affected until the
defect is fixed.
Medium: The defect should be resolved in the normal course of development
activities. It can wait until a new build or version is created.
Low: The defect repair can be put off indefinitely. It can be resolved in a future
major system revision or not resolved at all.

The following are the defect severities defined according to their precedence:
Causes Crash: The defect results in the failure of the complete software system,
of a sub-system, or of a software unit (program or module) within the system.
Critical: The defect results in the failure of the complete software system, of a
subsystem, or of a software unit (program or module) within the system. There is no way
to make the failed component(s) work; however, there are acceptable processing alternatives
that will yield the desired result.
Major: The defect does not result in a failure, but causes the system to produce
incorrect, incomplete, or inconsistent results, or the defect impairs the systems usability.
Minor: The defect does not cause a failure, does not impair usability, and the
desired processing results are easily obtained by working around the defect.
Enhancement: The defect is the result of non-conformance to a standard, is
related to the aesthetics of the system, or is a request for an enhancement. Defects at this
level may be deferred.

5.1.16 Summary
This chapter documented the results of the Quality Assurance Procedure and the different
variety of tests that were performed on the system implemented in order to verify its
completeness, correctness and user acceptability. This completes the total system development
process. We see that the system developed totally satisfies all the requirements of the user and so
is fully ready for user site deployment.

5.2 Debugging & Code Improvement
In computing, debugging is the process of locating and fixing or bypassing bugs (errors)
in computer program code or in the engineering of a hardware device. To debug a program or
hardware device is to start with a problem, isolate the source of the problem, and then fix it. A
user of a program who does not know how to fix a problem may learn enough about it to be
able to avoid it until it is permanently fixed. When people say they have debugged a program or
"worked the bugs out" of it, they imply that they fixed it so that the bugs no longer exist.
Debugging is a necessary process in almost any new software or hardware development
process, whether a commercial product or an enterprise or personal application program. For
complex products, debugging is done as the result of the unit test for the smallest unit of a
system, again at component test when parts are brought together, again at system test when the
product is used with other existing products, and again during customer beta test, when users try
the product out in a real world situation. Because most computer programs and many
programmed hardware devices contain thousands of lines of code, almost any new product is
likely to contain a few bugs. Invariably, the bugs in the functions that get most use are found and
fixed first. An early version of a program that has lots of bugs is referred to as "buggy."
Debugging tools (called debuggers) help identify coding errors at various development
stages. Some programming language packages include a facility for checking the code for errors
as it is being written.
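As a small illustration in C# (a sketch, not tied to any particular debugger or to this project),
the System.Diagnostics facilities let a program check its own assumptions during development:

// Minimal sketch of debug-time checks with System.Diagnostics.
using System.Diagnostics;

class DebugDemo
{
    static int Divide(int a, int b)
    {
        Debug.Assert(b != 0, "b must not be zero");   // fails loudly in debug builds
        return a / b;
    }

    static void Main()
    {
        Debug.WriteLine("result = " + Divide(10, 2)); // visible in the debugger output window
    }
}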




















Chapter 6
System Security Measures


6.1 Database/Data Security
As the use of the Web grows on both Intranets and the public Internet, information
security is becoming crucial to organizations. The Web provides a convenient, cheap, and
instantaneous way of publishing data. Now that it is extremely easy to disseminate information,
it is equally important to ensure that the information is only accessible to those who have the
rights to use it. With many systems implementing dynamic creation of Web pages from a
database, corporate information security is even more vital.
Previously, strict database access or specialized client software was required to view the
data. Now anyone with a Web browser can view data in a database that is not properly protected.
Never before has information security had so many vulnerable points. As the computing industry
moves from the mainframe era to the client/server era to the Internet era, a substantially
increasing number of points of penetration have opened up. For much of Internet security,
database specialists have had to rely on network administrators implementing precautions such
as firewalls to protect local data. Because of the nature of Intranet/ Internet information access,
however, many security functions fall into a gray area of responsibility. This section describes the
primary areas where security falls within the domain of the DBA, who must create the
information solutions.
New security procedures and technology are pioneered daily, and this section explains
the various security systems involved in solving the current problems. It should serve as a
primer for further study of Web security and a framework for understanding current security
methodology. For Web security, you must address three primary areas:
1. Server security -- ensuring security relating to the actual data or private HTML
files stored on the server
2. User-authentication security -- ensuring login security that prevents unauthorized
access to information
3. Session security -- ensuring that data is not intercepted as it is broadcast over the
Internet or Intranet.
You can view these layers as layers of protection. For each layer of security added, the
system becomes more protected. Like a chain, however, the entire shield may be broken if there
is a weak link.
Server Security
Server security involves limiting access to data stored on the server. Although this field is
primarily the responsibility of the network administrator, the process of publishing data to the
Web often requires information systems specialists to take an active hand in installing and
implementing the security policy.
The two primary methods in which information from databases is published to the Web
are the use of static Web pages and active dynamic Web page creation. These two methods
require almost completely different security mechanisms.
Static Web Pages

Static Web pages are simply HTML files stored on the server. Many database specialists
consider static page creation the simplest and most flexible method of publishing data to the
Web. In a nutshell, a client program is written to query data from a database and generate HTML
pages that display this information.
When published as static Web pages, Web files can be uploaded to any server; for
dynamic creation, however, the Web server usually must be modified (or new scripts or
application software installed). Static pages have the secondary advantage of being generated by
traditional client/server tools such as Visual Basic or PowerBuilder. Because almost any
development system can output text files, only the necessary HTML codes must be added to
make them Web pages. The creation of the pages, therefore, uses standard methods of database
access control such as database security and login controls.
Once created, the files must be uploaded to the Web server. Protecting the documents
stored there occurs in the same manner that any other Web documents would be secured. One of
the most straightforward ways to protect sensitive HTML documents is to limit directory
browsing. Most FTP and Web servers allow directories to be configured so that files stored
within them may be read but the files may not be listed in the directory. This technique prevents
any user who does not know the exact filename from accessing it. Access may be permitted by
simply distributing the exact filenames to authorized personnel.
Directories may also be protected using the integrated operating system security. Some
Web servers allow security limitations to be placed on particular folders or directories using
standard operating system techniques (such as file attributes) and then use this security to restrict
access. This implementation will vary among Web servers. Such restrictions on access to particular files or folders fall under the user-authentication category of security (described in a later section of this article).
Dynamic Page Generation
Favored by large organizations, this method is gaining popularity as the technology to
generate Web pages instantly from a database query becomes more robust. A dynamic Web page
is stored on the Web server with no actual data but instead a template for the HTML code and a
query. When a client accesses the page, the query is executed, and an HTML page containing the
data is generated on the fly. The necessary data is filled into the slots defined in the template file
in much the same way that a mail merge occurs in a word-processing program. A program may
be active on the Web server to generate the necessary Web page, or a CGI script might
dynamically create it.
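A minimal C# sketch of this slot-filling step follows; the {{name}}-style slot syntax and the template file name are assumptions for illustration, and a real system would take the values from the query attached to the page.

// Minimal sketch of dynamic page generation: fill named slots in an HTML
// template, much as a mail merge does. Slot syntax is an assumption.
using System;
using System.IO;

class TemplateFiller
{
    // Replaces each {{slot}} in the template with the supplied value.
    static string Fill(string template, string name, string speciality)
    {
        return template
            .Replace("{{name}}", name)
            .Replace("{{speciality}}", speciality);
    }

    static void Main()
    {
        // Hard-coded values stand in for the results of the page's query.
        string template = File.ReadAllText("doctor_template.html");
        Console.WriteLine(Fill(template, "Dr. A. Sharma", "Cardiology"));
    }
}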
One of the first security issues that a DBA must confront is setting up access to the
database from the Web server. Whether using a CGI script, server-based middleware, or a query
tool, the server itself must have access to the database.
Database Connections
With most of the dynamic connectors to databases, a connection with full access must be
granted to the Web server because various queries will need to access different tables or views to
construct the HTML from the query. The danger is obvious: A single data source on the server
must be given broad access capabilities.
This makes server security crucial. For example, an ODBC data source given full
administrator access could potentially be accessed by any other program on the server. A
program could be designed to retrieve private information from a data source regardless of
whether the program's author is permitted access. This security problem is most dangerous on a
system where users are allowed to upload CGI scripts or programs to run on the server. To
prevent unauthorized access to your data, make sure that the server that owns the database
connector is physically secure and does not permit unrestricted program execution.
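As a sketch of this precaution, the server's data source can connect through a deliberately restricted login rather than an administrator account; the web_reader login below is hypothetical and would be granted SELECT on only the tables the published pages need.

// Sketch: the Web server's database connection uses a least-privilege login.
using System.Data.SqlClient;

class RestrictedConnection
{
    public static SqlConnection Open()
    {
        // "web_reader" is a hypothetical SQL login with read-only rights
        // on a handful of tables -- never a sysadmin account.
        var conn = new SqlConnection(
            "Server=localhost;Database=DoctorFinder;" +
            "User Id=web_reader;Password=example-only");
        conn.Open();
        return conn;
    }
}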
Table Access Control
Standard table access control, if featured in the user authentication system, is more
important on Web applications than on traditional client/server systems. DBAs are often lax in
restricting access to particular tables because few users would know how to create a custom SQL
query to retrieve data from the database. Most access to a database on a client/server system occurs through a purpose-built client that itself limits what the user can reach.
Not so with Web-based applications: Client/server development requires substantial
experience, but even some novices can program or modify HTML code, and most user
productivity applications such as word processors or spreadsheets that can access databases also
save documents as HTML pages. Therefore, more solutions will be created by intermediate users
-- and so valid security is a must. Remember, a little knowledge can be a dangerous thing.
User-Authentication Security
Authentication security governs the barrier that must be passed before the user can access
particular information. The user must have some valid form of identification before access is
granted. Logins are accomplished in two standard ways: using an HTML form or using an HTTP
security request.
If a pass-through is provided to normal database access, traditional security controls can
be brought into play. Figure 1 shows an example of a standard security login through Netscape Communications Corp.'s Netscape Navigator browser.
The HTML login is simply an HTML page that contains the username and password
form fields. The actual IDs and passwords are stored in a table on the server. This information is
brought to the server through a CGI script or some piece of database middleware for lookup in a
user identification database. This method has the advantage of letting the DBA define a
particular user's privilege. By using a table created by the DBA, numerous security privileges
specific to a particular project can be defined.
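A minimal C# sketch of such a lookup follows; the WebUsers table, its columns, and the salted SHA-256 hashing are illustrative assumptions rather than a prescribed design -- the point is that passwords are stored and compared as hashes, never as plain text.

// Sketch: verify an HTML-form login against a user table on the server.
using System;
using System.Data.SqlClient;
using System.Security.Cryptography;
using System.Text;

class LoginCheck
{
    // Salted SHA-256 hash of a password, stored as Base64 text.
    static string Hash(string salt, string password)
    {
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(salt + password));
            return Convert.ToBase64String(digest);
        }
    }

    public static bool Verify(SqlConnection conn, string user, string password)
    {
        var cmd = new SqlCommand(
            "SELECT Salt, PasswordHash FROM WebUsers WHERE UserName = @u", conn);
        cmd.Parameters.AddWithValue("@u", user);   // parameterized, not concatenated
        using (var reader = cmd.ExecuteReader())
        {
            if (!reader.Read()) return false;      // unknown user
            string salt = reader.GetString(0);
            string stored = reader.GetString(1);
            // On success the caller would issue a session cookie (a random,
            // server-side session ID) so the stateless HTTP connection can
            // recognize the user on later requests.
            return stored == Hash(salt, password);
        }
    }
}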
Once a login has occurred, a piece of data called a "cookie" can be written onto the client
machine to track the user session. A cookie is data (similar to a key and a value in an .ini file)
sent from the Web server and stored by a client's browser. The Web server can then send a
message to the browser, and the data is returned to the server. Because an HTTP connection is
not persistent, a user ID could be written as a cookie so that the user might be identified during
the duration of the session.
HTML form login security, however, must be implemented by hand. Often this means
reinventing the wheel. Not only must a database table or other file be kept to track users and
passwords, but authentication routines must be performed, whether through CGI script or via
another method. Additionally, unless using a secured connection (see the section on SSL later in
this article), both the username and password are broadcast across the network, where they might
be intercepted.
HTML form login is excellent when security of the data is not paramount yet specialized
access controls are required. Browser login is most useful when it is integrated with existing
database security through some type of middleware. Even with users properly authenticated,
additional security concerns arise.
Session Security
After the user has supplied proper identification and access is granted to data, session
security ensures that private data is not intercepted or interfered with during the session. The
basic protocols of the network do not set up a point-to-point connection, as a telephone system
does. Instead, information is broadcast across a network for reception by a particular machine.
TCP/IP is the basic protocol for transmission on the Internet. The protocol was never
designed for security, and as such it is very insecure. Because data sent from one machine to
another is actually broadcast across the entire network, a program called a "packet sniffer" can be
used to intercept information packets bound for a particular user. Therefore, even though a user
has properly logged onto a system, any information that is accessed can be intercepted and
captured by another user on the network. There is no easy way to prevent this interception except
by encrypting all of the information that flows both ways.

VALIDATION CHECKS:
I have used the following types of checks:
a. Data type
b. Length
c. Constraints
d. Blank field
e. Format
Data type:
Character fields accept only character data, numeric fields only numbers, and date fields only dates. A numeric value is never inserted into a date field, and characters are never accepted in a numeric field; for example, a phone-number field rejects letters and displays a message if the user enters them wrongly. Only when the error is corrected can the user perform further operations.
Length:
Once a maximum length is defined, the field never accepts more data. For example, if a numeric field is defined with length 5, it stores at most five digits. If the user enters more characters than allowed, a message is displayed and processing stops.
Constraints:
A valid range is defined for each field, and data outside that range produces an error message. For example, a product code must be exactly four characters, and a date field must be eight characters.
Blank field:
When the user adds a record and a required field is left blank, a message is displayed without halting the program, but processing stops until the field is filled.
Format:
A single predefined format is used consistently rather than changing from day to day; for example, the date format DD/MM/YYYY (e.g. 01/01/2005) is used in every date field. If the user enters another format, a message is displayed.
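A small C# sketch of these checks is given below; the five-digit limit and the DD/MM/YYYY format mirror the examples above and are illustrative only.

// Sketch of the validation checks described above: data type, length,
// blank field, and format.
using System;
using System.Globalization;

class ValidationChecks
{
    // Type, length, and blank-field checks for a numeric field.
    static bool IsValidNumber(string input, int maxLength)
    {
        if (string.IsNullOrWhiteSpace(input)) return false;   // blank field
        if (input.Length > maxLength) return false;           // length check
        foreach (char c in input)
            if (!char.IsDigit(c)) return false;               // data-type check
        return true;
    }

    // Format check: dates must be entered as DD/MM/YYYY, e.g. 01/01/2005.
    static bool IsValidDate(string input)
    {
        DateTime parsed;
        return DateTime.TryParseExact(input, "dd/MM/yyyy",
            CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed);
    }

    static void Main()
    {
        Console.WriteLine(IsValidNumber("12345", 5));   // True
        Console.WriteLine(IsValidNumber("12a45", 5));   // False: not numeric
        Console.WriteLine(IsValidDate("01/01/2005"));   // True
        Console.WriteLine(IsValidDate("2005-01-01"));   // False: wrong format
    }
}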

Public and Private Key Security
The world of encryption is often a fairly arcane field of study. The growth -- as well as
the insecurity -- of the Internet has forced users unfamiliar with even the basic concepts of
cryptography to become at least acquainted with its common implementations.
Two basic types of encryption are used in Web security: secret-key security (using a
single key) and public-key security (using two keys).
Secret-key security (which is also known as symmetrical-key security) is somewhat
familiar to most people. A Little Orphan Annie decoder ring is a common example. The secret
key, in this case the decoder ring, is used by each party to encrypt and decrypt messages. Both
parties must have access to the same private key for them to exchange messages. If the key is
lost or exposed, the system is compromised. Public-key security is a little more complicated.
With public-key security, each individual holds two keys, one public and one private. The public
key is freely published, and the private key is kept private. Once a message is encrypted with one
key, it cannot be decoded without the other key.
Using this type of encryption, someone can take a data file and encode it with your public
key. Only your private key can be used to decode it. Likewise, if you encode a data file with your
private key, it can only be decoded with your public key. Therefore, the receiver of the data file
knows that it came from you because only your private key can generate a file that can be
decoded by the public key. This is so reliable, in fact, that it is admissible in a court of law. Only
you, or someone with access to your private key, could possibly have created data that can be
decoded with your public key.
The primary difference between implementing these two systems is computational. With a secret-key system, encryption and decryption of the same data can take place between 100 and 10,000 times faster than with a public-key system. Secret-key systems often use a smaller key, perhaps even a user password. Public-key systems use computers to generate the keys, each of which is usually 512 or 1024 bits long. That's about 50 to 100 characters long -- not easy to remember off the top of your head. Most Internet systems use a combination of the two to provide secure communication: typically, the public-key encryption system is used to encrypt a secret key (usually machine-generated based on a time code).
Both the server and the client encrypt a secret key with their private keys and send the
encrypted data and their public keys to each other. Alternatively, the public keys might be
retrieved from a trusted third party such as a Certificate Server (which I describe later in this
article).
The public keys are now used to decode the data, so both the client and the server now have
secret keys. When exchanging information, the data is encrypted with the secret key and sent
between the machines. This system combines the authentication and extra security of a public-
key system with the speed and convenience of a secret-key system.
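The following C# sketch illustrates this hybrid combination using the .NET RSA and AES classes. It is a toy demonstration of the idea -- a secret key encrypts the bulk data, a public key wraps the secret key -- not the SSL protocol itself.

// Sketch of hybrid encryption: fast symmetric AES for the data,
// slower asymmetric RSA only for the small AES key.
using System;
using System.Security.Cryptography;
using System.Text;

class HybridEncryption
{
    static void Main()
    {
        byte[] message = Encoding.UTF8.GetBytes("private patient record");

        using (var rsa = RSA.Create())     // receiver's public/private key pair
        using (var aes = Aes.Create())     // machine-generated secret key
        {
            // Encrypt the bulk data with the secret key.
            byte[] cipher;
            using (var enc = aes.CreateEncryptor())
                cipher = enc.TransformFinalBlock(message, 0, message.Length);

            // Encrypt only the small secret key with the public key.
            byte[] wrappedKey = rsa.Encrypt(aes.Key, RSAEncryptionPadding.OaepSHA1);

            // Receiver: unwrap the secret key with the private key, then decrypt.
            byte[] key = rsa.Decrypt(wrappedKey, RSAEncryptionPadding.OaepSHA1);
            aes.Key = key;
            byte[] plain;
            using (var dec = aes.CreateDecryptor())
                plain = dec.TransformFinalBlock(cipher, 0, cipher.Length);

            Console.WriteLine(Encoding.UTF8.GetString(plain));
        }
    }
}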
Chapter 7
MAINTENANCE
Maintenance of the project is easy: because of its modular design and concept, any modification can be made very easily. All data are stored in the software as per user need, and if the user wants to change a particular item, that change is reflected everywhere in the software. The kinds of maintenance applied are:
BREAKDOWN MAINTENANCE:
This maintenance is applied when an error occurs, the system halts, and further processing cannot be done. At this point the user can consult the documentation or contact us for rectification, and we will analyze the problem and change the code if needed. Example: if the user gets the error "report width is larger than paper size" while printing and reports cannot be generated, then consulting the help documentation and changing the default printer's paper size to A4 will rectify the problem.
PREVENTIVE MAINTENANCE:
The user carries out this maintenance at regular intervals, for the smooth functioning of the software, following the procedures and steps mentioned in the manual.
Some reasons for maintenance are:
> Error correction: rectifying errors that were not caught during testing and surface after the system has been implemented; this is called corrective maintenance.
> New or changed requirements: changes made when business requirements change due to changing opportunities.
> Improved performance or maintainability: changes made to improve system performance or to make the system easier to maintain in the future; this is called preventive maintenance.
> Advances in technology: adaptive maintenance includes all the changes made to a system in order to introduce a new technology.
Chapter 8
Cost Estimation of Project

Cost in the project is due to the requirements in the software, hardware, and human
resources. The size of the project is the primary factor for cost; the other factors have a lesser
effect.
The Constructive Cost Model (COCOMO), developed by Boehm, helps to estimate the total effort in terms of person-months of the technical staff.

Overview of COCOMO
The COCOMO cost estimation model is used by thousands of software project managers,
and is based on a study of hundreds of software projects. Unlike other cost estimation models,
COCOMO is an open model, so all of the details are published, including:
 The underlying cost estimation equations
 Every assumption made in the model (e.g. "the project will enjoy good management")
 Every definition (e.g. the precise definition of the Product Design phase of a project)
 The costs included in an estimate are explicitly stated (e.g. project managers are
included, secretaries aren't)
Because COCOMO is well defined, and because it doesn't rely upon proprietary
estimation algorithms, Costar offers these advantages to its users:
 COCOMO estimates are more objective and repeatable than estimates made by
methods relying on proprietary models
 COCOMO can be calibrated to reflect your software development environment, and
to produce more accurate estimates
Costar is a faithful implementation of the COCOMO model that is easy to use on small
projects, and yet powerful enough to plan and control large projects.
Typically, you'll start with only a rough description of the software system that you'll be
developing, and you'll use Costar to give you early estimates about the proper schedule and
staffing levels. As you refine your knowledge of the problem, and as you design more of the
system, you can use Costar to produce more and more refined estimates.
Costar allows you to define a software structure to meet your needs. Your initial estimate
might be made on the basis of a system containing 3,000 lines of code. Your second estimate
might be more refined so that you now understand that your system will consist of two
subsystems (and you'll have a more accurate idea about how many lines of code will be in each
of the subsystems). Your next estimate will continue the process -- you can use Costar to define
the components of each subsystem. Costar permits you to continue this process until you arrive
at the level of detail that suits your needs.
One word of warning: it is so easy to use Costar to make software cost estimates that it is possible to misuse it -- every Costar user should spend the time to learn the underlying COCOMO assumptions and definitions from Software Engineering Economics and Software Cost Estimation with COCOMO II.









Introduction of COCOMO Model:
The most fundamental calculation in the COCOMO model is the use of the Effort
Equation to estimate the number of Person-Months required to develop a project. Most of the
other COCOMO results, including the estimates for Requirements and Maintenance, are derived
from this quantity.

Basic COCOMO Model

Source Lines of Code
The COCOMO calculations are based on your estimates of a project's size in Source
Lines of Code (SLOC). SLOC is defined such that:
 Only source lines that are DELIVERED as part of the product are included -- test drivers and other support software are excluded
 SOURCE lines are created by the project staff -- code created by application generators is excluded
 One SLOC is one logical line of code
 Declarations are counted as SLOC
 Comments are not counted as SLOC
The original COCOMO 81 model was defined in terms of Delivered Source Instructions,
which are very similar to SLOC. The major difference between DSI and SLOC is that a single
Source Line of Code may be several physical lines. For example, an "if-then-else" statement
would be counted as one SLOC, but might be counted as several DSI.
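As a toy illustration of these counting rules, the following C# sketch counts logical lines while skipping blank and comment-only lines; a real SLOC counter must also handle block comments and statements that span several physical lines.

// Toy SLOC counter: blank lines and comment-only lines are not SLOC.
using System;
using System.IO;
using System.Linq;

class SlocCounter
{
    static int CountSloc(string path)
    {
        return File.ReadLines(path)
            .Select(line => line.Trim())
            .Count(line => line.Length > 0 &&        // not blank
                           !line.StartsWith("//"));  // not a comment
    }

    static void Main(string[] args)
    {
        Console.WriteLine("{0} SLOC", CountSloc(args[0]));
    }
}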
The Scale Drivers
In the COCOMO II model, some of the most important factors contributing to a project's
duration and cost are the Scale Drivers. You set each Scale Driver to describe your project; these
Scale Drivers determine the exponent used in the Effort Equation.
The 5 Scale Drivers are:
 Precedentedness
 Development Flexibility
 Architecture / Risk Resolution
 Team Cohesion
 Process Maturity
Note that the Scale Drivers have replaced the Development Mode of COCOMO 81. The first two Scale Drivers, Precedentedness and Development Flexibility, actually describe much the same influences as the original Development Mode did.
Cost Drivers
COCOMO II has 17 cost drivers – you assess your project, development environment,
and team to set each cost driver. The cost drivers are multiplicative factors that determine the
effort required to complete your software project. For example, if your project will develop
software that controls an airplane's flight, you would set the Required Software Reliability
(RELY) cost driver to Very High. That rating corresponds to an effort multiplier of 1.26,
meaning that your project will require 26% more effort than a typical software project.
COCOMO II defines each of the cost drivers, and the Effort Multiplier associated with
each rating. Check the Costar help for details about the definitions and how to set the cost
drivers.
COCOMO II effort equation
The COCOMO II model makes its estimates of required effort (measured in Person-Months, PM) based primarily on your estimate of the software project's size (as measured in thousands of SLOC, KSLOC):

Effort = 2.94 * EAF * (KSLOC)^E

where
EAF is the Effort Adjustment Factor derived from the Cost Drivers
E is an exponent derived from the five Scale Drivers

As an example, a project with all Nominal Cost Drivers and Scale Drivers would have an EAF of
1.00 and exponent, E, of 1.0997. Assuming that the project is projected to consist of 8,000 source
lines of code, COCOMO II estimates that 28.9 Person-Months of effort is required to complete
it:

Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months
Effort Adjustment Factor
The Effort Adjustment Factor in the effort equation is simply the product of the effort multipliers
corresponding to each of the cost drivers for your project.

For example, if your project is rated Very High for Complexity (effort multiplier of 1.34), and
Low for Language & Tools Experience (effort multiplier of 1.09), and all of the other cost
drivers are rated to be Nominal (effort multiplier of 1.00), the EAF is the product of 1.34 and
1.09.

Effort Adjustment Factor = EAF = 1.34 * 1.09 = 1.46

Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months
COCOMO II schedule equation
The COCOMO II schedule equation predicts the number of months required to complete your software project. The duration of a project is based on the effort predicted by the effort equation:

Duration = 3.67 * (Effort)^SE

where
Effort is the effort from the COCOMO II effort equation
SE is the schedule equation exponent derived from the five Scale Drivers

Continuing the example, and substituting the exponent of 0.3179 that is calculated from the scale
drivers, yields an estimate of just over a year, and an average staffing of between 3 and 4 people:

Duration = 3.67 * (42.3)^0.3179 = 12.1 months

Average staffing = (42.3 Person-Months) / (12.1 Months) = 3.5 people
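The following C# sketch reproduces this worked example; the constants 2.94 and 3.67 come from the equations above, and the exponents E = 1.0997 and SE = 0.3179 are taken as given from the text rather than derived from the Scale Driver ratings.

// The COCOMO II effort and schedule equations, applied to the example.
using System;

class Cocomo2Example
{
    static void Main()
    {
        double ksloc = 8.0;     // project size: 8,000 SLOC
        double eaf   = 1.46;    // Very High Complexity (1.34) * Low experience (1.09)
        double e     = 1.0997;  // effort exponent from the Scale Drivers
        double se    = 0.3179;  // schedule exponent from the Scale Drivers

        double effort   = 2.94 * eaf * Math.Pow(ksloc, e);   // Person-Months
        double duration = 3.67 * Math.Pow(effort, se);       // Months
        double staffing = effort / duration;                 // people

        Console.WriteLine("Effort:   {0:F1} Person-Months", effort);   // ~42.3
        Console.WriteLine("Duration: {0:F1} months", duration);        // ~12.1
        Console.WriteLine("Staffing: {0:F1} people", staffing);        // ~3.5
    }
}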














Chapter 9
Snapshots

Login Page:




Registration Page:

User Home:




Future scope & Further
Enhancement of the Project
________________________________________________________________________

This project can be enhanced to provide further functionality to the customers. It is unreasonable to consider a computer-based information system complete or finished; the system continues to evolve throughout its life cycle even if it is successful, and that is the case with this system too. Owing to the creative nature of design work, some lapses and miscommunications between the users and the developers remain, so certain aspects of the system must be modified as operational experience is gained with it. As users work with the system, they develop ideas for changes and enhancements.

































Bibliography


1. Herbert Schildt, "The Complete Reference – ASP.NET Using C#".

2. Horstmann and Gary Cornell, "ASP.Net Volume I".

3. Phil Hanna, "The Complete Reference – AJAX 2.0".

4. Anisha Bhakaria, "JSP in 21 Days".

5. Roger Pressman, "Software Engineering".

6. Grady Booch, "UML Guide".

7. Ivan Bayross, "SQL, PL/SQL".

8. Bill Kennedy, "HTML Guide".

9. David Flanagan, "JavaScript Guide".

10. Henry Korth, "Database System Concepts".

11. www.sun.com, "ASP Documentation".

12. www.oracle.com, "Oracle Documentation".

13. www.google.co.in, "Google Search".


















Glossary


 API – Application Programming Interface.

 DBMS – Database Management System: a complex set of programs that control the organization, storage, and retrieval of data.

 GUI – Graphical User Interface: an interface with windows, buttons, and menus used to carry out tasks.

 SQL Client (ADO.Net) – Database connectivity.

 SqlClient API – Supports application-to-driver-manager communication.

 SqlClient Driver API – Supports driver-manager-to-driver implementation communication.

 ODBC – Open Database Connectivity.

 Project – Any piece of work that is undertaken or attempted.

 Report – A written document describing the findings of some individual or group.

 SQL – Structured Query Language.











APPENDIX
APPENDIX A
ABOUT THE OPERATING SYSTEM
Windows is the world's most popular operating system, and one reason for this is its Graphical User Interface (GUI). Windows lets users issue commands by clicking icons and work with programs within easily manipulated screens called (appropriately) windows. Windows 98 represents the marriage of the Windows operating system and Internet access. This unique melding of form and function, known as Web integration, helps the user perform routine computer tasks such as writing a letter while maintaining seamless access to the information needed from the Internet. Web integration also changes the way we interact with the Windows operating system. Command and navigation procedures, as well as the look of the Windows 98 interface, all more closely resemble their counterparts on the Web. Windows 98 lets the user manage files, and the folders that contain them, using the methodology of the Internet and the World Wide Web. Thus Windows offers these advantages:
Easier to use: With desktop options such as single-clicking to open files and the addition of browse buttons in every window. The user can use multiple monitors with a single computer, dramatically increasing the size of the workspace. Installing new hardware is easy because Windows 98 supports the Universal Serial Bus standard, allowing the user to plug in new hardware and use it immediately without restarting the computer.
More reliable: The user can consult the online support website for answers to common questions and to keep copies of Windows up to date. Windows 98 tools can regularly check and test the hard disk and system files and even automatically fix some problems. The troubleshooters and the Dr. Watson diagnostic tool also help to solve computer problems.
Faster: By using the Maintenance Wizard we can easily improve the computer's speed and efficiency. The power-management feature allows newer computers to go into hibernation mode and awaken instantly instead of requiring the computer to be shut down and restarted. We can use the FAT32 file system to store files more efficiently and save hard disk space.
True Web integration: The Internet Connection Wizard makes connecting to the Web simple. Using the Web-style Active Desktop, the user can view web pages as the desktop wallpaper. In Microsoft Outlook Express we can send e-mail and post messages to Internet newsgroups.
More entertaining: Windows 98 supports DVD, digital audio, and VRML, so the user can play high-quality movies and audio on the computer as well as see the full effect of web pages that use virtual-reality features. The user can also watch television broadcasts and check TV program listings by using Microsoft WebTV for Windows.

APPENDIX B
ABOUT VISUAL BASIC .Net
Today's software development needs a GUI-based front-end tool that can connect to relational database engines. This gives the programmer the opportunity to develop client/server based commercial applications.
These applications give users the power and ease of use of a GUI together with the multi-user capabilities of NT-based RDBMS engines like SQL SERVER 2008.
From the array of GUI-based front-end tools, Visual Basic .NET was selected because it offers strong compatibility with SQL SERVER 2008, and its security integrates with that of the SQL SERVER 2008 database.
VISUAL BASIC .NET offers a host of technical advantages over many other front-end tools.












APPENDIX C
Introduction to the SQL SERVER 2008 Server
This chapter provides an overview of the SQL SERVER 2008 server. The topics include:
 Introduction to Databases and Information Management
 Database Structure and Space Management
 Memory Structure and Processes
 The Object-Relational Model for Database Management
 Data Concurrency and Consistency
 Distributed Processing and Distributed Databases
 Startup and Shutdown Operations
 Database Security
 Database Backup and Recovery
 Data Access

INTRODUCTION TO DATABASES AND INFORMATION
MANAGEMENT
A database server is the key to solving the problems of information management. In
general, a server must reliably manage a large amount of data in a multi-user environment
so that many users can concurrently access the same data. All this must be accomplished
while delivering high performance. A database server must also prevent unauthorized
access and provide efficient solutions for failure recovery.
The SQL SERVER 2008 server provides efficient and effective solutions with the following
features:

Client/server environments (distributed processing): To take full advantage of a given computer system or network, SQL SERVER 2008 allows processing to be split between the database server and the client application programs. The computer running the database management system handles all of the database server responsibilities, while the workstations running the database application concentrate on the interpretation and display of data.

Large databases and space management: SQL SERVER 2008 supports the largest of databases, which can contain terabytes of data. To make efficient use of expensive hardware devices, SQL SERVER 2008 allows full control of space usage.

Many concurrent database users: SQL SERVER 2008 supports large numbers of concurrent users executing a variety of database applications operating on the same data. It minimizes data contention and guarantees data concurrency.

High transaction processing performance: SQL SERVER 2008 maintains the preceding features with a high degree of overall system performance. Database users do not suffer from slow processing performance.

High availability: At some sites, SQL SERVER 2008 works 24 hours per day with no down time to limit database throughput. Normal system operations such as database backup and partial computer system failures do not interrupt database use.

Controlled availability: SQL SERVER 2008 can selectively control the availability of data, at the database level and sub-database level. For example, an administrator can disallow use of a specific application so that the application's data can be reloaded, without affecting other applications.

Openness, industry standards: SQL SERVER 2008 adheres to industry-accepted standards for the data access language, operating systems, user interfaces, and network communication protocols. It is an open system that protects a customer's investment. SQL SERVER 2008 also supports the Simple Network Management Protocol (SNMP) standard for system management. This protocol allows administrators to manage heterogeneous systems with a single administration interface.

Manageable security: To protect against unauthorized database access and use, SQL SERVER 2008 provides fail-safe security features to limit and monitor data access. These features make it easy to manage even the most complex design for data access.

Database-enforced integrity: SQL SERVER 2008 enforces data integrity -- business rules that dictate the standards for acceptable data. This reduces the costs of coding and managing checks in many database applications.

Portability: SQL SERVER 2008 software works under different operating systems. Applications developed for SQL SERVER 2008 can be ported to any operating system with little or no modification.

Compatibility: SQL SERVER 2008 software is compatible with industry standards, including most industry-standard operating systems. Applications developed for SQL SERVER 2008 can be used on virtually any system with little or no modification.

Distributed systems: For networked, distributed environments, SQL SERVER 2008 combines the data physically located on different computers into one logical database that can be accessed by all network users. Distributed systems have the same degree of user transparency and data consistency as non-distributed systems, yet receive the advantages of local database management.

Replicated environments: SQL SERVER 2008 software lets you replicate groups of tables and their supporting objects to multiple sites. SQL SERVER 2008 supports replication of both data- and schema-level changes to these sites. SQL SERVER 2008's flexible replication technology supports basic primary-site replication as well as advanced dynamic and shared-ownership models.


The following sections provide a comprehensive overview of the SQL SERVER 2008
architecture. Each section describes a different part of the overall architecture.