
Project Report

On

STUDY CENTER MANAGEMENT

Project Submitted in partial fulfillment of


the requirement of the award of degree
of

Master of Science -IT


(2006-2009)

Submitted by
Ankur Kaushik
Kuldeep Bhojak
M.Sc.IT IV Semester
Under Guidance of: Submitted to:
Sh. Hardayal Singh Sh. Hardayal Singh
H.O.D. (MCA Deptt.) H.O.D. (MCA Deptt.)
Engineering College Engineering College
Bikaner Bikaner
University Of Rajasthan, Jaipur (Rajasthan)

(Year 2008)
Certificate
Acknowledgement

A project like this takes quite a lot of time to do properly. As is often the
case, this project owes its existence, and certainly its quality, to a number of
people whose names do not appear on the cover. Among them is one of the
most extraordinary programmers it has been our pleasure to work with,
Mr. Tarun Singh Bundela, who did a superb job of technically editing this
project. He did more than just check the facts, offering thoughtful logic
where needed to improve the project as a whole.

We also thank Sh. Hardayal Singh (H.O.D., MCA Deptt.,
Engineering College Bikaner), who deserves credit for helping us complete the
project and for taking care of all the details that most programmers really don't
think about. Errors and confusions are our responsibility, but the quality of
the project is to his credit, and we can only thank him.

We are highly thankful and feel obliged to the ICFAI University Bikaner
Center staff members for their kind cooperation and valuable suggestions on our
project work.

We owe our gratitude to our friends and colleagues in the computer
field for their cooperation and support.

We thank God for being on our side.

(Ankur Kaushik)
(Kuldeep Bhojak)
Table of Contents

Chapter 1 Introduction

Chapter 2 Development model

Chapter 3 System Study

Chapter 4 Project Monitoring System

Chapter 5 System Analysis

Chapter 6 Technology Used

Chapter 7 System Design

Chapter 8 System Testing

Chapter 9 System Implementation

Chapter 10 Conclusion

Chapter 11 Scope of the Project


Chapter 1

Introduction
About Organization

The ICFAI University refers to the Universities sponsored by the Institute of


Chartered Financial Analysts of India (hereinafter referred to as the Institute) in
Uttarakhand, Tripura, Sikkim, Meghalaya, Mizoram, Nagaland, and Jharkhand
under respective legislations. The Governments of Rajasthan, Chhattisgarh and
Punjab issued letters of intent to the Institute for the establishment of Universities.
Each University is a separate and independent legal entity. Consequently, the
University confers degrees at Bachelor’s, Master’s and Doctoral levels on eligible
students subject to the University Regulations.

The University Grants Commission has included the Institute of Chartered


Financial Analysts of India University at Dehradun (Uttarakhand) and at
Agartala (Tripura) in the list of Universities maintained by the University Grants
Commission under Section 2(f) of the UGC Act, 1956.

The University believes in creating and disseminating knowledge and skills in


core and frontier areas through innovative educational programs, research,
consulting and publishing, and developing a new cadre of professionals with a
high level of competence and deep sense of ethics and commitment to the code of
professional conduct.

A number of educational programs are offered in management, finance, banking,


insurance, accounting, law, information technology, arts, commerce, education
and science and technology at Bachelor’s and Master’s levels on full-time campus
and flexible learning formats. Examinations are conducted at over 168 test centers
all over India, four times a year. The University has no study centers outside the
authorized jurisdictions.

The University also allows private candidates to enroll in various programs if they
satisfy the eligibility criteria. The University does not provide any courseware, nor
conduct any contact classes, nor offer any other support services to private
candidates. Such candidates have to prepare for the programs on their own as per
the curriculum and are required to attend the examinations, as per the rules and
regulations of the University. The University has no study centers/branches
outside the jurisdiction.

This project is intended to help manage an ICFAI study center. In it we have
tried to include all the necessary requirements of a study center.
Chapter 2

Development model

Software Process Model

Our project life cycle uses the waterfall model, also known as classic life cycle
model or linear sequential model.

System/Information Engineering -> Analysis -> Design -> Code -> Test

The Waterfall Model

The waterfall model encompasses the following activities:

1. System/information Engineering and Modeling

System Engineering and Analysis encompass requirements gathering at the system


level with a small amount of Top-level design and analysis. Information
Engineering encompasses requirements gathering at the strategic business level
and at the business area level.

2. Software requirements analysis


Software requirements analysis involves requirements for both the system and the
software being documented and reviewed with the customer.

3. Design

Software design is actually a multi-step process that focuses on four distinct
attributes of a program: data structure, software architecture, interface
representation, and procedural detail. The design process translates requirements
into a representation of the software that can be assessed for quality before coding
begins.

4. Code Generation

Code-Generation phase translates the design into a machine-readable form.

5. Testing

Once code has been generated, program testing begins. Testing focuses on the
logical internals of the software, ensuring that all statements have been tested, and
on the functional externals; that is, conducting tests to uncover errors and ensure
that defined input will produce actual results that agree with required results.

6. Support

Software will undoubtedly undergo change after it is delivered to the customer.


Change will occur because errors have been encountered, because the software
must be adapted to accommodate changes in its external environment or because
the customer requires functional or performance enhancements.
Chapter 3

System Study
Before the project can begin, it becomes necessary to estimate the work to be
done, the resources that will be required, and the time that will elapse from start to
finish. While preparing such a plan we visited the site several times.

3.1 Project planning objectives

The objective of software project planning is to provide a framework that enables


the management to make reasonable estimates of resources, cost, and schedule.
These estimates are made within limited time frame at the beginning of a software
project and should be updated regularly as the project progresses. In addition,
estimates should attempt to define best case and worst case scenarios so that
project outcomes can be bounded.

3.2 Software Scope

The first activity in software project planning is the determination of software


scope. Software scope describes the data and control to be processed, function,
performance, constraints, interfaces, and reliability.

3.2.1 Gathering Information Necessary for Scope

The most commonly used technique for bridging the communication gap between
the customer and the software developer, and for getting the communication
process started, is to conduct a preliminary meeting or interview. When we visited
the site we were introduced to the manager of the center; two other persons were
present, one the technical adviser and the other the cost accountant. At first
neither side knew what to ask or say, and we were worried that what we said
would be misinterpreted.

We started by asking context-free questions; that is, a set of questions that would
lead to a basic understanding of the problem. The first set of context-free
questions was like this:

• What do you want to be done?
• Who will use this solution?
• What is wrong with your existing working system?
• Is there another source for the solution?
• Can you show us (or describe) the environment in which the solution
will be used?

After the first round of questions we revisited the site and asked many
more, leading to a final set of questions:

• Are our questions relevant to the problem that you need to be


solved?
• Are we asking too many questions?
• Should we be asking you anything else?

3.2.2 Feasibility

Not everything imaginable is feasible, not even in software. Software feasibility


has four dimensions:

Technology—Is the project technically feasible? Is it within the state of the art?

Finance—Is it financially feasible?

Time—Will the project be completed within the specified time?

Resources—Does the organization have the resources needed to succeed?

After taking the above dimensions into consideration, we found it feasible for us
to develop this project.

3.3 Software Project Estimation

Software cost and effort estimation will never be an exact science. Too many
variables—human, technical, environmental, political—can affect the ultimate
cost of software and the effort applied to develop it. However, software project
estimation can be transformed from a black art into a series of systematic steps
that provide estimates with acceptable risk.

To achieve reliable cost and effort estimates, a number of options arise:

1. Delay estimation until late in the project (since, we can achieve 100%
accurate estimates after the project is complete!)
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost
and effort estimates.
4. Use one or more empirical models for software cost and effort

estimation.

Unfortunately, the first option, however attractive, is not practical. Cost estimates
must be provided “Up front”. However, we should recognize that the longer we
wait, the more we know, and the more we know, the less likely we are to make
serious errors in our estimates.

The second option can work reasonably well, if the current project is quite
similar to past efforts and other project influences (e.g., the customer, business
conditions, the SEE, deadlines) are equivalent. Unfortunately past experience has
not always been a good indicator of future results.

The remaining options are viable approaches to software project estimation.
Ideally, the techniques noted for each option should be applied in tandem, each
used as a cross-check for the other. Decomposition techniques take a "divide and
conquer" approach to software project estimation. By decomposing a project into
major functions and related software engineering activities, cost and effort
estimation can be performed in a stepwise fashion.
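As a hypothetical illustration of a decomposition technique, each major function can be given optimistic, most likely, and pessimistic size estimates, which are combined with the standard three-point (expected value) formula EV = (Sopt + 4Sm + Spess) / 6. The function names and LOC figures below are invented for illustration only:

```python
# Decomposition-based estimation sketch. Each function gets optimistic,
# most-likely, and pessimistic LOC estimates; the expected size uses the
# three-point formula (opt + 4*likely + pess) / 6. All values are illustrative.

def expected_loc(opt, likely, pess):
    """Three-point expected value for one decomposed function."""
    return (opt + 4 * likely + pess) / 6

# Hypothetical major functions of the study center software:
functions = {
    "student records":    (1800, 2400, 3200),
    "fee management":     (900, 1200, 1600),
    "library management": (1200, 1600, 2100),
}

# Total estimated size is the sum over the decomposed functions.
total_loc = sum(expected_loc(*est) for est in functions.values())
```

Each per-function estimate can then be cross-checked against an empirical model, as the text suggests.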

Empirical estimation models can be used to complement decomposition
techniques and offer a potentially valuable estimation approach in their own right. A
model is based on experience (historical data) and takes the form

d = f(vi)

where d is one of a number of estimated values (e.g., effort, cost, or project
duration) and vi are selected independent parameters (e.g., estimated LOC, lines
of code).

Each of the viable software cost estimation options is only as good as the
historical data used to seed the estimate. If no historical data exist, costing rests on a
very shaky foundation.
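One well-known empirical model of the form d = f(vi) is Boehm's basic COCOMO equation, E = a(KLOC)^b. The minimal sketch below uses the published organic-mode coefficients (a = 2.4, b = 1.05); the 10 KLOC input is purely illustrative, not an estimate for this project:

```python
# Basic COCOMO sketch (organic mode): effort in person-months from KLOC.
# The coefficients 2.4 and 1.05 are Boehm's published organic-mode constants;
# the KLOC value passed below is hypothetical.

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Effort E = a * (KLOC)^b, person-months."""
    return a * kloc ** b

effort = cocomo_effort(10)  # effort for a hypothetical 10 KLOC project
```

As the text notes, such a model is only as good as the historical data behind its coefficients.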
Chapter 4

Project Monitoring System


4.1 PERT Chart

Program evaluation and review technique (PERT) and critical path method
(CPM) are two project scheduling methods that can be applied to software
development. These techniques are driven by following information:

• Estimates of Effort
• A decomposition of the product function
• The selection of the appropriate process model and task set
• Decomposition of tasks

PERT chart for this application software is illustrated in figure 3.1. The critical
Path for this Project is Design, Code generation and Integration and testing.
Start -> Requirement Analysis (Aug 1, 2008) -> Design (Aug 5, 2008) ->
Coding (Aug 15, 2008) -> Integration and Test (Oct 10, 2008) ->
Documentation and Report (Oct 30, 2008) -> Finish

Figure 4.1 PERT chart for "University Study Center Management System".
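The critical path through such a task network is the longest path through the dependency graph. The sketch below mirrors the tasks of the PERT chart, but the durations (in days) and the dependency structure are our own hypothetical reading of the figure:

```python
# Critical-path sketch over a small task graph modeled on the PERT chart.
# Durations (days) are hypothetical, not the project's actual figures.

durations = {"analysis": 4, "design": 10, "coding": 25,
             "integration_test": 20, "documentation": 15}

predecessors = {"analysis": [],
                "design": ["analysis"],
                "coding": ["design"],
                "integration_test": ["coding"],
                "documentation": ["integration_test"]}

_memo = {}

def earliest_finish(task):
    """Length of the longest (critical) path ending at `task`."""
    if task not in _memo:
        start = max((earliest_finish(p) for p in predecessors[task]), default=0)
        _memo[task] = start + durations[task]
    return _memo[task]

# The project's critical path length is the maximum earliest finish.
critical_path_length = max(earliest_finish(t) for t in durations)
```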

4.2 Gantt Chart

A Gantt chart, also known as a timeline chart, records information such as the
effort, duration, start date, and completion date for each task. A timeline chart
can be developed for the entire project.
Figure 4.2 below shows the Gantt chart for the project; all project tasks are
listed in the left-hand column.
Start: Jan 1, 2008

Work tasks                               Planned  Actual   Planned   Actual    Notes
                                         start    start    complete  complete
1.1 Identify needs and benefits
  Meet with customers                    Wk1,d1   Wk1,d1   Wk1,d2    Wk1,d2
  Identified needs and constraints       Wk1,d2   Wk1,d2   Wk1,d2    Wk1,d2
  Established product statement          Wk1,d3   Wk1,d3   Wk1,d3    Wk1,d3
  Milestone: product statement defined   Wk1,d3   Wk1,d3   Wk1,d3    Wk1,d3
1.2 Defined analysis and design
  Desired output/control/input (OCI):
  modes of interaction                   Wk2,d1   Wk2,d2                        Scope is more
  Documented OCI                         Wk2,d1   Wk2,d3                        time consuming.
  FTR: reviewed OCI with customer        Wk3,d3   Wk3,d5
  Revised OCI as required                Wk4,d1   Wk4,d2
  Milestone: OCI defined                 Wk4,d3   Wk4,d5
1.3 Defined the function/behavior
  Milestone: data modeling completed     Wk5,d1   Wk5,d2   Wk5,d5
1.4 Isolated software elements
  Coding                                 Wk5,d1   Wk6,d1   Wk7,d5
  Reports                                Wk7,d6   Wk8,d6
1.5 Integration and testing              Wk9,d1   Wk9,d3   Wk11,d3

Finish: Oct 30, 2008

Figure 4.2 Gantt chart for the project "University Study Center Management
System". Note: Wk1 = week 1, d1 = day 1.
Chapter 5

System Analysis

Software requirements analysis is a process of discovery, refinement, modeling,
and specification. Requirements analysis provides the software designer with a
representation of information, function, and behavior that can be translated to
data, architectural, interface, and component-level designs. To perform the job
properly we need to follow a set of underlying concepts and principles of
analysis.

5.1 Analysis Principles

Over the past two decades, a large number of analysis modeling methods
have been developed. Investigators have identified analysis problems and their
causes and have developed a variety of modeling notations and corresponding
sets of heuristics to overcome them. Each analysis method has a unique point of
view.
However, all analysis methods are related by a set of operational principles:

1. The information domain of a problem must be represented and understood.


2. The functions that the software is to perform must be defined.
3. The behavior of the software (as a consequence of external events) must be
represented.
4. The models that depict information, function, and behavior must be partitioned
in a manner that uncovers detail in a layered (or hierarchical) fashion.
5. The analysis process should move from essential information toward
implementation detail.

By applying these principles, we approach the problem systematically. The
information domain is examined so that function may be understood more
completely. Models are used so that the characteristics of function and behavior
can be communicated in a compact fashion. Partitioning is applied to reduce
complexity. Essential and implementation views of the software are necessary to
accommodate the logical constraints imposed by processing requirements and the
physical constraints imposed by other system elements.

In addition to these operational analysis principles, Davis suggests a set of
guiding principles for requirements analysis:
• Understand the problem before you begin to create the analysis
model. There is a tendency to rush to a solution, even before the
problem is understood. This often leads to elegant software that solves
the wrong problem! We always tried to escape from such situation
while making this project a success.
• Develop prototypes that enable a user to understand how
human/machine interaction will occur. Since the perception of the
quality of software is often based on the perception of the "friendliness"
of the interface, prototyping (and the iteration that results) is highly
recommended.
• Record the origin of and the reason for every requirement. This is
the first step in establishing traceability back to the customer.
• Use multiple views of requirements. Building data, functional, and
behavioral models provides the software developer with three views.
This reduces the likelihood that something will be missed and increases
the likelihood that inconsistency will be recognized.
• Rank requirements. Tight deadlines may preclude the
implementation of every software requirement.
• Work to eliminate ambiguity. Because most requirements are
described in a natural language, the opportunity for ambiguity abounds.
The use of formal technical reviews is one way to uncover and eliminate
ambiguity.

We have tried to take the above principles to heart so that we could provide
an excellent foundation for design.

5.1.1 The Information Domain


All software applications can be collectively called data processing. Software is
built to process data, to transform data from one form to another; that is, to accept
input, manipulate it in some way, and produce output. This fundamental statement
of objective is true whether we build batch software for a payroll system or real-
time embedded software to control fuel flow to an automobile engine.

The first operational analysis principle requires an examination of the information


domain and the creation of a data model. The information domain contains three
different views of the data and control as each is processed by a computer
program:

(1) information content and relationships (the data model),
(2) information flow, and
(3) information structure.

To fully understand the information domain, each of these views should be

considered.

Information content represents the individual data and control objects that
constitute some larger collection of information transformed by the software. For
example, the data object Status declare is a composite of a number of important
pieces of data: the aircraft's name, the aircraft's model, ground run, number of
flying hours, and so forth. Therefore, the content of Status declare is defined by
the attributes that are needed to create it. Similarly, the content of a control object
called System status might be defined by a string of bits. Each bit represents a
separate item of information that indicates whether or not a particular device is
on- or off-line.
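The information content described above can be sketched as follows; the attribute and device names are our own illustrative choices, not part of the actual system:

```python
from dataclasses import dataclass

# Sketch of information content: the composite data object "Status declare"
# and a bit-string control object "System status". Attribute and device
# names are hypothetical illustrations.

@dataclass
class StatusDeclare:
    aircraft_name: str
    aircraft_model: str
    ground_run: int
    flying_hours: float

# Control object: one bit per device, recording whether it is on-line.
DEVICES = ["printer", "scanner", "backup_drive"]

def system_status(online):
    """Pack on/off-line flags for DEVICES into a string of bits."""
    return "".join("1" if d in online else "0" for d in DEVICES)

status = StatusDeclare("IL-76", "MD", 120, 3450.5)
bits = system_status({"printer", "backup_drive"})  # scanner is off-line
```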
Data and control objects can be related to other data and control objects.
For example, the data object Status declare has one or more relationships with
objects like the total number of flying hours, the period left before aircraft
maintenance, and others.

Information flow represents the manner in which data and control change
as each moves through a system. Referring to figure 5.1, input objects are
transformed to intermediate information (data and/or control), which is further
transformed to output. Along this transformation path, additional information may
be introduced from an existing data store (e.g., a disk file or memory buffer). The
transformations applied to the data are functions or sub-functions that a program
must perform. Data and control that move between two transformations define the
interface for each function.

Figure 5.1 Information flow and transformation.

Input objects -> Transform #1 -> Intermediate data and control -> Transform #2
-> Output object(s), with a data/control store feeding both transforms.

5.1.2 Modeling

The second and third operational analysis principles require that we build models
of function and behavior.

Functional models. Software transforms information, and in order to accomplish
this, it must perform at least three generic functions:

• input
• processing
• output.

The functional model begins with a single context-level model (i.e., the name of
the software to be built). Over a series of iterations, more and more functional
detail is gathered, until a thorough delineation of all system functionality is
represented.

Behavioral models. Most software responds to events from the outside
world. This stimulus/response characteristic forms the basis of the behavioral
model. A computer program always exists in some state, an externally observable
mode of behavior (e.g., waiting, computing, printing, polling) that is changed only
when some event occurs. For example, in our case the project will remain in the
wait state until:

• We click OK command button when first window appears


• An external event like a mouse click causes an interrupt, and consequently
the main window appears, asking for the username and password.
• This external input (the supplied username and password) signals the
project to act in the desired manner as per need.

A behavioral model creates a representation of the states of the software and the
events that cause software to change state.
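This stimulus/response behavior can be sketched as a minimal event-driven state machine; the state and event names below are our own illustrative labels for the login sequence described above:

```python
# Minimal behavioral-model sketch: externally observable states change only
# when an event occurs. State and event names are hypothetical labels.

TRANSITIONS = {
    ("waiting", "ok_clicked"): "login_window",
    ("login_window", "credentials_submitted"): "main_menu",
    ("main_menu", "logout"): "waiting",
}

def next_state(state, event):
    # An unrecognized event leaves the state unchanged (the program waits).
    return TRANSITIONS.get((state, event), state)

state = "waiting"
state = next_state(state, "ok_clicked")             # OK button pressed
state = next_state(state, "credentials_submitted")  # username/password given
```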

5.1.3 Partitioning (Divide)

Problems are often too large and complex to be understood as a whole; for
this reason, we tend to partition (divide) such problems into parts that can be
easily understood, and establish interfaces between the parts so that the overall
function can be accomplished. The fourth operational analysis principle suggests
that the information, functional, and behavioral domains of software can be
partitioned.

In essence, partitioning decomposes a problem into its constituent parts.


Conceptually, we establish a hierarchical representation of function or information
and then partition the uppermost element by

(1) exposing increasing detail by moving vertically in the hierarchy, or
(2) functionally decomposing the problem by moving horizontally in the
hierarchy.

To illustrate these partitioning approaches, let us consider our project,
"Study Center Management System". The horizontal partitioning of the Study
Center Management System is shown below.

Figure 5.2 Horizontal partitioning:

Study Center Management System
 |-- System configuration
 |-- Password acceptance
 |-- Interact with user
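The partitioned hierarchy above can be sketched as a small tree; the second-level refinements under each function are hypothetical, added only to show the vertical direction of decomposition:

```python
# Sketch of the partitioned hierarchy. The top level is the horizontal
# partition from the figure; the sub-function lists are hypothetical.

hierarchy = {
    "Study Center Management System": {
        "System configuration": ["load settings", "save settings"],
        "Password acceptance": ["read password", "validate password"],
        "Interact with user": ["show menu", "dispatch command"],
    }
}

def horizontal(tree, root):
    """Moving horizontally: the sibling functions directly under root."""
    return list(tree[root])

def vertical(tree, root, function):
    """Moving vertically: the increasing detail beneath one function."""
    return tree[root][function]

funcs = horizontal(hierarchy, "Study Center Management System")
```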

5.2 Software Prototyping.


Some circumstances require the construction of a prototype at the beginning of
analysis, since the model is the only means through which requirements can be
effectively derived. The model then evolves into production software.

5.2.1 Selecting the Prototype Approach


The prototyping paradigm can be either close-ended or open-ended. The
close-ended approach is often called throwaway prototyping. Using this approach,
a prototype serves solely as a rough demonstration of requirements. It is then
discarded, and the software is engineered using a different paradigm. An open-
ended approach, called evolutionary prototyping, uses the prototype as the first
part of an analysis activity that will be continued into design and construction. The
prototype of the software is the first evolution of the finished system.

Before a close-ended or open-ended approach can be chosen, it is necessary


to determine whether the system to be built is amenable to prototyping. A number
of prototyping candidacy factors can be defined: application area, application
complexity, customer characteristics, and project characteristics.
Figure 5.5 Selecting the appropriate prototyping approach

Question                                    Throwaway    Evolutionary   Additional preliminary
                                            prototype    prototype      work required
Is the application domain understood?       Yes          Yes            No
Can the problem be modeled?                 Yes          Yes            No
Is the customer certain of basic
system requirements?                        Yes/No       Yes/No         No
Are requirements established and stable?    No           Yes            Yes
Are any requirements ambiguous?             Yes          No             Yes
Are there contradictions in the
requirements?                               Yes          No             Yes

The above six questions follow Andriole's [AND92] suggestions for selecting a
prototyping approach.
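The selection table can be encoded as a rough decision rule; the function below is our own simplification for illustration, not Andriole's exact procedure:

```python
# Rough encoding of the prototyping-selection table. Answers are booleans;
# this simplification collapses the six questions into four and is
# illustrative only.

def choose_approach(domain_understood, can_model, reqs_stable, reqs_ambiguous):
    # Without an understood, modelable domain, more preliminary work is needed.
    if not (domain_understood and can_model):
        return "additional preliminary work required"
    # Stable, unambiguous requirements favor an evolutionary prototype.
    if reqs_stable and not reqs_ambiguous:
        return "evolutionary prototype"
    # Unstable or ambiguous requirements favor a throwaway prototype.
    return "throwaway prototype"

choice = choose_approach(True, True, reqs_stable=False, reqs_ambiguous=True)
```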
E-R DIAGRAM:

[Figure: E-R diagram. The Study Center Management entity is related to:
Center Head, Staff Members, Student, Visitor Information, New Student,
Notice Board, Exam Schedule, Student Record & Fee, and Library Management.]
DATA FLOW DIAGRAM

The following DFD shows how the working of the study center system could be
smoothly managed:

[Figure: Data flow diagram. The Center Head edits and views the modules:
New Student, Staff Members, Student Record, Student Fee Record, Library
Management, Visitor Information, Notice Board, Course Information, and
Exam Schedule.]
Chapter 6

Technology used
6.1 Tools and Platform used for the Development

6.1.1 Front-end Environment (.NET Framework)

Microsoft .NET Framework is a software component that is a part of several


Microsoft Windows operating systems. It has a large library of pre-coded solutions
to common programming problems and manages the execution of programs
written specifically for the framework. The .NET Framework is a key Microsoft
offering and is intended to be used by most new applications created for the
Windows platform.

The pre-coded solutions that form the framework's Base Class Library cover a
large range of programming needs in a number of areas, including user interface,
data access, database connectivity, cryptography, web application development,
numeric algorithms, and network communications. The class library is used by
programmers who combine it with their own code to produce applications.

Programs written for the .NET Framework execute in a software environment that
manages the program's runtime requirements. Also part of the .NET Framework,
this runtime environment is known as the Common Language Runtime (CLR).
The CLR provides the appearance of an application virtual machine so that
programmers need not consider the capabilities of the specific CPU that will
execute the program. The CLR also provides other important services such as
security, memory management, and exception handling. The class library and the
CLR together compose the .NET Framework.

The .NET Framework is included with Windows Server 2008 and Windows Vista.
The current version of the framework can also be installed on Windows XP and
the Windows Server 2003 family of operating systems.
ASP.NET
History

After the release of Internet Information Services 4.0 in 1997, Microsoft began
researching possibilities for a new web application model that would solve
common complaints about ASP, especially with regard to separation of
presentation and content and being able to write "clean" code. Mark Anders, a
manager on the IIS team, and Scott Guthrie, who had joined Microsoft in 1997
after graduating from Duke University, were tasked with determining what that
model would look like. The initial design was developed over the course of two
months by Anders and Guthrie, and Guthrie coded the initial prototypes during the
Christmas holidays in 1997.

The initial prototype was called "XSP"; Guthrie explained in a 2007 interview
that, "People would always ask what the X stood for. At the time it really didn't
stand for anything. XML started with that; XSLT started with that. Everything
cool seemed to start with an X, so that's what we originally named it." The initial
prototype of XSP was done using Java, but it was soon decided to build the new
platform on top of the Common Language Runtime (CLR), as it offered an object-
oriented programming environment, garbage collection and other features that
were seen as desirable features that Microsoft's Component Object Model
platform didn't support. Guthrie described this decision as a "huge risk", as the
success of their new web development platform would be tied to the success of the
CLR, which, like XSP, was still in the early stages of development, so much so
that the XSP team was the first team at Microsoft to target the CLR.

With the move to the Common Language Runtime, XSP was re-implemented in
C# (known internally as "Project Cool" but kept secret from the public), and
renamed to ASP+, as by this point the new platform was seen as being the
successor to Active Server Pages, and the intention was to provide an easy
migration path for ASP developers.
Mark Anders first demonstrated ASP+ at the ASP Connections conference in
Phoenix, Arizona on May 2, 2000. Demonstrations to the wide public and initial
beta release of ASP+ (and the rest of the .NET Framework) came at the 2000
Professional Developers Conference on July 11, 2000 in Orlando, Florida. During
Bill Gates's keynote presentation, Fujitsu demonstrated ASP+ being used in
conjunction with COBOL,[5] and support for a variety of other languages was
announced, including Microsoft's new Visual Basic .NET and C# languages, as
well as Python and Perl support by way of interoperability tools created by
ActiveState.

Once the ".NET" branding was decided on in the second half of 2000, it was
decided to rename ASP+ to ASP.NET. Mark Anders explained on an appearance
on The MSDN Show that year that, "The .NET initiative is really about a number
of factors, it’s about delivering software as a service, it's about XML and web
services and really enhancing the Internet in terms of what it can do .... we really
wanted to bring its name more in line with the rest of the platform pieces that
make up the .NET framework."

Characteristics

Pages

ASP.NET pages, known officially as "web forms", are the main building block for
application development. Web forms are contained in files with an ASPX
extension; in programming jargon, these files typically contain static (X)HTML
markup, as well as markup defining server-side Web Controls and User Controls
where the developers place all the required static and dynamic content for the web
page. Additionally, dynamic code which runs on the server can be placed in a page
within a block <% -- dynamic code -- %> which is similar to other web
development technologies such as PHP, JSP, and ASP, but this practice is
generally discouraged except for the purposes of data binding since it requires
more calls when rendering the page.
Note that this sample uses code "inline", as opposed to code behind.
<%@ Page Language="C#" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        Label1.Text = DateTime.Now.ToLongDateString();
    }
</script>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Sample page</title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            The current time is: <asp:Label runat="server" id="Label1" />
        </div>
    </form>
</body>
</html>

Code-behind model

Microsoft recommends that dynamic program code be handled with the
code-behind model, which places this code in a separate file or in a specially
designated script tag. Code-behind files typically have names like MyPage.aspx.cs
or MyPage.aspx.vb based on the ASPX file name (this practice is automatic in
Microsoft Visual Studio and other IDEs). When using this style of programming,
the developer writes code to respond to different events, like the page being
loaded or a control being clicked, rather than a procedural walk through the
document.

ASP.NET's code-behind model marks a departure from Classic ASP in that it


encourages developers to build applications with separation of presentation and
content in mind. In theory, this would allow a web designer, for example, to focus
on the design markup with less potential for disturbing the programming code that
drives it. This is similar to the separation of the controller from the view in model-
view-controller frameworks.

Example

<%@ Page Language="C#"
    CodeFile="SampleCodeBehind.aspx.cs"
    Inherits="Website.SampleCodeBehind"
    AutoEventWireup="true" %>

The above tag is placed at the beginning of the ASPX file. The CodeFile property
of the @ Page directive specifies the file (.cs or .vb) acting as the code-behind,
while the Inherits property specifies the class the page derives from. In this
example, the @ Page directive is included in SampleCodeBehind.aspx, and
SampleCodeBehind.aspx.cs acts as the code-behind for this page:

using System;

namespace Website
{
    public partial class SampleCodeBehind : System.Web.UI.Page
    {
        // Wired up automatically because AutoEventWireup="true"
        protected void Page_Load(object sender, EventArgs e)
        {
            // page initialization code goes here
        }
    }
}
In this case, the Page_Load() method is called every time the ASPX page is
requested. The programmer can implement event handlers at several stages of the
page execution process to perform processing.

User controls

ASP.NET supports creating reusable components through User Controls. A User
Control follows the same structure as a Web Form, except that such controls are
derived from the System.Web.UI.UserControl class and are stored in ASCX files.
Like ASPX files, an ASCX file contains static HTML or XHTML markup, as well as
markup defining web controls and other User Controls. The code-behind model can
be used.

Programmers can add their own properties, methods,[9] and event handlers.[10]
An event bubbling mechanism provides the ability to pass an event fired by a user
control up to its containing page.

State management

ASP.NET applications are hosted in a web server and are accessed over the
stateless HTTP protocol. As such, if the application uses stateful interaction, it
has to implement state management on its own. ASP.NET provides several
facilities for state management in ASP.NET applications.

Application state

Application state is a collection of user-defined variables that are shared by an
ASP.NET application. These are set and initialized when the Application_OnStart
event fires on the loading of the first instance of the application, and are
available until the last instance exits. Application state variables are accessed
using the Application collection, which provides a wrapper for the application
state variables. Application state variables are identified by name.

Session state

Session state is a collection of user-defined session variables, which are
persisted during a user session. These variables are unique to each user session
and are accessed using the Session collection. Session variables can be set to be
automatically destroyed after a defined period of inactivity, even if the session
does not end. At the client end, a user session is identified either by a cookie
or by encoding the session ID in the URL itself.
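The idea behind all of these mechanisms, a server-side store of variables keyed by a client-held session ID with an inactivity timeout, can be sketched in a few lines. This is an illustrative toy, not ASP.NET's implementation; the class and method names are invented:

```python
import time
import uuid

class SessionStore:
    """Toy in-process session store: variables live on the server,
    the client only holds an opaque session ID (e.g. in a cookie)."""

    def __init__(self, timeout_seconds=1200):
        self.timeout = timeout_seconds
        self._sessions = {}  # session_id -> (last_access_time, variables)

    def create(self):
        session_id = uuid.uuid4().hex
        self._sessions[session_id] = (time.time(), {})
        return session_id

    def set(self, session_id, key, value):
        data = self._touch(session_id)
        if data is not None:
            data[key] = value

    def get(self, session_id, key, default=None):
        data = self._touch(session_id)
        return default if data is None else data.get(key, default)

    def _touch(self, session_id):
        # Destroy the session if it has been inactive longer than the timeout.
        entry = self._sessions.get(session_id)
        if entry is None:
            return None
        last_access, data = entry
        if time.time() - last_access > self.timeout:
            del self._sessions[session_id]
            return None
        self._sessions[session_id] = (time.time(), data)
        return data

store = SessionStore(timeout_seconds=1200)
sid = store.create()               # the ID would travel to the client in a cookie
store.set(sid, "user", "ankur")
print(store.get(sid, "user"))      # -> ankur
```

The In Process, ASPState, and SqlServer modes below differ only in where such a store lives (in the worker process, in a separate service, or in a database), not in this basic contract.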
ASP.NET supports three modes of persistence for session variables:
In Process Mode
The session variables are maintained within the ASP.NET process. This is
the fastest mode; however, the variables are destroyed when the ASP.NET
process is recycled or shut down. Since the application is recycled from
time to time, this mode is not recommended for critical applications.

ASPState Mode

In this mode, ASP.NET runs a separate Windows service that maintains the
state variables. Because the state management happens outside the
ASP.NET process, this has a negative impact on performance, but it allows
multiple ASP.NET instances to share the same state server, thus allowing
an ASP.NET application to be load-balanced and scaled out on multiple
servers. Also, since the state management service runs independently of
ASP.NET, variables can persist across ASP.NET process shutdowns.
SqlServer Mode

In this mode, the state variables are stored in a database server, accessible
using SQL. Session variables can be persisted across ASP.NET process
shutdowns in this mode as well. The main advantage of this mode is that it
allows the application to be load-balanced on a server cluster while
sharing sessions between servers.

View state

View state refers to the page-level state management mechanism, which is
utilized by the HTML pages emitted by ASP.NET applications to maintain
the state of the web form controls and widgets. The state of the controls is
encoded and sent to the server at every form submission in a hidden field
known as __VIEWSTATE. The server sends the variable back so that, when
the page is re-rendered, the controls render in their last state. At the server
side, the application may change the view state if the processing results in
updating the state of any control. The states of individual controls are
decoded at the server, and are available for use in ASP.NET pages via
the ViewState collection.
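The round-trip mechanism described above can be sketched as follows. This is a simplified illustration that base64-encodes JSON into a hidden field; real ASP.NET view state is produced by its own serializer and may additionally be validated or encrypted:

```python
import base64
import json

def encode_view_state(control_state):
    """Serialize control state into a string suitable for a hidden field."""
    raw = json.dumps(control_state).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

def decode_view_state(hidden_field_value):
    """Recover the control state posted back in the hidden field."""
    raw = base64.b64decode(hidden_field_value.encode("ascii"))
    return json.loads(raw.decode("utf-8"))

# Server renders the page: control state goes into a hidden __VIEWSTATE field.
state = {"Label1.Text": "Hello", "TextBox1.Text": "old value"}
hidden = encode_view_state(state)
html = '<input type="hidden" name="__VIEWSTATE" value="%s" />' % hidden

# On postback, the server decodes the field and restores the control state.
restored = decode_view_state(hidden)
print(restored["Label1.Text"])  # -> Hello
```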

Template engine
When first released, ASP.NET lacked a template engine. Because the .NET
framework is object-oriented and allows for inheritance, many developers
would define a new base class that inherits from "System.Web.UI.Page",
write methods here that render HTML, and then make the pages in their
application inherit from this new class. While this allows for common
elements to be reused across a site, it adds complexity and mixes source
code with markup. Furthermore, this method can only be visually tested by
running the application - not while designing it. Other developers have used
include files and other tricks to avoid having to implement the same
navigation and other elements in every page.
ASP.NET 2.0 introduced the concept of "master pages", which allow for template-
based page development. A web application can have one or more master pages,
which can be nested.[14] Master templates have place-holder controls, called
ContentPlaceHolders to denote where the dynamic content goes, as well as HTML
and JavaScript shared across child pages.

Child pages use those ContentPlaceHolder controls, which must be mapped to the
place-holder of the master page that the content page is populating. The rest of the
page is defined by the shared parts of the master page, much like a mail merge in a
word processor. All markup and server controls in the content page must be placed
within the ContentPlaceHolder control.

When a request is made for a content page, ASP.NET merges the output of the
content page with the output of the master page, and sends the output to the user.
The master page remains fully accessible to the content page. This means that the
content page may still manipulate headers, change title, configure caching etc. If
the master page exposes public properties or methods (e.g. for setting copyright
notices) the content page can use these as well.

Performance

ASP.NET aims for performance benefits over other script-based technologies
(including Classic ASP) by compiling the server-side code into one or more DLL
files on the web server. This compilation happens automatically the first time a
page is requested, which means the developer need not perform a separate
compilation step. This feature provides the ease of development offered by
scripting languages with the performance benefits of a compiled binary. The
compilation may cause a noticeable but short delay the first time a newly edited
page is requested from the web server, but the delay does not recur unless the
page is edited again.
The ASPX and other resource files are placed in a virtual host on an Internet
Information Services server (or other compatible ASP.NET servers; see Other
Implementations, below). The first time a client requests a page, the .NET
framework parses and compiles the file(s) into a .NET assembly and sends the
response; subsequent requests are served from the DLL files. By default ASP.NET
will compile the entire site in batches of 1000 files upon first request. If the
compilation delay is causing problems, the batch size or the compilation strategy
may be tweaked.

Developers can also choose to pre-compile their code before deployment,


eliminating the need for just-in-time compilation in a production environment.

Development tools

Several software packages are available for developing ASP.NET applications:

• Delphi 2006
• Macromedia Dreamweaver MX, Macromedia Dreamweaver MX 2004, or
Macromedia Dreamweaver 8 (does not support ASP.NET 2.0 features and
produces very inefficient code for ASP.NET 1.x; code generation and
ASP.NET feature support were largely unchanged from MX through version
8.0.1, while version 8.0.2 adds changes to improve security against SQL
injection attacks)
• Macromedia HomeSite 5.5 (For ASP Tags)
• Microsoft Expression Web, part of the Microsoft Expression Studio
application suite.
• Microsoft SharePoint Designer
• MonoDevelop (Free/Open Source)
• SharpDevelop (Free/Open Source)
• Visual Studio .NET (for ASP.NET 1.x)
• Visual Web Developer 2005 Express Edition (free) or Visual Studio 2005
(for ASP.NET 2.0)
• Visual Web Developer 2008 Express Edition (free) or Visual Studio 2008
(for ASP.NET 2.0/3.5)
• Eiffel for ASP.NET
6.2.1 Back-end Environment

What is SQL?

SQL (pronounced "ess-que-el") stands for Structured Query Language. SQL is
used to communicate with a database. According to ANSI (American National
Standards Institute), it is the standard language for relational database
management systems. SQL statements are used to perform tasks such as updating
data in a database or retrieving data from a database. Some common relational
database management systems that use SQL are Oracle, Sybase, Microsoft SQL
Server, Access, and Ingres. Although most database systems use SQL, most of
them also have their own additional proprietary extensions that are usually used
only on their own system. However, the standard SQL commands such as SELECT,
INSERT, UPDATE, DELETE, CREATE, and DROP can be used to accomplish almost
everything that one needs to do with a database.

SQL (Structured Query Language) is a database computer language designed for


the retrieval and management of data in relational database management systems
(RDBMS), database schema creation and modification, and database object access
control management.

SQL is a standard interactive and programming language for querying and


modifying data and managing databases. Although SQL is both an ANSI and an
ISO standard, many database products support SQL with proprietary extensions to
the standard language. The core of SQL is formed by a command language that
allows the retrieval, insertion, updating, and deletion of data, and performing
management and administrative functions. SQL also includes a Call Level
Interface (SQL/CLI) for accessing and managing data and databases remotely.
The first version of SQL was developed at IBM by Donald D. Chamberlin and
Raymond F. Boyce in the early 1970s. This version, initially called SEQUEL, was
designed to manipulate and retrieve data stored in IBM's original relational
database product, System R. The SQL language was later formally standardized
by the American National Standards Institute (ANSI) in 1986. Subsequent
versions of the SQL standard have been released as International Organization for
Standardization (ISO) standards.

Originally designed as a declarative query and data manipulation language,


variations of SQL have been created by SQL database management system
(DBMS) vendors that add procedural constructs, control-of-flow statements, user-
defined data types, and various other language extensions. With the release of the
SQL:1999 standard, many such extensions were formally adopted as part of the
SQL language via the SQL Persistent Stored Modules (SQL/PSM) portion of the
standard.

Common criticisms of SQL include a perceived lack of cross-platform portability


between vendors, inappropriate handling of missing data (see Null (SQL)), and
unnecessarily complex and occasionally ambiguous language grammar and
semantics.

During the 1970s, a group at IBM's San Jose research center developed the System
R relational database management system, based on the model introduced by
Edgar F. Codd in his influential paper, A Relational Model of Data for Large
Shared Data Banks.[3] Donald D. Chamberlin and Raymond F. Boyce of IBM
subsequently created the Structured English Query Language (SEQUEL) to
manipulate and manage data stored in System R.[4] The acronym SEQUEL was
later changed to SQL because "SEQUEL" was a trademark of the UK-based
Hawker Siddeley aircraft company.
The first non-commercial, non-SQL RDBMS, Ingres, was developed in 1974 at
U.C. Berkeley. Ingres implemented a query language known as QUEL, which was
later supplanted in the marketplace by SQL.

In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the
potential of the concepts described by Codd, Chamberlin, and Boyce and
developed their own SQL-based RDBMS with aspirations of selling it to the U.S.
Navy, CIA, and other government agencies. In the summer of 1979, Relational
Software, Inc. introduced the first commercially available implementation of SQL,
Oracle V2 (Version2) for VAX computers. Oracle V2 beat IBM's release of the
System/38 RDBMS to market by a few weeks.

After testing SQL at customer test sites to determine the usefulness and
practicality of the system, IBM began developing commercial products based on
their System R prototype including System/38, SQL/DS, and DB2, which were
commercially available in 1979, 1981, and 1983, respectively.

Standardization

SQL was adopted as a standard by ANSI in 1986 and by ISO in 1987. Until 1996,
the National Institute of Standards and Technology (NIST) data management
standards program was tasked with certifying SQL DBMS compliance with the SQL
standard. In 1996, however, the NIST data management standards program was
dissolved, and vendors are now relied upon to self-certify their products for
compliance.

The SQL standard has gone through a number of revisions, as shown below:
Year  Name      Alias             Comments
1986  SQL-86    SQL-87            First published by ANSI. Ratified by ISO in 1987.
1989  SQL-89    FIPS 127-1        Minor revision, adopted as FIPS 127-1.
1992  SQL-92    SQL2, FIPS 127-2  Major revision (ISO 9075); Entry Level SQL-92 adopted as FIPS 127-2.
1999  SQL:1999  SQL3              Added regular expression matching, recursive queries, triggers, support for procedural and control-of-flow statements, non-scalar types, and some object-oriented features.
2003  SQL:2003                    Introduced XML-related features, window functions, standardized sequences, and columns with auto-generated values (including identity columns).
2006  SQL:2006                    ISO/IEC 9075-14:2006 defines ways in which SQL can be used in conjunction with XML: importing and storing XML data in an SQL database, manipulating it within the database, and publishing both XML and conventional SQL data in XML form. It also provides facilities that permit applications to integrate into their SQL code the use of XQuery, the XML query language published by the World Wide Web Consortium (W3C), to concurrently access ordinary SQL data and XML documents.

The SQL standard is not freely available. SQL:2003 and SQL:2006 may be
purchased from ISO or ANSI. A late draft of SQL:2003 is, however, freely
available as a zip archive from Whitemarsh Information Systems Corporation. The
zip archive contains a number of PDF files that define the parts of the SQL:2003
specification.
Scope and extensions

Procedural extensions
SQL is designed for a specific purpose: to query data contained in a relational
database. SQL is a set-based, declarative query language, not an imperative
language such as C or BASIC. However, there are extensions to Standard SQL
which add procedural programming language functionality, such as control-of-
flow constructs. These are:

Source             Common Name  Full Name
ANSI/ISO Standard  SQL/PSM      SQL/Persistent Stored Modules
IBM                SQL PL       SQL Procedural Language (implements SQL/PSM)
Microsoft/Sybase   T-SQL        Transact-SQL
MySQL              SQL/PSM      SQL/Persistent Stored Module (as in ISO SQL:2003)
Oracle             PL/SQL       Procedural Language/SQL (based on Ada)
PostgreSQL         PL/pgSQL     Procedural Language/PostgreSQL Structured Query Language (based on Oracle PL/SQL)
PostgreSQL         PL/PSM       Procedural Language/Persistent Stored Modules (implements SQL/PSM)

In addition to the standard SQL/PSM extensions and proprietary SQL extensions,


procedural and object-oriented programmability is available on many SQL
platforms via DBMS integration with other languages. The SQL standard defines
SQL/JRT extensions (SQL Routines and Types for the Java Programming
Language) to support Java code in SQL databases. SQL Server 2005 uses the
SQLCLR (SQL Server Common Language Runtime) to host managed .NET
assemblies in the database, while prior versions of SQL Server were restricted to
using unmanaged extended stored procedures which were primarily written in C.
Other database platforms, like MySQL and Postgres, allow functions to be written
in a wide variety of languages including Perl, Python, Tcl, and C.

Additional extensions

SQL:2003 also defines several additional extensions to the standard to increase
SQL functionality overall. These extensions include:

The SQL/CLI, or Call-Level Interface, extension is defined in ISO/IEC
9075-3:2003. This extension defines common interfacing components (structures
and procedures) that can be used to execute SQL statements from applications
written in other programming languages. The SQL/CLI extension is defined in such
a way that SQL statements and SQL/CLI procedure calls are treated as separate
from the calling application's source code.

The SQL/MED, or Management of External Data, extension is defined by
ISO/IEC 9075-9:2003. SQL/MED provides extensions to SQL that define
foreign-data wrappers and datalink types to allow SQL to manage external data.
External data is data that is accessible to, but not managed by, an SQL-based
DBMS.

The SQL/OLB, or Object Language Bindings, extension is defined by ISO/IEC
9075-10:2003. SQL/OLB defines the syntax and semantics of SQLJ, which is
SQL embedded in Java. The standard also describes mechanisms to ensure binary
portability of SQLJ applications, and specifies various Java packages and their
contained classes.

The SQL/Schemata, or Information and Definition Schemas, extension is defined
by ISO/IEC 9075-11:2003. SQL/Schemata defines the Information Schema and
Definition Schema, providing a common set of tools to make SQL databases and
objects self-describing. These tools include the SQL object identifier, structure
and integrity constraints, security and authorization specifications, features
and packages of ISO/IEC 9075, support of features provided by SQL-based DBMS
implementations, SQL-based DBMS implementation information and sizing items,
and the values supported by the DBMS implementations.

The SQL/JRT, or SQL Routines and Types for the Java Programming Language,
extension is defined by ISO/IEC 9075-13:2003. SQL/JRT specifies the ability to
invoke static Java methods as routines from within SQL applications. It also calls
for the ability to use Java classes as SQL structured user-defined types.

The SQL/XML, or XML-Related Specifications, extension is defined by ISO/IEC
9075-14:2003. SQL/XML specifies SQL-based extensions for using XML in
conjunction with SQL. The XML data type is introduced, as well as several
routines, functions, and XML-to-SQL data type mappings to support manipulation
and storage of XML in an SQL database.

The SQL/PSM, or Persistent Stored Modules, extension is defined by ISO/IEC


9075-4:2003. SQL/PSM standardizes procedural extensions for SQL, including
flow of control, condition handling, statement condition signals and resignals,
cursors and local variables, and assignment of expressions to variables and
parameters. In addition, SQL/PSM formalizes declaration and maintenance of
persistent database language routines (e.g., "stored procedures").

The SQL language is sub-divided into several language elements, including:

• Statements, which may have a persistent effect on schemas and data, or which
may control transactions, program flow, connections, sessions, or diagnostics.
• Queries, which retrieve data based on specific criteria.
• Expressions, which can produce either scalar values or tables consisting of
columns and rows of data.
• Predicates, which specify conditions that can be evaluated to SQL
three-valued-logic Boolean truth values, and which are used to limit the effects
of statements and queries, or to change program flow.
• Clauses, which are (in some cases optional) constituent components of
statements and queries.
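The three-valued logic mentioned above is easy to observe: a comparison involving NULL evaluates to UNKNOWN, so the row is excluded both by the predicate and by its negation. A small sketch using Python's built-in sqlite3 module (the table and rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Book (title TEXT, price REAL)")
conn.execute("INSERT INTO Book VALUES ('Priced Book', 150.0)")
conn.execute("INSERT INTO Book VALUES ('Unpriced Book', NULL)")

# price > 100 is UNKNOWN for the NULL row, so that row is filtered out ...
over = conn.execute("SELECT title FROM Book WHERE price > 100").fetchall()
# ... and NOT (price > 100) is also UNKNOWN, so it is filtered out again.
not_over = conn.execute("SELECT title FROM Book WHERE NOT (price > 100)").fetchall()
# NULLs must be tested explicitly with IS NULL.
unpriced = conn.execute("SELECT title FROM Book WHERE price IS NULL").fetchall()

print(over)      # [('Priced Book',)]
print(not_over)  # []
print(unpriced)  # [('Unpriced Book',)]
```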

Whitespace is generally ignored in SQL statements and queries, making it easier


to format SQL code for readability.

SQL statements also include the semicolon (";") statement terminator. Though not
required on every platform, it is defined as a standard part of the SQL grammar.

Queries

The most common operation in SQL databases is the query, which is performed
with the declarative SELECT keyword. SELECT retrieves data from a specified
table, or multiple related tables, in a database. While often grouped with Data
Manipulation Language (DML) statements, the standard SELECT query is
considered separate from SQL DML, as it has no persistent effects on the data
stored in a database. Note that there are some platform-specific variations of
SELECT that can persist their effects in a database, such as the SELECT INTO
syntax that exists in some databases.

SQL queries allow the user to specify a description of the desired result set, but it
is left to the devices of the database management system (DBMS) to plan,
optimize, and perform the physical operations necessary to produce that result set
in as efficient a manner as possible. An SQL query includes a list of columns to be
included in the final result immediately following the SELECT keyword. An
asterisk ("*") can also be used as a "wildcard" indicator to specify that all
available columns of a table (or multiple tables) are to be returned. SELECT is the
most complex statement in SQL, with several optional keywords and clauses,
including:
The FROM clause which indicates the source table or tables from which the data
is to be retrieved. The FROM clause can include optional JOIN clauses to join
related tables to one another based on user-specified criteria.

The WHERE clause includes a comparison predicate, which is used to restrict the
number of rows returned by the query. The WHERE clause is applied before the
GROUP BY clause. The WHERE clause eliminates all rows from the result set
where the comparison predicate does not evaluate to True.

The GROUP BY clause is used to combine, or group, rows with related values
into elements of a smaller set of rows. GROUP BY is often used in conjunction
with SQL aggregate functions or to eliminate duplicate rows from a result set.

The HAVING clause includes a comparison predicate used to eliminate rows after
the GROUP BY clause is applied to the result set. Because it acts on the results of
the GROUP BY clause, aggregate functions can be used in the HAVING clause
predicate.

The ORDER BY clause is used to identify which columns are used to sort the
resulting data, and in which order they should be sorted (options are ascending or
descending). The order of rows returned by an SQL query is never guaranteed
unless an ORDER BY clause is specified.
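The clauses described above can be combined in a single query. The sketch below, using Python's sqlite3 module with invented sample data, filters rows with WHERE, groups them with GROUP BY, filters the groups with HAVING, and sorts the result with ORDER BY:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Book (title TEXT, author TEXT, price REAL)")
conn.executemany("INSERT INTO Book VALUES (?, ?, ?)", [
    ("SQL Basics",   "Smith", 120.0),
    ("SQL Advanced", "Smith", 150.0),
    ("Cheap Intro",  "Smith",  20.0),
    ("Lone Title",   "Jones", 110.0),
])

rows = conn.execute("""
    SELECT author, count(*) AS titles   -- columns in the result
    FROM Book                           -- source table
    WHERE price > 100.00                -- filter rows before grouping
    GROUP BY author                     -- one group per author
    HAVING count(*) > 1                 -- filter groups after grouping
    ORDER BY author                     -- sort the final result
""").fetchall()

print(rows)  # [('Smith', 2)]
```

Note how "Cheap Intro" is removed by WHERE before grouping, while the whole "Jones" group is removed afterwards by HAVING.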

The following is an example of a SELECT query that returns a list of expensive


books. The query retrieves all rows from the Book table in which the price column
contains a value greater than 100.00. The result is sorted in ascending order by
title. The asterisk (*) in the select list indicates that all columns of the Book table
should be included in the result set.

SELECT *

FROM Book
WHERE price > 100.00

ORDER BY title;

The example below demonstrates the use of multiple tables in a join, grouping,
and aggregation in an SQL query, by returning a list of books and the number of
authors associated with each book.

SELECT Book.title, count(*) AS Authors

FROM Book

JOIN Book_author

ON Book.isbn = Book_author.isbn

GROUP BY Book.title;

Example output might resemble the following:

Title Authors

---------------------- -------

SQL Examples and Guide 3

The Joy of SQL 1

How to use Wikipedia 2

Pitfalls of SQL 1

How SQL Saved my Dog 1

(The underscore character "_" is often used as part of table and column names to
separate descriptive words because other punctuation tends to conflict with SQL
syntax. For example, a dash "-" would be interpreted as a minus sign.)
Under the precondition that isbn is the only common column name of the two
tables and that a column named title only exists in the Books table, the above
query could be rewritten in the following form:

SELECT title, count(*) AS Authors

FROM Book

NATURAL JOIN Book_author

GROUP BY title;

However, many vendors either do not support this approach, or it requires certain
column naming conventions. Thus, it is less common in practice.
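The join-and-group example above can be exercised end-to-end. The sketch below uses Python's sqlite3 module; the rows are invented to match the schema implied by the query (Book with isbn and title, Book_author with one row per author):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Book (isbn TEXT, title TEXT)")
conn.execute("CREATE TABLE Book_author (isbn TEXT, author TEXT)")
conn.execute("INSERT INTO Book VALUES ('1', 'SQL Examples and Guide')")
conn.execute("INSERT INTO Book VALUES ('2', 'The Joy of SQL')")
conn.executemany("INSERT INTO Book_author VALUES (?, ?)", [
    ("1", "Author A"), ("1", "Author B"), ("1", "Author C"),  # three authors
    ("2", "Author D"),                                        # one author
])

# The same JOIN ... GROUP BY query as in the text above.
rows = conn.execute("""
    SELECT Book.title, count(*) AS Authors
    FROM Book
    JOIN Book_author ON Book.isbn = Book_author.isbn
    GROUP BY Book.title
""").fetchall()

for title, authors in sorted(rows):
    print(title, authors)
```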

Data retrieval is very often combined with data projection when the user is looking
for calculated values and not just the verbatim data stored in primitive data types,
or when the data needs to be expressed in a form that is different from how it's
stored. SQL allows the use of expressions in the select list to project data, as in the
following example which returns a list of books that cost more than 100.00 with
an additional sales_tax column containing a sales tax figure calculated at 6% of
the price.

SELECT isbn, title, price, price * 0.06 AS sales_tax

FROM Book

WHERE price > 100.00

ORDER BY title;
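Run against sample data, a projection like the one above yields the computed sales_tax column alongside the stored ones. A sketch using Python's sqlite3 module with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Book (isbn TEXT, title TEXT, price REAL)")
conn.executemany("INSERT INTO Book VALUES (?, ?, ?)", [
    ("1", "Pitfalls of SQL", 150.00),
    ("2", "Cheap Intro",      20.00),
])

# price * 0.06 is computed per row; it is not stored anywhere in the table.
rows = conn.execute("""
    SELECT isbn, title, price, price * 0.06 AS sales_tax
    FROM Book
    WHERE price > 100.00
    ORDER BY title
""").fetchall()

print(rows)  # one row: ('1', 'Pitfalls of SQL', 150.0, 9.0)
```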

An SQL query may also combine several WHERE conditions with boolean operators
such as AND and OR. It may look like this example:

SELECT isbn, title, price, date


FROM Book

WHERE price > 100.00

AND (date = '16042004' OR date = '16042005')

ORDER BY title;
6.1.3 Hardware Specification

Server Side
• Core 2 Duo 2.4 GHz and above
• 2 GB of Random Access Memory and above
• 160 GB Hard Disk

Client Side
• Pentium IV 1.5 GHz and above
• 512 MB of Random Access Memory and above
• 80 GB Hard Disk
Chapter 7

System Design
1. Index page

This webpage is the starting page of the website. It provides the following:

 Links to other webpages
 History of ICFAI
 Mission and merits of ICFAI
 Display of important dates
 Registration for a new student or user
 Links for Login, Contact Us and Feedback
2. Login Page

As shown in the image above, the Login webpage displays:

 A form from which an authorized user can log in to the website
 A link for registering a new user
3. Home for user

As shown in the image above, the Home for User webpage displays:

 A link to the main ICFAI, Tripura website
 Links for Contact Us and Feedback
4. ICFAI, Tripura

As shown in the image above, the Home page displays:

 The main official site of ICFAI University
 Links for History, Programs, Faculty Resources, Careers etc.
6. Contact us

As shown in the image above, the Contact Us webpage displays:

 This page can be accessed by any user
 Information about the developers of the website
 Links for Feedback and Contact Us
7. FeedBack

As shown in the image above, the Feedback webpage displays:

 This page can be accessed by any user
 Anyone can give feedback related to the site or courses
 Links for Feedback and Contact Us
 Anyone can see the feedback of others
8. Center Head

As shown in the image above, the Center Head webpage displays:

 Links for Staff Management and Library Management
 The Update panel
 The Update panel can be accessed by the administrator only
9. Update panel

As shown in the image above, the Update Panel webpage displays:

 Accessible only by the administrator
 Links for adding information about students, staff, library etc.
 Links for Feedback and Contact Us
10. Add Course Information

As shown in the image above, the Add Course Information webpage displays:

 Accessible only by the administrator
 Adds a new course such as MBA, BBA or BCA
 Links for Feedback and Contact Us
11. Add Examination information

As shown in the image above, the Add Examination Information webpage displays:

 Accessible only by the administrator
 Adds new information about examination times and dates
 Links for Feedback and Contact Us
12. Add New Book

As shown in the image above, the Add New Book webpage displays:

 Accessible only by the administrator
 Adds a new book to the database
 Links for Add New Book, Search and Issue/Return
13. Issue/Return Book

As shown in the image above, the Issue/Return Book webpage displays:

 Accessible only by Library Management
 Requires the book ID, book name, etc. for issuing a book
 Links for Add New Book, Search and Issue/Return
14. Add Notice Board

As shown in the image above, the Add Notice Board webpage displays:

 Accessible only by the administration
 Adds notices for any activity such as a game or the annual function
 Links for New Notices, Find Notices and Feedback
15. Add New Staff Member

As shown in the image above, the Add New Staff Member webpage displays:

 Accessible only by the administration
 Stores data for a member such as his/her name, qualification, salary,
joining date, post etc.
 Links for Center Head, Staff Management and Library Management
16. Add New Student

As shown in the image above, the Add New Student webpage displays:

 Accessible only by the administration
 Stores data for a student such as his/her name, qualification, course
name, fees and student ID
 Links for New Student, Student Fees Information and Find Student
17. Add New Visitor Information

As shown in the image above, the Add New Visitor Information webpage displays:

 Accessible only by the administration
 Stores data for a new visitor such as his/her name, father's name,
phone number, purpose and address
 Links for ICFAI Tripura, Contact Us and Feedback
18. Staff Management

As shown in the image above, the Staff Management webpage displays:

 The user can enter any existing staff member's name in the given textbox
 After clicking the search button, information such as name,
qualification, post and salary is shown if the member exists
19. Search Book

As shown in the image above, the Search Book webpage displays:

 Accessible by any user or student
 The user can enter any book name in the given textbox and search for
whether the book is in stock or not
 Links for Find by Author, Find by Name and Feedback
20. Library management

As shown in the image above, the Library Management webform displays:

 Information about the management of ICFAI
 Information about staff members
 Information about the ICFAI library and the facilities provided
 Links for Add New Book, Search Book and Issue/Return Book
21. Add New Book

As shown in the image above, the Add New Book web page displays:

 Used for adding new books for students to the database
 Requires the following information:
 Book_id
 Book_Name
 Author
 Cost
 When the user has entered all the information correctly, the data is
stored in the database using the Submit button
Chapter 8

System Testing
System Testing
Once source code has been generated, software must be tested to uncover (and
correct) as many errors as possible before delivery to the customer. Our goal is
to design a series of test cases that have a high likelihood of finding errors.
Software testing techniques provide systematic guidance for designing tests that:

(1) Exercise the internal logic of software components, and

(2) Exercise the input and output domains of the program to uncover errors
in program function, behavior and performance.

8.1 Steps. Software is tested from two different perspectives:

(1) Internal program logic is exercised using “white-box” test case design
techniques.
(2) Software requirements are exercised using “black-box” test case design
techniques.

In both cases, the intent is to find the maximum number of errors with the
minimum amount of effort and time.

8.2 Strategies

A strategy for software testing must accommodate low-level tests that are
necessary to verify that a small source code segment has been correctly
implemented as well as high-level tests that validate major system functions
against customer requirements. A strategy must provide guidance for the
practitioner and a set of milestones for the manager. Because the steps of the test
strategy occur at a time when deadline pressure begins to rise, progress must be
measurable and problems must surface as early as possible.
The following testing techniques are well known, and the same strategy was adopted
during the testing of this project.

8.2.1 Unit testing: Unit testing focuses verification effort on the smallest unit of
software design: the software component or module. The unit test is white-box
oriented. The module interface is tested to ensure that information flows properly
into and out of the program unit under test. The local data structure is examined
to ensure that data stored temporarily maintains its integrity during all steps of an
algorithm’s execution. Boundary conditions are tested to ensure that the module
operates properly at the boundaries established to limit or restrict processing. All
independent paths through the control structure are exercised to ensure that all
statements in a module have been executed at least once.
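The boundary-condition idea above can be shown with a small unit test. The function under test is a hypothetical stand-in (an attendance-eligibility check such a study-center system might contain), not code from this project; the 75% threshold is an assumption for illustration.

```python
import unittest

def grade_attendance(percent):
    """Hypothetical module under test: classify an attendance percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return "eligible" if percent >= 75 else "short"

class AttendanceBoundaryTests(unittest.TestCase):
    def test_boundaries(self):
        # Exercise the limits established to restrict processing, as described above
        self.assertEqual(grade_attendance(75), "eligible")   # exact threshold
        self.assertEqual(grade_attendance(74.9), "short")    # just below threshold
        self.assertEqual(grade_attendance(0), "short")       # lower bound
        self.assertEqual(grade_attendance(100), "eligible")  # upper bound

    def test_out_of_range(self):
        # Inputs outside the valid domain must be rejected, not mishandled
        with self.assertRaises(ValueError):
            grade_attendance(101)

unittest.main(argv=["ignored"], exit=False)
```

Testing exactly at, just below, and just beyond each limit is the white-box discipline the paragraph above describes.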

8.2.2 Integration testing: Integration testing is a systematic technique for


constructing the program structure while at the same time conducting tests to
uncover errors associated with interfacing. The objective of this test is to take unit
tested components and build a program structure that has been dictated by design.

8.2.3 Validation testing: At the culmination of integration testing, the software is
completely assembled as a package, interfacing errors have been uncovered and
corrected, and a final series of software tests, validation testing, may begin.
Validation can be defined in many ways, but a simple definition is that validation
succeeds when the software functions in a manner that can reasonably be expected by
the customer.

8.2.4 System testing: System testing is actually a series of different tests whose
primary purpose is to fully exercise the computer-based system. Below we describe
the two types of testing that were carried out for this project.
8.2.4.1 Security testing
Any computer-based system that manages sensitive information or causes actions
that can improperly harm (or benefit) individuals is a target for improper or illegal
penetration. Penetration spans a broad range of activities: hackers who attempt to
penetrate a system for sport; disgruntled employees who attempt to penetrate it for
revenge; and dishonest individuals who attempt to penetrate it for illicit personal gain.

For security purposes, no one who is not an authorized user can penetrate
this system. When the program first loads, it checks for a correct username and
password; any login attempt that fails this check is simply rejected by the system.
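The username/password gate described above reduces to a simple membership check. The sketch below is a Python illustration with a hypothetical in-memory credential store; the real project validates against its database, and a production system would store salted password hashes rather than plain text.

```python
# Hypothetical credential store; the real project checks its database instead,
# and real systems store hashed passwords, never plain text.
USERS = {"admin": "secret123"}

def authenticate(username, password):
    """Reject any login that does not match a stored username/password pair."""
    return USERS.get(username) == password

print(authenticate("admin", "secret123"))     # True:  correct credentials
print(authenticate("admin", "wrong"))         # False: wrong password
print(authenticate("intruder", "secret123"))  # False: unknown user
```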

8.2.4.2 Performance Testing


Performance testing is designed to test the run-time performance of software
within the context of an integrated system. Performance testing occurs throughout
all steps in the testing process. Even at the unit level, the performance of an
individual module may be assessed as white-box tests are conducted.

8.3 Criteria for Completion of Testing


Every time the customer/user executes a computer program, the program is being
tested. This sobering fact underlines the importance of other software quality
assurance activities.

Every additional run of our project is itself a form of testing, as Musa and Ackerman
observed. They have suggested a response that is based on statistical criteria: “No, we
cannot be absolutely certain that the software will never fail, but relative to a
theoretically sound and experimentally validated statistical model, we have done
sufficient testing to say with 95 percent confidence that the probability of 1000
CPU hours of failure free operation in a probabilistically defined environment is at
least 0.995.”
8.4 Validation Checks
Software testing is one element of a broader topic that is often referred to as
verification and validation. Verification refers to the set of activities that ensure
that software correctly implements a specific function. Validation refers to a
different set of activities that ensure that the software that has been built is
traceable to customer requirements. Boehm states this another way:

Verification: “Are we building the product right?”


Validation: “Are we building the right product?”

Validation checks are useful when we specify the nature of data input. Let us
elaborate what this means. In this project, while entering data into many textboxes
you will find validation checks in use: when you try to input wrong data, your
entry is automatically abandoned.

At the very beginning of the project, when the user wishes to enter the project, he
has to supply the password. This password is validated against a certain string; until
the user supplies the correct password string, he cannot proceed. When you try
to edit the record for a trainee in the Operation division, you will also find validation
checks. If you supply a number (digits) for the name textbox, the entry is not
accepted; similarly, if you enter data for the trainee code in text (string) format, it is
simply abandoned.
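The two field checks described above (no digits in a name, digits only in a trainee code) can be sketched as small predicate functions. This is an illustrative Python sketch of the rules as stated; the project implements them as C# textbox validators, and the exact rules (e.g. which punctuation a name may contain) are assumptions.

```python
def valid_name(text):
    """A name field must be non-empty and contain no digits."""
    return bool(text) and not any(ch.isdigit() for ch in text)

def valid_trainee_code(text):
    """A trainee-code field must be purely numeric."""
    return text.isdigit()

print(valid_name("Ankur Kaushik"))   # True:  letters and spaces only
print(valid_name("Ankur123"))        # False: digits rejected in a name
print(valid_trainee_code("1042"))    # True:  digits only
print(valid_trainee_code("T-1042"))  # False: text rejected in a numeric code
```

Rejecting bad input at the field level, before it reaches the database, is what the report means by the entry being "automatically abandoned".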

Validation checks help us work in a far more reliable way; they are essential for
certain applications such as this one.
Chapter 9

System Implementation
Specification, regardless of the mode through which we accomplish it, may
be viewed as a representation process. Requirements are represented in a manner
that ultimately leads to successful software implementation.

9.1 Specification principles

A number of specification principles, adapted from the work of Balzer and
Goodman, can be proposed:

1. Separate functionality from implementation.


2. Develop a model of the desired behavior of a system that encompasses the data
and the functional responses of the system to various stimuli from the
environment.
3. Establish the context in which software operates by specifying the manner in
which other system components interact with software.
4. Define the environment in which the system operates.
5. Create a cognitive model rather than a design or implementation model. The
cognitive model describes a system as perceived by its user community.
6. Recognize that “the specifications must be tolerant of incompleteness and
augmentable.”
7. Establish the content and structure of a specification in a way that will enable it
to be amenable to change.
This list of basic specification principles provides a basis for representing
software requirements. However, principles must be translated into realization.
9.1.2 Representation

As we know, software requirements may be specified in a variety of ways.
However, if requirements are committed to paper, a simple set of guidelines is well
worth following:

Representation format and content should be relevant to the
problem. A general outline for the contents of a Software Requirements
Specification can be developed. However, the representation forms contained
within the specification are likely to vary with the application area. For example,
for our automation system we used different symbologies and diagrams.

Information contained within the specification should be nested.


Representations should reveal layers of information so that a reader can move to
the level of detail required. Paragraph and diagram numbering schemes should
indicate the level of detail that is being presented. It is sometimes worthwhile to
present the same information at different levels of abstraction to aid in
understanding. Similar guidelines were adhered to in this project.
Chapter 10

Conclusion
To conclude, the Project Grid works as a component which can access all the
databases and pick up different functions. It overcomes many of the limitations
present in the .NET Framework. Among the many features the project provides,
the main ones are:

• Simple editing
• Insertion of individual images on each cell
• Insertion of individual colors on each cell
• Flicker free scrolling
• Drop-down grid effect
• Placing of any type of control anywhere in the grid
Chapter 11

Scope of the Project


Future scope of the project: -
The project has a very vast scope in the future. It can be implemented on the
internet, and it can be updated in the near future as and when the requirement
for the same arises, as it is very flexible in terms of expansion. With the proposed
software of Web Space Manager ready and fully functional, the client is now able
to manage and hence run the entire work in a much better, more accurate and
error-free manner. The following are the future scopes for the project: -

 The number of levels that the software handles can be made unlimited
in the future, from the current status of handling up to N levels as laid
down by the software. Efficiency can be further enhanced and boosted
to a great extent by normalizing and de-normalizing the database tables
used in the project, as well as by adopting alternative data structures and
advanced calculation algorithms where available.
 We can, in future, generalize the application from its current customized
status, wherein other vendors developing and working on similar
applications can utilize this software and change it according to their
business needs.
 Faster processing of information as compared to the current system with
high accuracy and reliability.
 Automatic and error free report generation as per the specified format with
ease.
 Automatic calculation and generation of correct and precise Bills thus
reducing much of the workload on the accounting staff and the errors
arising due to manual calculations.
 With a fully automated solution, less staff, better space utilization and a
peaceful work environment, the company is bound to experience higher
business turnover.

A further merit of this system lies in the fact that the proposed system would
remain relevant in the future. Any addition or deletion of services, addition or
deletion of a reseller, or any other type of modification can be implemented easily
in the future. The data collected by the system will also be useful for other
purposes.
All of this results in high client satisfaction and, hence, more and more business
for the company, scaling the company's business to new heights in the
forthcoming future.
References

• Complete Reference of C#
• Programming in C# - Deitel & Deitel
• www.w3schools.com
• http://en.wikipedia.org
• The Principles of Software Engineering – Roger S. Pressman
• Software Engineering – Hudson
• MSDN help provided by Microsoft .NET
• Object Oriented Programming – Deitel & Deitel
