

PROJECT REPORT

On

ACADEMIC ORGANISER

Submitted in partial fulfillment of the requirements for the degree of Master of
Computer Applications, BPUT, Orissa

By

Name: Chandan Keshari Roul, Arun Panda

Regd. No.: 0601230108, 0601230131

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING,


SYNERGY INSTITUTE OF ENGINEERING & TECHNOLOGY,
DHENKANAL

2010



CONTENTS
SUBJECT Page No.

1. INTRODUCTION

1.1: Purpose of the Project ……………………………………………………………….5

1.2: Problems in the Existing System…………………………………………………….6

1.3: Solution of These Problems………………………………………………………….6

1.4: Hardware & Software Specifications for Development…………………………....7

1.4.1. Hardware Requirements………………………………………………………….7

1.4.2. Software Requirements ………………………………………………………….7

1.5: Hardware & Software Specifications for Deployment……………..…………………7

2: FEASIBILITY STUDY

2.1: Economic Feasibility………………………………………………………………..8

2.2: Technical Feasibility…………………………………………………………………9

2.3 Operational Feasibility……………………………………………………………….9

3. PROJECT ANALYSIS

3.1. Project Analysis……………………………………………………………………….10

3.1.1. Administration Section…………………………………………………………..10

4. SOFTWARE REQUIREMENT SPECIFICATION


4.1. Requirement Specification………………………………………………………...12
4.2. Functional Requirements………………………………………………………...13

4.2.1. Output Design…………………………………………………………....13

4.2.2. Output Definition……………………………………………………….13

4.2.3. Output Media………………………………………………………….…14

4.2.4. Input Design……………………………………………………………..14

4.2.5. Input Stages……………………………………………………………...15

4.2.6. Input Types………………………………………………………………15

4.2.7. Input Media………………………………………………………………15

5. SELECTED SOFTWARE

5.1. Visual Basic.Net…………………………………………………………………….24

5.2. Asp.Net……………………………………………………………………………....27

5.3. C#.Net……………………………………………………………………………....31

5.4. Ado.Net……………………………………………………………………………...35

6. PROJECT DESIGN…………………………………………………………………………40

7. NORMALIZATION………………………………………………………………………...44

8. DATABASE DIAGRAM…………………………………………………………………....51
9. ER-DIAGRAMS…………………………………………………………………………….52

10. DATA FLOW DIAGRAM………………………………………………………………...54

11. UML DIAGRAM…………………………………………………………………………..63

12. EXPLANATIONS……………………………………………………………………….....88

12.1. Buttons…………………………………………………………………………....88
12.2. Populate Dropdown without Condition: …………………………………….…88

12.3. Populate Dropdown with Condition: …………………………………………..88

12.4. Display Grid View ………………………………………………………………88

12.5. Display Detail View…………………………………………………………….....88

13. PROJECT TESTING………………………………………………………………….....89

14. FUTURE SCOPE OF PROJECT………………………………………………………..90

15. CONCLUSION……………………………………………………………………………92

16. BIBLIOGRAPHY …………………………………………………………………………93


INTRODUCTION

1.1: PURPOSE OF THE PROJECT

Academic Organizer is a scheduler of all the activities done by the academic personnel
in an educational institution. It can display stored information not only as a grid or list
but also as a calendar, map, or slideshow of random items with a drag-and-drop option
and various types of animation.

Academic Organizer is a scheduler of all activities done by academic personnel, such as:

 Taking classes

 Tasks assigned by the institution

 Exam duty

 Administrative work

 Managing the time table & course handout according to the academic calendar

 Attending meetings, seminars, workshops, conferences

 Applying for leave

 Posting comments

 Information about faculties


1.2: PROBLEMS IN THE EXISTING SYSTEM

 Due to the current busy schedule of the faculties, it is very difficult for them to
remember all the appointments assigned to them, so the academic organizer is required
to show all the information they need.
 The existing system consumes more time in processing.
 Everything is done manually, so providing faculty with information about classes,
meetings, group activities (seminars, conferences, workshops), programmes (short-term
courses, faculty development programmes), exam duty, paper evaluation and leave is slow.
 The existing system does not have provision for meetings, counseling classes,
exam duties, paper evaluation and other administrative tasks, which must be
included in the system.

1.3: SOLUTION OF THESE PROBLEMS

 The application is based on various key focus areas & respective prototypes.

 The application is based on various features, and any faculty member gets the
opportunity to store information about classes, meetings, group activities (seminars,
conferences, workshops), programmes (short-term courses, faculty development
programmes), exam duty, paper evaluation and leave, so that a faculty member can
meet his/her requirements easily.
 Proper security is maintained at both the application level and the database level.
The application can be given priority at various levels of server technology by
applying proper authentication & authorization, code access security, etc. Database
security can be maintained with a backup & recovery configuration so that data can
be recovered.
 The application must be user friendly and event driven, so that operations can be
performed in a faster and more user-friendly way.

1.4: HARDWARE & SOFTWARE SPECIFICATIONS FOR DEVELOPMENT

1.4.1. HARDWARE REQUIREMENTS:

 Pentium III 500MHz or above
 128MB RAM
 100MB free hard disk space
 Standard color monitor
 NIC or modem (for remote sources)
 LAN network (for remote sources)

1.4.2. SOFTWARE REQUIREMENTS:

 WINDOWS NT 4 | 2000 | 9.X | ME


 Visual Studio .Net 2002 Enterprise Edition
 Visual Studio .Net Framework
 Asp.net, C#.net
 SQL Server 2000
1.5: HARDWARE & SOFTWARE SPECIFICATIONS FOR
DEPLOYMENT

 IIS Server
 Windows NT or later
2. FEASIBILITY STUDY

The main aim of the feasibility study activity is to determine whether it would be
financially & technically feasible to develop the product. This activity involves analysis of
the problem & collection of all relevant information relating to the product, such as the
different data items which would be input to the system, the processing required to be
carried out on this data, the output data required to be produced by the system, and the
various constraints on the behavior of the system.

The collected data are analyzed to arrive at the following:


1. An abstract problem definition.
2. Formulation of the different solution strategies.

2.1: ECONOMIC FEASIBILITY

Economic analysis is the method most frequently used for evaluating the effectiveness of a
system. More commonly known as cost/benefit analysis, the procedure is to determine the
benefits and savings that are expected from a system and compare them with the costs;
based on this comparison, a decision is made to design and implement the system.

This part of the feasibility study gives the top management the economic justification for
the new system. This is an important input to the management, because very often the top
management does not like to get confounded by the various technicalities that are bound
to be associated with a project of this kind. A simple economic analysis that gives the
actual comparison of costs and benefits is much more meaningful in such cases.

In this system, the organization is most satisfied with the economic feasibility, because if
the organization implements this system, it does not require any additional hardware
resources and it will save a lot of time.

2.2: TECHNICAL FEASIBILITY


Technical feasibility centers on the existing manual system of the test management process
and to what extent it can support the system. According to the feasibility analysis procedure,
the technical feasibility of the system is analyzed and the technical requirements, such as
software facilities, procedures and inputs, are identified. It is also one of the important
phases of the system development activities.

The system offers greater levels of user friendliness combined with greater processing
speed. Therefore, the cost of maintenance can be reduced. Since the processing speed is
very high and the maintenance work is reduced, the management is convinced that the
project is technically feasible.

2.3 OPERATIONAL FEASIBILITY:


Proposed projects are beneficial only if they can be turned into information systems that will
meet the organization's operating requirements. Simply stated, this test of feasibility asks
whether the system will work when it is developed and installed. People are inherently
resistant to change, and computers have been known to facilitate change. An estimate should
be made of how strongly the users are likely to move towards the development of a
computerized system. There are various levels of users, in order to ensure proper
authentication, authorization and security of sensitive data of the organization.

• Is there sufficient support for the project from management and from users? If the
current system is well liked and used to the extent that people will not be able to see
reasons for change, there may be resistance.
• Are the current business methods acceptable to the users? If they are not, users may
welcome a change that will bring about a more operational and useful system.

Since the proposed system was to help reduce the hardships encountered in the existing
manual system, the new system was considered operationally feasible.

3. PROJECT ANALYSIS
3.1. PROJECT ANALYSIS

A faculty member in an academic institution has to carry out three tasks:


 Administrative Duty
 Academic Duty
 Personal Development

3.1.1. Administration Section:

 Department Details
 Branch Details
 Faculty Details
 Room Details
 Semester Details
 Subject Details
 Class Details
 Exam Duty Details
 Activity Details
 Meeting Details
 Leave Details

Instructor Class detail:


This part of the application allows the administrator to enter the detailed information
about his/her classes (classroom, branch, blackboard status, projector, etc.).

Exam duty Detail:


This part of the application allows the administrator to enter the information about
his/her exam duty.

Instructor Activity Detail:


This part of the application allows the administrator to enter the detailed information
about seminars, workshops and conferences.

Meeting detail:
With this part of an application, the administrator can enter the information about the
meeting.

Instructor Leave:
In this part of the application, the administrator accepts or declines leave applications.

Semester Details:
In this part of the application, the administrator enters the names of all the semesters.

Branch Details:
In this part of the application, the administrator enters the names of all the branches.

Room Details:
In this part of the application, the administrator enters all the class room numbers.

Time schedule master/Class Details:

In this part of the application, the administrator enters the entire time schedule for a
class.

Subject master:
In this part of the application, the administrator enters all the subject names.
4. SOFTWARE REQUIREMENT SPECIFICATION

4.1. REQUIREMENT SPECIFICATION

The requirements analysis and specification phase is there to clearly understand the
customer requirements and to systematically organize these requirements into a specification
document.

Purpose: The main purpose of preparing this document is to give a general insight into the
analysis and requirements of the existing system or situation, and to determine the
operating characteristics of the system.
Scope: This document plays a vital role in the software development life cycle (SDLC). As
it describes the complete requirements of the system, it is meant for use by the developers
and will be the basis of the testing phase. Any changes made to the requirements in the
future will have to go through a formal change approval process.
Developers Responsibilities Overview:
The developer is responsible for
I. Developing the system so that it meets the SRS and solves all the requirements
of the system.
II. Demonstrating the system and installing the system at client’s location after
the acceptance testing is successful.
III. Submitting the required user manual describing the system interfaces to work
on it and also the documents of the system.
IV. Conducting any user training that might be needed for using the system.
V. Maintaining the system for a period of one year after installation.
4.2. Functional Requirements:

4.2.1. OUTPUT DESIGN

Outputs from computer systems are required primarily to communicate the results of
processing to users. They are also used to provide a permanent copy of the results for later
consultation. The various types of outputs in general are:
 External outputs, whose destination is outside the organization.
 Internal outputs, whose destination is within the organization and which are the
user's main interface with the computer.
 Operational outputs, whose use is purely within the computer department.
 Interface outputs, which involve the user in communicating directly with the
organization.

4.2.2. OUTPUT DEFINITION

The output should be defined in terms of the following points:


 Type of the output
 Content of the output
 Format of the output
 Location of the output
 Frequency of the output
 Volume of the output
 Sequence of the output

It is not always desirable to print or display data as it is held on a computer. It should be
decided which form of output is most suitable.

4.2.3. OUTPUT MEDIA:


In the next stage it is to be decided which medium is the most appropriate for the output.
The main considerations when deciding about the output media are:
 Suitability of the device to the particular application.
 Need for a hard copy.
 Response time required.
 Location of the users.
 Software and hardware available.
 Cost.

Keeping in view the above description, the outputs of the project mainly come under the
category of internal outputs. The main outputs desired according to the requirement
specification are:

The outputs needed to be generated as hard copies as well as queries to be viewed on the
screen. Keeping these outputs in view, the format for the output is taken from the output
which is currently being obtained after manual processing. A standard printer is to be used
as the output medium for hard copies.

4.2.4. INPUT DESIGN

Input design is a part of the overall system design. The main objectives during input
design are:
• To produce a cost-effective method of input.
• To achieve the highest possible level of accuracy.
• To ensure that the input is acceptable and understood by the user.

4.2.5. INPUT STAGES:

The main input stages can be listed as below:


• Data recording
• Data transcription
• Data conversion
• Data verification
• Data control
• Data transmission
• Data validation
• Data correction

4.2.6. INPUT TYPES:

It is necessary to determine the various types of inputs. Inputs can be categorized as


follows:
• External inputs, which are prime inputs for the system.
• Internal inputs, which are user communications with the system.
• Operational inputs, which are computer department’s communications to the
system.
• Interactive inputs, which are inputs entered during a dialogue.

4.2.7. INPUT MEDIA:

At this stage a choice has to be made about the input media. To decide on the input
media, consideration has to be given to:
• Type of input
• Flexibility of format
• Speed
• Accuracy

• Verification methods
• Rejection rates
• Ease of correction
• Storage and handling requirements
• Security
• Easy to use
• Portability

Keeping in view the above description of the input types and input media, it can be said
that most of the inputs are internal and interactive. As the input data is to be directly
keyed in by the user, the keyboard can be considered the most suitable input device.

ERROR AVOIDANCE
At this stage care is to be taken to ensure that input data remains accurate from the stage at
which it is recorded up to the stage at which it is accepted by the system. This can be
achieved only by means of careful control each time the data is handled.

ERROR DETECTION
Even though every effort is made to avoid the occurrence of errors, a small proportion of
errors is still likely to occur. These errors can be discovered by using validations to check
the input data.

DATA VALIDATION
Procedures are designed to detect errors in data at a lower level of detail. Data validations
have been included in the system in almost every area where there is a possibility for the user
to commit errors. The system will not accept invalid data. Whenever invalid data is keyed
in, the system immediately prompts the user, and the user has to key in the data again; the
system will accept the data only if it is correct. Validations have been included where
necessary.
The system is designed to be user friendly. In other words, the system has been designed
to communicate effectively with the user. The system has been designed with pop-up
menus.

USER INTERFACE DESIGN

It is essential to consult the system users and discuss their needs while designing the user
interface.
Classification

USER INITIATED INTERFACES

In a user-initiated interface, the user is in charge and controls the progress of the
user/computer dialogue.
User-initiated interfaces fall into two approximate classes:
1. Command-driven interfaces: In this type of interface the user inputs
commands or queries which are interpreted by the computer.
2. Forms-oriented interfaces: The user calls up an image of the form on his/her
screen and fills in the form. The forms-oriented interface was chosen
because it is the best fit for this application.

COMPUTER INITIATED INTERFACES

In computer-initiated interfaces, the computer guides the progress of the user/computer
dialogue.

The following computer-initiated interfaces were used:

1. The menu system, in which the user is presented with a list of alternatives and
chooses one of them.
2. The question-answer type dialog system, in which the computer asks a question
and takes action on the basis of the user's reply.

Right from the start, the system is menu driven: the opening menu displays the available
options. Choosing one option gives another pop-up menu with more options. In this way,
every option leads the user to a data-entry screen where the user can key in the data.

ERROR MESSAGE DESIGN:

The design of error messages is an important part of the user interface design. As the user
is bound to commit some errors or other mistakes while using a system, the system should
be designed to be helpful by providing the user with information regarding the error he/she
has committed.
This application must be able to produce output at different modules for different inputs.

PERFORMANCE REQUIREMENTS:

Performance is measured in terms of reports generated weekly and monthly. Requirement
specification plays an important part in the analysis of a system. Only when the requirement
specifications are properly given is it possible to design a system which will fit into the
required environment. It rests largely with the users of the existing system to give the
requirement specifications, because they are the people who will finally use the system.
The requirements have to be known during the initial stages so that the system can be
designed according to them. It is very difficult to change a system once it has been
designed; on the other hand, designing a system which does not cater to the requirements
of the users is of no use.

The requirement specification for any system can be broadly stated as given below:

• The system should be able to interface with the existing system.


• The system should be accurate.
• The system should be better than the existing system.

The existing system is completely dependent on the faculties to perform all the duties.
5. SELECTED SOFTWARE

Microsoft.NET Framework

The .NET Framework is a new computing platform that simplifies application development in
the highly distributed environment of the Internet. The .NET Framework is designed to fulfill
the following objectives:

• To provide a consistent object-oriented programming environment whether object


code is stored and executed locally, or executed locally but Internet-distributed, or
executed remotely.

• To provide a code-execution environment that minimizes software deployment and


versioning conflicts.

• To provide a code-execution environment that guarantees safe execution of code,


including code created by an unknown or semi-trusted third party.

• To provide a code-execution environment that eliminates the performance problems


of scripted or interpreted environments.

• To make the developer experience consistent across widely varying types of
applications, such as Windows-based applications and Web-based applications.

• To build all communication on industry standards to ensure that code based on the
.NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language runtime and the
.NET Framework class library. The common language runtime is the foundation of the .NET
Framework. We can think of the runtime as an agent that manages code at execution time,
providing core services such as memory management, thread management, and remoting,
while also enforcing strict type safety and other forms of code accuracy that ensure security
and robustness. In fact, the concept of code management is a fundamental principle of the
runtime. Code that targets the runtime is known as managed code, while code that does not
target the runtime is known as unmanaged code. The class library, the other main component of
the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you
can use to develop applications ranging from traditional command-line or graphical user
interface (GUI) applications to applications based on the latest innovations provided by
ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common
language runtime into their processes and initiate the execution of managed code, thereby
creating a software environment that can exploit both managed and unmanaged features. The
.NET Framework not only provides several runtime hosts, but also supports the development of
third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for
managed code. ASP.NET works directly with the runtime to enable Web Forms applications
and XML Web services, both of which are discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form
of a MIME type extension). Using Internet Explorer to host the runtime enables us to embed
managed components or Windows Forms controls in HTML documents. Hosting the runtime in
this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but
with significant improvements that only managed code can offer, such as semi-trusted
execution and secure isolated file storage.

The following illustration shows the relationship of the common language runtime and the class
library to your applications and to the overall system. The illustration also shows how managed
code operates within a larger architecture.

Features of the Common Language Runtime

The common language runtime manages memory, thread execution, code execution, code
safety verification, compilation, and other system services. These features are intrinsic to the
managed code that runs on the common language runtime.
With regards to security, managed components are awarded varying degrees of trust,
depending on a number of factors that include their origin (such as the Internet, enterprise
network, or local computer). This means that a managed component might or might not be
able to perform file-access operations, registry-access operations, or other sensitive
functions, even if it is being used in the same active application.

The runtime enforces code access security. For example, users can trust that an
executable embedded in a Web page can play an animation on screen or sing a song, but
cannot access their personal data, file system, or network. The security features of the
runtime thus enable legitimate Internet-deployed software to be exceptionally
feature-rich.

The runtime also enforces code robustness by implementing a strict type- and code-
verification infrastructure called the common type system (CTS). The CTS ensures that all
managed code is self-describing. The various Microsoft and third-party language compilers
generate managed code that conforms to the CTS. This means that managed code can
consume other managed types and instances, while strictly enforcing type fidelity and type
safety.

In addition, the managed environment of the runtime eliminates many common software
issues. For example, the runtime automatically handles object layout and manages references to
objects, releasing them when they are no longer being used. This automatic memory
management resolves the two most common application errors, memory leaks and invalid
memory references.

The runtime also accelerates developer productivity. For example, programmers can
write applications in their development language of choice, yet take full advantage of the
runtime, the class library, and components written in other languages by other developers.
Any compiler vendor who chooses to target the runtime can do so. Language compilers that
target the .NET Framework make the features of the .NET Framework available to existing
code written in that language, greatly easing the migration process for existing applications.
While the runtime is designed for the software of the future, it also supports software of
today and yesterday. Interoperability between managed and unmanaged code enables
developers to continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the common language


runtime provides many standard runtime services, managed code is never interpreted. A
feature called just-in-time (JIT) compiling enables all managed code to run in the native
machine language of the system on which it is executing. Meanwhile, the memory manager
removes the possibilities of fragmented memory and increases memory locality-of-reference to
further increase performance.

Finally, the runtime can be hosted by high-performance, server-side applications, such


as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure
enables you to use managed code to write your business logic, while still enjoying the
superior performance of the industry's best enterprise servers that support runtime hosting.

.NET Framework Class Library

The .NET Framework class library is a collection of reusable types that tightly integrate with the
common language runtime. The class library is object oriented, providing types from which
your own managed code can derive functionality. This not only makes the .NET Framework
types easy to use, but also reduces the time associated with learning new features of the .NET
Framework. In addition, third-party components can integrate seamlessly with classes in the
.NET Framework.

For example, the .NET Framework collection classes implement a set of interfaces that we
can use to develop our own collection classes. Our collection classes will blend seamlessly
with the classes in the .NET Framework.

As we would expect from an object-oriented class library, the .NET Framework types
enable you to accomplish a range of common programming tasks, including tasks such as
string management, data collection, database connectivity, and file access. In addition to
these common tasks, the class library includes types that support a variety of specialized
development scenarios. For example, you can use the .NET Framework to develop the
following types of applications and services:
 Console applications.

 Scripted or hosted applications.

 Windows GUI applications (Windows Forms).

 ASP.NET applications.

 XML Web services.

 Windows services.

For example, the Windows Forms classes are a comprehensive set of reusable types that
vastly simplify Windows GUI development. If you write an ASP.NET Web Form
application, you can use the Web Forms classes.

Client Application Development

Client applications are the closest to a traditional style of application in Windows-based


programming. These are the types of applications that display windows or forms on the
desktop, enabling a user to perform a task. Client applications include applications such as
word processors and spreadsheets, as well as custom business applications such as data-entry
tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons,
and other GUI elements, and they likely access local resources such as the file system and
peripherals such as printers.

Another kind of client application is the traditional ActiveX control (now replaced by
the managed Windows Forms control) deployed over the Internet as a Web page. This
application is much like other client applications: it is executed natively, has access to local
resources, and includes graphical elements.

In the past, developers created such applications using C/C++ in conjunction with the
Microsoft Foundation Classes (MFC) or with a rapid application development (RAD)
environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects

of these existing products into a single, consistent development environment that drastically
simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be used
for GUI development. We can easily create command windows, buttons, menus, toolbars,
and other screen elements with the flexibility necessary to accommodate shifting business
needs.

For example, the .NET Framework provides simple properties to adjust visual attributes
associated with forms. In some cases the underlying operating system does not support
changing these attributes directly, and in these cases the .NET Framework automatically
recreates the forms. This is one of many ways in which the .NET Framework integrates the
developer interface, making coding simpler and more consistent.

Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's
computer. This means that binary or natively executing code can access some of the
resources on the user's system (such as GUI elements and limited file access) without being
able to access or compromise other resources. Because of code access security, many
applications that once needed to be installed on a user's system can now be safely deployed
through the Web. Our applications can implement the features of a local application while
being deployed like a Web page.

5.1. Visual Basic.NET


Introduction to Windows Forms (Visual Basic.NET)

Windows Forms is the new platform for Microsoft Windows application development, based on
the .NET Framework. This framework provides a clear, object-oriented, extensible set of classes
that enable us to develop rich Windows applications. Additionally, Windows Forms can act as
the local user interface in a multi-tier distributed solution. Windows Forms is a framework for
building Windows client applications that utilize the common language runtime. Windows
Forms applications can be written in any language that the common language runtime
supports.
What Is a Form?

A form is a bit of screen real estate, usually rectangular, that we can use to present
information to the user and to accept input from the user. Forms can be standard windows,
multiple document interface (MDI) windows, dialog boxes, or display surfaces for graphical
routines. The easiest way to define the user interface for a form is to place controls on its
surface. Forms are objects that expose properties which define their appearance, methods
which define their behavior, and events which define their interaction with the user. By
setting the properties of the form and writing code to respond to its events, we customize the
object to meet the requirements of your application.

As with all objects in the .NET Framework, forms are instances of classes. The form
you create with the Windows Forms Designer is a class, and when we display an instance of
the form at run time, this class is the template used to create the form. The framework also
allows us to inherit from existing forms to add functionality or modify existing behavior.
When we add a form to our project, we can choose whether it inherits from the Form class
provided by the framework, or from a form we have previously created.

Additionally, forms are controls, because they are inherited from the control class.
Within a Windows Forms project, the form is the primary vehicle for user interaction. By
combining different sets of controls and writing code, we can elicit information from the user
and respond to it, work with existing stores of data, and query and write back to the file
system and registry on the user's local computer.

Although the form can be created entirely in the Code Editor, it is easier to use the
Windows Forms Designer to create and modify forms.

Some of the advantages of using Windows Forms include the following:

 Simplicity and power: Windows Forms is a programming model for developing


Windows applications that combines the simplicity of the Visual Basic 6.0
programming model with the power and flexibility of the common language runtime.

 Lower total cost of ownership: Windows Forms takes advantage of the versioning and
deployment features of the common language runtime to offer reduced deployment
costs and higher application robustness over time. This significantly lowers the
maintenance costs (TCO) for applications written in Windows Forms.

 Architecture for controls: Windows Forms offers an architecture for controls and
control containers that is based on concrete implementation of the control and
container classes. This significantly reduces control-container interoperability issues.

 Security: Windows Forms takes full advantage of the security features of the common
language runtime. This means that Windows Forms can be used to implement
everything from an untrusted control running in the browser to a fully trusted
application installed on a user's hard disk.

 XML Web services support: Windows Forms offers full support for quickly and
easily connecting to XML Web services.

 Rich graphics: Windows Forms is one of the first ship vehicles for GDI+, a new
version of the Windows Graphical Device Interface (GDI) that supports alpha
blending, texture brushes, advanced transforms, rich text support, and more.

 Flexible controls: Windows Forms offers a rich set of controls that encompass all of
the controls offered by Windows. These controls also offer new features, such as "flat
look" styles for buttons, radio buttons, and check boxes.

 Data awareness: Windows Forms offers full support for the ADO data model.

 ActiveX control support: Windows Forms offers full support for ActiveX controls.
We can easily host ActiveX controls in a Windows Forms application. We can also
host a Windows Forms control as an ActiveX control.

 Licensing: Windows Forms takes advantage of the common language runtime


enhanced licensing model.

 Printing: Windows Forms offers a printing framework that enables applications to


provide comprehensive reports.

 Accessibility: Windows Forms controls implement the interfaces defined by


Microsoft Active Accessibility (MSAA), which make it simple to build applications
that support accessibility aids, such as screen readers.

 Design-time support: Windows Forms takes full advantage of the meta-data and
component model features offered by the common language runtime to provide
thorough design-time support for both control users and control implementers.

5.2. ASP.NET
ASP.NET is a set of web development technologies that enable programmers to build web
applications and XML web services.

The ASP.NET 2.0 provider model was designed with the following goals in mind:

 To make ASP.NET state storage both flexible and extensible

 To insulate application-level code and code in the ASP.NET run-time from the
physical storage media where state is stored, and to isolate the changes required to
use alternative media types to a single well-defined layer with minimal surface area

 To make writing custom providers as simple as possible by providing a robust and
well-documented set of base classes from which developers can derive provider
classes of their own

It is expected that developers who wish to pair ASP.NET 2.0 with data sources for which
off-the-shelf providers are not available can, with a reasonable amount of effort, write
custom providers to do the job.
• ASP.NET drastically reduces the amount of code required to build large applications
• ASP.NET makes development simpler and easier to maintain with an event-driven,
server-side programming model.

• ASP.NET pages are easy to write and maintain because the source code and HTML
are together
• The source code is executed on the server, which gives the pages a lot of power
and flexibility.
How does ASP.NET work?

When a browser requests an HTML file, the server simply returns the file. When a browser
requests an ASP.NET file, IIS passes the request to the ASP.NET engine on the server. The
ASP.NET engine reads the file, line by line, and executes the scripts in the file. Finally, the
ASP.NET file is returned to the browser as plain HTML.

Creating an ASP.NET Project

File › New › Web Site…

Web Site Location:
 File System
 HTTP
 FTP
Language:
 VB.NET
 C#

Page Life Cycle:

Once the HTTP page handler class is fully identified, the ASP.NET runtime calls the
handler's ProcessRequest() method to start processing. This implementation begins by
calling the method FrameworkInitialize(), which builds the control tree for the page. This
is a protected and virtual member of the TemplateControl class, the class from which Page
itself derives.

Next, ProcessRequest() takes the page through its various phases: initialization, loading of
view state and postback data, loading of the page's user code and execution of postback
server-side events. Then the page enters render mode: the view state is updated and the
generated HTML is sent to the output stream. Finally the page is unloaded and the request
is considered completely served.
Stages and corresponding events in the life cycle of an ASP.NET page:

Stage                                Event/Method
Page initialization                  Page_Init
View state loading                   LoadViewState
Postback data processing             LoadPostData
Page loading                         Page_Load
Postback change notification         RaisePostDataChangedEvent
Postback event handling              RaisePostBackEvent
Page pre-rendering                   Page_PreRender
View state saving                    SaveViewState
Page rendering                       Render
Page unloading                       Page_Unload

Page Execution Stages:

The first stage in the page life cycle is initialization. This is fired after the page's control
tree has been successfully created. All the controls that are statically declared in the .aspx
file are initialized with their default values. Controls can use this event to initialize settings
that will be used throughout the lifetime of the incoming web request. View state
information is not available at this stage.

After initialization, the page framework loads the view state for the page. View state is a
collection of name/value pairs in which the controls and the page itself store information
that must persist between web requests. It contains the state of the controls the last time
the page was processed on the server. By overriding the LoadViewState() method, a
component developer can control how view state is restored.

Once view state is restored, each control is updated with the client-side changes: the
framework loads the posted data values. The postback data processing stage gives each
control a chance to update its state so that it reflects the state of the corresponding HTML
element on the client.

At the end of the posted-data-changes stage, the controls reflect the changes made on the
client. At this point, the Load event is fired.
A key moment in the life cycle is when the server-side code associated with an event
triggered on the client is executed. When the user clicks a button, the page posts back. The
page framework calls RaisePostBackEvent, which looks up the event handler and runs the
associated delegate.

After the postback event, the page prepares for rendering and the PreRender event is
called. This is the place where the user can perform update operations before the view
state is stored and the output is rendered. The next stage is saving view state, in which all
the values of the controls are saved to their own view-state collections. The resulting view
state is serialized, hashed, Base64-encoded and associated with the __VIEWSTATE hidden
field.

Next the Render method is called. This method takes an HtmlTextWriter object and uses it
to accumulate all the HTML text to be generated for the control. For each control the page
calls the Render method and caches the HTML output. The rendering mechanism of a
control can be altered by overriding this Render method.

The final stage of the life cycle is the Unload event. This is called just before the page
object is dismissed. In this event, you can release critical resources you hold, such as
database connections, files and graphical objects. After this event the browser receives the
HTTP response packet and displays the page.
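As a brief, hypothetical sketch (the page name SamplePage is assumed, and the methods
are wired up by name when AutoEventWireup is enabled), the two stages that application
code most often touches look like this in a code-behind class:

    using System;
    using System.Web.UI;

    public partial class SamplePage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Runs at the page-loading stage, after view state and
            // postback data have been restored.
            if (!IsPostBack)
            {
                // First request: initialize control values here.
            }
        }

        protected void Page_PreRender(object sender, EventArgs e)
        {
            // Last chance to update control state before view state
            // is saved and the HTML output is rendered.
        }
    }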

ASP.NET Controls

The ASP.NET Framework (version 2.0) contains over 70 controls. These controls can be
divided into eight groups:
 Standard Controls: The standard controls enable you to render standard form
elements such as buttons, input fields, and labels.
 Validation Controls: The validation controls enable you to validate form data before
you submit the data to the server. For example, you can use a RequiredFieldValidator
control to check whether a user entered a value for a required input field.
 Rich Controls: The rich controls enable you to render things such as calendars, file
upload buttons, rotating banner advertisements, and multi-step wizards.
 Data Controls: The data controls enable you to work with data such as database data.
For example, you can use these controls to submit new records to a database table or
display a list of database records.
 Navigation Controls: The navigation controls enable you to display standard
navigation elements such as menus, tree views, and bread crumb trails.
 Login Controls: The login controls enable you to display login, change password, and
registration forms.
 Web Part Controls: The Web Part controls enable you to build personalizable portal
applications.
 HTML Controls: The HTML controls enable you to convert any HTML tag into a
server-side control.

5.3. C#.NET:

Data is physically stored inside cells of memory. This memory could be physical memory
(the hard disk) or logical memory (RAM). Any cell of memory is represented by a unique
address. This address is no more than some combination of numbers or symbols.

The C# language provides practically all the data types. These types can be divided into
three categories: value types, reference types and pointer types.

There are some more basic concepts to be learnt before the discussion of the data types:
variables and constants. A variable is a named cell of memory used for data storage. A
variable's value can be changed at any time. Every variable must have a type, and this type
must be set before the variable is used. Qualifying a variable with a type is called the
declaration of the variable. The type of a variable is its most important aspect, as it defines
the behavior of the variable. All variables can be divided into seven main categories
depending on the context of usage:

1. Static variables

2. Instance variables

3. Array elements

4. Parameters passed by reference

5. Parameters passed by value

6. Returned values

7. Local variables.

Static variables stay alive throughout the life of a program. They are declared using the
static modifier.
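(A minimal sketch; the class name Counter is illustrative.)

    public class Counter
    {
        public static int Instances;    // static variable: one copy shared by the whole program

        public Counter()
        {
            Instances++;                // every new object updates the shared count
        }
    }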

Constants in C#:

A constant's value cannot be changed. To declare a constant, the keyword const is used. An
example of a constant declaration is: const double PI = 3.1415;

Value types in C#:

A value type stores the real data. When the data is passed to a different function, a local
copy of these memory cells is created. This guarantees that changes made to our data in
one function do not change it in some other function. Let us look at a simple example:

public class IntClass
{
    public int I = 1;
}

Here we have a simple class that contains only one public data field of integer type. Now
have a look at its usage in the Main function:

static void Main(string[] args)
{
    // Value types: j receives a copy of i, so changing j does not affect i.
    int i = 10;
    int j = i;
    j = 11;

    // Reference types: ic2 refers to the same object as ic1,
    // so changing ic2.I also changes ic1.I.
    IntClass ic1 = new IntClass();
    IntClass ic2 = ic1;
    ic2.I = 100;

    Console.WriteLine("value of i is {0} and j is {1}", i, j);
    Console.WriteLine();
    Console.WriteLine("value of ic1.I is {0} and ic2.I is {1}", ic1.I, ic2.I);
    Console.WriteLine();
}

Reference Types in C#:

In the above example, first we have two value-type variables, i and j, where the second
variable is initialized with the value of the first one. This creates a new copy in memory, so
the two variables can subsequently change independently:

i = 10;
j = i;

There are a few more things in the above example that explain reference types in C#. First,
the variable ic1 of type IntClass is created using dynamic memory allocation. Then we
initialize the variable ic2 with the value of ic1. This makes both variables ic1 and ic2 refer
to the same address, so if we change a value through ic2, it automatically changes the
value seen through ic1.

Now, over to the important value types used in C#. The category of simple types contains
some predefined (system) types that are also commonly used in other programming
languages. It contains the integer types byte, sbyte, short, ushort, int, uint, long and ulong,
which differ only in range of values and sign.

The next simple type is the character type. To declare a variable of this type, use the
keyword char. It holds a single 16-bit Unicode character.

The Boolean type has only two values: true and false. Unlike in the C++ language, these
values cannot be replaced by 0 or 1.

The next category of simple types is the floating-point types, float and double. The float
type can hold values in the range from 1.5×10^-45 to 3.4×10^38. The double type has a
range of values from 5.0×10^-324 to 1.7×10^308.

The structural value types are struct and enum. A struct is much the same as a class, but it
holds real values, not references. The following code snippet contains a definition for a struct:
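(A minimal sketch; the name Point3D and its fields are assumed.)

    // A simple structure representing a real 3D point.
    public struct Point3D
    {
        public double X;
        public double Y;
        public double Z;
    }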

The above is a declaration for a simple structure representing a real 3D point. As you can
see, a class declaration looks very similar to a struct, except that the class may also define
a constructor.
Common types in C#:

The object type in C# is universal; all supported types are derived from it. It contains only
a couple of methods: GetType(), which returns the type of the object, and ToString(),
which returns the string equivalent of the instance on which it is called.

The next type is the class. It is declared in the same manner as the structure type, but it has
more advanced features.

An interface is an abstract type. It is used only for declaring a type with abstract
members, that is, members without implementations. Please have a look at a piece of code
with a declaration for a simple interface:
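(A minimal sketch; the name IShape and its members are assumed.)

    // A simple interface: a method, a property and an indexer, all without implementations.
    public interface IShape
    {
        double Area();                          // method
        string Name { get; }                    // property
        double this[int index] { get; set; }    // indexer
    }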

The members of an interface can be methods, properties and indexers.

The next reference type to be dealt with is the delegate. The main goal of delegate usage is
the encapsulation of methods. It is most like a pointer to a function in C++.
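(A minimal sketch of declaring and invoking a delegate; the name Logger is assumed.)

    // A delegate type that encapsulates any method taking a string and returning nothing.
    public delegate void Logger(string message);

    // Point the delegate at a matching method, then invoke it.
    Logger log = new Logger(Console.WriteLine);
    log("delegates encapsulate methods");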

String is a common type that is used in almost all high-level programming languages. An
example of string declaration and initialization:

string s = "declaration and init";

The last heavily used reference type is the array. An array is a set of elements that have the
same type. An array contains a list of references to memory cells and it can contain any
number of members. In C# there are three kinds of arrays: one-dimensional,
two-dimensional and jagged.
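(A short sketch of the three kinds of arrays; the values are illustrative.)

    int[] oneDim = { 1, 2, 3 };                  // one-dimensional
    int[,] twoDim = { { 1, 2 }, { 3, 4 } };      // two-dimensional (rectangular)
    int[][] jagged = new int[2][];               // jagged: an array of arrays
    jagged[0] = new int[] { 1 };
    jagged[1] = new int[] { 2, 3, 4 };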
So, this covers almost all the types used in C#. All these types can be cast to another type
using special rules. An implicit cast can be done when the values of variables can be
converted without losing any data. There is a special kind of implicit cast called boxing,
which enables us to convert any value type to a reference type, the object type.
Boxing example:
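(A minimal sketch; the variable names are illustrative.)

    int n = 42;              // a value type
    object boxed = n;        // boxing: the value is copied into an object on the heap
    int m = (int)boxed;      // unboxing: an explicit cast back to the value type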

5.4. ADO.NET

ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the web with
scalability, statelessness, and XML in mind.

ADO.NET uses some ADO objects, such as the Connection and Command objects, and
also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and
DataAdapter.

The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object, the DataSet, that is separate and distinct from
any data stores. Because of that, the DataSet functions as a standalone entity. We can think
of the DataSet as an always disconnected recordset that knows nothing about the source or
destination of the data it contains. Inside a DataSet, much like in a database, there are
tables, columns, relationships, constraints, views, and so forth.

A DataAdapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed while the
DataSet held the data. In the past, data processing has been primarily connection-based. Now,
in an effort to make multi-tiered apps more efficient, data processing is turning to a
message-based approach that revolves around chunks of information. At the center of this
approach is the DataAdapter, which provides a bridge to retrieve and save data between a
DataSet and its source data store. It accomplishes this by means of requests to the appropriate
SQL commands made against the data store.

The XML-based DataSet object provides a consistent programming model that works
with all models of data storage: flat, relational, and hierarchical. It does this by having no
'knowledge' of the source of its data, and by representing the data that it holds as collections
and data types. No matter what the source of the data within the Dataset is, it is manipulated
through the same set of standard APIs exposed through the DataSet and its subordinate
objects.

While the DataSet has no knowledge of the source of its data, the managed provider has
detailed and specific information. The role of the managed provider is to connect to, fill,
and persist the DataSet to and from data stores. The OLE DB and SQL Server .NET Data
Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .Net
Framework provide four basic objects: the Command, Connection, DataReader and
DataAdapter. In the remaining sections of this document, we'll walk through each part of the

DataSet and the OLE DB/SQL Server .NET Data Providers explaining what they are, and
how to program against them.

The following sections will introduce you to some objects that have evolved, and some
that are new. These objects are:

 Connections. For connection to and managing transactions against a database.

 Commands. For issuing SQL commands against a database.

 Data Readers. For reading a forward-only stream of data records from a SQL Server
data source.

 Datasets. For storing, remoting and programming against flat data, XML data and
relational data.

 Data Adapters. For pushing data into a DataSet, and reconciling data against a
database.

When dealing with connections to a database, there are two different options: the SQL
Server .NET Data Provider (System.Data.SqlClient) and the OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider. It
is written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider is
used to talk to any OLE DB provider (as it uses OLE DB underneath).

Connections

Connections are used to 'talk to' databases, and are represented by provider-specific classes
such as SQLConnection. Commands travel over connections and result sets are returned in
the form of streams which can be read by a DataReader object, or pushed into a DataSet
object.
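As a brief sketch, opening a connection with the SQL Server .NET Data Provider might
look like this (the connection string is illustrative, and the System.Data.SqlClient
namespace is assumed to be imported):

    SqlConnection conn = new SqlConnection(
        "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");
    conn.Open();
    // ... execute commands over the open connection ...
    conn.Close();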

Commands

Commands contain the information that is submitted to a database, and are represented by
provider-specific classes such as SQLCommand. A command can be a stored procedure call, an
UPDATE statement, or a statement that returns results. We can also use input and output
parameters, and return values as part of our command syntax. The example below shows how to
issue an INSERT statement against the Northwind database.
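A minimal sketch of such an INSERT, using a parameterized SqlCommand (the Shippers
table and its columns come from the standard Northwind sample; conn is assumed to be an
open SqlConnection as above, and the System.Data namespace is assumed to be imported
for SqlDbType):

    SqlCommand cmd = new SqlCommand(
        "INSERT INTO Shippers (CompanyName, Phone) VALUES (@name, @phone)", conn);
    cmd.Parameters.Add("@name", SqlDbType.NVarChar, 40).Value = "Speedy Express";
    cmd.Parameters.Add("@phone", SqlDbType.NVarChar, 24).Value = "(503) 555-9831";
    cmd.ExecuteNonQuery();   // runs the INSERT; no result set is returned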

Data Readers

The DataReader object is somewhat synonymous with a read-only/forward-only cursor over


data. The DataReader API supports flat as well as hierarchical data. A DataReader object is
returned after executing a command against a database. The format of the returned
DataReader object is different from a recordset. For example, we might use the DataReader to
show the results of a search list in a web page.
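A minimal sketch of reading rows with a SqlDataReader (the query and column name
reuse the Northwind example; conn is assumed to be an open SqlConnection):

    SqlCommand cmd = new SqlCommand("SELECT CompanyName FROM Shippers", conn);
    SqlDataReader reader = cmd.ExecuteReader();
    while (reader.Read())                       // forward-only, read-only traversal
    {
        Console.WriteLine(reader["CompanyName"]);
    }
    reader.Close();                             // free the connection for further commands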

Datasets and Data Adapters

Datasets
The Dataset object is similar to the ADO Recordset object, but is more powerful, and with
one other important distinction: the DataSet is always disconnected. The DataSet object
represents a cache of data, with database-like structures such as tables, columns,
relationships, and constraints. However, though a DataSet can and does behave much like a
database, it is important to remember that DataSet objects do not interact directly with
databases, or other source data. This allows the developer to work with a programming
model that is always consistent, regardless of where the source data resides. Data coming
from a database, an XML file, from code, or user input can all be placed into DataSet objects.
Then, as changes are made to the DataSet they can be tracked and verified before updating
the source data. The GetChanges method of the DataSet object actually creates a second
DataSet that contains only the changes to the data. This DataSet is then used by a
DataAdapter (or other objects) to update the original data source.

The DataSet has many XML characteristics, including the ability to produce and
consume XML data and XML schemas. XML schemas can be used to describe schemas
interchanged via Web Services. In fact, a DataSet with a schema can actually be compiled
for type safety and statement completion.

Data Adapters (OLE DB/SQL)

The DataAdapter object works as a bridge between the DataSet and the source data.
Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and
SqlConnection) can increase overall performance when working with Microsoft SQL
Server databases. For other OLE DB-supported databases, we would use the
OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection
objects.

The DataAdapter object uses commands to update the data source after changes have been
made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command;
using the Update method calls the INSERT, UPDATE or DELETE command for each
changed row. We can explicitly set these commands in order to control the statements used at
runtime to resolve changes, including the use of stored procedures. For ad-hoc scenarios, a
Command Builder object can generate these at run-time based upon a select statement.
However, this run-time generation requires an extra round-trip to the server in order to gather
required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at
design time will result in better run-time performance.
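A minimal sketch of this Fill/Update round trip, continuing the Northwind example (the
names are illustrative; conn is a SqlConnection, and the System.Data and
System.Data.SqlClient namespaces are assumed to be imported):

    SqlDataAdapter adapter = new SqlDataAdapter(
        "SELECT ShipperID, CompanyName, Phone FROM Shippers", conn);

    // For this ad-hoc case, let a command builder generate the
    // INSERT, UPDATE and DELETE commands at run time.
    SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

    DataSet ds = new DataSet();
    adapter.Fill(ds, "Shippers");               // runs the SELECT and fills the cache

    // Edit the disconnected data...
    ds.Tables["Shippers"].Rows[0]["Phone"] = "(503) 555-0000";

    adapter.Update(ds, "Shippers");             // pushes the changed rows back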

Data Adapter

 ADO.NET is the next evolution of ADO for the .NET Framework. ADO.NET was
created with n-tier applications, statelessness and XML at the forefront.
 Two new objects, the DataSet and DataAdapter, are provided for these scenarios.

 ADO.NET can be used to get data from a stream, or to store data in a cache for
updates.
 There is a lot more information about ADO.NET in the documentation.
 We can execute a command directly against the database in order to do inserts,
updates, and deletes. We don't need to first put data into a DataSet in order to
insert, update, or delete it.

 Also, we can use a DataSet to bind to the data, move through the data, and
navigate data relationships.

6. PROJECT DESIGN
SOFTWARE ENGINEERING PARADIGM APPLIED (RAD MODEL)

The two design objectives continuously sought by developers are reliability and maintenance.
Reliable System
There are two levels of reliability. The first is meeting the right requirements; a careful and
thorough system study is needed to satisfy this aspect of reliability. The second level of
systems reliability involves the actual working of the system as delivered to the user. At this
level, systems reliability is interwoven with software engineering and development. There are
three approaches to reliability.

1) Error avoidance: Prevents errors from occurring in the software.

2) Error detection and correction: Errors are recognized whenever they are
encountered and corrected, so that the effect of the error is contained and
the system does not fail.
3) Error tolerance: Errors are recognized whenever they occur, but the system
keeps running, either with degraded performance or by applying default
values that allow processing to continue.

Maintenance:
The key is to reduce the need for maintenance and, where maintenance is unavoidable, to
make it more effective by:
1) More accurately defining user requirements during system development.
2) Assembling better systems documentation.
3) Using more effective methods for designing and processing, and for
communicating information with project team members.
4) Making better use of existing tools and techniques.
5) Managing the system engineering process effectively.

Output Design:
One of the most important factors of an information system for the user is the output the
system produces. If the output is of poor quality, the entire system may appear
unnecessary, causing users to avoid it and possibly causing it to fail. Output design should
therefore proceed in a well-organized manner throughout the development of the information
system. The right output must be developed while ensuring that each output element is
designed so that people will find the system easy to use effectively.

The term output applies to any information produced by an information system, whether
printed or displayed. While designing the output, one should identify the specific output
needed to meet the information requirements, select a method to present the information,
and create the document, report, or other format that contains the information produced by
the system.

Types of output:

Whether the output is a formatted report or a simple listing of the contents of a file, a computer
process will produce the output.

 A document
 A message
 Retrieval from a data store
 Transmission from a process or system activity
 Directly from an output source

Layout Design:

A layout is an arrangement of items on the output medium. Designing a layout involves
building a mock-up of the actual report or document as it will appear after the system is in
operation. The output layout has been designed to cover the required information.
Input design and control:

Input specifications describe the manner in which data enter the system for processing.
Input design features ensure the reliability of the system and produce results from accurate
data; poor input design can result in the production of erroneous information. The input
design also determines whether the user can interact efficiently with the system.

Objectives of input design:

Input design consists of developing specifications and procedures for data preparation, the
steps necessary to put transaction data into a usable form for processing, and the activity of
entering data into the computer for processing. The five objectives of input design are:

 Controlling the amount of input


 Avoiding delay
 Avoiding error in data
 Avoiding extra steps
 Keeping the process simple

Controlling the amount of input:

Data preparation and data entry operations depend on people. Because labour costs are high,
the cost of preparing and entering data is also high. Reducing the input requirement reduces
this expense and increases the speed of the entire process, from data capture through
processing to the delivery of results to users.

Avoiding delay:
A processing delay resulting from data preparation or data entry operations is called a
bottleneck. Avoiding bottlenecks should be one objective of input design.

Avoiding errors:
Through input validation we control the errors in the input data.
Avoiding extra steps:
The designer should avoid input designs that cause extra steps in processing; saving or
removing a single step across a large number of transactions saves a great deal of
processing time.

Keeping process simple:


If the process is complicated, the people who operate it may find the system difficult to use.
The best-designed system fits the people who use it in a way that is comfortable for them.
7. NORMALIZATION

It is the process of converting a relation to a standard form. The process is used to handle
problems that can arise due to data redundancy, i.e. repetition of data in the database, to
maintain data integrity, and to handle problems that can arise due to insertion, update, and
deletion anomalies.

Decomposition is the process of splitting a relation into multiple relations to eliminate
anomalies and maintain data integrity. To do this we use normal forms, i.e. rules for
structuring relations.

Insertion anomaly:
Inability to add data to the database due to absence of other data.

Deletion anomaly:
Unintended loss of data due to deletion of other data.

Update anomaly:
Data inconsistency resulting from data redundancy and partial update.

Normal Forms: These are the rules for structuring relations that eliminate anomalies.

First Normal Form:

A relation is said to be in first normal form if the values in the relation are atomic for every
attribute in the relation. By this we mean simply that no attribute value can be a set of values or,
as it is sometimes expressed, a repeating group.

Second Normal Form:
A relation is said to be in second normal form if it is in first normal form and satisfies any
one of the following rules:

1) The primary key is not a composite key.
2) No non-key attributes are present.
3) Every non-key attribute is fully functionally dependent on the full set of the
primary key.

Third Normal Form:

A relation is said to be in third normal form if there exist no transitive dependencies.

Transitive Dependency: If a non-key attribute depends on another non-key attribute, which
in turn depends on the primary key, the first attribute is said to be transitively dependent on
the key.
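
As an illustration using this project's own tables (the column layout here is assumed for the example): suppose Faculty_Master stored (Faculty_Id, Faculty_Name, Dept_Id, Dept_Name). Dept_Name depends on Dept_Id, which in turn depends on the key Faculty_Id, so Dept_Name is transitively dependent on the key. Decomposing into Faculty_Master(Faculty_Id, Faculty_Name, Dept_Id) and Department_Master(Dept_Id, Dept_Name) removes the transitive dependency: a department can be renamed by changing a single row, and a department can exist before any faculty is assigned to it.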

The above normalization principles were applied to decompose the data into multiple
tables, thereby allowing the data to be maintained in a consistent state.

Data Dictionary

After carefully understanding the requirements of the client, the entire data storage
requirement was divided into tables. The tables below are normalized to avoid any anomalies
during the course of data entry.

TABLES
7.1. Login Master

7.2. Department Master

7.3. Branch Master

7.4. Faculty Master

7.5. Room Master


7.6. Semester Master

7.7. Subject Master

7.8. Class Master


7.9. Exam Duty Master

7.10. Activity Master


7.11. Meeting Master

7.12. Leave Application Master


7.13. Faculty Regd.
8. DATABASE DIAGRAM

9. ER-DIAGRAMS

E-R diagrams are used to organize data as relations, normalize relations, and finally
obtain a relational database model.
Elements of an E-R diagram are:

1) ENTITY: Specifies a real-life object and is represented by a rectangle.

2) RELATIONSHIP: Connects entities and establishes meaningful dependencies
between them; represented by a diamond.

3) ATTRIBUTE: Specifies a property of an entity; represented by an ellipse.
10. DATA FLOW DIAGRAM

A data flow diagram is a graphical tool used to describe and analyze the movement of data
through a system. DFDs are the central tool and the basis from which other components are
developed. The transformation of data from input to output, through processes, may be
described logically and independently of the physical components associated with the system;
such diagrams are known as logical data flow diagrams. Physical data flow diagrams show
the actual implementation and movement of data between people, departments, and
workstations. A full description of a system actually consists of a set of data flow diagrams,
developed using one of two familiar notations: Yourdon's, or Gane and Sarson's. Each
component in a DFD is labeled with a descriptive name, and each process is further identified
with a number used for identification purposes. DFDs are developed in several levels: each
process in a lower-level diagram can be broken down into a more detailed DFD at the next
level. The top-level diagram is often called the context diagram. It consists of a single
process, which plays a vital role in studying the current system. The process in the context-
level diagram is exploded into other processes in the first-level DFD.

The idea behind the explosion of a process into more processes is that understanding at one
level of detail is expanded into greater detail at the next level. This is done until no further
explosion is necessary and an adequate amount of detail is described for the analyst to
understand the process.

Larry Constantine first developed the DFD as a way of expressing system requirements in
graphical form; this led to modular design.

A DFD, also known as a “bubble chart”, has the purpose of clarifying system
requirements and identifying the major transformations that will become programs in system
design. It is thus the starting point of design, carried down to the lowest level of detail. A
DFD consists of a series of bubbles joined by data flows in the system.

DFD SYMBOLS:
In the DFD, there are four symbols:
1) A circle or a bubble represents a process that transforms incoming data
flows into outgoing data flows.
2) A rectangle defines a source (originator) or destination of system data.
3) An arrow identifies a data flow; it is the pipeline through which
information flows.
4) An open rectangle is a data store: data at rest, or a temporary repository
of data.

CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFD’s:

1) Processes should be named and numbered for easy reference. Each name
should be representative of the process.
2) The direction of flow is from top to bottom and from left to right. Data
traditionally flow from the source to the destination, although they may flow back to the
source. One way to indicate this is to draw a long flow line back to the source; an
alternative is to repeat the source symbol as a destination. Since it is used more than
once in the DFD, it is marked with a short diagonal.
3) When a process is exploded into lower-level details, the sub-processes are numbered.
4) The names of data stores and destinations are written in capital letters. Process
and data flow names have the first letter of each word capitalized.

A DFD typically shows the minimum contents of a data store. Each data store should
contain all the data elements that flow in and out.

Questionnaires should cover all the data elements that flow in and out; missing
interfaces, redundancies, and the like are then accounted for, often through interviews.

SALIENT FEATURES OF DFD:

 The DFD shows the flow of data, not of control; loops and decisions are control
considerations and do not appear on a DFD.
 The DFD does not indicate the time factor involved in any process, i.e. whether the
data flows take place daily, weekly, monthly, or yearly.
 The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS


1) Current Physical
2) Current Logical
3) New Logical
4) New Physical

CURRENT PHYSICAL:
In Current Physical DFD process label include the name of people or their positions or the
names of computer systems that might provide some of the overall system-processing label
includes an identification of the technology used to process the data. Similarly data flows
and data stores are often labels with the names of the actual physical media on which data are
stored such as file folders, computer files, business forms or computer tapes.

CURRENT LOGICAL:
The physical aspects at the system are removed as mush as possible so that the current
system is reduced to its essence to the data and the processors that transforms them
regardless of actual physical form.

NEW LOGICAL:

This is exactly like the current logical model if the user is completely happy with the
functionality of the current system but has problems with how it is implemented. Typically,
though, the new logical model will differ from the current logical model in having additional
functions, obsolete functions removed, and inefficient flows reorganized.

NEW PHYSICAL:

The new physical represents only the physical implementation of the new system.

RULES GOVERNING THE DFDS

PROCESS
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs, then it must be a sink.
3) A process has a verb-phrase label.

DATA STORE
1) Data cannot move directly from one data store to another data store; a process
must move the data.
2) Data cannot move directly from an outside source to a data store; a process must
receive the data from the source and place it into the data store.
3) A data store has a noun-phrase label.

SOURCE OR SINK
The origin and/or destination of data.
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun-phrase label.
DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in
both directions between a process and a data store to show a read before an
update; the latter is usually indicated, however, by two separate arrows, since
the two operations happen at different times.
2) A join in a DFD means that exactly the same data come from any of two or more
different processes, data stores, or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be
at least one other process that handles the data flow.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use.

DATA FLOW DIAGRAMS
11.UML DIAGRAM

UML - Unified Modeling Language


Introduction
Modeling is an activity that has been carried out over the years in software development.
Whether you write applications in the simplest language or in the most powerful and
complex one, you still need to model. Modeling can be as straightforward as drawing a
flowchart listing the steps carried out by an application. Why do we use modeling? Defining a
model makes it easier to break up a complex application or a huge system into simple,
discrete pieces that can be individually studied. We can focus more easily on the smaller
parts of a system and then understand the "big picture." Hence, the reasons behind modeling
can be summed up in two words:

• Readability

• Reusability

Readability brings clarity—ease of understanding. Understanding a system is the first step in either
building or enhancing a system. This involves knowing what a system is made up of, how it
behaves, and so forth. Modeling a system ensures that it becomes readable and, most importantly,
easy to document. Depicting a system to make it readable involves capturing the structure of a
system and the behavior of the system.

Reusability is the byproduct of making a system readable. After a system has been modeled to
make it easy to understand, we tend to identify similarities or redundancy, be they in terms of
functionality, features, or structure.

Even though there are many techniques and tools for modeling, in this article series,
we will be concerning ourselves with modeling object-oriented systems and applications
using the Unified Modeling Language. The Unified Modeling Language, or UML, as it is
popularly known by its TLA (three-letter acronym!), is the language that can be used to
model systems and make them readable. This essentially means that UML provides the
ability to capture the characteristics of a system by using notations. UML provides a wide
array of simple, easy to understand notations for documenting systems based on the object
oriented design principles. These notations are called the nine diagrams of UML.

So the question arises, why is UML the preferred option that should be used for
modeling? Well, the answer lies in one word: "standardization!" Different languages have
been used for depicting systems using object-oriented methodology. The prominent among
these were the Rumbaugh methodology, the Booch methodology, and the Jacobson
methodology. The problem was that, although each methodology had its advantages, they
were essentially disparate. Hence, if you had to work on different projects that used any of
these methodologies, you had to be well versed with each of these methodologies. A very tall
order indeed! The Unified Modeling Language is just that. It "unifies" the design principles
of each of these methodologies into a single, standard, language that can be easily applied
across the board for all object-oriented systems. But, unlike the different methodologies that
tended more to the design and detailed design of systems, UML spans the realm of
requirements, analysis, and design and, uniquely, implementation as well. The beauty of
UML lies in the fact that any of the nine diagrams of UML can be used on an incremental
basis as the need arises. For example, if you need to model requirements for a given system,
you can use the use case diagrams only without using the other diagrams in UML.
Considering all these reasons, it is no wonder that UML is considered "the" language of
choice.

UML does not have any dependencies with respect to any technologies or languages.
This implies that you can use UML to model applications and systems based on either of the
current hot technologies; for example, J2EE and .NET. Every effort has been made to keep
UML as a clear and concise modeling language without being tied down to any technologies.

This series aims to cover the basics of UML, including each of the nine diagrams of
UML. In addition, you will get to learn about the tools available that support UML. At the end
of each article, we will incrementally build each of the nine UML diagrams for a case study
system in the coming weeks. We will wrap our study of UML by expanding into two different
areas—Rational Unified Process and Design Patterns.
UML Diagrams

The underlying premise of UML is that no one diagram can capture the different elements of a
system in its entirety. Hence, UML is made up of nine diagrams that can be used to model a
system at different points of time in the software life cycle of a system. The nine UML
diagrams are:

• Use case diagram: The use case diagram is used to identify the primary
elements and processes that form the system. The primary elements are
termed as "actors" and the processes are called "use cases." The use case
diagram shows which actors interact with each use case.

• Class diagram: The class diagram is used to refine the use case diagram and
define a detailed design of the system. The class diagram classifies the actors
defined in the use case diagram into a set of interrelated classes. The
relationship or association between the classes can be either an "is-a" or "has-
a" relationship. Each class in the class diagram may be capable of providing
certain functionalities. These functionalities provided by the class are termed
"methods" of the class. Apart from this, each class may have certain
"attributes" that uniquely identify the class.

• Object diagram: The object diagram is a special kind of class diagram. An
object is an instance of a class. This essentially means that an object
represents the state of a class at a given point of time while the system is
running. The object diagram captures the state of different classes in the
system and their relationships or associations at a given point of time.

• State diagram: A state diagram, as the name suggests, represents the
different states that objects in the system undergo during their life cycle.
Objects in the system change states in response to events. In addition to this, a
state diagram also captures the transition of the object's state from an initial
state to a final state in response to events affecting the system.

• Activity diagram: The process flows in the system are captured in the
activity diagram. Similar to a state diagram, an activity diagram also consists
of activities, actions, transitions, initial and final states, and guard conditions.

• Sequence diagram: A sequence diagram represents the interaction between
different objects in the system. The important aspect of a sequence diagram is
that it is time-ordered. This means that the exact sequence of the interactions
between the objects is represented step by step. Different objects in the
sequence diagram interact with each other by passing "messages".

• Collaboration diagram: A collaboration diagram groups together the
interactions between different objects. The interactions are listed as numbered
interactions that help to trace the sequence of the interactions. The
collaboration diagram helps to identify all the possible interactions that each
object has with other objects.

• Component diagram: The component diagram represents the high-level parts
that make up the system. This diagram depicts, at a high level, what
components form part of the system and how they are interrelated. A
component diagram depicts the components culled after the system has
undergone the development or construction phase.

• Deployment diagram: The deployment diagram captures the configuration of
the runtime elements of the application. This diagram is by far most useful
when a system is built and ready to be deployed.

UML Diagram Classification—Static, Dynamic, and Implementation

A software system can be said to have two distinct characteristics: a structural, "static" part
and a behavioral, "dynamic" part. In addition to these two characteristics, an additional
characteristic that a software system possesses is related to implementation. Before we
categorize UML diagrams into each of these three characteristics, let us take a quick look at
exactly what these characteristics are.

• Static: The static characteristic of a system is essentially the structural
aspect of the system. The static characteristics define what parts the
system is made up of.

• Dynamic: The behavioral features of a system; for example, the ways a
system behaves in response to certain events or actions are the dynamic
characteristics of a system.

• Implementation: The implementation characteristic of a system is an
entirely new feature that describes the different elements required for
deploying a system.

The UML diagrams that fall under each of these categories are:

• Static

o Use case diagram

o Class diagram

• Dynamic

o Object diagram

o State diagram

o Activity diagram

o Sequence diagram

o Collaboration diagram
• Implementation

o Component diagram

o Deployment diagram
View of UML Diagrams

Considering that the UML diagrams can be used in different stages in the life cycle of a
system, let us take a look at the "4+1 view" of UML diagrams. The 4+1 view offers a
different perspective to classify and apply UML diagrams. The 4+1 view is essentially how a
system can be viewed from a software life cycle perspective. Each of these views represents
how a system can be modeled. This will enable us to understand where exactly the UML
diagrams fit in and their applicability.

These different views are:

• Design View: The design view of a system is the structural view of the
system. This gives an idea of what a given system is made up of. Class
diagrams and object diagrams form the design view of the system.

• Process View: The dynamic behavior of a system can be seen using the
process view. The different diagrams such as the state diagram, activity
diagram, sequence diagram, and collaboration diagram are used in this
view.

• Component View: Next, you have the component view that shows the
grouped modules of a given system modeled using the component
diagram.
• Deployment View: The deployment diagram of UML is used to identify
the deployment modules for a given system.

• Use case View: Finally, we have the use case view. Use case diagrams of
UML are used to view a system from this perspective as a set of discrete
activities or transactions.

(Figure: class diagram of FORM1 within SYSTEM — members: Dispose (protected, overloads, overrides), InitializeComponent (private), New (public), Load (private); parameters: sender (System.Object), e (System.EventArgs), disposing (Boolean); base class: SYSTEM.WEB.UI.FORMS.FORM.)
LOGIN PAGE
LOGIN SOURCE CODE
public partial class Login : System.Web.UI.MasterPage
{
    globalclass gb = new globalclass();

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void btn_Click(object sender, EventArgs e)
    {
        // Fetch the stored credentials for the entered user id.
        string str = "select user_password, Faculty_Id, User_type from login_master " +
                     "where Faculty_Id = '" + txtuserid.Text + "'";
        SqlDataReader dr = gb.logindata(str);

        if (dr.Read())
        {
            if (dr["User_Password"].ToString() == txtpassword.Text)
            {
                Session["User Name"] = txtuserid.Text;

                // The literal text must match the items in the user-type dropdown.
                if (ddl_Usertype.SelectedItem.Text == "Adminstrator" && dr["User_Type"].ToString() == "A")
                {
                    Response.Redirect("HomeAdmin.aspx");
                }
                else if (ddl_Usertype.SelectedItem.Text == "Faculty" && dr["User_Type"].ToString() == "F")
                {
                    Response.Redirect("HomeFac.aspx");
                }
            }
            else
            {
                Ltlp.Text = "Invalid Password";
            }
        }
        else
        {
            Ltlu.Text = "Invalid User Id";
        }
    }
}
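
The concatenated SQL in btn_Click is open to SQL injection. A safer variant, shown as a minimal sketch assuming the same login_master table and Webcs connection string, passes the user id as a parameter:

// Hedged sketch: parameterized lookup instead of string concatenation.
string sql = "select User_Password, Faculty_Id, User_Type from login_master " +
             "where Faculty_Id = @FacultyId";
using (SqlConnection cn = new SqlConnection(
    System.Configuration.ConfigurationManager.ConnectionStrings["Webcs"].ToString()))
using (SqlCommand cmd = new SqlCommand(sql, cn))
{
    cmd.Parameters.AddWithValue("@FacultyId", txtuserid.Text);
    cn.Open();
    using (SqlDataReader dr = cmd.ExecuteReader())
    {
        if (dr.Read() && dr["User_Password"].ToString() == txtpassword.Text)
        {
            Session["User Name"] = txtuserid.Text;
            // Redirect according to User_Type, as in the code above.
        }
    }
} // reader and connection are disposed here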

ADMIN HOMEPAGE
FACULTY HOMEPAGE
INFORMATION UPDATION

INFORMATION UPDATION SOURCE CODE


public partial class FacultyRegistration : System.Web.UI.Page
{
    globalclass gb = new globalclass();
    static string ss; // relative path of the uploaded photo

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            // Redirect unauthenticated users back to the login page.
            if (Session["User Name"] == null)
            {
                Response.Redirect("login.aspx");
            }
            Photo.Visible = false;
        }
    }

    protected void Btn_Disp_Click(object sender, EventArgs e)
    {
        // Save the uploaded image under Fac_img and preview it.
        string s = File_Upimg.FileName;
        ss = "Fac_img" + "/" + s;
        File_Upimg.SaveAs(Server.MapPath(ss));
        Photo.ImageUrl = ss;
        Photo.Visible = true;
    }

    protected void Btn_Submit_Click(object sender, EventArgs e)
    {
        // Insert the registration details via the Facreg stored procedure.
        string ss1 = Session["User Name"].ToString();
        gb.insert("Facreg '" + ss1 + "','" + Txt_Name.Text + "','" + Rbl_Gen.SelectedValue +
                  "','" + Txt_Tempadd.Text + "','" + Txt_Peradd.Text + "','" + Txt_Email.Text +
                  "','" + Txt_Phone.Text + "','" + Txt_Qual.Text + "','" + Txt_Spec.Text +
                  "','" + Txt_Univer.Text + "','" + Txt_Dob.Text + "','" + ss +
                  "','" + Txt_Dept.Text + "'");
    }

    protected void Btn_Update_Click(object sender, EventArgs e)
    {
        // Update the registration details via the UpdateFacredg stored procedure.
        string ss1 = Session["User Name"].ToString();
        gb.update("UpdateFacredg '" + ss1 + "','" + Txt_Name.Text + "','" + Rbl_Gen.SelectedValue +
                  "','" + Txt_Tempadd.Text + "','" + Txt_Peradd.Text + "','" + Txt_Email.Text +
                  "','" + Txt_Phone.Text + "','" + Txt_Qual.Text + "','" + Txt_Spec.Text +
                  "','" + Txt_Univer.Text + "','" + Txt_Dob.Text + "','" + ss + "'");
    }
}
OUR FACULTY
FACULTY INFORMATION DISPLAY
CLASS DETAILS/TIME TABLE
CLASS DETAILS SOURCE CODE

public partial class Section_master : System.Web.UI.Page
{
    globalclass gb = new globalclass();
    DataSet ds = new DataSet();

    protected void Page_Load(object sender, EventArgs e)
    {
        Lbl_Class_Save.Visible = false;
        Lbl_Class_Update.Visible = false;
        if (!Page.IsPostBack)
        {
            // Redirect unauthenticated users back to the login page.
            if (Session["User Name"] == null)
            {
                Response.Redirect("login.aspx");
            }
            Btn_Update.Visible = false;
            GridView1.Visible = false;

            // Populate the dropdowns from their master tables.
            gb.dispyadrp(ddl_branchname, "Branch_Master", "Branch_Name", "Branch_Id");
            gb.dispyadrp(ddl_semname, "Semister_Master", "Sem_Name", "Sem_Id");
            gb.dispyadrp(ddl_roomno, "Room_Master", "Room_No", "Room_Id");
            gb.dispyadrp(ddl_FacName, "Faculty_Master", "Faculty_Name", "Faculty_Id");
            gb.dispyadrp(ddl_Subject, "Subject_Master", "Sub_Name", "Sub_Id");
        }
    }

    protected void Btn_Save_Click(object sender, EventArgs e)
    {
        // Auto-generate the next section id of the form SEC001, SEC002, ...
        string str = "select max(isnull(Substring(Sec_id,4,len(Sec_id)),000)) Sec_id from section_master";
        string ss = gb.autogenerated(str);
        string ss1 = "SEC" + ss;

        // Insert the class/section record via the Sectionmaster stored procedure.
        gb.insert("Sectionmaster '" + ss1 + "','" + ddl_branchname.SelectedValue +
                  "','" + ddl_semname.SelectedValue + "','" + Txt_SecName.Text +
                  "','" + ddl_branchname.SelectedItem + "','" + ddl_semname.SelectedItem +
                  "','" + ddl_roomno.SelectedValue + "','" + ddl_roomno.SelectedItem +
                  "','" + ddl_FacName.SelectedValue + "','" + ddl_FacName.SelectedItem +
                  "','" + Txt_Day.Text + "','" + Txt_Starttime.Text + "','" + Txt_Endtime.Text +
                  "','" + ddl_Subject.SelectedValue + "','" + ddl_Subject.SelectedItem + "'");
        Lbl_Class_Save.Visible = true;
    }

    protected void Btn_Cancel_Click(object sender, EventArgs e)
    {
        Txt_SecName.Text = "";
        Txt_Day.Text = "";
        Txt_Starttime.Text = "";
        Txt_Endtime.Text = "";
    }

    protected void GridView1_SelectedIndexChanged(object sender, EventArgs e)
    {
        // Load the selected row back into the entry controls for editing.
        string id = GridView1.SelectedValue.ToString();
        string str1 = "Select * from Section_Master where Sec_Id='" + id + "'";
        ds = gb.selectdata(str1);

        Txt_SecName.Text = ds.Tables[0].Rows[0][3].ToString();
        ddl_branchname.SelectedItem.Text = ds.Tables[0].Rows[0][4].ToString();
        ddl_semname.SelectedItem.Text = ds.Tables[0].Rows[0][5].ToString();
        ddl_roomno.SelectedItem.Text = ds.Tables[0].Rows[0][7].ToString();
        ddl_FacName.SelectedItem.Text = ds.Tables[0].Rows[0][0].ToString();
        Txt_Day.Text = ds.Tables[0].Rows[0][10].ToString();
        Txt_Starttime.Text = ds.Tables[0].Rows[0][11].ToString();
        Txt_Endtime.Text = ds.Tables[0].Rows[0][12].ToString();
        ddl_Subject.SelectedItem.Text = ds.Tables[0].Rows[0][14].ToString();

        Btn_Save.Visible = false;
        Btn_Update.Visible = true;
    }

    protected void Btn_Display_Click(object sender, EventArgs e)
    {
        GridView1.Visible = true;
        gb.dispdata(GridView1, "ViewSectionmaster");
    }

    protected void Btn_Update_Click(object sender, EventArgs e)
    {
        string id = GridView1.SelectedValue.ToString();

        // Update the selected record via the Updatesectionmaster stored procedure.
        gb.update("Updatesectionmaster '" + id + "','" + ddl_branchname.SelectedValue +
                  "','" + ddl_semname.SelectedValue + "','" + Txt_SecName.Text +
                  "','" + ddl_branchname.SelectedItem + "','" + ddl_semname.SelectedItem +
                  "','" + ddl_roomno.SelectedValue + "','" + ddl_roomno.SelectedItem +
                  "','" + ddl_FacName.SelectedValue + "','" + ddl_FacName.SelectedItem +
                  "','" + Txt_Day.Text + "','" + Txt_Starttime.Text + "','" + Txt_Endtime.Text +
                  "','" + ddl_Subject.SelectedValue + "','" + ddl_Subject.SelectedItem + "'");

        gb.dispdata(GridView1, "ViewSectionmaster");
        Lbl_Class_Update.Visible = true;
    }

    protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
    {
        GridView1.PageIndex = e.NewPageIndex;
        gb.dispdata(GridView1, "ViewSectionmaster");
    }
}
ACTIVITY DETAILS
FACULTY VIEW OF TIME TABLE
LEAVE APPLICATION
GLOBAL CLASS DECLARATION
using System;
using System.Data;
using System.Configuration;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;

public class globalclass
{
    SqlConnection cn = new SqlConnection(
        System.Configuration.ConfigurationManager.ConnectionStrings["Webcs"].ToString());
    DataSet ds;
    SqlDataAdapter da;

    // Executes an insert stored procedure; Fill is used here simply to run
    // the command, since the procedure returns no result set.
    public void insert(string PName)
    {
        da = new SqlDataAdapter(PName, cn);
        ds = new DataSet();
        da.Fill(ds);
    }

    // Binds a dropdown list to a display column and a hidden value column.
    public void dispyadrp(DropDownList drp, string tbname, string disval, string hidval)
    {
        da = new SqlDataAdapter("select " + disval + "," + hidval + " from " + tbname, cn);
        ds = new DataSet();
        da.Fill(ds);
        drp.DataSource = ds;
        drp.DataTextField = disval;
        drp.DataValueField = hidval;
        drp.DataBind();
    }

    // Binds a grid view to the result of a stored procedure.
    public void dispdata(GridView gd, string SPname)
    {
        ds = new DataSet();
        da = new SqlDataAdapter(SPname, cn);
        da.Fill(ds, SPname);
        gd.DataSource = ds;
        gd.DataBind();
    }

    public DataSet selectdata(string selectdataa)
    {
        da = new SqlDataAdapter(selectdataa, cn);
        ds = new DataSet();
        da.Fill(ds);
        return ds;
    }

    public void update(string strupdate)
    {
        da = new SqlDataAdapter(strupdate, cn);
        ds = new DataSet();
        da.Fill(ds);
    }

    // Generates the next three-digit running number (001, 002, ...) from the
    // maximum id currently stored in the table.
    public string autogenerated(string strselect)
    {
        da = new SqlDataAdapter(strselect, cn);
        DataSet ds = new DataSet();
        da.Fill(ds);
        string s = "";
        foreach (DataRow dr in ds.Tables[0].Rows)
        {
            if (dr.IsNull(0))
            {
                // Table is empty: start numbering at 001.
                int i = 1;
                s = i.ToString("000");
            }
            else
            {
                string str2 = ds.Tables[0].Rows[0].ItemArray[0].ToString();
                int x = int.Parse(str2) + 1;
                s = x.ToString("000");
            }
        }
        return s;
    }

    public DataSet return_dataset(string tbname, string cond, string val)
    {
        ds = new DataSet();
        da = new SqlDataAdapter("select * from " + tbname + " where " + cond + " = " + val, cn);
        da.Fill(ds);
        return ds;
    }

    public DataSet return_dataset1(string strftech)
    {
        ds = new DataSet();
        da = new SqlDataAdapter(strftech, cn);
        da.Fill(ds);
        return ds;
    }

    // Returns an open DataReader. CommandBehavior.CloseConnection makes the
    // connection close when the caller closes the reader; the original code
    // had an unreachable cn.Close() after the return statement.
    public SqlDataReader logindata(string str)
    {
        cn.Open();
        SqlCommand cmd = new SqlCommand(str, cn);
        return cmd.ExecuteReader(CommandBehavior.CloseConnection);
    }

    public void displaydatalist(string tbname, DataList dtl)
    {
        ds = new DataSet();
        SqlDataAdapter da = new SqlDataAdapter("select * from " + tbname, cn);
        da.Fill(ds);
        dtl.DataSource = ds;
        dtl.DataBind();
    }
}

12. EXPLANATIONS
12.1. BUTTONS:
1. SAVE
To insert the new values and information in the database.
2. CANCEL
To cancel the data that has been wrongly entered.
3. VIEW
To see the data that are present in the databases.
4. UPDATE
To change any value or values that the user wants.
5. DELETE
To delete some unwanted data from the database after selecting any data from the search
option.
6. SEARCH
To search any data from the database and see its complete details. (Dropdown list)
12.2. POPULATE DROPDOWN WITHOUT CONDITION:
To display any data from database without any condition, just setting the property like data text
field=column name and value field=column name.
12.3. POPULATE DROPDOWN WITH CONDITION:
To display data from the database subject to a condition, set the data text field and data
value field to column names as above, and restrict the rows according to the condition before
binding the dropdown list control.
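A minimal sketch of such a conditional helper; this overload is hypothetical and not part of the existing globalclass, and the example condition uses this project's column names as placeholders:

// Hypothetical overload of dispyadrp that accepts a WHERE clause.
public void dispyadrp(DropDownList drp, string tbname,
                      string disval, string hidval, string whereClause)
{
    SqlDataAdapter da = new SqlDataAdapter(
        "select " + disval + "," + hidval + " from " + tbname +
        " where " + whereClause, cn);
    DataSet ds = new DataSet();
    da.Fill(ds);
    drp.DataSource = ds;
    drp.DataTextField = disval;
    drp.DataValueField = hidval;
    drp.DataBind();
}

// Example usage: show only subjects belonging to the selected branch.
// gb.dispyadrp(ddl_Subject, "Subject_Master", "Sub_Name", "Sub_Id",
//              "Branch_Id = '" + ddl_branchname.SelectedValue + "'");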
12.4. DISPLAY GRID VIEW:
A procedure through which we display data from the database, as per the inputs, in a
column-wise (tabular) format.
12.5. DISPLAY DETAIL VIEW:
A procedure through which we display data from the database, as per the inputs, in a
row-wise (single-record) format.
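
A brief usage sketch of both display styles, relying on the globalclass helpers shown earlier; the DetailsView control and the sample id are assumptions, since the existing helpers cover only GridView and DataList:

// Column-wise (grid) display via the existing helper:
gb.dispdata(GridView1, "ViewSectionmaster");

// Row-wise (detail) display; DetailsView1 and the id are illustrative.
DataSet ds = gb.selectdata("select * from Faculty_Master where Faculty_Id = 'FAC001'");
DetailsView1.DataSource = ds;
DetailsView1.DataBind();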

13. PROJECT TESTING

1) COMPILATION TEST:
• It was a good idea to do our stress testing early on, because it gave us time to fix
some of the unexpected deadlocks and stability problems that only occurred when
components were exposed to very high transaction volumes.
2) EXECUTION TEST:

• The program loaded and executed successfully; thanks to careful
programming, no execution errors occurred.

3) OUTPUT TEST:
• The successful output screens are placed in the output screens section.

14. FUTURE SCOPE OF PROJECT

The project has met the standards required to provide service according to user
requirements. If the policies remain the same, the project can be ported to any institute with
minor changes in its working procedure. The project can serve as a base for developing
similar systems for different colleges or institutions with different logic, where the
commonalities in certain areas remain the same at any level. By reusing the common features
in future development, both the development time and the cost of development can be
decreased considerably.
By shifting the project to a mobile-based environment through the Microsoft .NET Compact
Framework, the project can reach a wider range of users, and the software and hardware
requirements can be scaled down.

Implementation of security mechanisms at various levels

We secure the application by using the ASP.NET security mechanisms listed below:

1. Authentication
2. Authorization
3. Impersonation

Authentication

It is the process of validating the identity of a user in order to allow or deny a request. This
involves accepting credentials (e.g. a username and password) from the user and validating
them against a designated authority. After the identity is verified and validated, the user is
considered legal and the resource request is fulfilled.
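
A minimal sketch of how forms authentication could back the login handler shown earlier; this is an assumption about configuration (it requires forms authentication to be enabled in web.config), and CheckCredentials stands in for the application's own credential lookup:

// Hedged sketch: issue the ASP.NET forms-authentication cookie on login.
protected void btn_Click(object sender, EventArgs e)
{
    // CheckCredentials is a hypothetical app-specific credential lookup.
    if (CheckCredentials(txtuserid.Text, txtpassword.Text))
    {
        // Issues the authentication ticket and redirects the user to the
        // page originally requested (or to the configured default page).
        System.Web.Security.FormsAuthentication
            .RedirectFromLoginPage(txtuserid.Text, false);
    }
}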

Authorization
This process ensures that users with valid identity are allowed to access specific resources.

Impersonation
This process enables an application to ensure the identity of the user, and in turn make
request to the other resources. Access to resources will be granted or denied based on the
identity that is being impersonated.

15. CONCLUSION

• The project has been appreciated by all the users in the organization.
• It is easy to use, since it uses the GUI provided in the user dialog.
• User-friendly screens are provided.
• Using the software increases efficiency and decreases effort.
• It can be efficiently employed as a web-based academic organizer.
• It has been thoroughly tested and implemented.
• Any endeavor is incomplete without the spirit of teamwork.
We could not only muster support for fostering this project but
also gather an enthusiastic team.
16. BIBLIOGRAPHY

1. SOFTWARE ENGINEERING by Roger S. Pressman

2. VISUAL BASIC.NET Black Book by Evangeleous Petersons

3. SQL FOR PROFESSIONALS by Jain

4. MSDN 2008 by Microsoft

5. FUNDAMENTALS OF SOFTWARE ENGINEERING by Rajib Mall

6. ASP.NET 2.0 Unleashed by Stephen Walther

Online References:
1. www.codeproject.com
2. www.netspider.com
3. www.c-sharpcorner.com
4. www.microsoft.com
5. www.codeguru.com
