
AADHAR SECURE TRAVEL IDENTITY
CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION

This project creates a tool that manages the handling of passports and licenses using the unique identification (UID) associated with each individual. The application allows citizens to register for a unique identity, which is protected by a PIN. Citizens who are issued a passport, or who already hold one, have it associated with their UID. At the airport, the UID gives the airline access to the passport from the centralized server, and the citizen's details and profile, including a photo, can be viewed as part of the security check.
The police department can also use the application to trace any person, or to stop them from travelling abroad. The airline is notified when airport staff access a citizen's passport. The police can stop or trace a person using either the UID or the passport number; they can also supply the person's name, and the system generates a list of photo previews of passport holders with that name.
The citizen uses the Aadhar scheme to apply for a license. The citizen's details are picked up from the registration database, and the application provides the test details, which include the location, date and time. The outcome is communicated to the citizen on completion of the test, and the issue or denial of the license is recorded.

1.2 SCOPE AND OBJECTIVE


To create a tool that manages the handling of travel, covering passport, license and crime records, using the unique identification associated with each individual.
CHAPTER 2
LITERATURE SURVEY


2.1 AADHAR SECURE


Aadhar, which means 'foundation', is a 12-digit unique identity number issued to all Indian residents based on their biometric and demographic data. The data is collected by the Unique Identification Authority of India (UIDAI), established in January 2009 by the Government of India and now a statutory authority under the Ministry of Electronics and Information Technology, functioning under the provisions of the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016.

2.2 REASON FOR CHOOSING .NET


Disadvantages of C Language

• C does not have the concept of OOP; this is why C++ was developed.

• There is no runtime checking in the C language.

• There is no strict type checking. For example, an integer value can be passed for a floating-point parameter.

• C does not have the concept of namespaces.

• C does not have the concept of constructors or destructors.

Disadvantages of C++

• Does not provide very strong type checking. C++ code is easily prone to errors related to data types and their conversions, for example while passing arguments to functions.

• Does not provide an efficient means of garbage collection.

• No built-in support for threads.

• Becomes complex when you want to develop a graphics-rich application.

• Limited portability of code across platforms.

Disadvantages of Java:

• Performance is significantly slower and more memory-consuming than compiled native code.

• Java is predominantly a single-paradigm language.

• No local constants.

.NET provides solutions to all of the above-mentioned problems.

2.3 .NET FRAMEWORK

The Microsoft .NET Framework is a platform that provides the tools and technologies needed to build networked applications as well as distributed web services and web applications. The .NET Framework provides the compile-time and run-time foundation to build and run any language that conforms to the Common Language Specification (CLS). The two main components of the .NET Framework are the Common Language Runtime (CLR) and the .NET Framework Class Library (FCL).

The Common Language Runtime (CLR) is the runtime environment of the .NET Framework; it executes and manages all running code, like a virtual machine. The .NET Framework Class Library (FCL) is a huge collection of language-independent, type-safe reusable classes, arranged into logical groupings, called namespaces, according to their functionality and usability.

Source code written in a Microsoft .NET language is compiled into Microsoft Intermediate Language (MSIL), also called Intermediate Language (IL) or Common Intermediate Language (CIL). MSIL is a CPU-independent set of instructions that can be converted to native code. Metadata is also created at compile time and stored with the compiled code. Metadata is completely self-describing: it is stored in a section called the manifest and contains information about the members, types, references and all the other data that the Common Language Runtime (CLR) needs for execution.

The CLR uses metadata to locate and load classes, generate native code, provide security, and execute managed code. MSIL and metadata assembled together are known as a Portable Executable (PE) file, which is intended to be portable across all 32-bit operating systems supported by the .NET Framework.

At run time, the CLR's Just-in-Time (JIT) compiler converts the MSIL code into native code for the operating system. Because the language's functionality is managed by the .NET Framework, this code is known as managed code. The CLR provides different JIT compilers, each targeting a different architecture and operating system, which means the same MSIL can be executed on different operating systems.

2.4 INTRODUCTION TO ASP.NET


ASP.NET is an open-source server-side web application framework designed for web development to produce dynamic web pages. It was developed by Microsoft to allow programmers to build dynamic web sites, web applications and web services.

It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language. The ASP.NET SOAP extension framework allows ASP.NET components to process SOAP messages.

ASP.NET is in the process of being re-implemented as a modern and modular web framework, together with other frameworks such as Entity Framework. The new framework will make use of the new open-source .NET Compiler Platform (code-named "Roslyn") and will be cross-platform. ASP.NET MVC, ASP.NET Web API, and ASP.NET Web Pages (a platform using only Razor pages) will merge into a unified MVC 6. The project is called "ASP.NET vNext".

.NET consists of a number of technologies that allow software developers to build Internet-based distributed systems. Individual pieces of these systems, called software components, can be built using several different programming languages and by several different organizations. Through a common set of core functionality, .NET allows these components to work reliably with each other.

Microsoft's core implementation of .NET includes:

• C# (a new programming language)
• the Common Language Runtime (for support of other programming languages)
• a collection of components that provide support for networking, security, and other "base" services commonly needed in distributed applications
• Windows Forms (WinForms) and Web Forms, rich Windows user interface components
• ASP.NET, a new version of Active Server Pages
• ADO.NET, new data access objects in the tradition of the original ActiveX Data Objects

A new version of Microsoft's development environment, Visual Studio .NET, is the primary tool used to build .NET software.

2.5 INTRODUCTION TO C#
C# (pronounced 'see sharp') is a multi-paradigm programming language encompassing strong typing and imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft within its .NET initiative and later approved as a standard by Ecma (ECMA-334) and ISO (ISO/IEC 23270:2006). C# is one of the programming languages designed for the Common Language Infrastructure.

C# is intended to be a simple, modern, general-purpose, object-oriented programming language. Its development team is led by Anders Hejlsberg. At the time of writing, the most recent version is C# 5.0, released on August 15, 2012.

The ECMA standard lists these design goals for C#:

• The C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
• The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
• The language is intended for use in developing software components suitable for deployment in distributed environments.
• Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
• Support for internationalization is very important.
• C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large, which use sophisticated operating systems, down to the very small, which have dedicated functions.
• Although C# applications are intended to be economical with regard to memory and processing-power requirements, the language was not intended to compete directly on performance and size with C or assembly language.
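As a brief illustration of these design goals, the following minimal sketch shows C#'s strong type checking, generics, and automatic memory management. The class and method names are our own for illustration; they are not part of any framework or of this project.

```csharp
using System;
using System.Collections.Generic;

public class TypeSafetyDemo
{
    // A generic method: the element type T is checked at compile time,
    // so no casts are needed and type errors are caught before the program runs.
    public static T Largest<T>(IList<T> items) where T : IComparable<T>
    {
        T best = items[0];
        foreach (T item in items)
            if (item.CompareTo(best) > 0)
                best = item;
        return best;
    }

    public static void Main()
    {
        var scores = new List<int> { 3, 9, 4 };   // garbage-collected, no manual free
        Console.WriteLine(Largest(scores));       // prints 9
        // Passing a List<string> where the caller expects ints, or passing an
        // int for a string parameter, is a compile-time error in C#.
    }
}
```

Passing a collection of any comparable type works unchanged, which is the component-reuse style the ECMA goals describe.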

2.6 ADO.NET – DATABASE CONNECTIVITY


ADO.NET provides consistent access to data sources such as SQL Server and XML, and to data sources exposed through OLE DB and ODBC. Data-sharing consumer applications can use ADO.NET to connect to these data sources and retrieve, handle, and update the data that they contain.
ADO.NET separates data access from data manipulation into discrete components that can be used separately or in tandem. ADO.NET includes .NET Framework data providers for connecting to a database, executing commands, and retrieving results. Those results are either processed directly, placed in an ADO.NET DataSet object in order to be exposed to the user in an ad hoc manner, combined with data from multiple sources, or passed between tiers. The DataSet object can also be used independently of a .NET Framework data provider to manage data local to the application or sourced from XML.
The ADO.NET classes are found in System.Data.dll and are integrated with the XML classes found in System.Xml.dll.
ADO.NET provides functionality to developers writing managed code similar to the functionality provided to native Component Object Model (COM) developers by ActiveX Data Objects (ADO). ADO.NET, not ADO, is the recommended way to access data in .NET applications.
ADO.NET provides the most direct method of data access within the .NET Framework. For a higher-level abstraction that allows applications to work against a conceptual model instead of the underlying storage model, see the ADO.NET Entity Framework.
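A minimal ADO.NET sketch of the connected and disconnected styles described above. The connection string, database name `AadharDB`, and table `Citizen` are hypothetical placeholders, not the project's actual schema, and running the lookup requires a reachable SQL Server instance.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;   // .NET Framework data provider for SQL Server

public class CitizenLookup
{
    // Hypothetical connection string; replace with the real server details.
    const string ConnStr = @"Server=.\SQLEXPRESS;Database=AadharDB;Integrated Security=true";

    public static void Main()
    {
        // Connected access: execute a parameterized command and read rows.
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT Name, PassportNo FROM Citizen WHERE Uid = @uid", conn))
        {
            // The value is bound as a parameter, never concatenated into the SQL text.
            cmd.Parameters.AddWithValue("@uid", "123456789012");

            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0} / {1}", reader["Name"], reader["PassportNo"]);
        }

        // Disconnected access: fill a DataSet and work with the cached copy offline.
        using (var adapter = new SqlDataAdapter("SELECT * FROM Citizen", ConnStr))
        {
            var ds = new DataSet();
            adapter.Fill(ds, "Citizen");
            Console.WriteLine("Rows cached locally: " + ds.Tables["Citizen"].Rows.Count);
        }
    }
}
```

The same two-style split (direct `SqlDataReader` versus offline `DataSet`) is what lets results be "processed directly" or "passed between tiers" as described above.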

2.7 INTRODUCTION TO SQL SERVER


Microsoft SQL Server 2005 is a database platform for large-scale online transaction
processing (OLTP), data warehousing, and e-commerce applications; it is also a business
intelligence platform for data integration, analysis, and reporting solutions.

SQL Server 2005 introduces "studios" to help you with development and management
tasks: SQL Server Management Studio and Business Intelligence Development Studio. In
Management Studio, you develop and manage SQL Server Database Engine and notification
solutions, manage deployed Analysis Services solutions, manage and run Integration Services
packages, and manage report servers and Reporting Services reports and report models. In BI
Development Studio, you develop business intelligence solutions using Analysis Services projects
to develop cubes, dimensions, and mining structures; Reporting Services projects to create reports;
the Report Model project to define models for reports; and Integration Services projects to create
packages.

In the studios, SQL Server 2005 provides the graphical tools you need to design, develop,
deploy, and administer relational databases, analytic objects, data transformation packages,
replication topologies, reporting servers and reports, and notification servers. Additionally, SQL
Server 2005 includes command prompt utilities to perform administrative tasks from the command
prompt. To quickly get to important high-level topics for tools and utilities, go to Tools and
Utilities Documentation Map.

The Database Engine is the core service for storing, processing, and securing data. The
Database Engine provides controlled access and rapid transaction processing to meet the
requirements of the most demanding data consuming applications within your enterprise.
Use the Database Engine to create relational databases for online transaction processing or online
analytical processing data. This includes creating tables for storing data, and database objects such
as indexes, views, and stored procedures for viewing, managing, and securing data. You can use
SQL Server Management Studio to manage the database objects, and SQL Server Profiler for
capturing server events.

2.8 INTERNET INFORMATION SERVER (IIS)


IIS (Internet Information Server) is a group of Internet servers (including a Web or Hypertext
Transfer Protocol server and a File Transfer Protocol server) with additional capabilities for
Microsoft's Windows NT and Windows 2000 Server operating systems. IIS is Microsoft's entry to
compete in the Internet server market that is also addressed by Apache, Sun Microsystems,
O'Reilly, and others. With IIS, Microsoft includes a set of programs for building and administering
Web sites, a search engine, and support for writing Web-based applications that access databases.
Microsoft points out that IIS is tightly integrated with the Windows NT and 2000 Servers in a
number of ways, resulting in faster Web page serving.

A typical company that buys IIS can create pages for web sites using Microsoft's FrontPage product. Web developers can use Microsoft's Active Server Page (ASP) technology, which means that applications, including ActiveX controls, can be embedded in web pages that modify the content sent back to users. Developers can also write programs that filter requests and return the correct web pages for different users by using Microsoft's Internet Server Application Program Interface (ISAPI). ASPs and ISAPI programs run more efficiently than common gateway interface (CGI) and server-side include (SSI) programs, two earlier technologies. (However, there are comparable interfaces on other platforms.)
Microsoft includes special capabilities for server administrators designed to appeal to Internet service providers (ISPs). IIS provides a single window (or "console") from which all services and users can be administered, and it is designed so that components not installed initially can easily be added as snap-ins. The administrative windows can be customized for access by individual customers.
CHAPTER 3
SYSTEM ANALYSIS

3.1 EXISTING SYSTEM


• The citizen is identified by multiple identity cards.
• There is no unique identity in India.
• The police department cannot contact private airlines to trace or stop the travel of a citizen instantly.

Disadvantages of the Existing System:
• There is no unique identity in India, unlike the SSN in the USA.
• An individual can hold more than one passport.
• A passport may be lost or damaged.
• Stopping or tracing a citizen in travel has to be done physically.
• A license can be applied for multiple times; duplication is possible, and it can be lost or damaged at any time.

3.2 PROPOSED SYSTEM


• A citizen is provided with a UID.
• The ID is associated with a PIN number.
• A physical verification is carried out by the surveyor, on whose confirmation the ID is issued.
• Only a citizen holding the ID can apply for a passport or license.
• Based on its type, the application is forwarded either to the police department for verification or to the RTO for driving-test status.
• The citizen has an online mode in which to check the status of each application.
• The police department integrates with the airlines and identifies citizens who are under conditional travel.

ADVANTAGES
• For transactions related to government departments, the ID and PIN number must be quoted.
• The citizen does not have to approach agents for applications.
• Aadhar seva centers facilitate application processing.
• The citizen's application is auto-filled when he visits a seva center.
• No commission is involved.
• The UID does not allow duplicate applications for any type of card.
• The citizen can re-apply only after failing a verification or test.
• Address changes are easily updated.

3.3 APPROACH ADAPTED


WATERFALL MODEL

The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model and is very simple to understand and use. In a waterfall model, each phase must be completed fully before the next phase can begin. This type of model is basically used for projects that are small and have no uncertain requirements. At the end of each phase, a review takes place to determine whether the project is on the right path and whether to continue or discard it. In this model, testing starts only after development is complete, and the phases do not overlap.

The waterfall model is a sequential (non-iterative) design process used in software development, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of conception, initiation, analysis, design, construction, testing, production/implementation and maintenance. Despite the development of newer software development process models, the waterfall method is still a dominant process model, with over a third of software developers still using it.

The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Because no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

The waterfall model is the earliest SDLC approach to have been used for software development.

Advantages of waterfall model:

• This model is simple and easy to understand and use.
• It is easy to manage due to the rigidity of the model: each phase has specific deliverables and a review process.
• Phases are processed and completed one at a time and do not overlap.
• The waterfall model works well for smaller projects where requirements are very well understood.
Requirement Analysis & Definition:

In this phase the possible requirements of the system to be developed are captured. Requirements are gathered in consultation with the end users.

System & Software Design:

Before actual coding begins, it is essential to understand what actions are to be taken and what the system should look like. The requirement specifications are studied in detail in this phase and the design of the system is prepared. The design specifications are the basis for the implementation and unit testing phase.

Implementation:
After the system design documents are received, the work is divided into various modules and the actual coding begins. The system is first developed as small coding units, which are integrated in the subsequent phase.

Integration & Testing:

The units are integrated into a complete system and tested to verify proper coordination among modules and that the system behaves as per the specifications. Every unit is tested for its functionality. Once testing is complete, the software product is delivered to the customer.

Deployment:

This phase involves converting the new system design into operation. It may include deploying the software system and training the operating staff before the system becomes functional.

Operations & Maintenance:

This is a never-ending phase. Once the system is running in a production environment, problems come up. Issues related to the system are resolved only after deployment; since problems arise from time to time and need to be solved, this phase is referred to as maintenance.
CHAPTER 4
REQUIREMENT ANALYSIS

SRS – SOFTWARE REQUIREMENT SPECIFICATION


A software requirements specification (SRS) is a description of a software system to be developed, laying out functional and non-functional requirements, and may include a set of use cases that describe the interactions users will have with the software. The SRS establishes the basis for an agreement between customers and contractors or suppliers (in market-driven projects, these roles may be played by the marketing and development divisions) on what the software product is to do as well as what it is not expected to do. It permits a rigorous assessment of requirements before design begins and reduces later redesign. It should also provide a realistic basis for estimating product costs, risks, and schedules. The SRS document lists all the requirements necessary for project development. Deriving the requirements demands a clear and thorough understanding of the product to be developed; this is achieved and refined through detailed, continuous communication with the project team and the customer until completion of the software.

4.1 PRODUCT PERSPECTIVE
4.1.1 USER INTERFACE

• Admin
• License
• Police
• Visitor

4.2 HARDWARE AND SOFTWARE REQUIREMENTS


HARDWARE REQUIREMENTS

Processor : Pentium 4 +
RAM : 2GB

Hard Disk : 20GB

Speed : 1.2 GHz+

SOFTWARE REQUIREMENTS

Operating System : Windows XP or Higher

IDE : Visual Studio 2010

Language : C#

Framework : ASP.NET 4.0

Back End : MS SQL Server

4.3 FUNCTIONAL REQUIREMENTS


1. Admin:

• Approve/Reject Aadhar
Admin can approve or reject an Aadhar identity.
• Add monitors
Admin can add monitors.
• Forward license
Admin forwards the documents for verification before a license is provided.
• Forward passport holders
Admin forwards the documents for verification before a passport is provided.

2. License:
• Manage license
The license in-charge manages the process of providing a license by verifying the documents forwarded by the admin.
• Update profile
The license in-charge can update their profile.

3. Police:
• Manage passport
The police verify the documents forwarded by the admin and manage the process of providing the passport.
• Update profile
The police can update their profile.

4. Visitor:

• Registration
Visitors must first register with the application in order to access it.
• Check status
Visitors can check the status of the license or passport they have applied for.
• Apply passport
Visitors can apply for a passport through the application.
• Apply license
Visitors can apply for a license through the application.

5. Crime:
• Update profile
The crime department can update their profile.
• Update password
The crime department can update the password of their profile.
• Add crime information
The crime department records information about crimes committed by users.

NON-FUNCTIONAL REQUIREMENT

A non-functional requirement specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. Functional requirements define what a system is supposed to do, whereas non-functional requirements define how a system is supposed to be. Non-functional requirements are the constraints, or the environment, under which the software is developed.

Reliability is the ability of a person or system to perform and maintain its functions in routine circumstances, as well as in hostile or unexpected circumstances.

Security concerns the security and privacy issues surrounding use of the product and protection of the data used or created by it, including any user identity authentication requirements.

Usability refers to the ease of use of the product and to methods for improving ease of use during the design process.

Interoperability is the ability of diverse systems and organizations to work together (inter-operate).
CHAPTER 5
SYSTEM DESIGN

The purpose of the design phase is to plan a solution to the problem specified by the requirements document. This phase is the first step in moving from the problem domain to the solution domain: starting with what is needed, design takes us toward how to satisfy those needs. The design of a system is perhaps the most critical factor affecting the quality of the software; it has a major impact on the later phases, particularly testing and maintenance.

The design activity often results in three separate outputs –

• Architecture design.

• High level design.

• Detailed design.

Architecture Design:

• Architecture design focuses on looking at a system as a combination of many different components, and on how they interact with each other to produce the desired result. The focus is on identifying components or subsystems and how they connect; in other words, on what major components are needed.

High Level Design:


• High-level design identifies the modules that should be built for developing the system and the specifications of these modules. At the end of system design all major data structures, file formats, output formats, etc., are also fixed. The focus is on identifying the modules; in other words, on what modules are needed.

Detailed Design:

In detailed design the internal logic of each module is specified. The focus is on designing the logic for each module; in other words, on how the modules can be implemented in software.

A design methodology is a systematic approach to creating a design by applying a set of techniques and guidelines. Most methodologies focus on high-level design.

5.1 ARCHITECTURAL DESIGN

In this project, three tier architecture is used.

Introduction:

For the developer, the .NET Framework and Visual Studio present many architectural choices: from placing the data access code directly in the UI through datasets and data source controls, to creating a data access layer that talks to the database, all the way to an n-tier approach consisting of multiple layers that use data-transfer objects to pass data back and forth.

Layer:

A layer is a reusable portion of code that performs a specific function. In the .NET environment, a layer is usually set up as a project that represents this specific function, and it works with other layers to achieve a specific goal. In an application where the presentation layer needs to extract information from a backend database, the presentation layer would use a series of layers to retrieve the data, rather than having the database calls embedded directly within it. We will first look briefly at the situation without such layering.
5.1.1. Two-Tier Architecture

When the .NET 2.0 Framework became available, some neat features allowed the developer to connect the framework's GUI controls directly to the database. This approach is very handy when rapidly developing applications. However, it is not always favorable to embed all of the business logic and data access code directly in the web site, for several reasons:

• Putting all of the code in the web site (business logic and data access) can make the application harder to maintain and understand.

• Database queries in the presentation layer are rarely reused, because of the typical data source control setup in the ASP.NET framework.

• Relying on the data source controls can make debugging more difficult, often due to vague error messages.

So, as an alternative, we can separate the data access code and business logic into distinct "layers".

5.1.2 Three-Tier Architecture

Three-tier architecture consists of three layers:

The Data Layer:

The key component of most applications is the data, which has to be served to the presentation layer somehow. The data layer is a separate component (often set up as a single project or group of projects in a .NET solution) whose sole purpose is to serve up data from the database and return it to the caller. Through this approach, data access can be logically reused: a portion of an application reusing the same query can call one data layer method instead of embedding the query multiple times. This is generally more maintainable.
Business Layer:

Though a web site could talk to the data access layer directly, it usually goes through another layer called the business layer. The business layer is vital in that it validates the input conditions before calling a method of the data layer. This ensures the data input is correct before proceeding, and can often ensure that the outputs are correct as well. This validation of input implements the business rules, meaning the rules that the business layer uses to make "judgments" about the data.

One of the best reasons for centralizing logic is that applications that start off small usually grow in functionality. The business layer moves logic to a central place for maximum reusability.

Presentation Layer:

The ASP.NET web site or Windows Forms application (the UI of the project) is called the presentation layer. The presentation layer is the most visible layer, because it is the one that everyone sees and uses; even with well-structured business and data layers, a poorly designed presentation layer gives users a poor view of the system.

Three tier Architecture:

The presentation tier contains the UI (user interface) elements of the site and includes all the logic that manages the interaction between the visitor and the client's business (ASP.NET Web Forms, Web User Controls, ASP.NET Master Pages).

The business tier receives requests from the presentation tier and returns a result to the presentation tier depending on the business logic it contains (C# classes).

The data tier is responsible for storing the application's data and sending it to the business tier when requested (SQL Server stored procedures).

Fig: Three-tier architecture
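The three tiers described above can be sketched as follows. This is a minimal, self-contained illustration: the UID, the license number, and the class names are hypothetical, and an in-memory dictionary stands in for the SQL Server data tier.

```csharp
using System;
using System.Collections.Generic;

// Data layer: its sole job is serving data; here an in-memory stand-in for SQL Server.
public class LicenseDataLayer
{
    private readonly Dictionary<string, string> licensesByUid =
        new Dictionary<string, string> { { "123456789012", "KA01-2015-0042" } };

    public string GetLicense(string uid)
    {
        string license;
        return licensesByUid.TryGetValue(uid, out license) ? license : null;
    }
}

// Business layer: validates input (a "business rule") before touching the data layer.
public class LicenseBusinessLayer
{
    private readonly LicenseDataLayer data = new LicenseDataLayer();

    public string LookupLicense(string uid)
    {
        if (uid == null || uid.Length != 12)   // Aadhar UIDs are 12 digits
            throw new ArgumentException("UID must be a 12-digit number.");
        return data.GetLicense(uid);
    }
}

// Presentation layer: a console stand-in for the project's ASP.NET pages.
public class Program
{
    public static void Main()
    {
        var business = new LicenseBusinessLayer();
        Console.WriteLine(business.LookupLicense("123456789012"));
    }
}
```

The point of the split is visible even at this scale: the presentation layer never sees the dictionary (or, in the real system, the database), and the UID validation lives in exactly one place.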

5.2 HIGH LEVEL DESIGN:

5.2.1 Data Flow Diagram:

A data flow diagram (DFD) is a graphical representation of the "flow" of data through an
information system. DFDs can also be used for the visualization of data processing (structured
design).

On a DFD, data items flow from an external data source or an internal data store to an
internal data store or an external data sink, via an internal process.

A DFD provides no information about the timing of processes, or about whether processes operate in sequence or in parallel. It is therefore quite different from a flowchart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order, and under what circumstances, but not what kinds of data will be input to and output from the system, where the data will come from and go to, or where it will be stored (all of which are shown on a DFD).
Symbols used in DFDs:

Processes:

A process transforms data values. The lowest-level processes are pure functions without side effects.

Data Flows:

A data flow connects the output of an object or process to the input of another object or
process. It represents an intermediate data value within the computation. It is drawn as an arrow
between the producer and the consumer of the data value. The arrow is labeled with a
description of the data, usually its name or type.

Actors:

An actor is an active object that drives the data flow graph by producing or consuming values.
Actors are attached to the inputs and outputs of a data flow graph. In a sense, the actors lie on the
boundary of the flow graph; they terminate the flow of data as sources and sinks of data, and so are
sometimes called terminators.

Data Store:

A data store is a passive object within a data flow diagram that stores data for later access.
Unlike an actor, a data store does not generate any operations on its own but merely responds to
requests to store and access data.

Level 1 (high level diagram):

This level (level 1) shows all processes at the first level of numbering, the data stores, the external
entities, and the data flows between them. The purpose of this level is to show the major, high-
level processes of the system and their interrelation. A process model will have one, and only one,
level-1 diagram. A level-1 diagram must be balanced with its parent context-level diagram, i.e.
it must contain the same external entities and the same data flows; these can be broken down in
more detail at level 1.
Fig: DFD for Admin

Data flow diagram for Admin represents the role of admin in the system. Admin has an
authentication and after valid authentication, admin can perform the following processes in the
system. All transactions in the system can be stored and retrieved from the database.

Fig: DFD for License


Data flow diagram for License represents the role of the person who manages licenses in the system.
License monitor has an authentication and after valid authentication, he/she can perform the
following processes in the system. All transactions in the system can be stored and retrieved from
the database.

Fig: DFD for Police

Data flow diagram for Police represents the role of the person who manages and verifies licenses
in the system. Police has an authentication and after valid authentication, he/she can perform the
following processes in the system. All transactions in the system can be stored and retrieved from
the database.

5.3 DETAILED DESIGN

5.3.1 Use Case Diagram:

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented as
use cases), and any dependencies between those use cases.
The main purpose of a use case diagram is to show what system functions are performed
for which actor. Roles of the actors in the system can be depicted.

Interaction among actors is not shown on the use case diagram. If this interaction is
essential to a coherent description of the desired behavior, perhaps the system or use case
boundaries should be re-examined. Alternatively, interaction among actors can be part of the
assumptions used in the use case.

Use cases:

A use case describes a sequence of actions that provide something of measurable value to
an actor and is drawn as a horizontal ellipse.

Actors:

An actor is a person, organization, or external system that plays a role in one or more
interactions with the system.

System boundary boxes:

A rectangle called the system boundary box is drawn around the use cases to indicate the
scope of the system. Anything within the box represents functionality that is in scope; anything
outside the box is not.
Fig: Use Case Diagram for Admin

Use Case Diagram for Admin represents the functionalities carried out by the admin in this system.

Fig: Use Case Diagram for License

Use Case Diagram for License represents the functionalities carried out by the License monitor in
this system.
Fig: Use Case Diagram for Police

Use Case Diagram for Police represents the functionalities carried out by the police in this system.

5.3.2 Sequence Diagram:


A Sequence diagram is an interaction diagram that shows how processes operate with one
another and in what order. It describes interactions among classes in terms of an exchange of
messages over time. Sequence diagrams are used to show how objects interact in a given
situation. An important characteristic of a sequence diagram is that time passes from top to
bottom: the interaction starts near the top of the diagram and ends at the bottom.

Targets/Class roles/State:
Objects as well as classes can be targets on a sequence diagram, which means that
messages can be sent to them. A target is displayed as a rectangle with some text in it. Below the
target, its lifeline extends for as long as the target exists. Targets can be an actor, boundary,
control, entity, or database.
Messages:
Messages are arrows that represent communication between objects.

Lifelines:
Lifelines are vertical dashed lines that indicate the object's presence over time.
Fig: Sequence Diagram for Admin

Sequence diagram for admin represents the flow of the processes performed by admin in this
system.
Fig: Sequence Diagram for License

Sequence diagram for License represents the flow of the processes performed by License monitor
in this system.

Fig: Sequence Diagram for Police

Sequence diagram for police represents the flow of the processes performed by police in this
system.

5.4 ENTITY RELATIONSHIP DIAGRAM


The entity–relationship model (ER model) is a data model for describing the data or
information aspects of a business domain or its process requirements, in an abstract way that
lends itself to ultimately being implemented in a database such as a relational database. The main
components of ER models are entities (things) and the relationships that can exist among them.
An ER diagram is a graphical representation of entities and their relationships to each
other, typically used in computing with regard to the organization of data within databases or
information systems. An entity is a piece of data: an object or concept about which data is stored.
A relationship is how the data is shared between entities. There are three types of relationships
between entities:
• One-to-One: one instance of an entity is associated with one instance of another entity.

• One-to-Many: one instance of entity A is associated with zero, one, or many instances
of entity B, but one instance of entity B is associated with only one instance of entity A.

• Many-to-Many: one instance of entity A is associated with zero, one, or many instances
of entity B, and one instance of entity B is associated with zero, one, or many instances
of entity A.

Fig: ER Diagram

The ER diagram represents the relationship between the entities which are included in this system.
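In the database itself, these relationship types are realized with primary and foreign keys. A hypothetical sketch (the table and column names are illustrative, not the project's actual schema) of the one-to-many case, where one citizen may hold many issued documents:

```sql
-- One-to-Many: one Citizen row may relate to many Passport rows,
-- but each Passport row references exactly one Citizen.
CREATE TABLE Citizen (
    Uid      CHAR(12)     PRIMARY KEY,  -- 12-digit Aadhar number
    FullName VARCHAR(100) NOT NULL
);

CREATE TABLE Passport (
    PassportNo VARCHAR(10) PRIMARY KEY,
    CitizenUid CHAR(12)    NOT NULL
        REFERENCES Citizen(Uid)         -- foreign key to the "one" side
);
```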
5.5 RELATIONS ESTABLISHED IN DATABASE TABLES
CHAPTER 6

IMPLEMENTATION

Features of the Object-Oriented Paradigm:

This web application is implemented using an object-oriented programming language.
Object-oriented programming is an approach that provides a way of modularizing programs by
creating partitioned memory areas for both data and functions, which can be used as templates
for creating copies of such modules on demand.

The features of the object-oriented paradigm are:

• Emphasis is on data rather than procedure.


• Programs are divided into what are known as objects.
• Data structures are designed such that they characterize the objects.
• Methods that operate on the data of an object are tied together in the data structure.
• Objects may communicate with each other through methods.
• New data and methods can be easily added whenever necessary.
• Follows a bottom-up approach in program design.
• Data is hidden and cannot be accessed by external functions.
This project is implemented using a three-tier architecture. ASP.NET is used in the
presentation layer, C# classes are used in the business logic, a TableAdapter is used in the data
tier, and MS SQL Server 2005 is used as the backend database.

Implementation Steps:

The presentation layer is ASP.NET (the front end), which invokes the business logic through a
button click, a page-load event, or the SelectedIndexChanged event of a DropDownList.

The business logic contains the common methods. An object of a business-logic class is created,
and that object invokes the method.

The business-logic object then calls a TableAdapter method. The TableAdapter opens the
database connection. Since SQL Server 2005 is used as the backend, a SqlDataSource is used to
interact with the database.
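The flow described above can be sketched as plain C# (a simplified, illustrative stand-in: the in-memory dictionary replaces the real SQL Server TableAdapter, and the class names are invented for the example):

```csharp
using System;
using System.Collections.Generic;

// Data tier (sketch): an in-memory store stands in for a TableAdapter
// that would normally call SQL Server stored procedures.
public class PassportAdapter
{
    private static readonly Dictionary<string, string> Rows =
        new Dictionary<string, string> { { "123456789012", "P1234567" } };

    public string GetPassportByUid(string uid)
    {
        string passportNo;
        return Rows.TryGetValue(uid, out passportNo) ? passportNo : null;
    }
}

// Business tier: invoked by the presentation layer; applies a rule,
// then delegates to the data tier.
public class PassportBusiness
{
    private readonly PassportAdapter adapter = new PassportAdapter();

    public string LookupPassport(string uid)
    {
        if (uid == null || uid.Length != 12)
            return "INVALID UID";                  // business rule check
        return adapter.GetPassportByUid(uid) ?? "NOT FOUND";
    }
}
// In the project itself, a button-click handler on an ASP.NET page would
// create a PassportBusiness object and display the returned value.
```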
CHAPTER 7

TESTING

7.1 TESTING PURPOSE

Software testing has different goals and objectives. The major objectives of
software testing are as follows:

• To find defects that may have been created by the programmer while developing the
software.
• To prevent defects.
• To make sure that the end result meets the business and user requirements.
• To ensure that it satisfies the BRS (Business Requirement Specification) and the SRS
(System Requirement Specification).
• To gain the confidence of customers by providing them with a quality product.

Software testing is performed to verify that the completed software package functions
according to the expectations defined by the requirements/specifications. The overall objective
is not to find every software bug that exists, but to uncover situations that could negatively impact
the customer, usability, and/or maintainability.
From the module level to the application level, different types of testing are applied.
Depending upon the purpose of testing and the software requirements/specifications, a
combination of testing methodologies is applied. Two of the most overlooked areas of testing are
regression testing and fault-tolerance testing.
7.2 LEVELS OF TESTING
There are four levels of software testing.

1. Unit Testing:
It is a level of the software testing process where individual units/components of a
software/system are tested. The purpose is to validate that each unit of the software performs as
designed.

2. Integration Testing:
It is a level of the software testing process where individual units are combined and tested as a
group. The purpose of this level of testing is to expose faults in the interaction between integrated
units.
3. System Testing:
It is a level of the software testing process where a complete, integrated system/software is tested.
The purpose of this test is to evaluate the system’s compliance with the specified requirements.

4. Acceptance Testing:
It is a level of the software testing process where a system is tested for acceptability. The purpose
of this test is to evaluate the system’s compliance with the business requirements and assess
whether it is acceptable for delivery.
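As a concrete illustration of unit testing (the helper class below is hypothetical, not part of the project code), a single unit and a plain-assertion test for it might look like:

```csharp
using System;

// Unit under test: a hypothetical helper that masks an Aadhar UID for
// display during security checks (only the last four digits visible).
public static class UidFormatter
{
    public static string Mask(string uid)
    {
        if (uid == null || uid.Length != 12)
            throw new ArgumentException("UID must be 12 characters.");
        return "XXXX-XXXX-" + uid.Substring(8);
    }
}

// Unit test using plain assertions; a framework such as NUnit serves
// the same purpose in practice.
public static class UidFormatterTests
{
    public static void Run()
    {
        if (UidFormatter.Mask("123456789012") != "XXXX-XXXX-9012")
            throw new Exception("Mask produced the wrong output.");
        try
        {
            UidFormatter.Mask("123");                // invalid length
            throw new Exception("Expected an ArgumentException.");
        }
        catch (ArgumentException) { /* expected: the rule is enforced */ }
    }
}
```

The test validates both the normal path and the error path of the unit, independently of any other component.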

7.3 REGRESSION TESTING


Regression testing is retesting sub-systems/modules/units to ensure that modifications to
one sub-system/module/unit do not cause unexpected results in another sub-
system/module/unit. This is also known as ripple-effect testing.

Why is Regression Testing Necessary?


Regression testing is necessary because many times modifications in one part of the code cause
unexpected problems in a "totally unrelated" area of the code.

7.4 TYPES OF TESTING


White box testing:
It is a software testing method in which the internal structure/design/implementation of the
item being tested is known to the tester. The tester chooses inputs to exercise paths through the
code and determines the appropriate outputs. Programming know-how and the implementation
knowledge is essential. This method is named so because the software program, in the eyes of the
tester, is like a white/transparent box; inside which one clearly sees. Internal software and code
working should be known for this type of testing. Tests are based on coverage of code statements,
branches, paths, conditions. Also known as structural testing and Glass box Testing.
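For example, white-box tests are chosen so that every branch of a method is exercised at least once. A small illustrative sketch (the fee rule is invented for the example, not taken from the project):

```csharp
public static class FeeCalculator
{
    // Two branches: a late fee applies only after the due date, so
    // full branch coverage needs one on-time and one late input.
    public static int LicenseRenewalFee(int daysLate)
    {
        if (daysLate > 0)
            return 500 + 50 * daysLate;   // branch 1: late renewal
        return 500;                        // branch 2: on-time renewal
    }
}
```

Here the inputs daysLate = 0 and daysLate = 3 together cover both branches of the method.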
Black box testing:
Internal system design is not considered in this type of testing. Tests are based on
requirements and functionality. The method is named so because, in the eyes of the tester, the
software program is like a black box into which one cannot see. Black-box testing is a testing
technique that ignores the internal mechanism of the system and focuses on the output generated
for any input and execution of the system. It is also called functional testing.

CHAPTER 8

CONCLUSION AND FUTURE ENHANCEMENT

8.1 CONCLUSION
The application can now identify each individual uniquely. Every citizen is identified for all
Government transactions with the help of his or her Aadhar card. The application integrates various
Government departments into a single point of contact. This system is used to create a tool that
manages the handling of passports and licenses using the unique identification associated with
each individual. The application allows citizens to register for a unique identity.

This helps in avoiding unnecessary delays, or in finding where a delay is happening when
applications are processed. The application can be extended to all Government departments with
modification. New modules can be added without affecting the existing modules.

8.2 FUTURE ENHANCEMENT


In future, the proposed work can be enhanced in the following ways:

• Usage of biometric devices to identify citizens.

• Usage of the card to perform financial transactions.

• PAN, Voter ID, Ration Card, etc., can also be processed using this card.
CHAPTER 9

ANNEXURE
CHAPTER 10

REFERENCES

REFERRED WEBSITES:
www.wikipedia.org

www.csharphelp.com

www.w3schools.com

www.dotnetspider.com

www.csharpcorner.com

www.codeguru.com

www.techopedia.com
www.asp.net

www.asptoday.com

www.aspfree.com

www.edrawsoft.com/Data-Flow-Diagram-Symbols.php

www.tutorialspoint.com/uml/uml_use_case_diagram.htm

www.uml-diagrams.org/

REFERRED BOOKS:
“Programming C#, Fourth Edition: Building .NET Applications with C#”, by Jesse Liberty.

“Working with Microsoft Visual Studio 2005 Team System”, by Richard Hundhausen.

“Beginning ASP.NET 2.0 with Visual C#.NET”, Wrox, by Chris Ullman.

“Software Engineering”, Ian Sommerville, Sixth Edition, Pearson Education Ltd, 2001.
