
A

PROJECT REPORT
ON
SMART CIRCULAR

ABSTRACT

The main aim of Smart Circular is to display notices to all students in an innovative way. It is a platform through which all types of circulars are sent at once via an Android application.
Project Objective:
The main aim of Smart Circular is to develop a portal for students. In the present trend, most work is done through software systems, such as attendance systems. Implementing a portal to display notices is very helpful to students and organization staff.
Working Procedure:
The user sends a notification, in the form of a PDF or text, through the Android app. It is received by a remote system over Wi-Fi and displayed on the monitor.
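As a rough illustration (not the project's actual code), the payload described above might be modeled as below. All names here (`Circular`, `Type`, `serialize`) are hypothetical:

```java
// A minimal sketch of the circular payload described above: the notice type
// (text, PDF, or image), a title, and how long the monitor should display it.
// All names here are hypothetical; this is not the project's actual code.
public class Circular {
    public enum Type { TEXT, PDF, IMAGE }

    public final Type type;
    public final String title;
    public final int displayMinutes;  // display duration for the monitor

    public Circular(Type type, String title, int displayMinutes) {
        this.type = type;
        this.title = title;
        this.displayMinutes = displayMinutes;
    }

    // A simple wire format the remote display system could parse after
    // receiving the notice over Wi-Fi.
    public String serialize() {
        return type + "|" + title + "|" + displayMinutes;
    }

    public static void main(String[] args) {
        Circular c = new Circular(Type.TEXT, "Holiday on Friday", 30);
        System.out.println(c.serialize());  // TEXT|Holiday on Friday|30
    }
}
```

The receiving system would split this string back into its fields before rendering the notice on the monitor.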

CHAPTER-I

1. INTRODUCTION
Smart Circular is an innovative approach to displaying notices to all students. It is a platform through which all types of circulars are sent at once via an Android application.
1.1 PROBLEM:
Until now we have used regular notice boards and papers to display notices. Using these notice boards, we cannot pass all the important information to all the students. It is also difficult for students to view the notices on the notice board in a short span of time, and there is even a chance of missing some circulars.
1.2 SOLUTION:
An innovative Android-based notice display system allows the students to view all the notifications. It is capable of receiving notifications in PDF, text, and image form. A particular time is specified for displaying each notice, which makes it easy for the students to view.
1.3 HANDLING PROBLEM:
To address the problem we use an Android application: the information to be displayed is sent from the Android app or website, stored on a cloud server, and then viewed on the monitor. This approach reduces time and manpower by sending all the information with just a click.
1.4 IMPACT OF PROJECT:
It saves time, and a notice can be sent from anywhere and displayed to everyone in a fraction of a second.

CHAPTER-II

One of the most prominent concerns in today's relentless pace of innovation is the compatibility of an application across various environments and paradigms. An application that provides a user-friendly environment, and that is compatible and efficient, is the need of the hour.

2.1 REQUIREMENTS SPECIFICATION DOCUMENT


The requirements specification document comprises two prominent kinds of requirements. They are:

Functional Requirements

Non-Functional Requirements

2.1.1 Functional Requirements

User Interfaces
The application provides keyboard shortcuts, along with the facility to use the mouse to trigger the required actions. These act as shortcuts and provide easy navigation within the software. Appropriate error handling is done using exceptions in order to isolate abnormal results or conditions. These messages are presented to the user as dynamic HTML or alerts.
The application is online and communicates over the Internet/intranet. A reliable Internet connection, whether via modem, cable, Wi-Fi, or any other form, should exist, with TCP/IP configured and HTTP support. The client only requires a browser for communication. For intranet use, hubs/switches and similar equipment are a must.
Software Interfaces
The incoming data to the product is raw text and images. The outgoing data is text and images. A database is maintained to store the text and the URL information about the images. MS Access/SQL Server/SQLite is the database, with a minimum version of 2003 required. The server at the ISP requires the Tomcat web server. To execute or deploy the application, a JVM is required. A compatible browser is required to access the data from the client.


2.1.2 Non-Functional Requirements
Performance Requirements
Good bandwidth and low congestion on the network are required; identifying the shortest route to the destination would enhance performance.
Safety Requirements
No harm is expected from the use of the product either to the OS or any data that resides on
the client system.
Product Security Requirements
The product is protected from use by unauthorized users. The system allows only authenticated users to work on the application.

2.2 Software Requirements


Only available for Android 4.0 and above software/firmware versions.
The emulator is created with the help of Android Studio.
The Java Development Kit is necessary, as it is used to link the various user interfaces that are created in the emulator/device.

Operating System: Windows 2000 or higher

Service Pack: 2+

Platform: Java

Scripting: JSP

Backend: MS Access/MS SQL Server/SQLite

2.3 Hardware Requirements

Setting up the environment requires a device enabled with Internet facilities such as Wi-Fi.
If the output is to be displayed on a physical device rather than an emulator, then a device with developer options enabled needs to be used.
The minimum hardware requirements for this android application are as follows:

Processor: Intel Core i3-2350M CPU @ 2.30 GHz

Installed memory (RAM): 4GB (preferably)

System type: 64 bit operating system, x64 based processor

Hard Disk: 10 GB Space

Monitor: VGA Color (256)

CHAPTER-III

3. Technologies/Tools Used For the Implementation

Android studio

Java Development Kit

SQLite

3.1. ANDROID STUDIO


Android Studio is an integrated development environment (IDE) for developing on
the Android platform. It was announced on May 16, 2013 at the Google I/O conference
by Google's Product Manager, Ellie Powers. Android Studio is freely available under
the Apache License 2.0. Android Studio was in early access preview stage starting from
version 0.1 in May 2013, then entered beta stage starting from version 0.8 which was released
in June 2014.
The first stable build was released in December 2014, starting from version 1.0. Based on JetBrains' IntelliJ IDEA software, Android Studio is designed specifically for Android development. It is available for download on Windows, Mac OS X and Linux, and replaced Eclipse as Google's primary IDE for native Android application development.
Several features are expected to be rolled out as the software matures; currently, however, the following features are provided:
Live Layout: WYSIWYG Editor - Live Coding - Real-time App Rendering.
Developer Console: optimization tips, assistance for translation, referral tracking,
campaigning and promotions - Usage Metrics.
Provision for beta releases and staged rollout.
Gradle-based build support.
Android-specific refactoring and quick fixes.
Lint tools to catch performance, usability, version compatibility and other problems.
ProGuard integration and app-signing capabilities.
Template-based wizards to create common Android designs and components.
A rich layout editor that allows users to drag-and-drop UI components, option to preview
layouts on multiple screen configurations.

Support for building Android Wear apps


Built-in support for Google Cloud Platform, enabling integration with Google Cloud
Messaging and App Engine
HISTORY OF ANDROID:
Android Inc. was founded in Palo Alto, California, United States in October 2003 by Andy Rubin (co-founder of Danger), Rich Miner (co-founder of Wildfire Communications, Inc.), and Nick Sears (once VP at T-Mobile), while Chris White headed design and interface development at WebTV.
WHAT IS ANDROID:
Android is an open source software platform and operating system for mobile devices.
It is based on the Linux kernel.
It was developed by Google and later the Open Handset Alliance (OHA).
It allows writing managed code in the Java language.
Android has its own virtual machine, the DVM (Dalvik Virtual Machine), which is used for executing Android applications.
Android is a freely downloadable open source software stack for mobile devices that includes an operating system.
Android OS versions are developed under code names based on dessert items.
OPEN HANDSET ALLIANCE:
The Open Handset Alliance (OHA) is a business alliance of firms formed to develop open standards for mobile devices. It is devoted to advancing open standards for mobile devices and to developing technologies that will significantly lower the cost of developing and distributing mobile devices and services.
ANDROID VERSION:-

Android 1.0 (Angel Cake)-The first version of the open source software was released
back in 2008.

Android 1.1 (Battenberg)-Released in February 2009

Android 1.5 (Cupcake)-Launched in April 2009

Android 1.6 (Donut)-released in September 2009

Android 2.0 / 2.1 (Eclair)-Released on 26 October 2009 and in January 2010

Android 2.2 (Froyo, frozen yogurt)-Released in the summer of 2010

Android 2.3 (Gingerbread):- Gingerbread landed by the end of 2010

Android 3.0 (Honeycomb):- For the first time, Google released a version that was totally focused on tablets. It was released in July/August 2011.

Android 4.0 (Ice Cream Sandwich 4.0) -released in October 2011

Android 4.1 (Jelly Bean 4.1)-Released on 26 June 2012

After so many dessert-named versions, Android continued the tradition with an even tastier dessert: version 4.4, KitKat, released in October 2013.

ANDROID ARCHITECTURE:
The software stack is split into four layers:-

The application layer

The application framework

The libraries and runtime

The kernel

LINUX KERNEL:
The architecture is based on the Linux 2.6 kernel.
This layer is the core of the Android architecture.
It provides services like power management, memory management, security, etc.
It helps in software/hardware binding for better communication.
NATIVE LIBRARIES:
Android has its own libraries, written in C/C++. These libraries cannot be accessed directly; with the help of the application framework, we can access them. There are many libraries, such as web libraries to access web browsers, and libraries for audio and video formats, etc.
Android Runtime:-
Dalvik virtual machine - The Android Runtime was designed specifically for Android to meet the needs of running in an embedded environment with limited battery, limited memory and limited CPU. Dalvik is the process virtual machine in Google's Android operating system. It is the software that runs the apps on Android devices. Dalvik is thus an integral part of Android, which is typically used on mobile devices such as mobile phones and tablet computers. Programs are commonly written in Java and compiled to bytecode.
Core libraries - These are written in the Java programming language. The core library contains all of the collection classes, utilities, IO, and all the tools that you have come to expect.
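The core-library claim above is ordinary Java in practice; as a trivial, self-contained sketch, the standard collection classes and utilities can be used like this (the notice strings are invented):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CoreLibsDemo {
    // Collects notice titles into a list and sorts them using the core utilities.
    public static List<String> sortedNotices(String... items) {
        List<String> list = new ArrayList<>();
        Collections.addAll(list, items);
        Collections.sort(list);
        return list;
    }

    public static void main(String[] args) {
        List<String> notices = sortedNotices("Exam schedule", "Holiday list", "Bus routes");
        System.out.println(notices.get(0));  // Bus routes
    }
}
```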

Application Framework:-

The application framework is written entirely in the Java programming language and is the toolkit that all applications use.
These applications include the ones that come with a phone, like the home application or the phone application. They include applications written by Google, and apps that will be written by you. So all apps use the same framework and the same APIs.
These are as follows:

Activity manager:- It manages the lifecycle of applications and enables proper management of all activities. All activities are controlled by the activity manager.

Resource manager:- It provides access to non-code resources such as graphics, etc.

Notification manager:- It enables all applications to display custom alerts in the status bar.

Location manager:- It fires alerts when the user enters or leaves a specified geographical location.

Package manager:- It is used to retrieve data about the installed packages on the device.

Window manager:- It is used to create views and layouts.

Telephony manager:- It is used to handle settings of the network connection and all information about services on the device.

APPLICATION LAYER:- The final layer on top is Applications. It includes the home application, the contacts application, the browser, and so on, and it is the uppermost layer in the Android architecture. Applications like the camera, Google Maps, the browser, SMS, calendar, and contacts are native applications. These applications interact with the end user with the help of the application framework.
SECURITY
Android is a multi-process system, in which each application (and parts of the system) runs in
its own process. Most security between applications and the system is enforced at the process
level through standard Linux facilities, such as user and group IDs that are assigned to

applications.
Android is designed with multi-layer security, which provides flexibility for the platform. When attackers attempt an attack on a device, the Android platform helps to reduce the probability of the attack succeeding.
The key components of Android security are described as follows:
Design review:- When a security model is designed, it is reviewed by the developers so that the risk level while using the model is very low.
Code review and penetration testing:- The goal of this code review is to check how strong the system is.
Open source and community review:- Android uses open source technologies that have significant external review, such as the Linux kernel.
Incident response:- The Android team enables the rapid mitigation of vulnerabilities to ensure that potential risks to all Android users are minimized.
FEATURES OF ANDROID:

Background Wi-Fi location still runs even when Wi-Fi is turned off

Developer logging and analyzing enhancements

It is optimized for mobile devices.

It enables reuse and replacement of components.

Java support, media support, multi-touch, video calling, multitasking, voice-based features, screen capture, camera, Bluetooth, GPS, compass and accelerometer, 3G

ADVANTAGES:
The ability for anyone to customize the Google Android platform

It gives you better notification.

It lets you choose your hardware.

It has a better app market (180,000+ applications).

A more mature platform with the support of many applications; the user can change the screen display.

With Google chrome you can open many windows at once.

Supports all Google services: the Android operating system supports all Google services, ranging from Gmail to Google Reader. You can have all Google services with one operating system, namely Android.

DISADVANTAGES:

The Android Market is less controlled by its managers; sometimes there is malware.

Wasteful of battery: this is because the OS runs a lot of processes in the background, causing the battery to drain quickly.

Device companies are sometimes slow to issue official versions of Android for your device.

Extreme inconsistency in design among apps.

Can be very unstable, often hanging or crashing.

LIMITATIONS OF ANDROID:

Development requires Java

Android SDK

Bluetooth limitations: Android doesn't support Bluetooth stereo, contact exchange, modem pairing, or wireless keyboards. Firefox Mobile wasn't coming to Android because of these limitations.
Apps in the Android Market need to be programmed with a custom form of Java.
There are no split or interval times available.
Small memory size.
A continuous Internet connection is required.

3.2. JAVA DEVELOPMENT KIT


The Java Development Kit (JDK) is an implementation of one of the Java SE, Java EE or Java ME platforms, released by Oracle Corporation in the form of a binary product aimed at Java developers on Solaris, Linux, Mac OS X or Windows. The JDK includes a private JVM and a few other resources needed to build a Java application. Since the introduction of the Java platform, it has been by far the most widely used Software Development Kit (SDK). On 17 November 2006, Sun announced that it would be released under the GNU General Public License (GPL), thus making it free software. This happened in large part on 8 May 2007, when Sun contributed the source code to the OpenJDK.
The JDK forms an extended subset of a software development kit (SDK). It includes "tools for developing, debugging, and monitoring Java applications". Oracle now recommends the term "JDK" to refer specifically to the Java SE Development Kit. The Java EE SDK is available with or without the "JDK", by which they specifically mean the Java SE 7 JDK.
The JDK has as its primary components a collection of programming tools, including:

java - the loader for Java applications. This tool is an interpreter and can interpret the class files generated by the javac compiler. Now a single launcher is used for both development and deployment. The old deployment launcher, jre, no longer comes with the Sun JDK and has been replaced by this new java loader.

javac - the Java compiler, which converts source code into Java bytecode

jdb - the debugger

VisualVM - a visual tool integrating several command-line JDK tools and lightweight performance and memory profiling capabilities

HTML - Hyper Text Markup Language


Hypertext Markup Language, commonly referred to as HTML, is the standard markup
language used to create web pages. It is written in the form of HTML elements consisting of
tags enclosed in angle brackets. HTML tags most commonly come in pairs like <h1> and
</h1>, although some tags represent empty elements and so are unpaired, for example

<img>. The first tag in a pair is the start tag, and the second tag is the end tag.
HTML elements form the building blocks of all websites. HTML allows images and objects
to be embedded and can be used to create interactive forms. It provides a means to create
structured documents by denoting structural semantics for text such as headings, paragraphs,
lists, links, quotes and other items. It can embed scripts written in languages such as
JavaScript which affect the behavior of HTML web pages.
Basic HTML Tags:

<!-- ... -->          Specifies comments
<A>...</A>            Creates hypertext links
<B>...</B>            Formats text as bold
<BIG>...</BIG>        Formats text in a large font
<BODY>...</BODY>      Contains all tags and text in the HTML document
<DD>...</DD>          Definition of a term
<DL>...</DL>          Creates a definition list
<HEAD>...</HEAD>      Contains tags that specify information about a document
<HR>                  Creates a horizontal rule
<HTML>...</HTML>      Contains all other HTML tags
<TABLE>...</TABLE>    Creates a table
<TR>...</TR>          Creates a table row
<TH>...</TH>          Creates a heading in a table
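To illustrate the paired-tag convention from the list above, here is a small hypothetical Java helper that assembles a table row from `<TR>` and `<TH>` tags (the class and method names are assumptions, not part of any library):

```java
public class HtmlTags {
    // Wraps each heading in a paired <TH>...</TH> tag inside a <TR>...</TR> row.
    public static String row(String... headings) {
        StringBuilder sb = new StringBuilder("<TR>");
        for (String h : headings) {
            sb.append("<TH>").append(h).append("</TH>");
        }
        return sb.append("</TR>").toString();
    }

    public static void main(String[] args) {
        // A complete table is just rows nested inside a paired <TABLE> tag.
        System.out.println("<TABLE>" + row("Notice", "Date") + "</TABLE>");
    }
}
```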

Attributes

An HTML attribute is a modifier of an HTML element. Attributes are usually added to an HTML start tag, which along with content and an end tag comprises most HTML elements. Particular attributes are only supported by particular element types; if they are added to elements that do not support them, they will not function.
JavaScript - Scripting Language
JavaScript is a dynamic computer programming language. It is most commonly used as part
of Web browsers, whose implementations allow client-side scripts to interact with the user,
control the browser, communicate asynchronously, and alter the document content that is
displayed.
It is also used in server-side network programming with runtime environments such as
Node.js, game development and the creation of desktop and mobile applications. With the
rise of the single-page Web app and JavaScript-heavy sites, it is increasingly being used as a
compile target for source-to-source compilers from both dynamic languages and static
languages.
In particular, Emscripten and highly optimized JIT compilers, in tandem with asm.js that is
friendly to AOT compilers like Odin Monkey, have enabled C and C++ programs to be
compiled into JavaScript and execute at near-native speeds, making JavaScript be considered
the "assembly language of the Web", according to its creator and others.
JavaScript is classified as a prototype-based scripting language with dynamic typing and first-class functions. This mix of features makes it a multi-paradigm language, supporting object-oriented, imperative, and functional programming styles. Despite some naming, syntactic, and standard library similarities, JavaScript and Java are otherwise unrelated and have very different semantics. The syntax of JavaScript is actually derived from C, while the semantics and design are influenced by the Self and Scheme programming languages.
Advantages of JavaScript:

The merits of using JavaScript are:

Less server interaction: You can validate user input before sending the page off to the server. This saves server traffic, which means less load on your server.

Immediate feedback to the visitors: They don't have to wait for a page reload to see if
they have forgotten to enter something.

Increased interactivity: You can create interfaces that react when the user hovers over
them with a mouse or activates them via the keyboard.

Richer interfaces: You can use JavaScript to include such items as drag-and-drop
components and sliders to give a Rich Interface to your site visitors.

The script tag takes two important attributes:

language: This attribute specifies what scripting language you are using. Typically, its value is javascript, although recent versions of HTML (and XHTML, its successor) have phased out the use of this attribute.

type: This attribute is what is now recommended to indicate the scripting language in use
and its value should be set to "text/javascript".

Basic Program:
<html>
<body>
<script language="javascript" type="text/javascript">
<!--
document.write("Hello World!")
//-->
</script>
</body>
</html>

XML - Extensible Markup Language

Extensible Markup Language (XML) is a markup language that defines a set of rules for
encoding documents in a format which is both human-readable and machine-readable. It is
defined by the W3C's XML 1.0 and by several other related specifications, all of which are
free open standards.
The design goals of XML emphasize simplicity, generality and usability across the Internet. It
is a textual data format with strong support via Unicode for different human languages.
Although the design of XML focuses on documents, it is widely used for the representation
of arbitrary data structures such as those used in web services.
Several schema systems exist to aid in the definition of XML-based languages, while
many application programming interfaces (APIs) have been developed to aid the processing
of XML data.
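As a sketch of how XML data is typically processed from Java, the following self-contained example parses a small invented `<notice>` document with the standard `javax.xml.parsers` DOM API; the element names are illustrative only:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class XmlDemo {
    // Extracts the text of the first <title> element from an XML string.
    public static String titleOf(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName("title").item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<notice><title>Exam schedule</title><body>Hall A, 9 AM</body></notice>";
        System.out.println(titleOf(xml));  // Exam schedule
    }
}
```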
JAVA
Java is an object-oriented programming language developed by Sun Microsystems, a company best known for its high-end UNIX workstations. The Java language was designed to be small, simple, and portable across platforms and operating systems, both at the source and at the binary level, which means that Java programs (applets and applications) can run on any machine that has the Java Virtual Machine (JVM) installed.
J2EE
Java Platform, Enterprise Edition, or Java EE, is a widely used platform for server programming in the Java programming language. The Java platform (Enterprise Edition) differs from the Java Standard Edition Platform (Java SE) in that it adds libraries which provide functionality to deploy fault-tolerant, distributed, multi-tier Java software, based largely on modular components running on an application server.

MySQL

MySQL is a relational database management system (RDBMS), and ships with no GUI tools
to administer MySQL databases or manage data contained within the databases. Users may
use the included command line tools, or use MySQL "front-ends", desktop software and web
applications that create and manage MySQL databases, build database structures, back up
data, inspect status, and work with data records. The official set of MySQL front-end
tools, MySQL Workbench is actively developed by Oracle, and is freely available for use.
MySQL was created by a Swedish company, MySQL AB, founded by David Axmark, Allan Larsson and Michael "Monty" Widenius. The first version of MySQL appeared on 23 May 1995. It was initially created for personal usage from mSQL, based on the low-level language ISAM, which the creators considered too slow and inflexible. They created a new SQL interface while keeping the same API as mSQL. By keeping the API consistent with the mSQL system, many developers were able to use MySQL instead of the (proprietarily licensed) mSQL antecedent.
MySQL is written in C and C++. Its SQL parser is written in yacc, but it uses a home-brewed lexical analyzer. Many programming languages with language-specific APIs include libraries for accessing MySQL databases. These include MySQL Connector/Net for integration with Microsoft's Visual Studio (languages such as C# and VB are most commonly used) and the JDBC driver for Java. In addition,
an ODBC interface called MyODBC allows additional programming languages that support
the ODBC interface to communicate with a MySQL database, such as ASP or ColdFusion.
The HTSQL URL-based query method also ships with a MySQL adapter, allowing direct
interaction between a MySQL database and any web client via structured URLs.
PHP
PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language. As of January 2013, PHP was installed on more than 240 million websites (39% of those sampled) and 2.1 million web servers. Originally created by Rasmus Lerdorf in 1994, the reference implementation of PHP (powered by the Zend Engine) is now produced by The PHP Group. While PHP originally stood for Personal Home Page, it now stands for PHP: Hypertext Preprocessor, which is a recursive backronym.
PHP code can be simply mixed with HTML code, or it can be used in combination with various templating engines and web frameworks. PHP code is usually processed by a PHP interpreter, which is usually implemented as a web server's native module or a Common Gateway Interface (CGI) executable. After the PHP code is interpreted and executed, the web server sends the resulting output to its client, usually as part of the generated web page; for example, PHP code can generate a web page's HTML code, an image, or some other data. PHP has also evolved to include a command-line interface (CLI) capability and can be used in standalone graphical applications.
The canonical PHP interpreter, powered by the Zend Engine, is free software released under
the PHP License. PHP has been widely ported and can be deployed on most web servers on
almost every operating system and platform, free of charge.
3.3. SQL
The following sections will introduce you to some objects that have evolved, and some that
are new. These objects are:

Connections. For connection to and managing transactions against a database.

Commands. For issuing SQL commands against a database.

DataReaders. For reading a forward-only stream of data records from a SQL Server data
source.

DataSets. For storing, remoting and programming against flat data, XML data and
relational data.

DataAdapters. For pushing data into a DataSet, and reconciling data against a database.

When dealing with connections to a database, there are two different options: SQL Server
.NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider.
These are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider
is used to talk to any OLE DB provider (as it uses OLE DB underneath).
Connections

Connections are used to 'talk to' databases, and are represented by provider-specific classes such as SqlConnection. Commands travel over connections, and result sets are returned in the form of streams, which can be read by a DataReader object or pushed into a DataSet object.
Commands
Commands contain the information that is submitted to a database, and are represented by
provider-specific classes such as SQLCommand. A command can be a stored procedure call,
an UPDATE statement, or a statement that returns results. You can also use input and output
parameters, and return values as part of your command syntax. The example below shows
how to issue an INSERT statement against the Northwind database.
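The INSERT example referred to above does not appear in this copy of the report. As a stand-in, here is a hedged Java/JDBC sketch of the same idea: a parameterized INSERT against a hypothetical Northwind-style `Customers` table (ADO.NET's `SqlCommand` plays the role that `PreparedStatement` plays here; the table and column names are illustrative):

```java
import java.util.Arrays;

public class InsertExample {
    // Builds a parameterized INSERT statement with one '?' placeholder per column.
    public static String buildInsertSql(String table, String... columns) {
        String[] marks = new String[columns.length];
        Arrays.fill(marks, "?");
        return "INSERT INTO " + table + " (" + String.join(", ", columns)
                + ") VALUES (" + String.join(", ", marks) + ")";
    }

    public static void main(String[] args) {
        String sql = buildInsertSql("Customers", "CustomerID", "CompanyName");
        System.out.println(sql);  // INSERT INTO Customers (CustomerID, CompanyName) VALUES (?, ?)

        // With a live connection (not opened here), the statement would be run as:
        // try (Connection con = DriverManager.getConnection(url);
        //      PreparedStatement ps = con.prepareStatement(sql)) {
        //     ps.setString(1, "ALFKI");
        //     ps.setString(2, "Alfreds Futterkiste");
        //     ps.executeUpdate();
        // }
    }
}
```

Using placeholders rather than string concatenation is what lets the driver supply input and output parameters safely, mirroring the parameter support described for Commands above.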
DataReaders
The DataReader object is somewhat synonymous with a read-only/forward-only cursor over
data. The DataReader API supports flat as well as hierarchical data. A DataReader object is
returned after executing a command against a database. The format of the returned
DataReader object is different from a recordset. For example, you might use the
DataReader to show the results of a search list in a web page.
DataSets and DataAdapters
DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful, and with one
other important distinction: the DataSet is always disconnected. The DataSet object
represents a cache of data, with database-like structures such as tables, columns,
relationships, and constraints. However, though a DataSet can and does behave much like a
database, it is important to remember that DataSet objects do not interact directly with
databases, or other source data. This allows the developer to work with a programming model
that is always consistent, regardless of where the source data resides. Data coming from a
database, an XML file, from code, or user input can all be placed into DataSet objects. Then,

as changes are made to the DataSet, they can be tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data source.
The DataSet has many XML characteristics, including the ability to produce and consume
XML data and XML schemas. XML schemas can be used to describe schemas interchanged
via WebServices. In fact, a DataSet with a schema can actually be compiled for type safety
and statement completion.
DataAdapters (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with Microsoft SQL Server databases. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after changes have been
made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT
command; using the Update method calls the INSERT, UPDATE or DELETE command for
each changed row. You can explicitly set these commands in order to control the statements
used at runtime to resolve changes, including the use of stored procedures. For ad-hoc
scenarios, a CommandBuilder object can generate these at run-time based upon a select
statement. However, this run-time generation requires an extra round-trip to the server in
order to gather required metadata, so explicitly providing the INSERT, UPDATE, and
DELETE commands at design time will result in better run-time performance.
4.3 SQL SERVER 2000
Microsoft SQL Server 2000 is a set of components that work together to meet the data
storage and analysis needs of the largest Web sites and enterprise data processing systems.
The topics in SQL Server Architecture describe how the various components work together to
manage data effectively.

Features of SQL Server 2000


Microsoft SQL Server 2000 features include:

Internet Integration.
The SQL Server 2000 database engine includes integrated XML support. It also has the
scalability, availability, and security features required to operate as the data storage
component of the largest Web sites. The SQL Server 2000 programming model is
integrated with the Windows DNA architecture for developing Web applications, and
SQL Server 2000 supports features such as English Query and the Microsoft Search
Service to incorporate user-friendly queries and powerful search capabilities in Web
applications.

Scalability and Availability.


The same database engine can be used across platforms ranging from laptop computers
running Microsoft Windows 98 through large, multiprocessor servers running
Microsoft Windows 2000 Data Center Edition. SQL Server 2000 Enterprise Edition
supports features such as federated servers, indexed views, and large memory support
that allow it to scale to the performance levels required by the largest Web sites.

Enterprise-Level Database Features.


The SQL Server 2000 relational database engine supports the features required to support
demanding data processing environments. The database engine protects data integrity
while minimizing the overhead of managing thousands of users concurrently modifying
the database. SQL Server 2000 distributed queries allow you to reference data from
multiple sources as if it were a part of a SQL Server 2000 database, while at the same
time, the distributed transaction support protects the integrity of any updates of the
distributed data. Replication also allows you to maintain multiple copies of data, while
ensuring that the separate copies remain synchronized. You can replicate a set of data to
multiple mobile, disconnected users, have them work autonomously, and then merge
their modifications back to the publisher.

Ease of installation, deployment, and use.


SQL Server 2000 includes a set of administrative and development tools that improve
upon the process of installing, deploying, managing, and using SQL Server across
several sites. SQL Server 2000 also supports a standards-based programming model
integrated with the Windows DNA, making the use of SQL Server databases and data
warehouses a seamless part of building powerful and scalable systems. These features
allow you to rapidly deliver SQL Server applications that customers can implement with
a minimum of installation and administrative overhead.

Data warehousing.
SQL Server 2000 includes tools for extracting and analyzing summary data for online
analytical processing. SQL Server also includes tools for visually designing databases
and analyzing data using English-based questions.

Relational Database Components


The database component of Microsoft SQL Server 2000 is a Structured Query Language
(SQL)-based, scalable, relational database with integrated Extensible Markup Language
(XML) support for Internet applications. Each of the following terms describes a fundamental
part of the architecture of the SQL Server 2000 database component:
Database
A database is similar to a data file in that it is a storage place for data. Like a data file, a
database does not present information directly to a user; the user runs an application that
accesses data from the database and presents it to the user in an understandable format.
Database systems are more powerful than data files in that data is more highly organized. In a
well-designed database, there are no duplicate pieces of data that the user or application must
update at the same time. Related pieces of data are grouped together in a single structure or
record, and relationships can be defined between these structures and records.

When working with data files, an application must be coded to work with the specific
structure of each data file. In contrast, a database contains a catalog that applications use to
determine how data is organized. Generic database applications can use the catalog to present
users with data from different databases dynamically, without being tied to a specific data
format.
A database typically has two main parts: first, the files holding the physical database and
second, the database management system (DBMS) software that applications use to access
data. The DBMS is responsible for enforcing the database structure, including:

Maintaining relationships between data in the database.

Ensuring that data is stored correctly and that the rules defining data relationships are not
violated.

Recovering all data to a point of known consistency in case of system failures.

Relational Database
Although there are different ways to organize data in a database, relational databases are one
of the most effective. Relational database systems are an application of mathematical set
theory to the problem of effectively organizing data. In a relational database, data is collected
into tables (called relations in relational theory).
A table represents some class of objects that are important to an organization. For example, a
company may have a database with a table for employees, another table for customers, and
another for stores. Each table is built of columns and rows (called attributes and tuples in
relational theory). Each column represents some attribute of the object represented by the
table. For example, an Employee table would typically have columns for attributes such as
first name, last name, employee ID, department, pay grade, and job title. Each row represents
an instance of the object represented by the table. For example, one row in the Employee
table represents the employee who has employee ID 12345.
When organizing data into tables, you can usually find many different ways to define tables.
Relational database theory defines a process called normalization, which ensures that the set
of tables you define will organize your data effectively.
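As a sketch of these ideas, the Employee example above could be normalized into two related tables. This is an illustrative design only; all table and column names here are hypothetical, not taken from the report:

```sql
-- A normalized design: each fact is stored once.
-- Department names live only in Department; Employee refers to them by key.
CREATE TABLE Department (
    DepartmentID int         PRIMARY KEY,
    Name         varchar(50) NOT NULL
)

CREATE TABLE Employee (
    EmployeeID   int         PRIMARY KEY,   -- e.g. 12345
    FirstName    varchar(30) NOT NULL,
    LastName     varchar(30) NOT NULL,
    PayGrade     int,
    JobTitle     varchar(50),
    DepartmentID int REFERENCES Department (DepartmentID)  -- relationship between the tables
)
```

Because the department name is held in one place, renaming a department never requires touching every employee row, which is exactly the duplication that normalization removes.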

Scalable
SQL Server 2000 supports having a wide range of users access it at the same time. An
instance of SQL Server 2000 includes the files that make up a set of databases and a copy of
the DBMS software. Applications running on separate computers use a SQL Server 2000
communications component to transmit commands over a network to the SQL Server 2000
instance. When an application connects to an instance of SQL Server 2000, it can reference
any of the databases in that instance that the user is authorized to access. The communication
component also allows communication between an instance of SQL Server 2000 and an
application running on the same computer. You can run multiple instances of SQL Server
2000 on a single computer.
SQL Server 2000 is designed to support the traffic of the largest Web sites or enterprise data
processing systems. Instances of SQL Server 2000 running on large, multiprocessor servers
are capable of supporting connections to thousands of users at the same time. The data in
SQL Server tables can be partitioned across multiple servers, so that several multiprocessor
computers can cooperate to support the database processing requirements of extremely large
systems. These groups of database servers are called federations.
Although SQL Server 2000 is designed to work as the data storage engine for thousands of
concurrent users who connect over a network, it is also capable of working as a stand-alone
database directly on the same computer as an application. The scalability and ease-of-use
features of SQL Server 2000 allow it to work efficiently on a single computer without
consuming too many resources or requiring administrative work by the stand-alone user. The
same features allow SQL Server 2000 to dynamically acquire the resources required to
support thousands of users, while minimizing database administration and tuning. The SQL
Server 2000 relational database engine dynamically tunes itself to acquire or free the
appropriate computer resources required to support a varying load of users accessing an
instance of SQL Server 2000 at any specific time. The SQL Server 2000 relational database
engine has features to prevent the logical problems that occur if a user tries to read or modify
data currently used by others.
Structured Query Language

To work with data in a database, you have to use a set of commands and statements
(language) defined by the DBMS software. Several different languages can be used with
relational databases; the most common is SQL. The American National Standards Institute
(ANSI) and the International Standards Organization (ISO) define software standards,
including standards for the SQL language. SQL Server 2000 supports the Entry Level of
SQL-92, the SQL standard published by ANSI and ISO in 1992. The dialect of SQL
supported by Microsoft SQL Server is called Transact-SQL (T-SQL). T-SQL is the primary
language used by Microsoft SQL Server applications.
Extensible Markup Language
XML is the emerging Internet standard for data. XML is a set of tags that can be used to
define the structure of a document. It complements the Hypertext Markup Language
(HTML), the most important language for displaying Web pages.
Although most SQL statements return their results in a relational, or tabular, result set, the
SQL Server 2000 database component supports a FOR XML clause that returns results as an
XML document. SQL Server 2000 also supports XPath queries from Internet and intranet
applications. XML documents can be added to SQL Server databases, and the OPENXML
clause can be used to expose data from an XML document as a relational result set.
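A minimal sketch of both directions, using standard SQL Server 2000 Transact-SQL syntax; the Employee table and its columns are hypothetical:

```sql
-- Relational rows out as XML:
SELECT EmployeeID, LastName
FROM Employee
FOR XML AUTO

-- An XML document in as a relational result set:
DECLARE @doc int
EXEC sp_xml_preparedocument @doc OUTPUT,
    '<ROOT><Employee EmployeeID="12345" LastName="Smith"/></ROOT>'

SELECT *
FROM OPENXML(@doc, '/ROOT/Employee', 1)
     WITH (EmployeeID int, LastName varchar(30))

EXEC sp_xml_removedocument @doc
```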
Database Architecture
Microsoft SQL Server 2000 data is stored in databases. The data in a database is
organized into the logical components visible to users. A database is also physically
implemented as two or more files on disk.
When using a database, you work primarily with the logical components such as tables,
views, procedures, and users. The physical implementation of files is largely transparent.
Typically, only the database administrator needs to work with the physical implementation.
Each instance of SQL Server has four system databases (master, model, tempdb, and msdb)
and one or more user databases. Some organizations have only one user database, containing
all the data for their organization. Some organizations have different databases for each group
in their organization, and sometimes a database used by a single application. For example, an
organization could have one database for sales, one for payroll, one for a document

management application, and so on. Sometimes an application uses only one database; other
applications may access several databases.
It is not necessary to run multiple copies of the SQL Server database engine to allow multiple
users to access the databases on a server. An instance of the SQL Server Standard or
Enterprise Edition is capable of handling thousands of users working in multiple databases at
the same time. Each instance of SQL Server makes all databases in the instance available to
all users that connect to the instance, subject to the defined security permissions.
When connecting to an instance of SQL Server, your connection is associated with a
particular database on the server. This database is called the current database. You are usually
connected to a database defined as your default database by the system administrator,
although you can use connection options in the database APIs to specify another database.
You can switch from one database to another using either the Transact-SQL USE database
name statement, or an API function that changes your current database context.
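For example, a connection can switch its current database context with the USE statement (the database and table names here are illustrative):

```sql
USE sales               -- the current database is now sales
SELECT COUNT(*) FROM Orders

USE payroll             -- switch the connection to another database it can access
```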
SQL Server 2000 allows you to detach databases from an instance of SQL Server, then
reattach them to another instance, or even attach the database back to the same instance. If
you have a SQL Server database file, you can tell SQL Server when you connect to attach
that database file with a specific database name.
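In SQL Server 2000 this is done with system stored procedures; a sketch, with an illustrative database name and file paths:

```sql
-- Detach the database from the current instance...
EXEC sp_detach_db 'SmartCircular'

-- ...then attach its files to another instance (or back to this one),
-- under a specific database name.
EXEC sp_attach_db @dbname    = 'SmartCircular',
                  @filename1 = 'C:\Data\SmartCircular.mdf',
                  @filename2 = 'C:\Data\SmartCircular_log.ldf'
```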
Implementation Details
The topics in this section provide information about the editions of Microsoft SQL
Server 2000 and the environments that support these editions. Information about the
maximum capacities and memory usage of SQL Server 2000 objects is also provided.
Managing Permissions
When users connect to an instance of Microsoft SQL Server, the activities they can
perform are determined by the permissions granted to:

Their security accounts.

The Microsoft Windows NT 4.0 or Windows 2000 groups or role hierarchies to
which their security accounts belong.

The user must have the appropriate permissions to perform any activity that involves
changing the database definition or accessing data.
Managing permissions includes granting or revoking user rights to:

Work with data and execute procedures (object permissions).

Create a database or an item in the database (statement permissions).

Utilize permissions granted to predefined roles (implied permissions).

OBJECT PERMISSIONS
Working with data or executing a procedure requires a class of permissions known as object
permissions:

SELECT, INSERT, UPDATE, and DELETE statement permissions, which can be applied
to an entire table or view.

SELECT and UPDATE statement permissions, which can be selectively applied to
individual columns of a table or view.

SELECT permissions, which may be applied to user-defined functions.

INSERT and DELETE statement permissions, which affect the entire row and therefore
can be applied only to a table or view, not to individual columns.

EXECUTE statement permissions, which affect stored procedures and functions.
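A sketch of object permissions being granted; the table, procedure, and role names are hypothetical:

```sql
-- Table-level object permissions for a role:
GRANT SELECT, INSERT, UPDATE, DELETE ON Notices TO staff_role

-- EXECUTE permission on a stored procedure:
GRANT EXECUTE ON usp_PostNotice TO staff_role
```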

Statement Permissions
Activities involved in creating a database or an item in a database, such as a table or stored
procedure, require a different class of permissions called statement permissions. For example,
if a user must be able to create a table within a database, then grant the CREATE TABLE
statement permission to the user. Statement permissions, such as CREATE DATABASE, are
applied to the statement itself, rather than to a specific object defined in the database.
Statement permissions are:

BACKUP DATABASE

BACKUP LOG

CREATE DATABASE

CREATE DEFAULT

CREATE FUNCTION

CREATE PROCEDURE

CREATE RULE

CREATE TABLE

CREATE VIEW
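Statement permissions are granted to a security account rather than on an object. A sketch, with a hypothetical account name:

```sql
-- Allow the account to create tables and views in the current database:
GRANT CREATE TABLE, CREATE VIEW TO mary

-- Allow the account to back up the current database:
GRANT BACKUP DATABASE TO mary
```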

Implied Permissions
Implied permissions control those activities that can be performed only by members of
predefined system roles or owners of database objects. For example, a member of the
sysadmin fixed server role automatically inherits full permission to do or see anything in a
SQL Server installation.
Database object owners also have implied permissions that allow them to perform all
activities with the object they own. For example, a user who owns a table can view, add, or
delete data, alter the table definition, or control permissions that allow other users to work
with the table.
Granting Permissions
Grant statement and object permissions that allow a user account to:

Perform activities or work with data in the current database.

Restrict them from activities or information not part of their intended function.

For example, you may be inclined to grant SELECT object permission on the payroll table to
all members of the personnel role, allowing all members of personnel to view payroll.
Months later, you may overhear members of personnel discussing management salaries,
information not meant to be seen by all personnel members. In this situation, grant SELECT
access to personnel for all columns in payroll except the salary column.
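The payroll scenario above can be expressed as follows; SQL Server 2000 accepts a column list with the SELECT and UPDATE permissions. The column names are illustrative:

```sql
-- Too broad: every member of personnel can read every column, including salary.
GRANT SELECT ON payroll TO personnel

-- Narrower: revoke the table-wide grant, then grant only the non-salary columns.
REVOKE SELECT ON payroll FROM personnel
GRANT SELECT ON payroll (employee_id, name, department) TO personnel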
Note: It is possible to grant permissions only to user accounts in the current database, for

objects in the current database. If a user needs permissions to objects in another database,
create the user account in the other database, or grant the user account access to the other
database, as well as the current database. System stored procedures are the exception because
EXECUTE permissions are already granted to the public role, which allows everyone to
execute them. However, after EXECUTE has been issued, the system stored procedures
check the user's role membership. If the user is not a member of the appropriate fixed server
or database role necessary to run the stored procedure, the stored procedure will not continue.
Revoking Permissions
You can revoke a permission that has been granted or denied previously. Revoking is similar
to denying in that both remove a granted permission at the same level. However, although
revoking permission removes a granted permission, it does not prevent the user, group, or
role from inheriting a granted permission from a higher level. Therefore, if you revoke
permission for a user to view a table, you do not necessarily prevent the user from viewing
the table because permission to view the table was granted to a role to which he belongs.
For example, removing SELECT access on the Employees table from the Human Resources
role revokes that permission, so Human Resources can no longer use the table. Suppose,
however, that Human Resources is a member of the Administration role. If you later grant
SELECT permission on Employees to Administration, members of Human Resources can see
the table through their membership in Administration. If you instead deny the permission to
Human Resources, it is not inherited through a later grant to Administration, because a deny
cannot be overridden by a permission at a different level.
Similarly, it is also possible to remove a previously denied permission by revoking the deny
for the permission. However, if a user has other denied permissions at the group or role level,
then the user still is denied access.
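The Employees example can be sketched with the three Transact-SQL permission statements involved; the role names are taken from the example above:

```sql
-- REVOKE removes this role's own grant, but access may still be
-- inherited through membership in another role:
REVOKE SELECT ON Employees FROM HumanResources

-- DENY, by contrast, blocks access even if a grant exists (or is later
-- made) at another level, such as the Administration role:
DENY SELECT ON Employees TO HumanResources
GRANT SELECT ON Employees TO Administration   -- HumanResources members remain denied
```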
Note: You can revoke permissions to user accounts only in the current database, for objects
in the current database.

Permissions for User-Defined Functions

Functions are subroutines made up of one or more Transact-SQL statements that can be used
to encapsulate code for reuse. Microsoft SQL Server 2000 allows users to create their
own user-defined functions.
User-defined functions are managed through the following statements:
CREATE FUNCTION, which creates a user-defined function.
ALTER FUNCTION, which modifies user-defined functions.
DROP FUNCTION, which drops user-defined functions.
Each fully qualified user-defined function name (database_name.owner_name.function_name)
must be unique.
You must have been granted CREATE FUNCTION permissions to create, alter, or drop
user-defined functions. Users other than the owner must be granted EXECUTE permission on a
function (if the function is scalar-valued) before they can use it in a Transact-SQL statement.
If the function is table-valued, the user must have SELECT permissions on the function
before referencing it. If a CREATE TABLE or ALTER TABLE statement references a
user-defined function in a CHECK constraint, a DEFAULT clause, or a computed column, the
table owner must also own the function. If the function is being schema-bound, you must
have REFERENCES permission on the tables, views, and functions referenced by the function.
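A minimal scalar user-defined function in SQL Server 2000 syntax, with the EXECUTE grant that non-owners need before calling it; all names are illustrative:

```sql
CREATE FUNCTION dbo.FullName (@First varchar(30), @Last varchar(30))
RETURNS varchar(61)
AS
BEGIN
    RETURN @First + ' ' + @Last
END
GO

-- Scalar-valued, so users other than the owner need EXECUTE permission:
GRANT EXECUTE ON dbo.FullName TO staff_role

-- Example use in a Transact-SQL statement:
SELECT dbo.FullName(FirstName, LastName) FROM Employee
```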

CHAPTER- IV

4. DESIGN
Design patterns brought a paradigm shift in the way object-oriented systems are designed.
Instead of relying on knowledge of the problem domain alone, design patterns allow past
experience to be utilized while solving new problems. Traditional object-oriented design
(OOD) approaches such as Booch and OMT advocated identification and specification of
individual objects and classes. Design patterns, on the other hand, promote identification and
specification of collaborations of objects and classes. However, much of the focus of
recent research has been on the identification and cataloguing of new design patterns. The
effort has been to assimilate knowledge gained from designing systems of the past, in various
problem domains. The problem analysis phase has gained little benefit from this paradigm.
Most projects still use traditional object-oriented analysis (OOA) approaches to identify
classes from the problem description. Responsibilities are assigned to these classes based upon
the obvious description of entities given in the problem description.
Pattern Oriented Technique (POT) is a methodology for identifying interactions among
classes and mapping them to one or more design patterns. However, this methodology also
uses traditional OOA for assigning class responsibilities. As a result, its interaction-oriented
design phase (driven by design patterns) receives its input in the form of class definitions that
might not lead to the best possible design.
The missing piece is an analysis method that can help in identifying class definitions, and
collaborations between them, that would be amenable to the application of
interaction-oriented design. There are two key issues here. The first is to come up with good
class definitions, and the second is to identify good class collaborations.
It has been observed that even arriving at good class definitions from the given problem
description is non-trivial. The key to many successful designs is the presence of abstract
classes (such as an event handler) which are not modelled as entities in the physical world and
hence do not appear in the problem description. Anticipating change has been proposed as
the method for identifying such abstract classes in a problem domain. Another
difficult task is the assignment of responsibilities to the entities identified from the
problem description. Different responsibility assignments could lead to completely different
designs. Current approaches such as Coad and Yourdon, POT, etc. follow the simple approach
of using entity descriptions in the problem statement to define classes and fix responsibilities.
We propose to follow a flexible approach towards assigning responsibilities to classes so that
the best responsibility assignment can be chosen.
The second issue is to identify class collaborations. Techniques such as POT analyze
interactions among different sets of classes as specified in the problem description. Such
interacting classes are then grouped together to identify design patterns that may be
applicable. However, as mentioned earlier, only the interactions among obvious classes are
currently determined. Other interactions, involving abstract classes not present in the problem
description, or interactions that become feasible under different responsibility assignments,
are not considered. We present some techniques that enable the designer to capture such
interactions as well.

INTERACTION BASED ANALYSIS AND DESIGN


Top-down approach:
A top-down approach (also known as stepwise design and in some cases used as a synonym
of decomposition) is essentially the breaking down of a system to gain insight into its
compositional sub-systems. In a top-down approach an overview of the system is formulated,
specifying but not detailing any first-level subsystems. Each subsystem is then refined in yet
greater detail, sometimes in many additional subsystem levels, until the entire specification is
reduced to base elements. A top-down model is often specified with the assistance of "black
boxes", which make it easier to manipulate. However, black boxes may fail to elucidate
elementary mechanisms or be detailed enough to realistically validate the model. The
top-down approach starts with the big picture and breaks it down into smaller segments.
Top-down approaches emphasize planning and a complete understanding of the
system. It is inherent that no coding can begin until a sufficient level of detail has been
reached in the design of at least some part of the system. Top-down approaches are often
implemented by attaching stubs in place of the modules; this, however, delays testing of
the ultimate functional units of a system until significant design is complete.
Top-down is a programming style, the mainstay of traditional procedural languages, in which
design begins by specifying complex pieces and then dividing them into successively smaller
pieces. The technique for writing a program using top-down methods is to write a main
procedure that names all the major functions it will need. Later, the programming team looks
at the requirements of each of those functions and the process is repeated. These
compartmentalized sub-routines eventually will perform actions so simple they can be easily
and concisely coded. When all the various sub-routines have been coded the program is ready
for testing. By defining how the application comes together at a high level, lower level work
can be self-contained. By defining how the lower level abstractions are expected to integrate
into higher level ones, interfaces become clearly defined.

Bottom-up Approach:
A bottom-up approach is the piecing together of systems to give rise to more complex
systems, thus making the original systems sub-systems of the emergent system. Bottom-up
processing is a type of information processing based on incoming data from the environment
to form a perception. From a Cognitive Psychology perspective, information enters the eyes
in one direction (sensory input, or the "bottom"), and is then turned into an image by the
brain that can be interpreted and recognized as a perception (output that is "built up" from
processing to final cognition). In a bottom-up approach the individual base elements of the
system are first specified in great detail. These elements are then linked together to form
larger subsystems, which then in turn are linked, sometimes in many levels, until a complete
top-level system is formed. This strategy often resembles a "seed" model, whereby the
beginnings are small but eventually grow in complexity and completeness. However,
"organic strategies" may result in a tangle of elements and subsystems, developed in isolation
and subject to local optimization as opposed to meeting a global purpose.
Bottom-up emphasizes coding and early testing, which can begin as soon as the first module
has been specified. This approach, however, runs the risk that modules may be coded without
having a clear idea of how they link to other parts of the system, and that such linking may
not be as easy as first thought. Reusability of code is one of the main benefits of the bottom-up approach.

Object-oriented programming (OOP) is a paradigm
that uses "objects" to design applications and computer programs. This bottom-up approach
has one weakness: good intuition is necessary to decide the functionality that is to be
provided by each module. If a system is to be built from an existing system, this approach is
more suitable, as it starts from existing modules.

OPEN ISSUES
Identifying interactions:
This is the crucial step in the analysis phase, and the success of the remaining phases depends
on it. The issue here is to identify interactions which are not evident from the problem
description but may hold the key to efficient design solution. The bottom up approach
proposed in this paper takes a step in this direction but a lot more work is needed. The
analysis method should be such that it is able to incorporate abstract classes such as event
handlers, proxies etc. Moreover, current analysis methods map entities to responsibilities of
individual classes in terms of services they provide and methods they invoke on other classes.
However, an entity may be realized by a set of classes. For instance, an adapter class hides the
interface of an adaptee class, and together they provide the desired functionality. Similarly,
an abstraction and its implementation provide a single functionality through separate classes
resulting in increased maintainability. The analysis method needs to be able to determine
when it is appropriate to realize an entity responsibility by means of multiple interacting
classes.

Representation of Class Responsibilities:


Since we need to specify different alternative class responsibilities, as in the bottom-up
approach, a mechanism is required to document them in a machine-interpretable format.
Some of these responsibilities would be captured in the form of methods a class exports or
methods it invokes on other classes. However, other responsibilities, concerning a class's
interaction with other classes, need to be explicitly specified.

Language for Specifying Design Patterns:


The approaches for OO design proposed in this paper favour automatic techniques over
manual ones, for reasons described earlier. This means that we need a mechanism to express
design patterns in a format amenable to being read and interpreted by programs. Some
attempts have been made at defining such pattern description languages. One of these, or
some variation of them, could be used to express design patterns in a formal language.

Comparison of Software Designs:


Once we have alternative designs available, they need to be compared to arrive at the best
one. Each design may consist of multiple design patterns. The criteria here would not be to
simply count the number of design patterns used but to evaluate the interaction between
patterns and also between other design elements used. This would involve understanding of
good and bad interactions and an ability to identify them in a given design. The final
challenge would be to do it automatically.

4.1 System Architecture


A system architecture or systems architecture is the conceptual model that defines the
structure, behavior, and more views of a system. An architecture description is a formal
description and representation of a system, organized in a way that supports reasoning about
the structures and behaviors of the system.
System architecture can comprise system components, the externally visible properties of
those components, the relationships (e.g. the behaviour) between them. It can provide a plan
from which products can be procured, and systems developed, that will work together to
implement the overall system. There have been efforts to formalize languages to describe
system architecture, collectively these are called architecture description languages (ADLs).
One can think of system architecture as a set of representations of an existing (or future)
system. It conveys the informational content of the elements comprising a system, the
relationships among those elements, and the rules governing those relationships. The
architectural components and relationships described by an architecture description may
include hardware, software, documentation, facilities, manual procedures, or roles played by
organizations or people.
A system architecture primarily concentrates on the internal interfaces among the system's
components or subsystems, and on the interface(s) between the system and its external
environment, especially the user. (In the specific case of computer systems, this latter,
special, interface is known as the computer human interface, AKA human computer
interface, or CHI; formerly called the man-machine interface.)

4.1.1 Modules of the System


Module 1:
Module 2:
Module 3:
Module 4:
4.1.2 Architecture Diagram
4.2 UML Diagrams

4.2.1. Use Case Diagram


A use case diagram at its simplest is a representation of a user's interaction with the system
that shows the relationship between the user and the different use cases in which the user is
involved. A use case diagram can identify the different types of users of a system and the
different use cases, and will often be accompanied by other types of diagrams as well. The
primary goals of a use case diagram include:

Providing a high level view of what the system does.

Identifying the users (actors) of the system.

Determining the areas needing human computer interfaces.

Use cases extend beyond pictorial diagrams. In fact, text-based use case descriptions are
often used to supplement diagrams and to explore use case functionality in more detail.

4.2.2. Sequence diagram:


A sequence diagram documents the interactions between classes required to achieve a result,
such as a use case. The sequence diagram lists objects horizontally and time vertically, and
models the messages exchanged between those objects over time.

4.3 Database Design


Database design is the process of producing a detailed data model of a database. This logical
data model contains all the needed logical and physical design choices and physical storage
parameters needed to generate a design in a data definition language, which can then be used
to create a database. A fully attributed data model contains detailed attributes for each entity.
The term database design can be used to describe many different parts of the design of an
overall database system. Principally, and most correctly, it can be thought of as the logical
design of the base data structures used to store the data. In the relational model these are
the tables and views. In an object database the entities and relationships map directly to object
classes and named relationships. However, the term database design could also be used to
apply to the overall process of designing, not just the base data structures, but also the forms
and queries used as part of the overall database application within the database management
system (DBMS).
The process of doing database design generally consists of a number of steps which will be
carried out by the database designer. Usually, the designer must:

Determine the relationships between the different data elements.

Superimpose a logical structure upon the data on the basis of these relationships

4.3.1 ER Diagrams (Entity Relationship Diagrams)


Database designs also include ER (entity-relationship model) diagrams. An ER diagram
helps to design databases in an efficient way.
Attributes in ER diagrams are usually modelled as an oval with the name of the attribute,
linked to the entity or relationship that contains the attribute.
Within the relational model the final step can generally be broken down into two further
steps, that of determining the grouping of information within the system, generally
determining what are the basic objects about which information is being stored, and then
determining the relationships between these groups of information, or objects. This step is not
necessary with an Object database.

Design Process

Determine the purpose of the database - This helps prepare for the remaining steps.

Find and organize the information required - Gather all of the types of information
to record in the database, such as product name and order number.

Divide the information into tables - Divide information items into major entities or
subjects, such as Products or Orders. Each subject then becomes a table.

Turn information items into columns - Decide what information needs to be stored
in each table. Each item becomes a field, and is displayed as a column in the table.

For example, an Employees table might include fields such as Last Name and Hire
Date.

Specify primary keys - Choose each table's primary key. The primary key is a
column, or a set of columns, that is used to uniquely identify each row. An example
might be Product ID or Order ID.

Set up the table relationships - Look at each table and decide how the data in one
table is related to the data in other tables. Add fields to tables or create new tables to
clarify the relationships, as necessary.

Refine the design - Analyze the design for errors. Create tables and add a few records
of sample data. Check if results come from the tables as expected. Make adjustments
to the design, as needed.

Apply the normalization rules - Apply the data normalization rules to see if the tables
are structured correctly. Make adjustments to the tables as needed.
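The design steps above can be sketched as a small relational schema. The following is a minimal illustration using SQLite; the table and column names (taken from the Products, Orders and Employees examples above) and the sample data are illustrative assumptions, not the project's actual schema:

```python
import sqlite3

# In-memory database for illustration; a real deployment would use a file or server DB.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Divide the information into tables and turn items into columns,
# following the Products / Orders / Employees examples in the text.
cur.executescript("""
CREATE TABLE Products (
    ProductID   INTEGER PRIMARY KEY,  -- primary key uniquely identifies each row
    ProductName TEXT NOT NULL
);
CREATE TABLE Employees (
    EmployeeID  INTEGER PRIMARY KEY,
    LastName    TEXT NOT NULL,
    HireDate    TEXT
);
CREATE TABLE Orders (
    OrderID     INTEGER PRIMARY KEY,
    ProductID   INTEGER NOT NULL,
    EmployeeID  INTEGER NOT NULL,
    -- table relationships are expressed as foreign keys
    FOREIGN KEY (ProductID)  REFERENCES Products(ProductID),
    FOREIGN KEY (EmployeeID) REFERENCES Employees(EmployeeID)
);
""")

# Refine the design: add a few records of sample data and check the results.
cur.execute("INSERT INTO Products VALUES (1, 'Notice Display Unit')")
cur.execute("INSERT INTO Employees VALUES (1, 'Rao', '2020-01-15')")
cur.execute("INSERT INTO Orders VALUES (100, 1, 1)")
cur.execute("""
    SELECT o.OrderID, p.ProductName, e.LastName
    FROM Orders o
    JOIN Products p ON o.ProductID = p.ProductID
    JOIN Employees e ON o.EmployeeID = e.EmployeeID
""")
rows = cur.fetchall()
print(rows)  # the joined row confirms the relationships resolve correctly
conn.close()
```

Running the query at the end is the "refine the design" step in miniature: sample data goes in, and the join across the relationships is checked against the expected result.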

CHAPTER V

Integration and Testing


5.1 Integration
Integration testing is one of the important phases in the software testing life cycle (STLC). With
the fast growth of the internet and web services, web-based applications are also growing
rapidly, and their importance and complexity are increasing. The heterogeneous and diverse
nature of distributed components and applications, along with their multi-platform support and
cooperativeness, makes these applications more complex and swiftly increasing in size.
Quality assurance of these applications is becoming more crucial and important. Testing is
one of the key processes to achieve and ensure the quality of these software or Web-based
products. There are many testing challenges involved in Web-based applications, but most
importantly, integration is the most critical testing activity associated with them. A number of
challenging factors are involved in integration testing efforts; these factors have almost a 70 to
80 percent impact on the overall quality of Web-based applications. In the software industry,
practitioners use different kinds of testing approaches to solve the issues associated with
integration, which arise from the ever-increasing complexity of Web-based applications.

5.2 Testing

Software testing is the process of executing a program with the intention of finding errors
in the code. It is a process of evaluating a system or its parts, by manual or automated
means, to verify whether it satisfies the specified requirements.

Generally, no system is perfect, due to communication problems between user and
developer, time constraints, or conceptual mistakes by the developer.

The purpose of system testing is to check for and find these errors or faults as early as
possible, so that the losses they would cause can be avoided.

Testing is the fundamental process of software success.

Testing is not a distinct phase in the system development life cycle but should be
applied throughout all phases, i.e. the design, development and maintenance phases.

Testing is used to show incorrectness and is considered successful when an error is
detected.

Objectives of Software Testing


Software testing is usually performed for the following objectives:

Software Quality Improvement: Computers and software are mainly used for complex and
critical applications, and a bug or fault in software can cause severe losses. Great
consideration is therefore required in checking the quality of software.

Verification and Validation: Verification means testing that we are building the product in the
right way, i.e. whether we are using the correct procedures for developing the software so that
it can meet the user requirements. Validation means checking whether we are building the
right product or not.

Software Reliability Estimation: The objective is to discover residual design errors before
delivery to the customer. Failure data recorded during the process are used to estimate the
software reliability.

Principles of Software Testing


Software testing is an extremely creative and challenging task. Some important principles of
software testing are as given:

All tests should be traceable to customer requirements.

Testing time and resources should be limited i.e. avoid redundant testing.

It is impossible to test everything.

Use effective resources to test.

Tests should be planned long before testing begins, i.e. after the requirements phase.

Test for invalid and unexpected input conditions as well as valid conditions.

Testing should begin in the small and progress towards testing in the large.

For maximum effectiveness, testing should be conducted by an independent party.

Keep the software static (unchanged meanwhile) during testing.

Document test cases and test results.

Examine what the software is not doing that it is expected to do, and also check what it
is doing that it was not expected to do.

Strategy for Software Testing


Different levels of testing are used in the test process; each level of testing aims to test
different aspects of the system.

The first level is unit testing. In this testing, individual components are tested to ensure
that they operate correctly. It focuses on verification efforts.

The second level is integration testing. It is a systematic technique for constructing the
program structure. In this testing, the already-tested modules are combined into subsystems,
which are then tested. The goal here is to see whether the modules can be integrated properly.

The third level is system testing. System testing is actually a series of different tests
whose primary purpose is to fully exercise the computer-based system. These tests fall
outside the scope of the software process and are not conducted solely by software engineers.
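As a concrete illustration of the first level, a unit test exercises one component in isolation with valid, invalid and unexpected inputs. The `is_valid_notice` helper below is a hypothetical stand-in for a project component, not actual project code:

```python
import unittest

def is_valid_notice(text: str, max_length: int = 500) -> bool:
    """Hypothetical unit under test: accept only non-empty notices within a length limit."""
    return bool(text.strip()) and len(text) <= max_length

class NoticeUnitTest(unittest.TestCase):
    # Unit level: the component is tested on its own, covering valid,
    # invalid, and unexpected input conditions.
    def test_valid_notice(self):
        self.assertTrue(is_valid_notice("Exam postponed to Monday"))

    def test_blank_notice_rejected(self):
        self.assertFalse(is_valid_notice("   "))

    def test_overlong_notice_rejected(self):
        self.assertFalse(is_valid_notice("x" * 501))

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NoticeUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

At the integration and system levels, the same style of test would instead drive several combined modules (e.g. the app, the cloud server and the display) rather than a single function.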

5.2.1 Test Cases


Name                               | Input            | Output
-----------------------------------|------------------|---------------------------
User Registration                  | User details     | Successfully created
User Login                         | User credentials | Successful login
User login to view and write posts | Normal login     | Select and report accident

Test Case 1 (User Registrations):

If the user is new to the portal, then he/she has to register with the portal in order to
view and write posts in the portal.

The user has to fill in certain basic details in order to register. The user details will be
stored in the database.

Test Case 2 (Login Validation):

If the user does not enter any details and directly clicks the login button, then he/she
cannot log into the application.

If the user credentials given during a login attempt do not match the details present in
the database, then he/she cannot log into the application.

If the user credentials are correct, the login will succeed.
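The three outcomes of Test Case 2 can be sketched as a single validation routine. The function and the in-memory credential store below are hypothetical stand-ins for the portal's actual database lookup:

```python
# Hypothetical in-memory stand-in for the user table in the database.
REGISTERED_USERS = {"student1": "secret123"}

def attempt_login(username: str, password: str) -> str:
    """Return a status string mirroring the three outcomes in Test Case 2."""
    if not username or not password:
        return "missing details"       # login clicked with empty fields
    if REGISTERED_USERS.get(username) != password:
        return "invalid credentials"   # no match with details in the database
    return "login success"             # credentials are correct

print(attempt_login("", ""))                   # missing details
print(attempt_login("student1", "wrong"))      # invalid credentials
print(attempt_login("student1", "secret123"))  # login success
```

Each call corresponds to one branch of the test case, so all three conditions can be exercised by a short automated test rather than manual checks.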

Test Case 3 (User Functionalities):

After a successful login, the portal navigates to the home screen.

The functionalities are:

Entering the location of the accident

Giving the number of casualties

Entering the detailed description of the accident

Giving the date and time of the accident

Retrieving the information

CHAPTER VI

6.1 OUTPUT SCREENS

CONCLUSION
The project was successfully completed within the allotted time span. Every effort has been
made to present the system in a user-friendly manner. All the activities give the user who
interacts with the system the feeling of an easy walkthrough. All the disadvantages of the
existing system have been overcome by the present "Smart Circular" system, which has been
successfully implemented at the client's location. A trial run of the system has been made and
is giving good results.

The system has been developed with attractive dialogs, and the entire user interface is
attractive and user-friendly, meeting all the requirements laid down by the client initially. Thus
a user with only minimal knowledge of computers can easily work with the system.
