
A MOBILE CLOUD COMPUTING SYSTEM FOR EMERGENCY

MANAGEMENT (AUTOMATIC ACCIDENTS)

ABSTRACT

Emergency management systems deal with the dynamic processing of data, where
response teams must continuously adapt to the changing conditions at the scene of
the emergency. Response teams must make critical decisions in highly demanding situations
using large volumes of sensor data. Mobile devices have limited processing, storage, and battery
resources; therefore, the sensed data from the scene of the emergency must be transmitted and
processed quickly using the best available networks and clouds. Mobile cloud computing (MCC)
is expected to play a critical role in the computation and storage offloading of sensor data to the
best available clouds. However, applications running on mobile devices using clouds and
heterogeneous access networks such as Wi-Fi and 3G are prone to
unpredictable cloud workloads, network congestion, and handoffs. This article presents M2C2,
a system for mobility management in MCC that supports mechanisms for multihoming, cloud and
network probing, and cloud and network selection. Through a prototype implementation and
experiments, the authors show that M2C2 supports mobility efficiently in MCC scenarios such
as emergency management.
1. INTRODUCTION

During the last decades, the total number of vehicles on our roads has experienced a
remarkable growth, making traffic density higher and increasing the drivers' attention
requirements. The immediate effect of this situation is the dramatic increase of traffic accidents
on the road, representing a serious problem in most countries. As an example, 2,478 people died
on Spanish roads in 2010, which means one death for every 18,551 inhabitants, and 34,500 people
in the whole European Union died as a result of a traffic accident in 2009. To reduce the number
of road fatalities, vehicular networks will play an increasing role in the Intelligent Transportation
Systems (ITS) area. Most ITS applications, such as road safety, fleet management, and navigation,
will rely on data exchanged between the vehicle and the roadside infrastructure (V2I), or even
directly between vehicles (V2V).

The integration of sensing capabilities on board vehicles, along with peer-to-peer
mobile communication among vehicles, forecasts significant improvements in terms of safety in
the near future. Before arriving at the zero-accident objective in the long term, a fast and efficient
rescue operation during the hour following a traffic accident (the so-called Golden Hour)
significantly increases the probability of survival of the injured, and reduces the injury severity.
Hence, to maximize the benefits of using communication systems between vehicles, the
infrastructure should be supported by intelligent systems capable of estimating the severity of
accidents, and automatically deploying the actions required, thereby reducing the time needed to
assist injured passengers.

Many of the manual decisions taken nowadays by emergency services are based
on incomplete or inaccurate data, which may be replaced by automatic systems that adapt to the
specific characteristics of each accident. A preliminary assessment of the severity of the accident
will help emergency services to adapt the human and material resources to the conditions of the
accident, with the consequent improvement in assistance quality.
In this paper, we take advantage of the use of vehicular networks to collect precise information
about road accidents that is then used to estimate the severity of the collision. We propose an
estimation based on data mining classification algorithms, trained using historical data about
previous accidents. Our proposal does not focus on directly reducing the number of accidents,
but on improving post-collision assistance.

ARCHITECTURE OVERVIEW
Figure 1 presents an overview of the vehicular architecture used to develop our system.
The proposed system consists of several components with different functions. Firstly, vehicles
should incorporate an On-Board Unit (OBU) responsible for: (i) detecting when there has been a
potentially dangerous impact for the occupants, (ii) collecting available information coming from
sensors in the vehicle, and (iii) communicating the situation to a Control Unit (CU) that will
accordingly address the handling of the warning notification.
Fig. 1. Architecture of our proposed system for automatic accident notification and assistance
using vehicular networks

Next, the notification of the detected accidents is made through a combination of both
V2V and V2I communications. Finally, the destination of all the collected information is the
Control Unit; it will handle the warning notification, estimating the severity of the accident, and
communicating the incident to the appropriate emergency services. The OBU definition is crucial
for the proposed system. This device must be technically and economically feasible, as its
adoption in a wide range of vehicles could become massive in the near future. In addition, this
system should be open to future software updates.

Although the design of the hardware to be included in vehicles initially consisted of
special-purpose systems, this trend is heading towards general-purpose systems because of the
constant inclusion of new services. The information exchange between the OBUs and the CU is
made through the Internet, either through other vehicles acting as Internet gateways (via UMTS,
for example), or by reaching infrastructure units (Road-Side Units, RSUs) that provide this
service. If the vehicle does not get direct access to the CU on its own, it can generate messages to
be broadcast by nearby vehicles until they reach one of the aforementioned communication paths.
These messages, when disseminated among the vehicles in the area where the accident took place,
also serve the purpose of alerting drivers traveling to the accident area about the state of the
affected vehicle, and its possible interference with the normal traffic flow.

Our proposed architecture provides: (i) direct communication between the vehicles
involved in the accident, (ii) automatic sending of a data file containing important information
about the accident to the Control Unit, and (iii) a preliminary and automatic assessment of the
damage to the vehicle and its occupants, based on the information coming from the involved
vehicles, and a database of accident reports. According to the reported information and the
preliminary accident estimation, the system will alert the required rescue resources to optimize
the accident assistance.
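As an illustration of item (ii), the data file sent to the Control Unit can be thought of as a small structured packet. The following Python sketch is purely hypothetical: the field names and the JSON serialization are assumptions for illustration, since the paper does not specify a packet format (and the project itself targets C#).

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch of the accident data packet an OBU might forward to the
# Control Unit; field names are illustrative, not taken from the paper.
@dataclass
class AccidentReport:
    vehicle_id: str
    timestamp: float        # Unix time of the detected impact
    latitude: float         # from the on-board GPS receiver
    longitude: float
    speed_kmh: float        # speed just before the impact
    airbag_deployed: bool
    impact_direction: str   # "front", "side", or "rear-end"

    def to_packet(self) -> str:
        """Serialize to JSON for V2V/V2I forwarding to the Control Unit."""
        return json.dumps(asdict(self))

report = AccidentReport("VH-1042", 1700000000.0, 40.4168, -3.7038,
                        87.5, True, "front")
packet = report.to_packet()
print(packet)
```

A self-describing text format such as JSON keeps the packet readable while it is relayed by intermediate vehicles, at the cost of a few extra bytes over a binary encoding.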
ON-BOARD UNIT STRUCTURE
The main objective of the proposed OBU lies in obtaining the available information from
sensors inside the vehicle to determine when a dangerous situation occurs, and reporting that
situation to the nearest Control Unit, as well as to other nearby vehicles that may be affected.
Figure 2 shows the OBU system, which relies on the interaction between sensors, the data
acquisition unit, the processing unit, and wireless interfaces:

• In-vehicle sensors. They are required to detect accidents and provide information about
their causes. Accessing the data from in-vehicle sensors is possible nowadays using the On-Board
Diagnostics (OBD) standard interface, which serves as the entry point to the vehicle's internal
bus. This standard has been mandatory in Europe and the USA since 2001. This encompasses the
majority of the vehicles in the current automotive fleet, since the percentage of compatible
vehicles will keep growing as very old vehicles are replaced by new ones.

Fig. 2. On-Board Unit structure diagram

• Data Acquisition Unit (DAU). This device is responsible for periodically collecting data from
the sensors available in the vehicle (airbag triggers, speed, fuel levels, etc.), converting them to
a common format, and providing the collected data set to the OBU Processing Unit.
• OBU Processing Unit. It is in charge of processing the data coming from sensors, determining
whether an accident occurred, and notifying dangerous situations to nearby vehicles, or directly
to the Control Unit. The information from the DAU is gathered, interpreted, and used to determine
the vehicle's current status. This unit must also have access to a positioning device (such as a
GPS receiver), and to different wireless interfaces, thereby enabling communication between the
vehicle and the remote control center.
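The decision the OBU Processing Unit has to make can be sketched as a simple rule over the DAU readings. The sensor names and thresholds below are assumptions for illustration; the paper does not specify a concrete detection rule.

```python
# Illustrative accident-detection rule for the OBU Processing Unit; the
# thresholds and sensor names are assumptions, not values from the paper.
def accident_detected(airbag_triggered: bool,
                      deceleration_g: float,
                      speed_drop_kmh: float) -> bool:
    """Flag a potentially dangerous impact from DAU sensor readings."""
    # An airbag deployment is treated as a definite accident signal.
    if airbag_triggered:
        return True
    # Otherwise require both a strong deceleration and a sudden speed drop.
    return deceleration_g > 4.0 and speed_drop_kmh > 30.0

print(accident_detected(False, 5.2, 45.0))  # True
print(accident_detected(False, 1.1, 5.0))   # False
```

In practice, such a rule would be tuned against recorded crash data so that hard braking alone does not trigger false notifications.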

CONTROL UNIT STRUCTURE

The Control Unit (CU) is associated with the response center in charge of receiving
notifications of accidents from the OBUs installed in vehicles. In particular, the Control Unit is
responsible for dealing with warning messages, retrieving information from them, and notifying
the emergency services about the conditions under which the accident occurred.
The KDD approach can be defined as the nontrivial process of identifying valid, novel,
potentially useful, and understandable patterns from existing data.

The KDD process begins with the understanding of the application-specific domain and
the necessary prior knowledge. After the acquisition of initial data, a series of phases are
performed:

1) Selection: This phase determines the information sources that may be useful, and then it
transforms the data into a common format.
2) Preprocessing: In this stage, the selected data must be cleaned (noise reduction or modeling)
and preprocessed (missing data handling).
3) Transformation: This phase is in charge of performing a reduction and projection of the data
to find relevant features that represent the data depending on the purpose of the task.
4) Data mining: This phase basically selects mining algorithms and selection methods which will
be used to find patterns in data.
5) Interpretation/Evaluation: Finally, the extracted patterns must be interpreted. This step may
also include displaying the patterns and models, or displaying the data taking such models into
account.
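The five phases above can be sketched end-to-end on toy data. The pure-Python sketch below is only illustrative: the records, the two features, and the 1-nearest-neighbour classifier are all assumptions, not the paper's actual Weka workflow.

```python
# A minimal, pure-Python sketch of the five KDD phases applied to toy
# accident records (everything below, including the records, is illustrative).
records = [
    {"speed": 30,   "airbag": 0, "impact": "rear-end", "severity": "minor"},
    {"speed": 95,   "airbag": 1, "impact": "front",    "severity": "severe"},
    {"speed": 60,   "airbag": 1, "impact": "side",     "severity": "moderate"},
    {"speed": None, "airbag": 0, "impact": "front",    "severity": "minor"},
]

# 1) Selection + 2) Preprocessing: keep only complete instances.
clean = [r for r in records if all(v is not None for v in r.values())]

# 3) Transformation: project each record onto two numeric features.
def features(r):
    return (r["speed"] / 100.0, float(r["airbag"]))

# 4) Data mining: a trivial 1-nearest-neighbour classifier.
def classify(query, train):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda r: dist(features(r), query))["severity"]

# 5) Interpretation/Evaluation: classify a new high-speed, airbag-on crash.
print(classify((0.9, 1.0), clean))  # severe
```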

Fig. 3. Control Unit modular structure.

We propose to develop a complete KDD process, starting by selecting a useful data
source containing instances of previous accidents. The data collected will be structured and
preprocessed to ease the work to be done in the transformation and data mining phases. The final
step will consist of interpreting the results, and assessing their utility for the specific task of
estimating the severity of road accidents. The phases of the KDD process will be performed
using the open-source Weka collection, which is a set of machine learning algorithms. Weka is
open-source software issued under the GNU General Public License which contains tools for data
pre-processing, classification, regression, clustering, association rules, and visualization.

We will deal with road accidents in two dimensions: (i) damage to the vehicle (indicating
the possibility of traffic problems or the need for cranes in the area of the accident), and (ii)
passenger injuries. These two dimensions seem to be related, since heavily damaged vehicles are
usually associated with low survival possibilities for the occupants. Consequently, we will use the
estimations obtained with our system about the damage to the vehicle to help in the prediction of
the occupants' injuries. Finally, our system will benefit from additional knowledge to improve its
accuracy, grouping accidents according to their degree of similarity.

We can use the criteria adopted in numerous studies about accidents, including tests
such as Euro NCAP, in which crashes are divided and analyzed separately depending on the
main direction of the impact registered during the collision. The following sections contain the
results of the different phases of our KDD proposal.

DATA ACQUISITION, SELECTION AND PREPROCESSING PHASES

Developing a useful algorithm to estimate accident severity requires historical data to
ensure that the criteria used are suitable and realistic. The National Highway Traffic Safety
Administration (NHTSA) maintains a database with information about road accidents which
began operating in 1988: the General Estimates System (GES). The data for this database is
obtained from a sample of real Police Accident Reports (PARs) collected across USA roads, and
it is made public as electronic data sets.

In the traffic accidents domain, the most relevant sets of information in GES are:
(i) Accident, which contains the crash characteristics and environmental conditions at
the time of the accident,
(ii) Vehicle, which refers to vehicles and drivers involved in the crash, and
(iii) Person, i.e., people involved in the crash.
We will integrate the data harvested during the year 2011 into two different self-built sets:
one for the vehicles and another one for the occupants. Using the data contained in the GES
database, we classify the damage to vehicles into three categories:

(i) Minor (the vehicle can be driven safely after the accident),
(ii) Moderate (the vehicle shows defects that make it dangerous to drive), and
(iii) Severe (the vehicle cannot be driven at all, and needs to be towed).

Focusing on passenger injuries, we will also use three different classes to determine
their severity level:

(i) No injury (unharmed passenger),
(ii) Non-incapacitating injury (the person has minor injuries that do not cause loss of
consciousness or prevent them from walking), and
(iii) Incapacitating or fatal injury (the occupants' wounds impede them from moving, or
they are fatal).
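Mapping raw database codes onto these two three-class target variables might look like the sketch below. The numeric codes are invented for illustration; they are not the actual GES encodings.

```python
# Hypothetical mapping from raw severity codes to the three vehicle-damage
# and three injury classes used in this work; the integer codes here are
# invented for illustration, not real GES values.
VEHICLE_DAMAGE = {0: "minor", 1: "minor", 2: "moderate",
                  3: "severe", 4: "severe"}
INJURY_LEVEL = {0: "no injury",
                1: "non-incapacitating", 2: "non-incapacitating",
                3: "incapacitating or fatal", 4: "incapacitating or fatal"}

def classify_record(damage_code: int, injury_code: int) -> tuple:
    """Collapse raw codes into the two target variables of the study."""
    return VEHICLE_DAMAGE[damage_code], INJURY_LEVEL[injury_code]

print(classify_record(3, 1))  # ('severe', 'non-incapacitating')
```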

After preprocessing the selected GES data, no noise or inaccuracies were detected, as all
the nominal and numerical attributes contained reasonable values. Due to the large number of
records available in the database, we decided to only use those accident records with all the
required information complete. After removing incomplete instances, our data sets consist of
14,227 full instances of accident reports (5,604 front crashes, 4,551 side crashes, and 4,072
rear-end crashes). These accidents represent different types of collisions in both urban and
inter-urban areas.

The distribution of accidents depending on the area is the following:

• Front collisions: 1,418 (25.3%) in urban area, and 4,186 (74.7%) in inter-urban area.
• Side collisions: 1,593 (35.0%) in urban area, and 2,958 (65.0%) in inter-urban area.
• Rear-end collisions: 1,613 (39.6%) in urban area, and 2,459 (60.4%) in inter-urban area.
Using these data, we obtain a total of 4,624 accidents in urban areas, which correspond
to 32.5% of the total accidents, and 9,603 accidents in inter-urban areas (67.5%).
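These totals and percentages can be verified directly from the per-collision counts listed above:

```python
# Check the urban/inter-urban totals and percentages from the listed counts.
counts = {
    "front":    {"urban": 1418, "inter-urban": 4186},
    "side":     {"urban": 1593, "inter-urban": 2958},
    "rear-end": {"urban": 1613, "inter-urban": 2459},
}
urban = sum(c["urban"] for c in counts.values())
inter = sum(c["inter-urban"] for c in counts.values())
total = urban + inter
print(urban, inter, total)            # 4624 9603 14227
print(round(100 * urban / total, 1))  # 32.5
print(round(100 * inter / total, 1))  # 67.5
```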

TRANSFORMATION PHASE

This phase consists of developing a reduction and projection of the data to find relevant
features that represent the characteristics of the data depending on the objective. We selected a
potential subset of variables which could be obtained from the on-board sensors of the vehicle or
auxiliary devices such as the GPS. Those variables include the type of vehicle, the speed just
before the accident, and the airbag status. Even if the GES database does not include information
about the measured accelerations, this information could be filled in using our proposed system
with data collected from notified accidents, and incorporated in future versions of the
classification algorithm.

Concerning passengers, there are specific characteristics for each person that are not
directly accessible, but might help to improve the prediction accuracy. We added two of these
personal variables to our data (age and sex), which will be used to study their relevance to the
injuries suffered. Weka provides a wide variety of feature selection algorithms. Among them, we
selected three of the most commonly used.
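As a rough stand-in for Weka's feature-selection step, the sketch below ranks candidate variables by absolute Pearson correlation with an ordinal severity label. The toy data and the correlation criterion are illustrative assumptions, not the three algorithms actually used in the study.

```python
import math

# Rank candidate variables by |Pearson correlation| with the severity label;
# a simple illustrative proxy for Weka's feature-selection algorithms.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy columns: speed and age vs. an ordinal severity label (0/1/2).
columns = {
    "speed": [30, 50, 70, 90, 110],
    "age":   [25, 61, 33, 47, 52],
}
severity = [0, 1, 1, 2, 2]

ranking = sorted(columns, key=lambda name: -abs(pearson(columns[name], severity)))
print(ranking)  # ['speed', 'age']
```

On this toy data, speed correlates much more strongly with severity than age, so it ranks first.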
2. LITERATURE SURVEY

Several approaches can be found in the existing literature with the objective of increasing
traffic safety through the use of telecommunication technologies, and also in the field of accident
severity estimation using historical data.
A. Using Telecommunication Technologies to Improve Traffic Safety
The U.S. Department of Transportation (DOT) developed some projects similar to ours with
the goal of improving traffic safety through the use of vehicular communication, based on testing
the effectiveness and safety benefits of wireless connected vehicle technology in real-world,
multimodal driving conditions.

Some preliminary results regarding the distance traveled by warning messages, the number
of relaying vehicles, and communication times in a real experiment in the streets of Los Angeles
(USA) in 2011 can be found. However, these experiments only include V2V communications and
the notification of dangerous situations between vehicles, whereas our system mainly concentrates
on improving the decision-making process that follows the occurrence of an accident. In addition,
the tests were performed using the IEEE 802.11b and 802.11g standards, instead of the IEEE
802.11p standard, which was specially designed to be used in vehicular environments.
There are also some projects that make use of ECG (electrocardiogram) sensors to
monitor the drivers' condition and detect possible health problems that could endanger traffic
safety. The authors propose a condition monitoring system to obtain physiological signals for
monitoring the car driver's health condition by means of ECG and PPG (photoplethysmogram)
sensors attached to the steering wheel. The signals concerning heart rate are transmitted to a
server PC via a Personal Area Network (PAN) for practical tests, being analyzed to detect
drowsiness and fatigue. The results obtained from this system could be used to inform nearby
vehicles about a dangerous driver status, but the system does not include notification of abnormal
statuses since only a PAN is considered.
Palantei et al. designed a wireless system for remotely monitoring heart pulses obtained
from ECG sensors. This system allows communication of the monitoring information from 50 to
250 meters away from the person, but it is not indicated how the signal is processed to classify
the status of the person. The monitoring system proposed includes non-intrusive active electrodes
installed on the seats of the vehicle. The data collected is sent through a wireless PAN, and the
processing of the data concerning heart rate variability in time and frequency allows determining
whether the driver is tired or stressed.
We can see that these approaches are too limited by the wireless technology used, which
provides a very short communication range. Moreover, they have only been used to determine the
fatigue or stress level, probably due to the infeasibility of finding real cases to test their efficiency
in estimating the injuries suffered after an accident. The integration of ECG sensors in modern
vehicles could be an excellent opportunity to collect information about health signs after the
occurrence of an accident, since our proposed architecture would allow the notification of the
gathered data to the Control Unit for further processing and classification by means of intelligent
algorithms.
B. Previous Approaches towards Accident Severity Estimation using Data Mining
Despite the interest that may arise from understanding the influence of various factors on road
accidents, the number of works about this topic in the literature is not particularly large. In
addition, most attempts to carry out a data mining process related to traffic accidents only
considered data from a single city or a very small area, making the results only slightly
representative. Several works are based on data obtained from the Traffic Office of Ethiopia,
since this country presents one of the largest numbers of accidents per capita. Beshah and Hill
used data from 18,288 accidents around Addis Ababa as the basic data set. This study uses Naïve
Bayes, decision trees, and k-nearest neighbors (KNN) algorithms to classify the data using a
cross-validation methodology, with accuracy values close to 80%.
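The cross-validation methodology mentioned above can be sketched with a deliberately trivial majority-class predictor standing in for Naïve Bayes, decision trees, or KNN; the labels and the fold scheme below are illustrative only.

```python
from collections import Counter

# k-fold cross-validation with a trivial majority-class "classifier";
# the data and the classifier are illustrative stand-ins only.
def k_fold_accuracy(labels, k=5):
    folds = [labels[i::k] for i in range(k)]   # deterministic striped split
    accs = []
    for i in range(k):
        train = [y for j, f in enumerate(folds) if j != i for y in f]
        test = folds[i]
        # "Train": memorize the most common class in the training folds.
        majority = Counter(train).most_common(1)[0][0]
        accs.append(sum(y == majority for y in test) / len(test))
    return sum(accs) / k

labels = ["minor"] * 8 + ["severe"] * 2
print(round(k_fold_accuracy(labels), 2))  # 0.8
```

The point of the fold loop is that every instance is used exactly once for testing, so the reported accuracy is not inflated by evaluating on the training data.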
However, the authors only provided estimations for the whole accident, not for single
occupants. Data from Ethiopia was also used to build regression tree models for accident
classification. Only 13 out of 36 variables available in the data were used to build the
classification models, but the selection process was not shown, and again only estimations about
the whole accident were provided. The area of South Korea was also selected to develop
classification models based on artificial neural networks, decision trees, and logistic regression.
The data set involved 11,564 accidents, and the authors concluded that the different classification
algorithms provide similar results in terms of accuracy, with the use of protection devices, such
as the seat belt and the airbag, being the most relevant factors in classifying accidents.
This work was extended using ensemble methods (i.e., multiple models to obtain better
predictive performance than could be obtained from any of the constituent models) combined
with a prior assignment of instances through clustering, attaching a different classification model
to each cluster, which produced a better class assignment. More recently, Chong et al. selected
data from all over the United States obtained during the 1995-2000 period to propose a set of
models based on artificial neural networks, decision trees, and Support Vector Machines (SVMs).
All the classification models presented similar accuracy results, and they were highly
effective at recognizing fatal injuries. Finally, some authors have focused on the characteristics
of specific road segments, instead of using the data from individual vehicles. Clustering of
accident hotspots was performed by Anderson in order to determine effective strategies for the
reduction of high-density accident areas. The authors studied the spatial patterns of injury related
to road accidents in London (UK), and they found several hotspots with relevant significance
using K-means clustering.
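A minimal pure-Python K-means sketch of the hotspot-clustering idea follows; the coordinates are invented, not London accident data, and the deterministic initialisation is a simplification of the usual random seeding.

```python
# Minimal 2-D K-means sketch for accident-hotspot clustering;
# points and initialisation are illustrative simplifications.
def kmeans(points, k, iters=20):
    centers = points[:k]  # deterministic initialisation from the first k points
    for _ in range(iters):
        # Assignment step: put each point in the nearest centre's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                            (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each centre to its cluster's mean.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Two obvious accident "hotspots" around (0, 0) and (10, 10).
pts = [(0, 0), (0.5, 0.2), (-0.3, 0.1), (10, 10), (10.2, 9.8), (9.9, 10.1)]
print(sorted(round(c[0]) for c in kmeans(pts, 2)))  # [0, 10]
```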

Nayak et al. used a road-based approach for modeling the crash proneness of road
segments using available road and crash attributes, classifying the roads depending on their
"crash proneness". They also used a prior clustering of the accidents with similar features, and a
consequent classification of the data by means of decision trees. However, they did not establish
different severity levels for the accidents studied. From previous works, we detected significant
shortcomings when attempting to combine their results with vehicular networks, since existing
works about estimating the severity of road accidents have not been used to improve the
assistance to injured passengers. All the above papers used a whole variety of attributes to build
the classification models, whereas only some of them could be effectively extracted from the
vehicle itself (e.g., the driver's inebriation level). In addition, none of them used an adequate
feature selection algorithm to select the optimal variable subset.

Finally, some of the models are extensively used (decision trees), while other interesting
methods have received minor attention (SVMs, and Bayesian networks). To the best of our
knowledge, none of these approaches has been implemented and tested in a real environment,
since they only make use of historical data. It is also noteworthy that existing proposals in the
literature trying to estimate the severity of a traffic accident do not develop a complete KDD
process. In fact, the only phase of the KDD process that has received widespread attention is the
data mining phase, while the rest has been overlooked or summarized as much as possible.
Although data mining is a very important phase, the results obtained when omitting the previous
phases can lose their interest or utility.

Howard R. Champion, Research Professor of Surgery, Uniformed Services University of the
Health Sciences, Principal Investigator, describes how the National Highway Traffic Safety
Administration (NHTSA) developed software called URGENCY for use with Automatic
Crash Notification (ACN) technologies to improve triage, transport, and treatment
decision-making. The aim is to identify, instantly and automatically, the approximately 250,000
crashed vehicles with serious injuries occurring each year from among the 28,000,000 crashed
vehicles with minor or no injuries.
Dr. Lelitha Vanajakshi describes Intelligent Transportation Systems (ITS) as an established
route to resolve, or at least minimize, traffic problems. ITS encompasses all modes of
transportation (air, sea, road, and rail) and intersects various components of each mode: vehicles,
infrastructure, communication, and operational systems. Various countries have developed
strategies and techniques, based on their geographic, cultural, socio-economic, and environmental
background, to integrate the various components into an interrelated system.

In general, any ITS application uses a Traffic Management Centre (TMC) where
data is collected, analysed, and combined with other operational and control concepts to manage
complex transportation problems. Typically, several agencies share the administration of
transport infrastructure through a network of traffic operation centres. There is often a localized
distribution of data and information, and the centres adopt different criteria to achieve the goals of
traffic management. This inter-dependent autonomy in operations and decision-making is
essential because of the heterogeneity of demand and performance characteristics of interacting
subsystems.
3. SYSTEM ANALYSIS

FEASIBILITY STUDY

A feasibility study is a process which defines exactly what a project is and what strategic issues
need to be considered to assess its feasibility, or likelihood of succeeding. Feasibility studies are
useful both when starting a new business and when identifying a new opportunity for an existing
business. Ideally, the feasibility study process involves making rational decisions about a number
of enduring characteristics of a project, including:

 Technical feasibility - do we have the technology? If not, can we get it?
 Operational feasibility - do we have the resources to build the system? Will the system be
acceptable? Will people use it?
 Economic feasibility - are the benefits greater than the costs?

TECHNICAL FEASIBILITY

Technical feasibility is concerned with the existing computer system (hardware,
software, etc.) and to what extent it can support the proposed addition. For example, if particular
software will work only on a computer with a higher configuration, additional hardware is
required. This involves financial considerations, and if the budget is a serious constraint, then the
proposal will be considered not feasible.

OPERATIONAL FEASIBILITY

Operational feasibility is a measure of how well a proposed system solves the problems
and takes advantage of the opportunities identified during scope definition, and how it satisfies
the requirements identified in the requirements analysis phase of system development.

ECONOMIC FEASIBILITY

Economic analysis is the most frequently used method for evaluating the effectiveness of
a candidate system. More commonly known as cost/benefit analysis, the procedure is to
determine the benefits and savings that are expected from a candidate system and compare them
with its costs. If the benefits outweigh the costs, then the decision is made to design and
implement the system.

2.2 EXISTING SYSTEM:


 A preliminary assessment of the severity of the accident will help emergency services to
adapt the human and material resources to the conditions of the accident, with the
consequent assistance quality improvement.
 Many of the manual decisions taken nowadays by emergency services are based on
incomplete or inaccurate data, which may be replaced by automatic systems that adapt to
the specific characteristics of each accident.
 Before arriving at the zero-accident objective in the long term, a fast and efficient
rescue operation during the hour following a traffic accident significantly increases the
probability of survival of the injured, and reduces the injury severity.
 To maximize the benefits of using communication systems between vehicles, the
infrastructure should be supported by intelligent systems capable of estimating the
severity of accidents, and automatically deploying the actions required, thereby reducing
the time needed to assist injured passengers.

DISADVANTAGES:
1. Emergency services are based on incomplete or inaccurate data
2. Dramatic increase of traffic accidents

2.3 PROPOSED SYSTEM:


 This project directly estimates the accident severity by comparing the obtained data with
information coming from previous accidents stored in a database.
 This approach collects information available when a traffic accident occurs, which is
captured by sensors installed on board the vehicles.
 The data collected are structured in a packet, and forwarded to a remote Control Unit
through a combination of V2V and V2I wireless communication.
 Based on this information, our project determines the most suitable set of resources in a
rescue operation.
 It considers the information obtained just when the accident occurs, to estimate its
severity immediately.
 It is limited by the data automatically retrievable, omitting other information, e.g., about
the driver's degree of attention, drowsiness, etc.

ADVANTAGES:
1. A fast and accurate estimation of the severity of the accident
2. Reduces the number of road fatalities through the use of vehicular networks
4. SYSTEM SPECIFICATION

Hardware Requirements:

 Hard Disk : 1TB


 Ram : 4GB.

Software Requirements:

 Operating system : Windows 7 Ultimate.


 Coding Language : C#
 Front-End : Visual Studio 2010 Professional.
 Data Base : SQL Server 2008.

SOFTWARE DESCRIPTION

ABOUT THE SOFTWARE

What is .NET?

Many people reckon that it is Microsoft's way of controlling the Internet, which is
false. .NET is Microsoft's strategy of software that provides services to people any time, any
place, on any device. A more accurate definition is that .NET is an XML Web Services platform
which allows us to build rich .NET applications, lets users interact with the Internet using a wide
range of smart devices (tablet devices, Pocket PCs, web phones, etc.), and allows developers to
build and integrate Web Services.

What is .NET Built On?


.NET is built on the Windows Server System to take major advantage of the OS, and it
comes with a host of different servers which allow for building, deploying, managing, and
maintaining Web-based solutions. The Windows Server System is designed with performance as
a priority, and it provides scalability, reliability, and manageability for the global, Web-enabled
enterprise. Since the initial announcement of the .NET Framework, it has taken on many new and
different meanings to different people.

To a developer, .NET means a great environment for creating robust distributed
applications. To an IT manager, .NET means simpler deployment of applications to end users,
tighter security, and simpler management.

To a CTO or CIO, .NET means happier developers using state-of-the-art development
technologies and a smaller bottom line. To understand why all these statements are true, you need
to get a grip on what the .NET Framework consists of, and how it is truly a revolutionary step
forward for application architecture, development, and deployment.
.NET Framework
Now that you are familiar with the major goals of the .NET Framework, let's briefly
examine its architecture. As you can see in Figure 1-2, the .NET Framework sits on top of the
operating system, which can be a few different flavors of Windows, and consists of a number of
components. .NET is essentially a system application that runs on Windows.

Conceptually, the CLR and the JVM are similar in that they are both runtime
infrastructures that abstract the underlying platform differences. However, while the JVM
officially supports only the Java language, the CLR supports any language that can be
represented in its Common Intermediate Language (CIL). The JVM executes bytecode, so it
could, in principle, support many languages, too.
Another conceptual difference between the two infrastructures is that Java code runs on any platform with a JVM, whereas .NET code runs only on platforms that support the CLR. In April 2003, the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC) recognized a functional subset of the CLR, known as the Common Language Infrastructure (CLI), as an international standard.
This development, initiated by Microsoft and developed by ECMA International, a
European standards organization, opens the way for third parties to implement their own versions
of the CLR on other platforms.
The layer on top of the CLR is a set of framework base classes. This set of classes is
similar to the set of classes found in STL, MFC, ATL, or Java. These classes support
rudimentary input and output functionality, string manipulation, security management, network
communications, thread management, text management, reflection functionality, collections
functionality, as well as other functions.
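As a quick illustration, a few of these base-class services (string manipulation, collections, and reflection) can be exercised directly. Everything below is standard BCL functionality; nothing project-specific is assumed:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// String manipulation via the framework base classes.
var sb = new StringBuilder();
sb.Append("Hello, ").Append(".NET");
Console.WriteLine(sb.ToString());          // Hello, .NET

// Collections functionality.
var primes = new List<int> { 2, 3, 5, 7 };
primes.Add(11);
Console.WriteLine(primes.Count);           // 5

// Reflection functionality: inspect a type at runtime.
Console.WriteLine(typeof(StringBuilder).FullName);  // System.Text.StringBuilder
```

The same classes are available from every .NET language, which is the point of putting them in the framework rather than in each compiler's runtime library.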
On top of the framework base classes is a set of classes that extend the base classes to
support data management and XML manipulation. These classes, called ADO.NET, support
persistent data management—data that is stored on backend databases. Alongside the data
classes, the .NET Framework supports a number of classes to let you manipulate XML data and
perform XML searching and XML translations.
Classes in three different technologies (web services, Web Forms, and Windows Forms) extend the framework base classes and the data and XML classes. Web services include a number of classes that support the development of lightweight distributed components, which work even in the face of firewalls and NAT software. These components support plug-and-play across the Internet, because web services employ standard HTTP and SOAP.
Web Forms, the key technology behind ASP.NET, include a number of classes that allow you to rapidly develop web Graphical User Interface (GUI) applications. If you're currently developing web applications with Visual InterDev, you can think of Web Forms as a facility that allows you to develop web GUIs.
Windows Forms support a set of classes that allow you to develop native Windows GUI applications. You can think of these classes collectively as a much better version of the MFC in C++, because they support easier and more powerful GUI development and provide a common, consistent interface that can be used in all languages.
Features of .NET
Microsoft .NET is a set of Microsoft software technologies for rapidly building
and integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There's no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript. The .NET Framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio .NET and Windows .NET Server, for instance) and services (like Passport, .NET My Services, and so on).

III. The Common Language Runtime


At the heart of the .NET Framework is the common language runtime. The common
language runtime is responsible for providing the execution environment that code written in
a .NET language runs under.
The common language runtime can be compared to the Visual Basic 6 runtime, except that
the common language runtime is designed to handle all .NET languages, not just one, as the
Visual Basic 6 runtime did for Visual Basic 6. The following list describes some of the benefits
the common language runtime gives you:
 Automatic memory management
 Cross-language debugging
 Cross-language exception handling
 Full support for component versioning
 Access to legacy COM components
 XCOPY deployment
 Robust security model
You might expect all those features, but this combination has never before been possible using Microsoft development tools.
IV. The .NET Framework Class Library

The second most important component of the .NET Framework is the Framework Class Library (FCL). The runtime executes your code, but to write that code, you need a foundation of available classes to access the resources of the operating system, database server, or file server. The FCL is made up
of a hierarchy of namespaces that expose classes, structures, interfaces, enumerations, and
delegates that give you access to these resources. The namespaces are logically defined by
functionality. For example, the System.Data namespace contains all the functionality available
to accessing databases. This namespace is further broken down into System.Data.SqlClient,
which exposes functionality specific to SQL Server, and System.Data.OleDb, which exposes
specific functionality for accessing OLE DB data sources. The bounds of a namespace aren't
necessarily defined by specific assemblies within the FCL; rather, they're focused on
functionality and logical grouping. In total, there are more than 20,000 classes in the FCL, all
logically grouped in a hierarchical manner. Figure 1.8 shows where the FCL fits into the .NET
Framework and the logical grouping of namespaces.

To use an FCL class in your application, you use the Imports statement in Visual Basic .NET or the using statement in C#. When you reference a namespace in Visual Basic .NET
or C#, you also get the convenience of auto-complete and auto-list members when you access the
objects' types using Visual Studio .NET. This makes it very easy to determine what types are
available for each class in the namespace you're using. As you'll see over the next several weeks,
it's very easy to start coding in Visual Studio .NET.
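For example, in C# the using directive plays the role that Imports plays in Visual Basic .NET. The path segments below are arbitrary examples:

```csharp
// The "using" directive brings a namespace's types into scope.
using System;
using System.IO;   // Path, File, Directory, and other I/O types

// With System.IO imported, Path can be used without qualification:
string combined = Path.Combine("logs", "app.txt");
Console.WriteLine(combined);

// Without the directive, the fully qualified name still works:
string same = System.IO.Path.Combine("logs", "app.txt");
Console.WriteLine(combined == same);   // True
```

Both forms resolve to the same type; the directive is purely a naming convenience.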
The Structure of a .NET Application

To understand how the common language runtime manages code execution, you must
examine the structure of a .NET application. The primary unit of a .NET application is the
assembly. An assembly is a self-describing collection of code, resources, and metadata. The
assembly manifest contains information about what is contained within the assembly. The
assembly manifest provides:

 Identity information, such as the assembly’s name and version number
 A list of all types exposed by the assembly
 A list of other assemblies required by the assembly
 A list of code access security instructions, including permissions required by the assembly and permissions to be denied the assembly

Each assembly has one and only one assembly manifest, and it contains all the description
information for the assembly. However, the assembly manifest can be contained in its own file or
within one of the assembly’s modules. An assembly contains one or more modules. A module
contains the code that makes up your application or library, and it contains metadata that
describes that code. When you compile a project into an assembly, your code is converted from
high-level code to IL. Because all managed code is first converted to IL code, applications
written in different languages can easily interact. For example, one developer might write an
application in Visual C# that accesses a DLL in Visual Basic .NET. Both resources will be
converted to IL modules before being executed, thus avoiding any language-incompatibility
issues.

Each module also contains a number of types. Types are templates that describe a set of
data encapsulation and functionality. There are two kinds of types: reference types (classes) and
value types (structures). These types are discussed in greater detail in Lesson 2 of this chapter.
Each type is described to the common language runtime in the assembly manifest. A type can
contain fields, properties, and methods, each of which should be related to a common
functionality.

For example, you might have a class that represents a bank account. It contains fields,
properties, and methods related to the functions needed to implement a bank account. A field
represents storage of a particular type of data. One field might store the name of an account
holder, for example. Properties are similar to fields, but properties usually provide some kind of
validation when data is set or retrieved. You might have a property that represents an account
balance.
When an attempt is made to change the value, the property can check to see if the
attempted change is greater than a predetermined limit. If the value is greater than the limit, the
property does not allow the change. Methods represent behavior, such as actions taken on data
stored within the class or changes to the user interface. Continuing with the bank account
example, you might have a Transfer method that transfers a balance from a checking account to a
savings account, or an Alert method that warns users when their balances fall below a
predetermined level.
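The bank-account discussion above can be sketched in C#. The BankAccount class, its Limit constant, and the validation rules are illustrative inventions for this example, not part of any framework:

```csharp
using System;

// Exercise the hypothetical account type defined below.
var checking = new BankAccount { Holder = "A. Customer" };
checking.Deposit(500m);
checking.Withdraw(120m);
Console.WriteLine(checking.Balance);   // 380

// A hypothetical type: fields for storage, properties with validation,
// and methods for behavior, as described in the text.
class BankAccount
{
    private decimal balance;               // field: raw data storage
    private const decimal Limit = 1000m;   // assumed per-transaction limit

    public string Holder { get; set; } = "";   // simple property

    public decimal Balance => balance;     // read-only property over the field

    public void Deposit(decimal amount)    // method: validated behavior
    {
        if (amount <= 0 || amount > Limit)
            throw new ArgumentOutOfRangeException(nameof(amount));
        balance += amount;
    }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > balance)
            throw new ArgumentOutOfRangeException(nameof(amount));
        balance -= amount;
    }
}
```

The property rejects out-of-range changes exactly as the text describes: callers can read Balance freely, but every change must pass through a validating method.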

Compilation and Execution of a .NET Application

When you compile a .NET application, it is not compiled to binary machine code; rather,
it is converted to IL. This is the form that your deployed application takes—one or more
assemblies consisting of executable files and DLL files in IL form. At least one of these
assemblies will contain an executable file that has been designated as the entry point for the
application.

When execution of your program begins, the first assembly is loaded into memory. At
this point, the common language runtime examines the assembly manifest and determines the
requirements to run the program. It examines security permissions requested by the assembly and
compares them with the system’s security policy. If the system’s security policy does not allow
the requested permissions, the application will not run. If the application passes the system’s
security policy, the common language runtime executes the code. It creates a process for the
application to run in and begins application execution.

When execution starts, the first bit of code that needs to be executed is loaded into
memory and compiled into native binary code from IL by the common language runtime’s Just-
In-Time (JIT) compiler. Once compiled, the code is executed and stored in memory as native
code. Thus, each portion of code is compiled only once when an application executes. Whenever
program execution branches to code that has not yet run, the JIT compiler compiles it ahead of
execution and stores it in memory as binary code. This way, application performance is
maximized because only the parts of a program that are executed are compiled.
The .NET Framework base class library contains the base classes that provide many of the services and objects you need when writing your applications. The class library is organized into namespaces. A namespace is a logical grouping of types that perform related functions. For example, the System.Windows.Forms namespace contains all the types that make up Windows Forms and the controls used in those forms.

Namespaces are logical groupings of related classes. The namespaces in the .NET base
class library are organized hierarchically. The root of the .NET Framework is the System
namespace. Other namespaces can be accessed with the period operator. A typical namespace
construction appears as follows:

System
System.Data
System.Data.SqlClient

The first example refers to the System namespace. The second refers to the System.Data
namespace. The third example refers to the System.Data.SqlClient namespace.

The following table introduces some of the more commonly used .NET base class namespaces.

Table: Representative .NET Namespaces

System - The root for many of the low-level types required by the .NET Framework. It is the root for primitive data types as well, and it is the root for all the other namespaces in the .NET base class library.
System.Collections - Contains classes that represent a variety of different container types, such as ArrayList, SortedList, Queue, and Stack. You can also find abstract classes, such as CollectionBase, which are useful for implementing your own collection functionality.
System.ComponentModel - Contains classes involved in component creation and containment, such as attributes, type converters, and license providers.
System.Data - Contains classes required for database access and manipulation, as well as additional namespaces used for data access.
System.Data.Common - Contains a set of classes that are shared by the .NET managed data providers.
System.Data.OleDb - Contains classes that make up the managed data provider for OLE DB data access.
System.Data.SqlClient - Contains classes that are optimized for interacting with Microsoft SQL Server.
System.Drawing - Exposes GDI+ functionality and provides classes that facilitate graphics rendering.
System.IO - Contains types for handling file system I/O.
System.Math - Home to common mathematics functions such as extracting roots and trigonometry.
System.Reflection - Provides support for obtaining information about, and dynamic creation of, types at runtime.
System.Security - Home to types dealing with permissions, cryptography, and code access security.
System.Threading - Contains classes that facilitate the implementation of multithreaded applications.
System.Windows.Forms - Contains types involved in creating standard Windows applications. Classes that represent forms and controls reside here as well.

The namespace names are self-descriptive by design. Straightforward names make the .NET Framework easy to use and allow you to rapidly familiarize yourself with its contents.

Introduction to Object-Oriented Programming


Programming in the .NET Framework environment is done with objects. Objects are
programmatic constructs that represent packages of related data and functionality. Objects are
self-contained and expose specific functionality to the rest of the application environment
without detailing the inner workings of the object itself. Objects are created from a template
called a class. The .NET base class library provides a set of classes from which you can create
objects in your applications.

Objects, Members, and Abstraction

An object is a programmatic construct that represents something. In the real world, objects are cars, bicycles, laptop computers, and so on. Each of these items exposes specific
functionality and has specific properties. In your application, an object might be a form, a control
such as a button, a database connection, or any of a number of other constructs. Each object is a
complete functional unit, and contains all of the data and exposes all of the functionality required
to fulfill its purpose. The ability of programmatic objects to represent real-world objects is called
abstraction.

ADO.NET Data Architecture

Data access in ADO.NET relies on two entities: the DataSet, which stores data on the
local machine, and the Data Provider, a set of components that mediates interaction between the
program and the database.

The DataSet

The DataSet is a disconnected, in-memory representation of data. It can be thought of as a local copy of the relevant portions of a database. Data can be loaded into a DataSet from any
valid data source, such as a SQL Server database, a Microsoft Access database, or an XML file.
The DataSet persists in memory, and the data therein can be manipulated and updated
independent of the database. When appropriate, the DataSet can then act as a template for
updating the central database.
The DataSet object contains a collection of zero or more DataTable objects, each of which is an in-memory representation of a single table. The structure of a particular DataTable is defined by its Columns collection, which enumerates the columns in the table, and its Constraints collection, which enumerates any constraints on the table. Together, these two collections make up the table schema. A DataTable also contains a Rows collection, which contains the actual data in the DataSet.
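A minimal in-memory sketch of these collections, using only the standard System.Data types; the table, column, and row values are invented for illustration:

```csharp
using System;
using System.Data;

// Build a one-table DataSet entirely in memory (no database involved).
var ds = new DataSet("Payroll");
var employees = new DataTable("Employees");

// Schema: the Columns collection plus a uniqueness constraint.
employees.Columns.Add("EmployeeID", typeof(int));
employees.Columns.Add("Name", typeof(string));
employees.Constraints.Add(new UniqueConstraint(employees.Columns["EmployeeID"]));

// Data: the Rows collection holds the actual values.
employees.Rows.Add(1, "Ada");
employees.Rows.Add(2, "Grace");

ds.Tables.Add(employees);
Console.WriteLine(ds.Tables["Employees"].Rows.Count);   // 2
```

Because the DataSet is disconnected, all of this works without opening any database connection; a DataAdapter would fill the same structures from a real data source.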

The DataSet also contains a DataRelations collection. A DataRelation object allows you to create associations between rows in one table and rows in another table. The DataRelations collection enumerates a set of DataRelation objects that define the relationships between tables in the DataSet. For example, consider a DataSet that contains two related tables: an Employees table and a Projects table. In the Employees table, each employee is represented only once and is identified by a unique EmployeeID field. In the Projects table, an employee in charge of a project is identified by the EmployeeID field, but can appear more than once if that employee is in charge of multiple projects. This is an example of a one-to-many relationship; you would use a DataRelation object to define this relationship. Additionally, a DataSet contains an ExtendedProperties collection, which is used to store custom information about the DataSet.
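The Employees/Projects example can be reproduced in a few lines; the relation name and the data values below are invented for illustration:

```csharp
using System;
using System.Data;

var ds = new DataSet();

var employees = new DataTable("Employees");
employees.Columns.Add("EmployeeID", typeof(int));
employees.Rows.Add(1);

var projects = new DataTable("Projects");
projects.Columns.Add("ProjectName", typeof(string));
projects.Columns.Add("EmployeeID", typeof(int));
projects.Rows.Add("Intranet", 1);
projects.Rows.Add("Payroll", 1);

ds.Tables.Add(employees);
ds.Tables.Add(projects);

// One employee, many projects: the one-to-many relationship from the text.
var rel = new DataRelation("EmpProjects",
    employees.Columns["EmployeeID"], projects.Columns["EmployeeID"]);
ds.Relations.Add(rel);

// Navigate from the parent row to its child rows.
DataRow ada = employees.Rows[0];
Console.WriteLine(ada.GetChildRows(rel).Length);   // 2
```

Adding the relation also enforces the parent side's uniqueness, mirroring the "each employee is represented only once" rule in the example.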

The Data Provider

The link to the database is created and maintained by a data provider. A data provider is not a single component; rather, it is a set of related components that work together to provide data in an efficient, performance-driven manner. The first version of the Microsoft .NET Framework shipped with two data providers: the SQL Server .NET Data Provider, designed specifically to work with SQL Server 7 or later, and the OleDb .NET Data Provider, which connects with other types of databases. Microsoft Visual Studio .NET 2003 added two more data providers: the ODBC Data Provider and the Oracle Data Provider. Each data provider consists of versions of the following generic component classes:

 The Connection object provides the connection to the database.
 The Command object executes a command against a data source. It can execute non-query commands, such as INSERT, UPDATE, or DELETE, or return a DataReader with the results of a SELECT command.
 The DataReader object provides a forward-only, read-only, connected recordset.
 The DataAdapter object populates a disconnected DataSet or DataTable with data and performs updates.

Data access in ADO.NET is facilitated as follows: a Connection object establishes a connection between the application and the database. This connection can be accessed directly by a Command object or by a DataAdapter object.

The Command object provides direct execution of a command to the database. If the command returns more than a single value, the Command object returns a DataReader to provide the data. This data can be directly processed by application logic. Alternatively, you can use the DataAdapter to fill a DataSet object. Updates to the database can be achieved through the Command object or through the DataAdapter. The generic classes that make up the data providers are summarized in the following sections.

REPORT:

A report is used to view and print information from the database. A report can group records at many levels and compute totals and averages by checking values from many records at once. A report is also attractive and distinctive because we have control over its size and appearance.

MACRO:

A macro is a set of actions, each of which performs an operation such as opening a form or printing a report. We write macros to automate common tasks, which makes the work easier and saves time.

C# (pronounced C Sharp) is a multi-paradigm programming language that encompasses functional, imperative, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft as part of the .NET initiative and later approved as a standard by ECMA (ECMA-334) and ISO (ISO/IEC 23270). C# is one of the 44 programming languages supported by the .NET Framework's Common Language Runtime.

C# is intended to be a simple, modern, general-purpose, object-oriented programming language. Anders Hejlsberg, the designer of Delphi, leads the team that is developing C#. It has an object-oriented syntax based on C++ and is heavily influenced by other programming languages such as Delphi and Java. It was initially named Cool, which stood for "C-like Object Oriented Language". However, in July 2000, when Microsoft made the project public, the name of the programming language was given as C#. The most recent version of the language is C# 3.0, which was released in conjunction with the .NET Framework 3.5 in 2007. The next proposed version, C# 4.0, is in development.

History

In 1996, Sun Microsystems released the Java programming language with Microsoft soon
purchasing a license to implement it in their operating system. Java was originally meant to be a
platform independent language, but Microsoft, in their implementation, broke their license
agreement and made a few changes that would essentially inhibit Java's platform-independent
capabilities. Sun filed a lawsuit and Microsoft settled, deciding to create their own version of a
partially compiled, partially interpreted object-oriented programming language with syntax
closely related to that of C++.

During the development of .NET, the class libraries were originally written in a language/compiler called Simple Managed C (SMC). In January 1999, Anders Hejlsberg formed a team to build a new language, at the time called Cool, which stood for "C-like Object Oriented Language". Microsoft had considered keeping the name "Cool" as the final name of the language, but chose not to do so for trademark reasons. By the time the .NET project was publicly announced at the July 2000 Professional Developers Conference, the language had been renamed C#, and the class libraries and ASP.NET runtime had been ported to C#.

C#'s principal designer and lead architect at Microsoft is Anders Hejlsberg, who was previously involved with the design of Visual J++, Borland Delphi, and Turbo Pascal. In interviews and technical papers he has stated that flaws in most major programming languages (e.g., C++, Java, Delphi, and Smalltalk) drove the fundamentals of the Common Language Runtime (CLR), which, in turn, drove the design of the C# programming language itself. Some argue that C# shares roots in other languages.

Features of C#

By design, C# is the programming language that most directly reflects the underlying Common
Language Infrastructure (CLI). Most of C#'s intrinsic types correspond to value-types
implemented by the CLI framework. However, the C# language specification does not state the
code generation requirements of the compiler: that is, it does not state that a C# compiler must
target a Common Language Runtime (CLR), or generate Common Intermediate Language (CIL),
or generate any other specific format. Theoretically, a C# compiler could generate machine code
like traditional compilers of C++ or FORTRAN; in practice, all existing C# implementations
target CIL.

Some notable C# distinguishing features are:

 There are no global variables or functions. All methods and members must be declared
within classes. It is possible, however, to use static methods/variables within public
classes instead of global variables/functions.
 Local variables cannot shadow variables of the enclosing block, unlike C and C++.
Variable shadowing is often considered confusing by C++ texts.
 C# supports a strict Boolean data type, bool. Statements that take conditions, such as
while and if, require an expression of a Boolean type. While C++ also has a Boolean
type, it can be freely converted to and from integers, and expressions such as if(a) require
only that a is convertible to bool, allowing a to be an int, or a pointer. C# disallows this
"integer meaning true or false" approach on the grounds that forcing programmers to use
expressions that return exactly bool can prevent certain types of programming mistakes
such as if (a = b) (use of = instead of ==).
 In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a valid, existing object or have the well-defined null value; a reference to a garbage-collected object, or to a random block of memory, is impossible to obtain. An unsafe pointer can point to an instance of a value type, array, string, or a block of memory allocated on the stack. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but cannot dereference them.
 Managed memory cannot be explicitly freed, but is automatically garbage collected.
Garbage collection addresses memory leaks. C# also provides direct support for
deterministic finalization with the using statement (supporting the Resource Acquisition
Is Initialization idiom).
 Multiple inheritance is not supported, although a class can implement any number of
interfaces. This was a design decision by the language's lead architect to avoid
complication, avoid dependency hell and simplify architectural requirements throughout
CLI.
 C# is more type safe than C++. The only implicit conversions by default are those which
are considered safe, such as widening of integers and conversion from a derived type to a
base type. This is enforced at compile-time, during JIT, and, in some cases, at runtime.
There are no implicit conversions between Booleans and integers, nor between
enumeration members and integers (except for literal 0, which can be implicitly
converted to any enumerated type). Any user-defined conversion must be explicitly
marked as explicit or implicit, unlike C++ copy constructors (which are implicit by
default) and conversion operators (which are always implicit).
 Enumeration members are placed in their own scope.
 C# provides syntactic sugar for a common pattern of a pair of methods, accessor (getter)
and mutator (setter) encapsulating operations on a single attribute of a class, in form of
properties.
 Full type reflection and discovery is available.
 C# currently (as of 3 June 2008) has 77 reserved words.
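The strict-bool rule from the list above can be seen in a short sketch; this is plain C#, with nothing assumed beyond the language itself:

```csharp
using System;

int a = 1;

// In C or C++, "if (a)" compiles because int converts to a truth value.
// In C#, a condition must be of type bool, so the comparison is explicit:
// if (a) { }          // compile-time error in C#
if (a != 0)            // the explicit comparison yields a bool
{
    Console.WriteLine("a is non-zero");
}

// The classic bug this prevents: "if (a = b)" assigns rather than compares.
// C# rejects it because the expression's type is int, not bool.
int b = 0;
bool equal = (a == b);
Console.WriteLine(equal);   // False
```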

Common Type System (CTS)

C# has a unified type system, called the Common Type System (CTS). A unified type system implies that all types, including primitives such as integers, are subclasses of the System.Object class. For example, every type inherits a ToString() method. For performance reasons, primitive types (and value types in general) are internally allocated on the stack.

V. Categories of data types

The CTS separates data types into two categories:

 Value types
 Reference types

Value types are plain aggregations of data. Instances of value types do not have referential identity or referential comparison semantics: equality and inequality comparisons for value types compare the actual data values within the instances, unless the corresponding operators are overloaded. Value types are derived from System.ValueType, always have a default value, and can always be created and copied. Some other limitations on value types are that they cannot derive from each other (but can implement interfaces) and cannot have a default (parameterless) constructor. Examples of value types are some primitive types, such as int (a signed 32-bit integer), float (a 32-bit IEEE floating-point number), char (a 16-bit Unicode code point), and System.DateTime (which identifies a specific point in time with millisecond precision).

In contrast, reference types have the notion of referential identity: each instance of a reference type is inherently distinct from every other instance, even if the data within both instances is the same. This is reflected in default equality and inequality comparisons for reference types, which test for referential rather than structural equality, unless the corresponding operators are overloaded (as is the case for System.String). In general, it is not always possible to create an instance of a reference type, nor to copy an existing instance, nor to perform a value comparison on two existing instances, though specific reference types can provide such services by exposing a public constructor or implementing a corresponding interface (such as ICloneable or IComparable). Examples of reference types are object (the ultimate base class for all other C# classes), System.String (a string of Unicode characters), and System.Array (a base class for all C# arrays).

Both type categories are extensible with user-defined types.
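The contrast between the two categories can be demonstrated with built-in types alone:

```csharp
using System;

// Value types compare by contained data.
int x = 5, y = 5;
Console.WriteLine(x.Equals(y));            // True: same data, distinct variables

// Reference types compare by identity by default.
object a = new object();
object b = new object();
Console.WriteLine(a.Equals(b));            // False: two distinct instances
Console.WriteLine(ReferenceEquals(a, a));  // True: the very same instance

// System.String overloads equality to compare contents instead.
string s1 = "ab";
string s2 = "a" + "b";
Console.WriteLine(s1.Equals(s2));          // True: structural comparison
```

The string case shows why the text says "unless the corresponding operators are overloaded": String opts back into data-based comparison even though it is a reference type.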

VI. Boxing and Unboxing

Boxing is the operation of converting a value of a value type into a value of a corresponding reference type; unboxing is the reverse operation, extracting the value type from the box with an explicit cast.
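A minimal sketch of both operations:

```csharp
using System;

int i = 42;

// Boxing: the int value is copied into a new object on the managed heap.
object boxed = i;

// Unboxing: an explicit cast copies the value back out of the box.
int j = (int)boxed;
Console.WriteLine(j);               // 42

// The box holds a copy, independent of the original variable.
i = 7;
Console.WriteLine((int)boxed);      // still 42

// Unboxing to the wrong type throws InvalidCastException at runtime:
// long wrong = (long)boxed;        // would fail
```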
ACCESS PRIVILEGES

IIS provides several new access levels. The following values can set the type of access
allowed to specific directories:

o Read
o Write
o Script
o Execute
o Log Access
o Directory Browsing.
ActiveX

ActiveX is a specification developed by Microsoft that allows ordinary Windows programs to be run within a Web page. ActiveX programs can be written in languages such as Visual Basic, and they are compiled before being placed on the Web server.

ActiveX applications, called controls, are downloaded and executed by the Web browser, like Java applets. Unlike Java applets, controls can be installed permanently when they are downloaded, eliminating the need to download them again. ActiveX's main advantage is that it can do just about anything.

This can also be a disadvantage:

Several enterprising programmers have already used ActiveX to bring exciting new
capabilities to Web pages, such as “the Web page that turns off your computer” and “the Web
page that formats your disk drive”.

Fortunately, ActiveX includes a signature feature that identifies the source of the control
and prevents controls from being modified. While this won’t prevent a control from damaging the
system, we can specify which sources of controls we trust.
ActiveX has two main disadvantages:

It isn’t as easy to program as a scripting language or Java.

ActiveX is proprietary: it works only in Microsoft Internet Explorer and only on Windows platforms.

ADO.NET

ADO.NET provides consistent access to data sources such as Microsoft SQL Server, as
well as data sources exposed via OLE DB and XML. Data-sharing consumer applications can
use ADO.NET to connect to these data sources and retrieve, manipulate, and update data.

ADO.NET cleanly factors data access from data manipulation into discrete components
that can be used separately or in tandem. ADO.NET includes .NET data providers for connecting
to a database, executing commands, and retrieving results. Those results are either processed
directly, or placed in an ADO.NET DataSet object in order to be exposed to the user in an ad-hoc
manner, combined with data from multiple sources, or remoted between tiers. The ADO.NET
DataSet object can also be used independently of a .NET data provider to manage data local to
the application or sourced from XML.

Why ADO.NET?

As application development has evolved, new applications have become loosely coupled
based on the Web application model. More and more of today's applications use XML to encode
data to be passed over network connections. Web applications use HTTP as the fabric for
communication between tiers, and therefore must explicitly handle maintaining state between
requests. This new model is very different from the connected, tightly coupled style of
programming that characterized the client/server era, where a connection was held open for the
duration of the program's lifetime and no special handling of state was required.

In designing tools and technologies to meet the needs of today's developer, Microsoft
recognized that an entirely new programming model for data access was needed, one that is built
upon the .NET Framework. Building on the .NET Framework ensured that the data access
technology would be uniform—components would share a common type system, design
patterns, and naming conventions.

ADO.NET was designed to meet the needs of this new programming model:
disconnected data architecture, tight integration with XML, common data representation with the
ability to combine data from multiple and varied data sources, and optimized facilities for
interacting with a database, all native to the .NET Framework.

Leverage Current ADO Knowledge

Microsoft's design for ADO.NET addresses many of the requirements of today's


application development model. At the same time, the programming model stays as similar as
possible to ADO, so current ADO developers do not have to start from scratch in learning a
brand new data access technology. ADO.NET is an intrinsic part of the .NET Framework
without seeming completely foreign to the ADO programmer.

ADO.NET coexists with ADO. While most new .NET applications will be written using
ADO.NET, ADO remains available to the .NET programmer through .NET COM
interoperability services.

ADO.NET provides first-class support for the disconnected, n-tier programming
environment for which many new applications are written. The concept of working with a
disconnected set of data has become a focal point in the programming model. The ADO.NET
solution for n-tier programming is the DataSet.

XML Support

XML and data access are intimately tied—XML is all about encoding data, and data access is
increasingly becoming all about XML. The .NET Framework does not just support Web
standards—it is built entirely on top of them.
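The disconnected DataSet and its XML integration can be sketched without any database at all; the table and column names below are illustrative only, and in a real application the DataSet would typically be filled by a .NET data provider such as SqlDataAdapter:

```csharp
using System;
using System.Data;
using System.IO;

class Program
{
    static void Main()
    {
        // Build a disconnected DataSet entirely in memory.
        DataSet ds = new DataSet("Accidents");
        DataTable history = ds.Tables.Add("History");
        history.Columns.Add("Vehicle", typeof(string));
        history.Columns.Add("Severity", typeof(string));
        history.Rows.Add("TN01-1234", "high");

        // The data is queried and manipulated with no open connection...
        DataRow[] severe = history.Select("Severity = 'high'");
        Console.WriteLine(severe.Length);   // 1

        // ...and serializes naturally to XML, the encoding used
        // to pass data between tiers.
        StringWriter sw = new StringWriter();
        ds.WriteXml(sw);
        Console.WriteLine(sw.ToString().Contains("<History>")); // True
    }
}
```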
5. SYSTEM DESIGN

DATA FLOW DIAGRAM

Level 0

Level 1
Level 2
UML DIAGRAMS

Use case diagram


Class diagram
Sequence diagram
Activity diagram
DATABASE DESIGN

History Table:

Vehicle Table:
6. SYSTEM IMPLEMENTATION

MODULE DESCRIPTION

RECEPTION/INTERPRETATION MODULE
The first step for the CU is to receive a warning message from a collided vehicle, so
there must be a module waiting for the arrival of messages and retrieving the values from the
different fields.

ACCIDENT SEVERITY ESTIMATION MODULE


When a new accident notification is received, this module determines how serious the
collision was and the severity of the passengers’ injuries. After deciding the severity of
the accident, a resource assignment module is used to define resource sets adapted
to the specific situation. One of the most important modules in the Control Unit is in charge of the
Accident Severity Estimation, i.e., providing a relative measure of the potential effect of the
collision on the integrity of the vehicles and people involved. To obtain this estimation, we make
use of historical information about previous accidents contained in an existing database, through
a process of Knowledge Discovery in Databases (KDD).

DATABASE UPDATE MODULE


The data collected from the notified accident are stored into the existing database of
previous accidents, increasing the knowledge about the accident domain.

WEB SERVER MODULE


The Control Unit incorporates a Web server to allow easy visualization of the historical
information recorded and the current accident situations requiring assistance. A web interface was
chosen in order to increase user friendliness and interoperability.

EMERGENCY SERVICES NOTIFICATION MODULE


When the information has been correctly managed, the notification module sends
messages to the emergency services including all the information collected, the estimated
severity, the recommended set of resources, as well as additional information about the vehicles
involved in the collision (for preliminary planning of the rescue operation). The information about
vehicles consists of standard rescue sheets, which highlight the important or dangerous parts of a
specific vehicle that should be taken into account during a rescue operation: batteries, fuel tanks,
etc.

NORMALIZATION

The basic objective of normalization is to reduce redundancy, which means that
information is stored only once. Storing information several times leads to wastage of
storage space and an increase in the total size of the data stored.
If a database is not properly designed, it can give rise to modification anomalies. Modification
anomalies arise when data is added to, changed in, or deleted from a database table. Similarly, in
traditional databases as well as improperly designed relational databases, data redundancy can be
a problem. These problems can be eliminated by normalizing the database.

Normalization is the process of breaking down a table into smaller tables so that each
table deals with a single theme. There are three kinds of modification anomalies, addressed by
the first, second, and third normal forms; third normal form (3NF) is considered sufficient for most
practical purposes. Normalization should be undertaken only after a thorough analysis and complete
understanding of its implications.

FIRST NORMAL FORM (1NF):

This form is also called a “flat file”. Each column should contain data in respect of a
single attribute, and no two rows may be identical. To bring a table to First Normal Form,
repeating groups of fields should be identified and moved to another table.

SECOND NORMAL FORM (2NF):

A relation is said to be in 2NF if it is in 1NF and all non-key attributes are functionally
dependent on the key attributes. A functional dependency is a relationship among attributes:
one attribute is said to be functionally dependent on another if the value of the first attribute
depends on the value of the second attribute.
THIRD NORMAL FORM (3NF):

Third Normal Form normalization is needed where all attributes in a relation tuple are not
functionally dependent only on the key attribute. A transitive dependency is one in which one
attribute depends on a second, which in turn depends on a third, and so on.
7. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies, and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test; each test type addresses a specific testing requirement.

TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit, before integration. This is a
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform
basic tests at component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process performs accurately
to the documented specifications and contains clearly defined inputs and expected results.

Integration testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.

Functional test
Functional tests provide systematic demonstrations that the functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:


Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.
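The valid-input and invalid-input classes above can be exercised with a simple assertion-based test. The ValidateSpeed routine and its rule below are hypothetical, not part of the project code; they are only a sketch of how such a functional check might look:

```csharp
using System;

class SpeedValidator
{
    // Hypothetical rule: a reported speed is valid if it parses as a
    // non-negative integer no greater than 300 km/h.
    public static bool ValidateSpeed(string input)
    {
        int speed;
        return int.TryParse(input, out speed) && speed >= 0 && speed <= 300;
    }
}

class SpeedValidatorTests
{
    static void Main()
    {
        // Valid input: identified classes of valid input must be accepted.
        Check(SpeedValidator.ValidateSpeed("120"), "plain speed accepted");
        Check(SpeedValidator.ValidateSpeed("0"), "boundary speed accepted");

        // Invalid input: identified classes of invalid input must be rejected.
        Check(!SpeedValidator.ValidateSpeed("fast"), "non-numeric rejected");
        Check(!SpeedValidator.ValidateSpeed("-5"), "negative rejected");
        Check(!SpeedValidator.ValidateSpeed("500"), "out-of-range rejected");

        Console.WriteLine("All tests passed.");
    }

    static void Check(bool condition, string name)
    {
        if (!condition) throw new Exception("Test failed: " + name);
    }
}
```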

Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identified business process flows,
data fields, predefined processes, and successive processes must be considered for testing.
Before functional testing is complete, additional tests are identified and the effective value of
current tests is determined.

System Test
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

White Box Testing


White Box Testing is testing in which the software tester has knowledge of the inner
workings, structure, and language of the software, or at least its purpose. It is used
to test areas that cannot be reached from a black box level.

Black Box Testing


Black Box Testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of tests,
must be written from a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as a black box: you cannot
“see” into it. The test provides inputs and responds to outputs without considering how the
software works.
7.1 Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written in detail.
Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.

7.2 Integration Testing


Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.

The task of the integration test is to check that components or software applications (e.g.,
components in a software system or, one step up, software applications at the company level)
interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

7.3 Acceptance Testing


User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Sample Code:
Emergency Server:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Net;
using System.Net.Sockets;
using System.IO;
using System.Data.SqlClient;

namespace receiver
{
public partial class Form1 : Form
{

public Form1()
{
InitializeComponent();
reciver1.receivedPath = "";
}
public static string kl;
public static string lm;

private void Form1_Load(object sender, EventArgs e)


{
kl = "n";
reciver1.receivedPath = @"E:\";
}

private void button2_Click(object sender, EventArgs e)


{
timer1.Start();
if (reciver1.receivedPath.Length > 0)
backgroundWorker1.RunWorkerAsync();
else
MessageBox.Show("Please select a file receiving path", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
}

private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)


{
obj.StartServer();
}

private void timer1_Tick(object sender, EventArgs e)


{

label2.Text = reciver1.receivedPath;
label4.Text = reciver1.curMsg;
if (kl == "y")
{
string vehicle, route, speed, accident, time, no_per = "", siv = "";
StreamReader sw = new StreamReader("E:\\sf2.txt");
string k = sw.ReadToEnd();
string[] split = k.Split('\n');
vehicle = split[0].Trim();
route = split[1].Trim();
speed = split[2].Trim();
accident = split[3].Trim();
time = split[4].Trim();
no_per = split[5].Trim();
siv = split[6].Trim();
sw.Close();
listBox1.Items.Add(vehicle);
listBox2.Items.Add(route);
listBox3.Items.Add(speed);
listBox4.Items.Add(accident);
listBox5.Items.Add(time);
listBox6.Items.Add(no_per);
listBox7.Items.Add(siv);
int mk = 1;
FileInfo f1 = new FileInfo("E:\\sf2.txt");
f1.Delete();

SqlConnection con = new SqlConnection("Data Source=SHAMEER\\SQLEXPRESS;Initial Catalog=auto;Integrated Security=True");
con.Open();
// NOTE: building SQL by concatenating the received fields is vulnerable to
// SQL injection; a parameterized SqlCommand would be safer here.
SqlCommand cmd = new SqlCommand("insert into Histry values('" + vehicle + "','" + route + "','" + speed + "','" + accident + "','" + time + "','" + no_per + "','" + siv + "')", con);
cmd.ExecuteNonQuery();
con.Close();
int amb = 1;
string fire ="no";

// Ambulance allocation table: severity level -> ambulance count per
// passenger band. Bands (as in the original rules): 2-4, 5-9, 11-19,
// 21-30, 31-50, and more than 50 persons.
int[] ambByBand = null;
if (siv == "high") ambByBand = new int[] { 2, 4, 6, 15, 25, 30 };
else if (siv == "low") ambByBand = new int[] { 2, 3, 4, 8, 10, 15 };
else if (siv == "average") ambByBand = new int[] { 2, 3, 7, 20, 20, 25 };
else if (siv == "very high") ambByBand = new int[] { 4, 8, 12, 20, 30, 40 };

if (ambByBand != null)
{
    int persons = int.Parse(no_per);
    if (persons > 1 && persons < 5) amb = ambByBand[0];
    else if (persons > 4 && persons < 10) amb = ambByBand[1];
    else if (persons > 10 && persons < 20) amb = ambByBand[2];
    else if (persons > 20 && persons <= 30) amb = ambByBand[3];
    else if (persons > 30 && persons <= 50) amb = ambByBand[4];
    else if (persons > 50) amb = ambByBand[5];

    // Routes that additionally require the fire service.
    if (route == "rout1" || route == "rout5" || route == "rout7" || route == "rout9")
        fire = "yes";
}
label11.Text = "No. of Ambulance Alerts : " + amb.ToString() + "\nFire service : " + fire + "\nPolice : YES";

ats_KMean.KMean clus = new ats_KMean.KMean();


Random r1 = new Random();
speed = r1.Next(130, 150).ToString();
no_per = r1.Next(20, 50).ToString();
siv = clus.kMeanCluster(speed, vehicle, no_per);

for (int ik = 1; ik <= 2; ik++)


{
///////////////////////////////////////
con = new SqlConnection("Data Source=SHAMEER\\SQLEXPRESS;Initial Catalog=auto;Integrated Security=True");
con.Open();
cmd = new SqlCommand("insert into Histry values('" + vehicle + "','" + route + "','" + speed + "','" + accident + "','" + time + "','" + no_per + "','" + siv + "')", con);
cmd.ExecuteNonQuery();
con.Close();
amb = 1;
fire = "no";

// Same severity -> ambulance allocation table as above, applied to the
// regenerated values inside the loop.
int[] ambByBand2 = null;
if (siv == "high") ambByBand2 = new int[] { 2, 4, 6, 15, 25, 30 };
else if (siv == "low") ambByBand2 = new int[] { 2, 3, 4, 8, 10, 15 };
else if (siv == "average") ambByBand2 = new int[] { 2, 3, 7, 20, 20, 25 };
else if (siv == "very high") ambByBand2 = new int[] { 4, 8, 12, 20, 30, 40 };

if (ambByBand2 != null)
{
    int persons2 = int.Parse(no_per);
    if (persons2 > 1 && persons2 < 5) amb = ambByBand2[0];
    else if (persons2 > 4 && persons2 < 10) amb = ambByBand2[1];
    else if (persons2 > 10 && persons2 < 20) amb = ambByBand2[2];
    else if (persons2 > 20 && persons2 <= 30) amb = ambByBand2[3];
    else if (persons2 > 30 && persons2 <= 50) amb = ambByBand2[4];
    else if (persons2 > 50) amb = ambByBand2[5];

    // Routes that additionally require the fire service.
    if (route == "rout1" || route == "rout5" || route == "rout7" || route == "rout9")
        fire = "yes";
}
label12.Text = "No. of Ambulance Alerts : " + amb.ToString() + "\nFire service : " + fire + "\nPolice : YES";

ats_KMean.KMean clus1 = new ats_KMean.KMean();


Random r11 = new Random();
speed = r11.Next(130, 150).ToString();
no_per = r11.Next(20, 50).ToString();
siv = clus1.kMeanCluster(speed, vehicle, no_per);

///////////////////////////////////////////
}
if (siv == "high" || siv == "very high")
{
system.SendFile();
timer2.Start();
}

asd = 1;
kl = "n";

}
reciver1 obj = new reciver1();

class reciver1
{

IPEndPoint ipEnd;
Socket sock;
public reciver1()
{
IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
ipEnd = new IPEndPoint(ipAddress[0], 1000);
sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
sock.Bind(ipEnd);
}
DateTime DT;
public static string fileName;
public static string receivedPath;
public static string curMsg = "Stopped";
public void StartServer()
{
int filesize = 0;
//try
//{
hj:
curMsg = "Starting...";
sock.Listen(100);
// MessageBox.Show(curMsg);

curMsg = "Running and waiting to receive file.";


//MessageBox.Show(curMsg);
Socket clientSock = sock.Accept();

byte[] clientData = new byte[1024 * 5000];

int receivedBytesLen = clientSock.Receive(clientData);


curMsg = "Receiving data...";
//MessageBox.Show(curMsg);
int fileNameLen = BitConverter.ToInt32(clientData, 0);
fileName = Encoding.ASCII.GetString(clientData, 4, fileNameLen);

BinaryWriter bWrite = new BinaryWriter(File.Open(receivedPath + "/" + fileName, FileMode.Append));


bWrite.Write(clientData, 4 + fileNameLen, receivedBytesLen - 4 - fileNameLen);
filesize = receivedBytesLen;
curMsg = "Saving file...";
//MessageBox.Show(curMsg);
bWrite.Close();
// c();
clientSock.Close();

curMsg = "Recived & Saved file;";


kl = "y";
goto hj;
}
}

class system
{
public static string curMsg;
public static void SendFile()
{
string fileName = "E:\\s\\1.txt";
IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");//server ipaddress
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1002);
int filelen = 0;
try
{

Socket cliendSock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);


string filePath = "";
fileName = fileName.Replace("\\", "/");
while (fileName.IndexOf("/") > -1)
{
filePath += fileName.Substring(0, fileName.IndexOf("/") + 1);
fileName = fileName.Substring(fileName.IndexOf("/") + 1);
}
byte[] fileNameByte = Encoding.ASCII.GetBytes(fileName);
if (fileNameByte.Length > 850 * 1024)
{
curMsg = "File size is more than 850 KB, please try with a smaller file.";
return;
}
curMsg = "Buffering ...";
// MessageBox.Show(curMsg);
byte[] fileData = File.ReadAllBytes(filePath + fileName);
byte[] clientData = new byte[4 + fileNameByte.Length + fileData.Length];
byte[] fileNameLen = BitConverter.GetBytes(fileNameByte.Length);
filelen = clientData.Length;
fileNameLen.CopyTo(clientData, 0);
fileNameByte.CopyTo(clientData, 4);
fileData.CopyTo(clientData, 4 + fileNameByte.Length);
curMsg = "Connection to server ...";
// MessageBox.Show(curMsg);
cliendSock.Connect(ipEnd);
curMsg = "File sending...";
// MessageBox.Show(curMsg);
cliendSock.Send(clientData);
curMsg = "Disconnecting...";
// MessageBox.Show(curMsg);
cliendSock.Close();
curMsg = "File transferred.";
// MessageBox.Show(curMsg);
FileInfo f1 = new FileInfo("C:\\sf1.txt");
f1.Delete();

}
catch (Exception ex)
{
if (ex.Message == "No connection could be made because the target machine actively refused it")
curMsg = "File sending failed because the server is not running.";
else
{
curMsg = ex.Message;
}
}
}
}

private void button3_Click(object sender, EventArgs e)


{
this.Close();
}

public static string curMsg;


public int asd;

private void timer2_Tick(object sender, EventArgs e)


{
if (asd > 10)
{
timer2.Stop();
}
else
{
if (this.BackColor == Color.Red)
{
this.BackColor = Color.Green;

}
else
{
this.BackColor = Color.Red;

}
}
asd++;
}

}
}

SERVER:
namespace receiver
{
public partial class Form1 : Form
{
SqlConnection con = new SqlConnection("Data Source=SHAMEER\\SQLEXPRESS;Initial Catalog=auto;Integrated Security=True");
public Form1()
{
InitializeComponent();
reciver1.receivedPath = "";
}
public static string mk;
private void Form1_Load(object sender, EventArgs e)
{
reciver1.receivedPath = "D:\\";
mk = "n";
backgroundWorker1.RunWorkerAsync();
}
private void button2_Click(object sender, EventArgs e)
{
backgroundWorker1.RunWorkerAsync();
}

private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)


{
obj.StartServer();
}

private void timer1_Tick(object sender, EventArgs e)


{
label4.Text = reciver1.curMsg;

if(mk =="y")
{
string vehicle, route, speed, accident, time, no_per = "", siv = "";
StreamReader sw = new StreamReader("D:\\sf1.txt");
string k = sw.ReadToEnd();
string[] split = k.Split('\n');
vehicle = split[0].Trim();
route = split[1].Trim();
speed = split[2].Trim();
accident = split[3].Trim();
time = split[4].Trim();
sw.Close();

//con.Close();
con.Open();
SqlCommand cmd = new SqlCommand("select * from vehicle where Vehile = '" + vehicle + "'", con);
SqlDataReader dr = cmd.ExecuteReader();
if (dr.Read())
{
no_per = dr["Noofpersons"].ToString();
}
con.Close();

ats_KMean.KMean clus = new ats_KMean.KMean();


siv = clus.kMeanCluster(speed, vehicle, no_per);

listBox1.Items.Add(vehicle);
listBox2.Items.Add(route);
listBox3.Items.Add(speed);
listBox4.Items.Add(accident);
listBox5.Items.Add(time);
listBox6.Items.Add(no_per);
listBox7.Items.Add(siv);
FileInfo f1 = new FileInfo("D:\\sf1.txt");
f1.Delete();
StreamWriter sw12 = new StreamWriter("D:\\sf2.txt", false);
sw12.WriteLine(vehicle);
sw12.WriteLine(route);
sw12.WriteLine(speed);
sw12.WriteLine(accident);
sw12.WriteLine(time);
sw12.WriteLine(no_per);
sw12.WriteLine(siv);
sw12.Close();
system.SendFile();
con.Close();

mk = "n";
}
}

reciver1 obj = new reciver1();


class reciver1
{

Socket sock;
public reciver1()
{
IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1001);
sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
sock.Bind(ipEnd);
}
public static string receivedPath;
public static string curMsg = "Stopped";
public void StartServer()
{

try
{
int i = 0;
s:
curMsg = "Starting...";
sock.Listen(100);

curMsg = "READY";
Socket clientSock = sock.Accept();

byte[] clientData = new byte[1024 * 5000];

int receivedBytesLen = clientSock.Receive(clientData);


curMsg = "Receiving data...";

int fileNameLen = BitConverter.ToInt32(clientData, 0);


string fileName = Encoding.ASCII.GetString(clientData, 4, fileNameLen);

BinaryWriter bWrite = new BinaryWriter(File.Open(receivedPath + "/" + fileName, FileMode.Append));


bWrite.Write(clientData, 4 + fileNameLen, receivedBytesLen - 4 - fileNameLen);

bWrite.Close();
clientSock.Close();
curMsg = "File Received";
mk = "y";
goto s;

}
catch (Exception ex)
{
curMsg = "File receiving error. " + ex;
}
}
}
class system
{
public static string curMsg;
public static void SendFile()
{
//try
//{

{
string fileName = "D:\\sf2.txt";
IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1000);

Socket cliendSock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);


string filePath = "";
fileName = fileName.Replace("\\", "/");
while (fileName.IndexOf("/") > -1)
{
filePath += fileName.Substring(0, fileName.IndexOf("/") + 1);
fileName = fileName.Substring(fileName.IndexOf("/") + 1);
}
byte[] fileNameByte = Encoding.ASCII.GetBytes(fileName);
if (fileNameByte.Length > 850 * 1024)
{
curMsg = "File size is more than 850 KB, please try with a smaller file.";
return;
}
curMsg = "Buffering ...";
// MessageBox.Show(curMsg);
byte[] fileData = File.ReadAllBytes(filePath + fileName);
byte[] clientData = new byte[4 + fileNameByte.Length + fileData.Length];
byte[] fileNameLen = BitConverter.GetBytes(fileNameByte.Length);
fileNameLen.CopyTo(clientData, 0);
fileNameByte.CopyTo(clientData, 4);
fileData.CopyTo(clientData, 4 + fileNameByte.Length);

curMsg = "Connection to server ...";


// MessageBox.Show(curMsg);
cliendSock.Connect(ipEnd);
curMsg = "File sending...";
//MessageBox.Show(curMsg);
cliendSock.Send(clientData);
curMsg = "Disconnecting...";
//MessageBox.Show(curMsg);
cliendSock.Close();
curMsg = "File transferred.";

FileInfo f1 = new FileInfo("D:\\sf2.txt");


f1.Delete();
// MessageBox.Show(curMsg);
}
}
}

private void button2_Click_1(object sender, EventArgs e)
{
    string vehicle, route, speed, accident, time, no_per = "", siv = "";

    // Read the five-line accident record sent by the vehicle.
    StreamReader sw = new StreamReader("D:\\sf1.txt");
    string k = sw.ReadToEnd();
    string[] split = k.Split('\n');
    vehicle = split[0].Trim();
    route = split[1].Trim();
    speed = split[2].Trim();
    accident = split[3].Trim();
    time = split[4].Trim();
    sw.Close();

    // Look up the passenger count; use a parameter to avoid SQL injection.
    con.Open();
    SqlCommand cmd = new SqlCommand("select * from vehicle where Vehile = @vehicle", con);
    cmd.Parameters.AddWithValue("@vehicle", vehicle);
    SqlDataReader dr = cmd.ExecuteReader();
    if (dr.Read())
    {
        no_per = dr["Noofpersons"].ToString();
    }
    con.Close();

    // Estimate severity by clustering speed, vehicle type, and occupancy.
    ats_KMean.KMean clus = new ats_KMean.KMean();
    siv = clus.kMeanCluster(speed, vehicle, no_per);

    listBox1.Items.Add(vehicle);
    listBox2.Items.Add(route);
    listBox3.Items.Add(speed);
    listBox4.Items.Add(accident);
    listBox5.Items.Add(time);
    listBox6.Items.Add(no_per);
    listBox7.Items.Add(siv);

    // Replace the incoming record with the enriched record and forward it.
    FileInfo f1 = new FileInfo("D:\\sf1.txt");
    f1.Delete();
    StreamWriter sw12 = new StreamWriter("D:\\sf2.txt", false);
    sw12.WriteLine(vehicle);
    sw12.WriteLine(route);
    sw12.WriteLine(speed);
    sw12.WriteLine(accident);
    sw12.WriteLine(time);
    sw12.WriteLine(no_per);
    sw12.WriteLine(siv);
    sw12.Close();
    system.SendFile();
}

}
}
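The control unit delegates severity estimation to ats_KMean.KMean.kMeanCluster, whose internals are not listed here. As an illustration only (not the project's actual algorithm), a one-dimensional k-means over reported speeds could map a new reading onto a low/medium/high severity cluster:

```python
def kmeans_1d(values, k=3, iters=20):
    """Plain 1-D k-means: assign each value to its nearest centroid,
    then move each centroid to the mean of its members."""
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def severity(speed, centroids):
    """Label a speed by its nearest centroid, ordered low -> high."""
    order = sorted(range(len(centroids)), key=lambda j: centroids[j])
    nearest = min(range(len(centroids)), key=lambda j: abs(speed - centroids[j]))
    return ["low", "medium", "high"][order.index(nearest)]

speeds = [45, 50, 60, 110, 120, 200, 220, 230]
cents = kmeans_1d(speeds, k=3)
```

The real kMeanCluster also receives the vehicle type and passenger count, so the project presumably clusters in more than one dimension; this sketch shows only the core assign-and-update loop.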

Vehicle:
namespace TADR
{
    public partial class Form1 : Form
    {
        public delegate void MyPing(int id);
        public const string tempadd = @"C:\WINDOWS\Temp";
        public DateTime DT = new DateTime();

        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            openFileDialog1.ShowDialog();
            label2.Text = openFileDialog1.FileName;
        }

        private void button2_Click(object sender, EventArgs e)
        {
            DT = DateTime.Now;

            // Write the current readings as a five-line record.
            StreamWriter sw = new StreamWriter("E:\\s\\sf1.txt", false);
            sw.WriteLine(comboBox1.SelectedItem.ToString()); // vehicle type
            sw.WriteLine(comboBox2.SelectedItem.ToString()); // route
            sw.WriteLine(textBox3.Text);                     // speed
            sw.WriteLine(comboBox3.SelectedItem.ToString()); // accident flag
            sw.WriteLine(DT.ToString());                     // timestamp
            sw.Close();

            label1.Visible = true;
            system.SendFile();
            label2.Visible = false;
        }
reciver1 obj = new reciver1();
        private void Form1_Load(object sender, EventArgs e)
        {
            timer2.Start();
            reciver1.receivedPath = @"E:\s\";
            if (reciver1.receivedPath.Length > 0)
                backgroundWorker1.RunWorkerAsync();
            else
                MessageBox.Show("Please select file receiving path", "Information",
                    MessageBoxButtons.OK, MessageBoxIcon.Information);
            timer1.Start();
            label1.Visible = false;
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            // "Recived & Saved file;" is the exact status string set by reciver1.StartServer.
            if (reciver1.curMsg == "Recived & Saved file;")
            {
                label8.Text = "Accident Predicted \nslow your vehicle \nor take different route";
            }
            label1.Visible = true;
            label2.Visible = true;
        }
        class system
        {
            public static string curMsg;

            public static void SendFile()
            {
                string fileName = "E:\\s\\sf1.txt";
                IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1"); // server IP address
                IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1001);
                try
                {
                    Socket clientSock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);

                    // Split the full path into directory and file name.
                    string filePath = "";
                    fileName = fileName.Replace("\\", "/");
                    while (fileName.IndexOf("/") > -1)
                    {
                        filePath += fileName.Substring(0, fileName.IndexOf("/") + 1);
                        fileName = fileName.Substring(fileName.IndexOf("/") + 1);
                    }
                    byte[] fileNameByte = Encoding.ASCII.GetBytes(fileName);

                    curMsg = "Buffering ...";
                    byte[] fileData = File.ReadAllBytes(filePath + fileName);
                    if (fileData.Length > 850 * 1024)
                    {
                        curMsg = "File size is more than 850 KB; please try a smaller file.";
                        return;
                    }

                    // Frame the message: [4-byte name length][file name][file data].
                    byte[] clientData = new byte[4 + fileNameByte.Length + fileData.Length];
                    byte[] fileNameLen = BitConverter.GetBytes(fileNameByte.Length);
                    fileNameLen.CopyTo(clientData, 0);
                    fileNameByte.CopyTo(clientData, 4);
                    fileData.CopyTo(clientData, 4 + fileNameByte.Length);

                    curMsg = "Connection to server ...";
                    clientSock.Connect(ipEnd);
                    curMsg = "File sending...";
                    clientSock.Send(clientData);
                    curMsg = "Disconnecting...";
                    clientSock.Close();
                    curMsg = "File transferred.";

                    // Remove the record once it has been sent.
                    FileInfo f1 = new FileInfo("E:\\s\\sf1.txt");
                    f1.Delete();
                }
                catch (Exception ex)
                {
                    if (ex.Message == "No connection could be made because the target machine actively refused it")
                        curMsg = "File sending failed because the server is not running.";
                    else
                        curMsg = ex.Message;
                }
            }
        }
        class reciver1
        {
            IPEndPoint ipEnd;
            Socket sock;

            public reciver1()
            {
                IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
                ipEnd = new IPEndPoint(ipAddress[0], 1002);
                sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
                sock.Bind(ipEnd);
            }

            public DateTime DT = new DateTime();
            public static string fileName;
            public static string receivedPath;
            public static string curMsg = "Stopped";

            public void StartServer()
            {
                while (true)
                {
                    try
                    {
                        curMsg = "Starting...";
                        sock.Listen(100);
                        curMsg = "Running and waiting to receive file.";
                        Socket clientSock = sock.Accept();

                        // Read one framed message: [4-byte name length][file name][file data].
                        byte[] clientData = new byte[1024 * 5000];
                        int receivedBytesLen = clientSock.Receive(clientData);
                        curMsg = "Receiving data...";
                        int fileNameLen = BitConverter.ToInt32(clientData, 0);
                        fileName = Encoding.ASCII.GetString(clientData, 4, fileNameLen);

                        BinaryWriter bWrite = new BinaryWriter(File.Open(receivedPath + "/" + fileName, FileMode.Append));
                        bWrite.Write(clientData, 4 + fileNameLen, receivedBytesLen - 4 - fileNameLen);
                        curMsg = "Saving file...";
                        bWrite.Close();
                        clientSock.Close();

                        // timer1_Tick compares against this exact string; do not change it.
                        curMsg = "Recived & Saved file;";
                        MessageBox.Show("Accident Predicted slow your vehicle or take different route");
                    }
                    catch (Exception ex)
                    {
                        curMsg = "File receiving error: " + ex;
                        FileInfo ff1 = new FileInfo(@"E:\w.txt");
                        ff1.Delete();
                        // Loop around and restart the server.
                    }
                }
            }
            // Split the received file into 5 or 10 fixed-size .pak chunks.
            public void c()
            {
                const string tempadd = @"E:\s\";
                string inputFile = reciver1.receivedPath + "/" + reciver1.fileName;
                FileStream fs = new FileStream(inputFile, FileMode.Open, FileAccess.Read);
                int numberOfFiles = fs.Length > 500 ? 10 : 5;
                int sizeOfEachFile = (int)Math.Ceiling((double)fs.Length / numberOfFiles);

                for (int i = 1; i <= numberOfFiles; i++)
                {
                    string baseFileName = Path.GetFileNameWithoutExtension(inputFile);
                    string extension = Path.GetExtension(inputFile);
                    FileStream outputFile = new FileStream(
                        tempadd + "\\" + baseFileName + "." + i.ToString().PadLeft(5, '0') + extension + ".pak",
                        FileMode.Create, FileAccess.Write);
                    byte[] buffer = new byte[sizeOfEachFile];
                    int bytesRead;
                    if ((bytesRead = fs.Read(buffer, 0, sizeOfEachFile)) > 0)
                    {
                        outputFile.Write(buffer, 0, bytesRead);
                    }
                    outputFile.Close();
                }
                fs.Close();
            }
}
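The helper method c() above splits a received file into 5 or 10 .pak chunks of ceil(size / n) bytes each, with only the last chunk possibly shorter. The same chunking arithmetic, sketched in Python (the 500-byte threshold and chunk counts mirror the C# code; the function name is illustrative):

```python
import math

def split_bytes(data: bytes, threshold: int = 500):
    """Split data into 10 chunks if it exceeds the threshold, else 5;
    each chunk holds ceil(len(data) / n) bytes, dropping empty tails."""
    n = 10 if len(data) > threshold else 5
    size = math.ceil(len(data) / n)
    chunks = [data[i * size:(i + 1) * size] for i in range(n)]
    return [c for c in chunks if c]

chunks = split_bytes(b"x" * 1200)
```

Unlike the C# version, which relies on a single Read per chunk and may write short files if the stream returns fewer bytes than requested, this sketch slices an in-memory buffer, so reassembling the chunks always reproduces the input exactly.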

        private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            obj.StartServer();
        }

        // Simulate sensor readings: pick a random vehicle, route, and accident
        // flag plus a speed in 40-239, then send the record to the control unit.
        private void timer2_Tick(object sender, EventArgs e)
        {
            Random r1 = new Random();
            comboBox1.SelectedIndex = r1.Next(0, 5);
            comboBox2.SelectedIndex = r1.Next(0, 5);
            comboBox3.SelectedIndex = r1.Next(0, 2);
            textBox3.Text = r1.Next(40, 240).ToString();

            DT = DateTime.Now;
            StreamWriter sw = new StreamWriter("E:\\s\\sf1.txt", false);
            sw.WriteLine(comboBox1.SelectedItem.ToString());
            sw.WriteLine(comboBox2.SelectedItem.ToString());
            sw.WriteLine(textBox3.Text);
            sw.WriteLine(comboBox3.SelectedItem.ToString());
            sw.WriteLine(DT.ToString());
            sw.Close();

            label1.Visible = true;
            system.SendFile();
            label2.Visible = false;
        }
    }
}
SCREEN SHOTS:

EMERGENCY SERVICES SCREEN:

CONTROL UNIT:

EMERGENCY SERVICES WAITING TO RECEIVE DATA:

VEHICLE TYPE IN DATABASE:

RUNNING & WAITING TO RECEIVE (EMERGENCY SERVICES):

VEHICLE SPOTTED ACCIDENT:

ACCIDENT INFO RECEIVED BY EMERGENCY SERVICES & CONTROL UNIT:

RECEIVING FROM DIFFERENT ROUTES:

EMERGENCY SERVICES TAKE APPROPRIATE ACTION BY SENDING AMBULANCES:
CONCLUSION AND FUTURE SCOPE:

In the proposed system, we developed an activity recognition application service that uses various sensors (accelerometers, temperature sensors, GPS, and so on) to determine responders' activities during an emergency evacuation. The use of activity recognition applications in areas such as cognitive assistance, emergency healthcare, and emergency management is likely to increase significantly in the near future. In these areas, an activity recognition application may require large-scale sensor data collection, fast activity recognition, and timely delivery of results to the users (responders and the command center).
9. REFERENCES

1. "Indonesia Quake Toll Jumps Again," BBC News Report, 25 Jan. 2005; http://news.bbc.co.uk/2/hi/asia-pacific/4204385.stm.
2. R.R. Rao et al., Improving Disaster Management: The Role of IT in Mitigation, Preparedness, Response, and Recovery, Nat'l Academies Press, 2007.
3. P. Currion, C. de Silva, and B. Van de Walle, "Open Source Software for Disaster Management," Comm. ACM, vol. 50, no. 3, 2007, pp. 61–65.
4. Á. Monares, "Mobile Computing in Urban Emergency Situations: Improving the Support to Firefighters in the Field," J. Expert Systems with Applications, vol. 38, no. 2, 2011, pp. 1255–1267.
5. M. Tsai and N. Yau, "Improving Information Access for Emergency Response in Disasters," Natural Hazards, vol. 66, no. 2, 2013, pp. 343–354.
6. Z. Sanaei et al., "Heterogeneity in Mobile Cloud Computing: Taxonomy and Open Challenges," IEEE Comm. Surveys & Tutorials, vol. 16, no. 1, 2014, pp. 369–392.
7. M. Satyanarayanan et al., "The Role of Cloudlets in Hostile Environments," IEEE Pervasive Computing, vol. 12, no. 4, 2013, pp. 40–49.
8. K. Ha et al., "Towards Wearable Cognitive Assistance," Proc. 12th Ann. Int'l Conf. Mobile Systems, Applications, and Services (MobiSys 14), 2014, pp. 68–81.
9. A. Gani et al., "A Review on Interworking and Mobility Techniques for Seamless Connectivity in Mobile Cloud Computing," J. Network and Computer Applications, vol. 43, Aug. 2014, pp. 84–102.
10. M. Armbrust et al., "A View of Cloud Computing," Comm. ACM, vol. 53, no. 4, 2010, pp. 50–58.
11. K. Andersson, D. Granlund, and C. Åhlund, "M4: Multimedia Mobility Manager: A Seamless Mobility Management Architecture Supporting Multimedia Applications," Proc. 6th Int'l Conf. Mobile and Ubiquitous Multimedia, 2007, pp. 6–13.
12. S.H. Zanakis et al., "Multi-Attribute Decision Making: A Simulation Comparison of Select Methods," European J. Operational Research, vol. 107, no. 3, 1998, pp. 507–529.
13. M. Satyanarayanan et al., "The Case for VM-Based Cloudlets in Mobile Computing," IEEE Pervasive Computing, vol. 8, no. 4, 2009, pp. 14–23.
14. N. Fernando, S.W. Loke, and W. Rahayu, "Mobile Cloud Computing: A Survey," Future Generation Computer Systems, vol. 29, no. 1, 2013, pp. 84–106.
15. S. Saguna, A. Zaslavsky, and D. Chakraborty, "Complex Activity Recognition Using Context-Driven Activity Theory and Activity Signatures," ACM Trans. Computer–Human Interaction, vol. 20, no. 6, 2013, article 32.
