ABSTRACT
Over the last decades, the total number of vehicles on our roads has experienced a
remarkable growth, making traffic density higher and increasing the drivers' attention
requirements. The immediate effect of this situation is a dramatic increase of traffic accidents
on the road, representing a serious problem in most countries. As an example, 2,478 people died
on Spanish roads in 2010, which means one death for every 18,551 inhabitants, and 34,500 people
in the whole European Union died as a result of a traffic accident in 2009. To reduce the number
of road fatalities, vehicular networks will play an increasing role in the Intelligent Transportation
Systems (ITS) area. Most ITS applications, such as road safety, fleet management, and navigation,
will rely on data exchanged between the vehicle and the roadside infrastructure (V2I), or even
directly between vehicles (V2V).
ARCHITECTURE OVERVIEW
Figure 1 presents an overview of the vehicular architecture used to develop our system.
The proposed system consists of several components with different functions. Firstly, vehicles
should incorporate an On-Board Unit (OBU) responsible for: (i) detecting when there has been a
potentially dangerous impact for the occupants, (ii) collecting available information coming from
sensors in the vehicle, and (iii) communicating the situation to a Control Unit (CU) that will
accordingly address the handling of the warning notification.
Fig. 1. Architecture of our proposed system for automatic accident notification and assistance
using vehicular networks
Next, the notification of the detected accidents is made through a combination of both
V2V and V2I communications. Finally, the destination of all the collected information is the
Control Unit; it will handle the warning notification, estimating the severity of the accident and
communicating the incident to the appropriate emergency services. The OBU definition is crucial
for the proposed system. This device must be technically and economically feasible, as its
adoption in a wide range of vehicles could become massive in the near future. In addition, the
system should be open to future software updates.
Our proposed architecture provides: (i) direct communication between the vehicles
involved in the accident, (ii) automatic sending of a data file containing important information
about the accident to the Control Unit, and (iii) a preliminary and automatic assessment of the
damage to the vehicle and its occupants, based on the information coming from the involved
vehicles and a database of accident reports. According to the reported information and the
preliminary accident estimation, the system will alert the required rescue resources to optimize
the accident assistance.
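The data file sent to the Control Unit could be sketched as follows. This is a hypothetical illustration: the paper does not specify the message format, so the field names, units, and the use of JSON are all assumptions made for clarity.

```python
import json

# Hypothetical sketch of the data file an OBU might send to the Control Unit.
# Field names and units are assumptions for illustration; the paper does not
# specify the exact message format.
def build_notification(vehicle_id, position, sensor_data):
    """Assemble an accident notification as a JSON document."""
    report = {
        "vehicle_id": vehicle_id,          # assumed identifier scheme
        "position": position,              # (lat, lon) from the GPS receiver
        "airbag_triggered": sensor_data.get("airbag", False),
        "speed_before_impact_kmh": sensor_data.get("speed", 0),
        "fuel_level_pct": sensor_data.get("fuel", None),
    }
    return json.dumps(report)

msg = build_notification("V-042", (40.4168, -3.7038), {"airbag": True, "speed": 87})
print(msg)
```

A self-describing format like this would let the Control Unit parse notifications from heterogeneous OBUs, and extra sensor fields could be added in future software updates without breaking older parsers.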
ON-BOARD UNIT STRUCTURE
The main objective of the proposed OBU lies in obtaining the available information from
sensors inside the vehicle to determine when a dangerous situation occurs, and reporting that
situation to the nearest Control Unit, as well as to other nearby vehicles that may be affected.
Figure 2 shows the OBU system, which relies on the interaction between sensors, the data
acquisition unit, the processing unit, and wireless interfaces:
• In-vehicle sensors. They are required to detect accidents and provide information about
their causes. Accessing the data from in-vehicle sensors is possible nowadays using the On-Board
Diagnostics (OBD) standard interface, which serves as the entry point to the vehicle's internal
bus. This standard has been mandatory in Europe and the USA since 2001, and it thus covers the
majority of the vehicles currently on the road, a percentage that will keep growing as very old
vehicles are replaced by new ones.
• Data Acquisition Unit (DAU). This device is responsible for periodically collecting data from
the sensors available in the vehicle (airbag triggers, speed, fuel levels, etc.), converting them to
a common format, and providing the collected data set to the OBU Processing Unit.
• OBU Processing Unit. It is in charge of processing the data coming from sensors, determining
whether an accident occurred, and notifying dangerous situations to nearby vehicles, or directly
to the Control Unit. The information from the DAU is gathered, interpreted, and used to
determine the vehicle's current status. This unit must also have access to a positioning device
(such as a GPS receiver), and to different wireless interfaces, thereby enabling communication
between the vehicle and the remote control center.
The Control Unit (CU) is associated with the response center in charge of receiving
notifications of accidents from the OBUs installed in vehicles. In particular, the Control Unit is
responsible for dealing with warning messages, retrieving information from them, and notifying
the emergency services about the conditions under which the accident occurred.
The KDD approach can be defined as the nontrivial process of identifying valid, novel,
potentially useful, and understandable patterns from existing data.
The KDD process begins with the understanding of the application-specific domain and
the necessary prior knowledge. After the acquisition of initial data, a series of phases are
performed:
1) Selection: This phase determines the information sources that may be useful, and then it
transforms the data into a common format.
2) Preprocessing: In this stage, the selected data must be cleaned (noise reduction or modeling)
and preprocessed (missing data handling).
3) Transformation: This phase is in charge of performing a reduction and projection of the data
to find relevant features that represent the data depending on the purpose of the task.
4) Data mining: This phase basically selects mining algorithms and selection methods which will
be used to find patterns in data.
5) Interpretation/Evaluation: Finally, the extracted patterns must be interpreted. This step may
also include displaying the patterns and models, or displaying the data taking such models into
account.
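The five phases above can be sketched as a chain of functions. This is a minimal skeleton with placeholder logic (the field names, the cleaning policy, and the trivial mining rule are all invented for illustration); real selection, cleaning, and mining steps would replace each stub.

```python
# Minimal skeleton of the five KDD phases listed above, with placeholder logic.
def selection(raw_records):
    # 1) keep only the fields of interest, in a common format
    return [{"speed": r.get("speed"), "airbag": r.get("airbag")} for r in raw_records]

def preprocessing(records):
    # 2) drop records with missing data (a simple cleaning policy)
    return [r for r in records if None not in r.values()]

def transformation(records):
    # 3) project onto the features judged relevant for the task
    return [(r["speed"], r["airbag"]) for r in records]

def data_mining(features):
    # 4) stand-in for a mining algorithm: a trivial hand-written rule
    return [spd > 100 or bag for spd, bag in features]

def evaluate(patterns):
    # 5) interpretation: summarize the extracted patterns
    return sum(patterns), len(patterns)

raw = [{"speed": 120, "airbag": True}, {"speed": 60, "airbag": False},
       {"speed": None, "airbag": True}]
flagged, total = evaluate(data_mining(transformation(preprocessing(selection(raw)))))
print(flagged, total)
```

The point of the skeleton is the ordering: each phase consumes the previous phase's output, so omitting an early phase (as later sections argue many works do) degrades everything downstream.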
We will deal with road accidents in two dimensions: (i) damage to the vehicle (indicating
the possibility of traffic problems or the need for cranes in the area of the accident), and (ii)
passenger injuries. These two dimensions seem to be related, since heavily damaged vehicles are
usually associated with low survival possibilities for the occupants. Consequently, we will use
the estimations obtained with our system about the damage to the vehicle to help in the
prediction of the occupants' injuries. Finally, our system will benefit from additional knowledge
to improve its accuracy, grouping accidents according to their degree of similarity.
We can use the criteria used in numerous studies about accidents, including some tests
such as the Euro NCAP, in which crashes are divided and analyzed separately depending on the
main direction of the impact registered due to the collision. The following sections contain the
results of the different phases of our KDD proposal.
In the traffic accidents domain, the most relevant sets of information in GES are:
(i) Accident, which contains the crash characteristics and environmental conditions at
the time of the accident,
(ii) Vehicle, which refers to the vehicles and drivers involved in the crash, and
(iii) Person, i.e., the people involved in the crash.
We will integrate the data harvested during the year 2011 into two different self-built sets:
one for the vehicles and another one for the occupants. Using the data contained in the GES
database, we classify the damage to vehicles into three categories:
Focusing on passenger injuries, we will also use three different classes to determine
their severity level:
After preprocessing the selected GES data, no noise or inaccuracies were detected, as all
the nominal and numerical values were reasonable. Due to the large number of records available
in the database, we decided to only use those accident records with all the required information
complete. After removing incomplete instances, our data sets consist of 14,227 full instances of
accident reports (5,604 front crashes, 4,551 side crashes, and 4,072 rear-end crashes). These
accidents represent different types of collisions in both urban and inter-urban areas.
• Front collisions: 1,418 (25.3%) in urban area, and 4,186 (74.7%) in inter-urban area.
• Side collisions: 1,593 (35.0%) in urban area, and 2,958 (65.0%) in inter-urban area.
• Rear-end collisions: 1,613 (39.6%) in urban area, and 2,459 (60.4%) in inter-urban area.
Using these data, we achieve a total distribution of 4,624 accidents in urban areas, which
correspond to 32.5% of the total accidents, and 9,603 accidents in inter-urban areas (67.5%).
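The totals and percentages above follow directly from the per-collision-type counts, which a few lines of code can verify:

```python
# Consistency check of the counts reported above:
# per-type (urban, inter-urban) pairs should sum to 14,227 total accidents,
# with a 32.5% / 67.5% urban / inter-urban split.
counts = {
    "front": (1418, 4186),
    "side":  (1593, 2958),
    "rear":  (1613, 2459),
}
urban = sum(u for u, _ in counts.values())
inter = sum(i for _, i in counts.values())
total = urban + inter
print(urban, inter, total)            # 4624 9603 14227
print(round(100 * urban / total, 1))  # 32.5
print(round(100 * inter / total, 1))  # 67.5
```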
TRANSFORMATION PHASE
This phase consists of developing a reduction and projection of the data to find relevant
features that represent the characteristics of the data depending on the objective. We selected a
potential subset of variables which could be obtained from the on-board sensors of the vehicle
or auxiliary devices such as the GPS. Those variables include the type of vehicle, the speed just
before the accident, and the airbag status. Even though the GES database does not include
information about the measured accelerations, this information could be filled in using our
proposed system with data collected from notified accidents, and incorporated into future
versions of the classification algorithm.
Concerning passengers, there are specific characteristics of each person that are not
directly accessible, but might help to improve the prediction accuracy. We added two of these
personal variables to our data (age and sex), which will be used to study their relevance to the
injuries suffered. Weka provides a wide variety of feature selection algorithms. Among them,
we selected three of the most commonly used.
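One common criterion behind such feature selection algorithms is information gain, the entropy reduction a feature provides about the class. The sketch below implements it in plain Python on toy data; the feature names and values are invented, not taken from the GES data set, and the paper's actual evaluation was done with Weka's evaluators rather than this code.

```python
import math
from collections import Counter

# Information gain: a common feature-selection score (toy data, for illustration).
def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy reduction obtained by splitting the labels on a feature."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [lab for f, lab in zip(feature_values, labels) if f == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

labels  = ["severe", "severe", "minor", "minor"]
airbag  = [True, True, False, False]      # perfectly predicts the label here
weekday = ["mon", "tue", "mon", "tue"]    # unrelated to the label here
print(information_gain(airbag, labels) > information_gain(weekday, labels))  # True
```

A ranking of features by such a score is what lets a study keep only the variables that can actually be read from the vehicle while discarding uninformative ones.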
2. LITERATURE SURVEY
Several approaches can be found in the existing literature with the objective of increasing
traffic safety through the use of telecommunication technologies, and also in the field of accident
severity estimation using historical data.
A. Using Telecommunication Technologies to Improve Traffic Safety
The U.S. Department of Transportation (DOT) developed some projects similar to ours with
the goal of improving traffic safety through the use of vehicular communication, based on testing
the effectiveness and safety benefits of wireless connected vehicle technology in real-world,
multimodal driving conditions.
Some preliminary results regarding the distance traveled by warning messages, the number of
relaying vehicles, and communication times in a real experiment in the streets of Los Angeles
(USA) in 2011 can be found. However, these experiments only include V2V communications and
the notification of dangerous situations between vehicles, whereas our system mainly concentrates
on improving the decision-making process that follows the occurrence of an accident. In addition,
the tests were performed using the IEEE 802.11b and 802.11g standards, instead of the IEEE
802.11p standard, which was specially designed to be used in vehicular environments.
There are also some projects that make use of ECG (electrocardiogram) sensors to
monitor the drivers' condition and detect possible health problems that could endanger traffic
safety. The authors propose a condition monitoring system to obtain physiological signals for
monitoring the car driver's health condition by means of ECG and PPG (photoplethysmogram)
sensors attached to the steering wheel. The signals concerning heart rate are transmitted to a
server PC via a Personal Area Network (PAN) for practical tests, being analyzed to detect
drowsiness and fatigue. The results obtained from this system could be used to inform nearby
vehicles about a dangerous driver status, but the system does not include notification of
abnormal statuses, since only a PAN is considered.
Palantei et al. designed a wireless system for remotely monitoring heart pulses obtained
from ECG sensors. This system allows communication of the monitoring information 50 to 250
meters away from the person, but it is not indicated how the signal is processed to classify the
status of the person. Another proposed monitoring system includes non-intrusive active
electrodes installed on the seats of the vehicle. The data collected is sent through a wireless PAN,
and the processing of the data concerning heart rate variability in time and frequency allows
determining whether the driver is tired or stressed.
We can see that these approaches are limited by the wireless technology used, which
provides a very short communication range. Moreover, they have only been used to determine
the fatigue or stress level, probably due to the difficulty of finding real cases to test their
efficiency in the estimation of the injuries suffered after an accident. The integration of ECG
sensors in modern vehicles could be an excellent opportunity to collect information about health
signs after the occurrence of an accident, since our proposed architecture would allow the
notification of the gathered data to the Control Unit for further processing and classification by
means of intelligent algorithms.
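A first screening step on such ECG-derived data could be as simple as flagging an abnormal mean heart rate from beat-to-beat (R-R) intervals. The sketch below is purely illustrative: the 50-120 bpm band is an assumed band, not a clinical threshold or one taken from the cited works, and real systems analyze heart rate variability in far more depth.

```python
import statistics

# Hedged sketch: flag an abnormal mean heart rate from R-R intervals.
# The 50-120 bpm band is an assumption for illustration only.
def heart_rate_bpm(rr_intervals_s):
    """Mean heart rate from R-R intervals (seconds between beats)."""
    return 60.0 / statistics.mean(rr_intervals_s)

def abnormal_status(rr_intervals_s, low=50.0, high=120.0):
    bpm = heart_rate_bpm(rr_intervals_s)
    return bpm < low or bpm > high

print(abnormal_status([0.8, 0.85, 0.8]))   # ~73 bpm -> False
print(abnormal_status([0.4, 0.42, 0.41]))  # ~146 bpm -> True
```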
B. Previous Approaches towards Accident Severity Estimation using Data Mining
Despite the interest that may arise from understanding the influence of various factors on road
accidents, the number of works about this topic in the literature is not particularly large. In
addition, most attempts to carry out a data mining process related to traffic accidents only
considered data from a single city or a very small area, making the results only slightly
representative. Several works are based on data obtained from the Traffic Office of Ethiopia,
since this country presents one of the largest numbers of accidents per capita. Beshah and Hill
used data from 18,288 accidents around Addis Ababa as the basic data set. This study uses
Naïve Bayes, decision trees, and k-nearest neighbors (KNN) algorithms to classify the data
using a cross-validation methodology, with accuracy values close to 80%.
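The cross-validation methodology mentioned here can be sketched compactly. The example below uses a trivial majority-class baseline instead of Naïve Bayes or KNN, on synthetic labels invented for illustration (not the Addis Ababa data), but the fold-splitting and accuracy-averaging mechanics are the same.

```python
from collections import Counter

# Toy k-fold cross-validation with a majority-class baseline (synthetic data).
def k_fold_accuracy(labels, k=5):
    """Average accuracy of a majority-class predictor under k-fold CV."""
    folds = [labels[i::k] for i in range(k)]  # deterministic interleaved folds
    accs = []
    for i in range(k):
        train = [lab for j, f in enumerate(folds) if j != i for lab in f]
        majority = Counter(train).most_common(1)[0][0]
        test = folds[i]
        accs.append(sum(lab == majority for lab in test) / len(test))
    return sum(accs) / len(accs)

labels = ["no_injury"] * 80 + ["severe"] * 20
print(round(k_fold_accuracy(labels), 2))
```

With an 80/20 class imbalance like this toy set, the baseline already scores 0.8, which is why reported accuracies near 80% on imbalanced accident data should be read against the class distribution.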
However, the authors only provided estimations for the whole accident, not for single
occupants. Data from Ethiopia was also used to build regression tree models for accident
classification. Only 13 out of the 36 variables available in the data were used to build the
classification models, but the selection process was not shown, and again only estimations about
the whole accident were provided. The area of South Korea was also selected to develop
classification models based on artificial neural networks, decision trees, and logistic regression.
The data set involved 11,564 accidents, and the authors concluded that the different classification
algorithms provide similar results in terms of accuracy, with the use of protection devices, such
as the seat belt and the airbag, being the most relevant factors to classify accidents.
This work was extended using ensemble methods (i.e., multiple models to obtain better
predictive performance than could be obtained from any of the constituent models) combined
with a prior assignment of instances through clustering, attaching a different classification
model to each cluster, which produced a better class assignment. More recently, Chong et al.
selected data from all over the United States obtained during the 1995-2000 period to propose a
set of models based on artificial neural networks, decision trees, and Support Vector Machines
(SVMs).
All the classification models presented similar accuracy results, and they were highly
effective at recognizing fatal injuries. Finally, some authors have focused on the characteristics
of specific road segments, instead of using the data from individual vehicles. Clustering of
accident hotspots was performed by Anderson in order to determine effective strategies for the
reduction of high-density accident areas. The authors studied the spatial patterns of injury
related to road accidents in London (UK), and they found several hotspots with relevant
significance using K-means clustering.
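The K-means procedure behind such hotspot detection can be sketched in a few lines. The coordinates below are made-up 2-D points, not the London data, and the initial centroids are fixed so the run is deterministic; real studies seed and validate the clustering far more carefully.

```python
# Compact K-means (Lloyd's algorithm) on made-up 2-D accident coordinates.
def kmeans(points, centroids, iterations=10):
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# two artificial hotspots around (0, 0) and (10, 10)
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (9, 10), (10, 9)]
centroids, clusters = kmeans(pts, centroids=[(0, 0), (10, 10)])
print(centroids)
```

Each resulting centroid marks a candidate hotspot; assigning accidents to their nearest centroid is what turns raw coordinates into "high-density areas" that countermeasures can target.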
Nayak et al. used a road-based approach for modeling the crash proneness of road
segments using available road and crash attributes, classifying the roads depending on their
"crash proneness". They also used a prior clustering of the accidents with similar features, and a
consequent classification of the data by means of decision trees. However, they did not establish
different severity levels for the accidents studied. From previous works, we detected significant
shortcomings when attempting to combine their results with vehicular networks, since existing
works about estimating the severity of road accidents have not been used to improve the
assistance to injured passengers. All the above papers used a whole variety of attributes to build
the classification models, whereas only some of them can be effectively extracted from the
vehicle itself (the driver's inebriation level, for instance, cannot). In addition, none of them used
an adequate feature selection algorithm to select the optimal variable subset.
Finally, some of the models are extensively used (decision trees), while other interesting
methods received minor attention (SVMs and Bayesian networks). To the best of our knowledge,
none of these approaches have been implemented and tested in a real environment, since they
only make use of historical data. It is also noteworthy that existing proposals in the literature
trying to estimate the severity of a traffic accident do not develop a complete KDD process. In
fact, the only phase of the KDD process that has received widespread attention is the data
mining phase, while the rest have been overlooked or summarized as much as possible. Although
data mining is a very important phase, the results obtained when omitting the previous phases
can lose their interest or utility.
In general, ITS applications use a Traffic Management Centre (TMC) where data is
collected, analysed, and combined with other operational and control concepts to manage
complex transportation problems. Typically, several agencies share the administration of the
transport infrastructure through a network of traffic operation centres. There is often a localized
distribution of data and information, and the centres adopt different criteria to achieve the goals
of traffic management. This inter-dependent autonomy in operations and decision-making is
essential because of the heterogeneity of demand and performance characteristics of the
interacting subsystems.
3. SYSTEM ANALYSIS
FEASIBILITY STUDY
A feasibility study is a process which defines exactly what a project is and what strategic issues
need to be considered to assess its feasibility, or likelihood of succeeding. Feasibility studies are
useful both when starting a new business and when identifying a new opportunity for an existing
business. Ideally, the feasibility study process involves making rational decisions about a number
of enduring characteristics of a project, including:
Technical feasibility - do we have the technology? If not, can we get it?
Operational feasibility - do we have the resources to build the system? Will the system be
acceptable? Will people use it?
Economic feasibility - are the benefits greater than the costs?
TECHNICAL FEASIBILITY
OPERATIONAL FEASIBILITY
Operational feasibility is a measure of how well a proposed system solves the problems
and takes advantage of the opportunities identified during scope definition, and how well it
satisfies the requirements identified in the requirements analysis phase of system development.
ECONOMIC FEASIBILITY
Economic analysis is the most frequently used method for evaluating the effectiveness of
a candidate system. More commonly known as cost/benefit analysis, the procedure is to
determine the benefits and savings that are expected from a candidate system and compare them
with costs. If benefits outweigh costs, then the decision is made to design and implement the
system.
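The cost/benefit rule described above reduces to a single comparison. The figures below are invented placeholders purely to illustrate the decision procedure, not estimates for this project.

```python
# Minimal illustration of the cost/benefit decision rule stated above:
# design and implement the system only if benefits and savings outweigh costs.
def economically_feasible(benefits, costs):
    return sum(benefits) > sum(costs)

benefits = [120_000, 45_000]   # placeholder: assumed savings and benefits
costs    = [80_000, 30_000]    # placeholder: assumed build and operating costs
print(economically_feasible(benefits, costs))  # True
```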
DISADVANTAGES:
1. Emergency services are based on incomplete or inaccurate data.
2. Dramatic increase of traffic accidents.
ADVANTAGES:
1. A fast and accurate estimation of the severity of the accident.
2. Reduces the number of road fatalities by means of vehicular networks.
4. SYSTEM SPECIFICATION
Hardware Requirements:
Software Requirements:
SOFTWARE DESCRIPTION
What is .NET?
Many people reckon that it is Microsoft's way of controlling the Internet, which is
false. .NET is Microsoft's strategy of software that provides services to people any time, any
place, on any device. A more accurate definition is that .NET is an XML Web Services platform
which allows us to build rich .NET applications, lets users interact with the Internet using a
wide range of smart devices (tablet devices, Pocket PCs, web phones, etc.), and allows
developers to build and integrate Web Services.
Conceptually, the CLR and the JVM are similar in that they are both runtime
infrastructures that abstract the underlying platform differences. However, while the JVM
officially supports only the Java language, the CLR supports any language that can be
represented in its Common Intermediate Language (CIL). The JVM executes bytecode, so it
can, in principle, support many languages, too.
Another conceptual difference between the two infrastructures is that Java code runs on
any platform with a JVM, whereas .NET code runs only on platforms that support the CLR. In
April 2003, the International Organization for Standardization and the International
Electrotechnical Commission (ISO/IEC) recognized a functional subset of the CLR, known as
the Common Language Infrastructure (CLI), as an international standard.
This development, initiated by Microsoft and developed by Ecma International, a
European standards organization, opens the way for third parties to implement their own
versions of the CLR on other platforms.
The layer on top of the CLR is a set of framework base classes. This set of classes is
similar to the set of classes found in STL, MFC, ATL, or Java. These classes support
rudimentary input and output functionality, string manipulation, security management, network
communications, thread management, text management, reflection functionality, collections
functionality, as well as other functions.
On top of the framework base classes is a set of classes that extend the base classes to
support data management and XML manipulation. These classes, called ADO.NET, support
persistent data management—data that is stored on backend databases. Alongside the data
classes, the .NET Framework supports a number of classes to let you manipulate XML data and
perform XML searching and XML translations.
Classes in three different technologies (including web services, Web Forms, and
Windows Forms) extend the framework base classes and the data and XML classes. Web
services include a number of classes that support the development of lightweight distributed
components, which work even in the face of firewalls and NAT software. These components
support plug-and-play across the Internet, because web services employ standard HTTP and
SOAP.
Web Forms, the key technology behind ASP.NET, include a number of classes that allow
you to rapidly develop web Graphical User Interface (GUI) applications. If you're currently
developing web applications with Visual InterDev, you can think of Web Forms as a facility
that allows you to develop web GUIs.
Windows Forms support a set of classes that allow you to develop native Windows GUI
applications. You can think of these classes collectively as a much better version of the MFC in
C++ because they support easier and more powerful GUI development and provide a common,
consistent interface that can be used in all languages.
Features of .NET
Microsoft .NET is a set of Microsoft software technologies for rapidly building
and integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily and
securely interoperate. There’s no language barrier with .NET: there are numerous languages
available to the developer, including Managed C++, C#, Visual Basic, and JScript. The .NET
framework provides the foundation for components to interact seamlessly, whether locally or
remotely on different platforms. It standardizes common data types and communications
protocols so that components created in different languages can easily interoperate.
“.NET” is also the collective name given to various software components built
upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so on).
The second most important piece of the .NET Framework is the Framework Class Library
(FCL). To write code, you need a foundation of available classes to
access the resources of the operating system, database server, or file server. The FCL is made up
of a hierarchy of namespaces that expose classes, structures, interfaces, enumerations, and
delegates that give you access to these resources. The namespaces are logically defined by
functionality. For example, the System.Data namespace contains all the functionality available
for accessing databases. This namespace is further broken down into System.Data.SqlClient,
which exposes functionality specific to SQL Server, and System.Data.OleDb, which exposes
specific functionality for accessing OLEDB data sources. The bounds of a namespace aren't
necessarily defined by specific assemblies within the FCL; rather, they're focused on
functionality and logical grouping. In total, there are more than 20,000 classes in the FCL, all
logically grouped in a hierarchical manner. Figure 1.8 shows where the FCL fits into the .NET
Framework and the logical grouping of namespaces.
To use an FCL class in your application, you use the Imports statement in Visual
Basic .NET or the using statement in C#. When you reference a namespace in Visual Basic .NET
or C#, you also get the convenience of auto-complete and auto-list members when you access the
objects' types using Visual Studio .NET. This makes it very easy to determine what types are
available for each class in the namespace you're using. As you'll see, it's very easy to start
coding in Visual Studio .NET.
The Structure of a .NET Application
To understand how the common language runtime manages code execution, you must
examine the structure of a .NET application. The primary unit of a .NET application is the
assembly. An assembly is a self-describing collection of code, resources, and metadata. The
assembly manifest contains information about what is contained within the assembly. The
assembly manifest provides:
• Identity information, such as the assembly's name and version number
• A list of all types exposed by the assembly
• A list of other assemblies required by the assembly
• A list of code access security instructions, including permissions required by the
assembly and permissions to be denied the assembly
Each assembly has one and only one assembly manifest, and it contains all the description
information for the assembly. However, the assembly manifest can be contained in its own file or
within one of the assembly’s modules.
An assembly contains one or more modules. A module contains the code that makes up
your application or library, and it contains metadata that describes that code. When you compile
a project into an assembly, your code is converted from high-level code to IL. Because all
managed code is first converted to IL code, applications written in different languages can easily
interact. For example, one developer might write an application in Visual C# that accesses a
DLL in Visual Basic .NET. Both resources will be converted to IL modules before being
executed, thus avoiding any language-incompatibility issues.
Each module also contains a number of types. Types are templates that describe a set of
data encapsulation and functionality. There are two kinds of types: reference types (classes) and
value types (structures).
Each type is described to the common language runtime in the assembly manifest. A type can
contain fields, properties, and methods, each of which should be related to a common
functionality.
For example, you might have a class that represents a bank account. It contains fields,
properties, and methods related to the functions needed to implement a bank account. A field
represents storage of a particular type of data. One field might store the name of an account
holder, for example. Properties are similar to fields, but properties usually provide some kind of
validation when data is set or retrieved. You might have a property that represents an account
balance.
When an attempt is made to change the value, the property can check to see if the
attempted change is greater than a predetermined limit. If the value is greater than the limit, the
property does not allow the change. Methods represent behavior, such as actions taken on data
stored within the class or changes to the user interface. Continuing with the bank account
example, you might have a Transfer method that transfers a balance from a checking account to a
savings account, or an Alert method that warns users when their balances fall below a
predetermined level.
When you compile a .NET application, it is not compiled to binary machine code; rather,
it is converted to IL. This is the form that your deployed application takes—one or more
assemblies consisting of executable files and DLL files in IL form. At least one of these
assemblies will contain an executable file that has been designated as the entry point for the
application.
When execution of your program begins, the first assembly is loaded into memory. At
this point, the common language runtime examines the assembly manifest and determines the
requirements to run the program. It examines security permissions requested by the assembly and
compares them with the system’s security policy. If the system’s security policy does not allow
the requested permissions, the application will not run. If the application passes the system’s
security policy, the common language runtime executes the code. It creates a process for the
application to run in and begins application execution.
When execution starts, the first bit of code that needs to be executed is loaded into
memory and compiled into native binary code from IL by the common language runtime’s Just-
In-Time (JIT) compiler. Once compiled, the code is executed and stored in memory as native
code. Thus, each portion of code is compiled only once when an application executes. Whenever
program execution branches to code that has not yet run, the JIT compiler compiles it ahead of
execution and stores it in memory as binary code. This way, application performance is
maximized because only the parts of a program that are executed are compiled.
The .NET Framework base class library contains the base classes that provide many of
the services and objects you need when writing your applications. The class library is organized
into namespaces. A namespace is a logical grouping of types that perform related functions. For
example, the System.Windows.Forms namespace contains all the types that make up Windows
forms and the controls used in those forms.
Namespaces are logical groupings of related classes. The namespaces in the .NET base
class library are organized hierarchically. The root of the .NET Framework is the System
namespace. Other namespaces can be accessed with the period operator. A typical namespace
construction appears as follows:
System
System.Data
System.Data.SqlClient
The first example refers to the System namespace. The second refers to the System.Data
namespace. The third example refers to the System.Data.SqlClient namespace.
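The period operator works the same way in code: a type can be referenced by its fully qualified name, or a using directive can bring a namespace into scope. A minimal sketch:

```csharp
// Referencing types via the namespace hierarchy described above.
using System;
using System.Text; // brings the System.Text namespace into scope

class Program
{
    static void Main()
    {
        // Fully qualified name, no using directive required:
        System.DateTime epoch = new System.DateTime(1970, 1, 1);

        // Short name, available because of `using System.Text;`:
        StringBuilder sb = new StringBuilder();
        sb.Append(epoch.Year);
        Console.WriteLine(sb.ToString()); // 1970
    }
}
```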
The following table introduces some of the more commonly used .NET base class namespaces.

Table. Representative .NET Namespaces

System: This namespace is the root for many of the low-level types required by the .NET Framework. It is the root for primitive data types as well, and it is the root for all the other namespaces in the .NET base class library.

System.Collections: This namespace contains classes that represent a variety of different container types, such as ArrayList, SortedList, Queue, and Stack. You also can find abstract classes, such as CollectionBase, which are useful for implementing your own collection functionality.

System.ComponentModel: This namespace contains classes involved in component creation and containment, such as attributes, type converters, and license providers.

System.Data: This namespace contains classes required for database access and manipulation, as well as additional namespaces used for data access.

System.Data.Common: This namespace contains a set of classes that are shared by the .NET managed data providers.

System.Data.OleDb: This namespace contains classes that make up the managed data provider for OLE DB data access.

System.Data.SqlClient: This namespace contains classes that are optimized for interacting with Microsoft SQL Server.

System.Drawing: This namespace exposes GDI+ functionality and provides classes that facilitate graphics rendering.

System.IO: In this namespace, you will find types for handling file system I/O.

System.Math: This namespace is home to common mathematics functions such as extracting roots and trigonometry.

System.Reflection: This namespace provides support for obtaining information about, and dynamic creation of, types at runtime.

System.Security: This namespace is home to types dealing with permissions, cryptography, and code access security.

System.Threading: This namespace contains classes that facilitate the implementation of multithreaded applications.

System.Windows.Forms: This namespace contains types involved in creating standard Windows applications. Classes that represent forms and controls reside here as well.
Data access in ADO.NET relies on two entities: the DataSet, which stores data on the
local machine, and the Data Provider, a set of components that mediates interaction between the
program and the database.
The Data Provider
The link to the database is created and maintained by a data provider. A data provider is not a
single component, rather it is a set of related components that work together to provide data in an
efficient, performance-driven manner. The first version of the Microsoft .NET Framework
shipped with two data providers: the SQL Server .NET Data Provider, designed specifically to
work with SQL Server 7 or later, and the OleDb .NET Data Provider, which connects with other
types of databases. Microsoft Visual Studio .NET 2003 added two more data providers: the
ODBC Data Provider and the Oracle Data Provider. Each data provider consists of versions of
the following generic component classes:
o Connection
o Command
o DataReader
o DataAdapter
The Command object provides direct execution of a command to the database. If the
command returns more than a single value, the Command object returns a DataReader to provide
the data. This data can be directly processed by application logic. Alternatively, you can use the
DataAdapter to fill a DataSet object. Updates to the database can be achieved through the
Command object or through the DataAdapter. The generic classes that make up the data providers
are summarized in the following sections.
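The DataReader/DataAdapter path described above needs a live database connection, so the sketch below shows only the in-memory half: a DataSet holding a table on the local machine, playing the role that a DataAdapter's Fill would populate. The table and column names are illustrative.

```csharp
using System;
using System.Data;

// A DataSet stores data on the local machine, disconnected from its source.
// In a full application, a DataAdapter would fill this from a database;
// here the table is built by hand so the sketch is self-contained.
class Program
{
    static void Main()
    {
        var ds = new DataSet();
        var accounts = new DataTable("Accounts");
        accounts.Columns.Add("Id", typeof(int));
        accounts.Columns.Add("Balance", typeof(decimal));
        accounts.Rows.Add(1, 500m);
        accounts.Rows.Add(2, 200m);
        ds.Tables.Add(accounts);

        // Once filled, the DataSet can be processed with no open connection:
        decimal total = 0m;
        foreach (DataRow row in ds.Tables["Accounts"].Rows)
            total += (decimal)row["Balance"];
        Console.WriteLine(total); // 700
    }
}
```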
REPORT:
A report is used to view and print information from the database. A report can group
records into many levels and compute totals and averages by checking values from many
records at once. A report is also attractive and distinctive because we have control over its
size and appearance.
MACRO:
A macro is a set of actions, each of which performs a task such as opening a form or
printing a report. We write macros to automate common tasks, making the work easier and saving
time.
History:-
In 1996, Sun Microsystems released the Java programming language with Microsoft soon
purchasing a license to implement it in their operating system. Java was originally meant to be a
platform independent language, but Microsoft, in their implementation, broke their license
agreement and made a few changes that would essentially inhibit Java's platform-independent
capabilities. Sun filed a lawsuit and Microsoft settled, deciding to create their own version of a
partially compiled, partially interpreted object-oriented programming language with syntax
closely related to that of C++.
During the development of .NET, the class libraries were originally written in a
language/compiler called Simple Managed C (SMC). In January 1999, Anders Hejlsberg formed
a team to build a new language at the time called Cool, which stood for "C like Object Oriented
Language". Microsoft had considered keeping the name "Cool" as the final name of the language,
but chose not to do so for trademark reasons. By the time the .NET project was publicly
announced at the July 2000 Professional Developers Conference, the language had been renamed
C#, and the class libraries and ASP.NET runtime had been ported to C#.
C#'s principal designer and lead architect at Microsoft is Anders Hejlsberg, who was previously
involved with the design of Visual J++, Borland Delphi, and Turbo Pascal. In interviews and
technical papers he has stated that flaws in most major programming languages (e.g. C++, Java,
Delphi, and Smalltalk) drove the fundamentals of the Common Language Runtime (CLR),
which, in turn, drove the design of the C# programming language itself. Some argue that C#
shares roots in other languages.
Features of C#:-
By design, C# is the programming language that most directly reflects the underlying Common
Language Infrastructure (CLI). Most of C#'s intrinsic types correspond to value-types
implemented by the CLI framework. However, the C# language specification does not state the
code generation requirements of the compiler: that is, it does not state that a C# compiler must
target a Common Language Runtime (CLR), or generate Common Intermediate Language (CIL),
or generate any other specific format. Theoretically, a C# compiler could generate machine code
like traditional compilers of C++ or FORTRAN; in practice, all existing C# implementations
target CIL.
There are no global variables or functions. All methods and members must be declared
within classes. It is possible, however, to use static methods/variables within public
classes instead of global variables/functions.
Local variables cannot shadow variables of the enclosing block, unlike C and C++.
Variable shadowing is often considered confusing by C++ texts.
C# supports a strict Boolean data type, bool. Statements that take conditions, such as
while and if, require an expression of a Boolean type. While C++ also has a Boolean
type, it can be freely converted to and from integers, and expressions such as if(a) require
only that a is convertible to bool, allowing a to be an int, or a pointer. C# disallows this
"integer meaning true or false" approach on the grounds that forcing programmers to use
expressions that return exactly bool can prevent certain types of programming mistakes
such as if (a = b) (use of = instead of ==).
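The strict bool rule above can be seen directly in code. The disallowed forms are shown as comments, since they do not compile:

```csharp
using System;

// C#'s strict Boolean type: conditions must be of type bool, so the
// classic C++ slip `if (a = b)` is a compile-time error for ints.
class Program
{
    static void Main()
    {
        int a = 3, b = 4;

        // if (a)      // compile-time error: cannot convert int to bool
        // if (a = b)  // compile-time error: assignment yields int, not bool

        if (a == b)    // comparison, type bool: allowed
            Console.WriteLine("equal");
        else
            Console.WriteLine("not equal");
    }
}
```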
In C#, memory address pointers can only be used within blocks specifically marked as
unsafe, and programs with unsafe code need appropriate permissions to run. Most object
access is done through safe object references, which are always either pointing to a valid,
existing object, or have the well-defined null value; a reference to a garbage-collected
object, or to random block of memory, is impossible to obtain. An unsafe pointer can
point to an instance of a value-type, array, string, or a block of memory allocated on a
stack. Code that is not marked as unsafe can still store and manipulate pointers through
the System.IntPtr type, but cannot dereference them.
Managed memory cannot be explicitly freed, but is automatically garbage collected.
Garbage collection addresses memory leaks. C# also provides direct support for
deterministic finalization with the using statement (supporting the Resource Acquisition
Is Initialization idiom).
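The using statement mentioned above gives deterministic disposal even though memory itself is reclaimed later by the garbage collector. The Resource class below is an illustrative stand-in for anything holding a file, socket, or other unmanaged resource:

```csharp
using System;

// Deterministic finalization with `using`: Dispose() runs as soon as
// the block exits, regardless of when the GC later reclaims the object.
class Resource : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("disposed");
    }
}

class Program
{
    static void Main()
    {
        using (var r = new Resource())
        {
            Console.WriteLine("working");
        } // Dispose() is called here, deterministically
        Console.WriteLine("after block");
    }
}
```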
Multiple inheritance is not supported, although a class can implement any number of
interfaces. This was a design decision by the language's lead architect to avoid
complication, avoid dependency hell and simplify architectural requirements throughout
CLI.
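A class cannot inherit from two classes, but it can implement any number of interfaces, as in this sketch (the interface and class names are illustrative):

```csharp
using System;

// No multiple class inheritance in C#, but a class may implement
// any number of interfaces.
interface IDriveable { void Drive(); }
interface IReportable { string Report(); }

class Vehicle : IDriveable, IReportable
{
    public void Drive() { Console.WriteLine("driving"); }
    public string Report() { return "OK"; }
}

class Program
{
    static void Main()
    {
        var v = new Vehicle();
        v.Drive();                     // via IDriveable
        Console.WriteLine(v.Report()); // via IReportable
    }
}
```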
C# is more type safe than C++. The only implicit conversions by default are those which
are considered safe, such as widening of integers and conversion from a derived type to a
base type. This is enforced at compile-time, during JIT, and, in some cases, at runtime.
There are no implicit conversions between Booleans and integers, nor between
enumeration members and integers (except for literal 0, which can be implicitly
converted to any enumerated type). Any user-defined conversion must be explicitly
marked as explicit or implicit, unlike C++ copy constructors (which are implicit by
default) and conversion operators (which are always implicit).
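User-defined conversions must declare themselves implicit or explicit, as the paragraph above notes. A minimal sketch, where Celsius is an illustrative struct rather than a framework type:

```csharp
using System;

// User-defined conversions are marked implicit or explicit explicitly.
struct Celsius
{
    public double Degrees;
    public Celsius(double d) { Degrees = d; }

    // Safe, lossless direction: allowed to happen implicitly.
    public static implicit operator double(Celsius c) { return c.Degrees; }

    // Potentially surprising direction: caller must write a cast.
    public static explicit operator Celsius(double d) { return new Celsius(d); }
}

class Program
{
    static void Main()
    {
        Celsius c = (Celsius)21.0; // explicit conversion requires the cast
        double d = c;              // implicit conversion, no cast needed
        Console.WriteLine(d);      // 21
    }
}
```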
Enumeration members are placed in their own scope.
C# provides syntactic sugar for a common pattern of a pair of methods, accessor (getter)
and mutator (setter) encapsulating operations on a single attribute of a class, in form of
properties.
Full type reflection and discovery is available.
C# currently (as of 3 June 2008) has 77 reserved words.
C# has a unified type system. This unified type system is called Common Type System (CTS).
A unified type system implies that all types, including primitives such as integers, are subclasses
of the System.Object class. For example, every type inherits a ToString() method. For
performance reasons, primitive types (and value types in general) are internally allocated on the
stack.
Value types
Reference types
Value types are plain aggregations of data. Instances of value types do not have referential
identity or referential comparison semantics - equality and inequality comparisons for value
types compare the actual data values within the instances, unless the corresponding operators are
overloaded. Value types are derived from System.ValueType, always have a default value, and
can always be created and copied. Some other limitations on value types are that they cannot
derive from each other (but can implement interfaces) and cannot have a default (parameterless)
constructor. Examples of value types are some primitive types, such as int (a signed 32-bit
integer), float (a 32-bit IEEE floating-point number), char (a 16-bit Unicode code point), and
System.DateTime (identifies a specific point in time with millisecond precision).
In contrast, reference types have the notion of referential identity - each instance of reference
type is inherently distinct from every other instance, even if the data within both instances is the
same. This is reflected in default equality and inequality comparisons for reference types, which
test for referential rather than structural equality, unless the corresponding operators are
overloaded (such as the case for System.String). In general, it is not always possible to create
an instance of a reference type, nor to copy an existing instance, or perform a value comparison
on two existing instances, though specific reference types can provide such services by exposing
a public constructor or implementing a corresponding interface (such as ICloneable or
IComparable). Examples of reference types are object (the ultimate base class for all other C#
classes), System.String (a string of Unicode characters), and System.Array (a base class for
all C# arrays).
Both type categories are extensible with user-defined types.
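The difference in equality semantics between the two categories shows up directly with user-defined types. The struct and class below are illustrative:

```csharp
using System;

// Value types compare by contained data; reference types compare by
// identity, unless the corresponding operators are overloaded.
struct PointValue { public int X, Y; }  // user-defined value type
class PointRef { public int X, Y; }     // user-defined reference type

class Program
{
    static void Main()
    {
        var a = new PointValue { X = 1, Y = 2 };
        var b = new PointValue { X = 1, Y = 2 };
        // Equals on a struct compares the actual data values:
        Console.WriteLine(a.Equals(b)); // True

        var p = new PointRef { X = 1, Y = 2 };
        var q = new PointRef { X = 1, Y = 2 };
        // Default reference equality: distinct instances, same data:
        Console.WriteLine(p.Equals(q));           // False
        Console.WriteLine(ReferenceEquals(p, p)); // True
    }
}
```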
VI. Boxing and unboxing
Boxing is the operation of converting a value of a value type into a value of a corresponding
reference type.
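A short sketch of boxing and the reverse operation, unboxing:

```csharp
using System;

// Boxing copies a value-type value into a reference-type wrapper on the
// heap; unboxing casts it back out into a value type.
class Program
{
    static void Main()
    {
        int i = 42;
        object boxed = i;    // boxing: int -> object
        int j = (int)boxed;  // unboxing: object -> int
        Console.WriteLine(j); // 42

        // The box holds a separate copy; changing i does not affect it.
        i = 7;
        Console.WriteLine((int)boxed); // 42
    }
}
```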
ACCESS PRIVILEGES
IIS provides several new access levels. The following values can set the type of access
allowed to specific directories:
o Read
o Write
o Script
o Execute
o Log Access
o Directory Browsing.
ActiveX
ActiveX applications, called controls, are downloaded and executed by the Web
browser, like Java applets. Unlike Java applets, controls can be installed permanently when they
are downloaded, eliminating the need to download them again. ActiveX's main advantage is that
it can do just about anything.
Several enterprising programmers have already used ActiveX to bring exciting new
capabilities to Web pages, such as "the Web page that turns off your computer" and "the Web
page that formats your disk drive".
Fortunately, ActiveX includes a signature feature that identifies the source of a control
and prevents controls from being modified. While this won't prevent a control from damaging
your system, we can specify which sources of controls we trust.
ActiveX has two main disadvantages:
ActiveX is proprietary.
ADO.NET
ADO.NET provides consistent access to data sources such as Microsoft SQL Server, as
well as data sources exposed via OLE DB and XML. Data-sharing consumer applications can
use ADO.NET to connect to these data sources and retrieve, manipulate, and update data.
ADO.NET cleanly factors data access from data manipulation into discrete components
that can be used separately or in tandem. ADO.NET includes .NET data providers for connecting
to a database, executing commands, and retrieving results. Those results are either processed
directly, or placed in an ADO.NET Dataset object in order to be exposed to the user in an ad-hoc
manner, combined with data from multiple sources, or remoted between tiers. The ADO.NET
Dataset object can also be used independently of a .NET data provider to manage data local to
the application or sourced from XML.
Why ADO.NET?
As application development has evolved, new applications have become loosely coupled
based on the Web application model. More and more of today's applications use XML to encode
data to be passed over network connections. Web applications use HTTP as the fabric for
communication between tiers, and therefore must explicitly handle maintaining state between
requests. This new model is very different from the connected, tightly coupled style of
programming that characterized the client/server era, where a connection was held open for the
duration of the program's lifetime and no special handling of state was required.
In designing tools and technologies to meet the needs of today's developer, Microsoft
recognized that an entirely new programming model for data access was needed, one that is built
upon the .NET Framework. Building on the .NET Framework ensured that the data access
technology would be uniform—components would share a common type system, design
patterns, and naming conventions.
ADO.NET was designed to meet the needs of this new programming model:
disconnected data architecture, tight integration with XML, common data representation with the
ability to combine data from multiple and varied data sources, and optimized facilities for
interacting with a database, all native to the .NET Framework.
ADO.NET coexists with ADO. While most new .NET applications will be written using
ADO.NET, ADO remains available to the .NET programmer through .NET COM
interoperability services.
XML Support
XML and data access are intimately tied—XML is all about encoding data, and data access is
increasingly becoming all about XML. The .NET Framework does not just support Web
standards—it is built entirely on top of them.
5. SYSTEM DESIGN
Level 0
Level 1
Level 2
UML DIAGRAMS
History Table:
Vehicle Table:
6. SYSTEM IMPLEMENTATION
MODULE DESCRIPTION
RECEPTION/INTERPRETATION MODULE
The first step for the CU is to receive a warning message from a collided vehicle, and so
there must be a module waiting for the arrival of messages and retrieving the values from the
different fields.
NORMALIZATION
Normalization is the process of breaking down a table into smaller tables, so that each
table deals with a single theme. There are three different kinds of modification anomalies; to
address them, the first, second, and third normal forms were formulated, and third normal form
(3NF) is considered sufficient for most practical purposes. Normalization should be undertaken
only after a thorough analysis and complete understanding of its implications.
First Normal Form is also called a "flat file". Each column should contain data in respect of a
single attribute, and no two rows may be identical. To bring a table to First Normal Form,
repeating groups of fields should be identified and moved to another table.
Third Normal Form normalization is needed where all attributes in a relation tuple are not
functionally dependent only on the key attribute. A transitive dependency is one in which one
attribute depends on a second, which in turn depends on a third, and so on.
7. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies, and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test; each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and expected
results.
Integration testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
System Test
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level –
interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Sample Code:
Emergency Server:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Net;
using System.Net.Sockets;
using System.IO;
using System.Data.SqlClient;
namespace receiver
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
reciver1.receivedPath = "";
}
public static string kl;
public static string lm;
// NOTE: the enclosing method header was lost in extraction; these statements
// presumably belong to a timer or background-worker callback on the form.
label2.Text = reciver1.receivedPath;
label4.Text = reciver1.curMsg;
if (kl == "y")
{
string vehicle, route, speed, accident, time, no_per = "", siv = "";
StreamReader sw = new StreamReader("E:\\sf2.txt");
string k = sw.ReadToEnd();
string[] split = k.Split('\n');
vehicle = split[0].Trim();
route = split[1].Trim();
speed = split[2].Trim();
accident = split[3].Trim();
time = split[4].Trim();
no_per = split[5].Trim();
siv = split[6].Trim();
sw.Close();
listBox1.Items.Add(vehicle);
listBox2.Items.Add(route);
listBox3.Items.Add(speed);
listBox4.Items.Add(accident);
listBox5.Items.Add(time);
listBox6.Items.Add(no_per);
listBox7.Items.Add(siv);
int mk = 1;
FileInfo f1 = new FileInfo("E:\\sf2.txt");
f1.Delete();
if (siv == "high")
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 2;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 4;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 6;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 15;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 25;
}
else if (int.Parse(no_per) > 50)
{
amb = 30;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else if (siv == "low")
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 2;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 3;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 4;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 8;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 10;
}
else if (int.Parse(no_per) > 50)
{
amb = 15;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else if (siv == "average")
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 2;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 3;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 7;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 20;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 20;
}
else if (int.Parse(no_per) > 50)
{
amb = 25;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else if (siv == "very high")
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 4;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 8;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 12;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 20;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 30;
}
else if (int.Parse(no_per) > 50)
{
amb = 40;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else if (siv == "low") // NOTE: duplicate of the earlier "low" branch; this branch is unreachable
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 2;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 2;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 3;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 6;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 10;
}
else if (int.Parse(no_per) > 50)
{
amb = 20;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else
{
}
label11.Text = "No. of Ambulance Alerts : " + amb.ToString() + "\nFire service : " + fire + "\nPolice : YES";
if (siv == "high")
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 2;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 4;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 6;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 15;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 25;
}
else if (int.Parse(no_per) > 50)
{
amb = 30;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else if (siv == "low")
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 2;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 3;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 4;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 8;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 10;
}
else if (int.Parse(no_per) > 50)
{
amb = 15;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else if (siv == "average")
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 2;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 3;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 7;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 20;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 20;
}
else if (int.Parse(no_per) > 50)
{
amb = 25;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else if (siv == "very high")
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 4;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 8;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 12;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 20;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 30;
}
else if (int.Parse(no_per) > 50)
{
amb = 40;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else if (siv == "low") // NOTE: duplicate of the earlier "low" branch; this branch is unreachable
{
if (int.Parse(no_per) > 1 && int.Parse(no_per) < 5)
{
amb = 2;
}
else if (int.Parse(no_per) > 4 && int.Parse(no_per) < 10)
{
amb = 2;
}
else if (int.Parse(no_per) > 10 && int.Parse(no_per) < 20)
{
amb = 3;
}
else if (int.Parse(no_per) > 20 && int.Parse(no_per) <= 30)
{
amb = 6;
}
else if (int.Parse(no_per) > 30 && int.Parse(no_per) <= 50)
{
amb = 10;
}
else if (int.Parse(no_per) > 50)
{
amb = 20;
}
else
{
}
if (route == "rout1")
{
fire = "yes";
}
if (route == "rout7")
{
fire = "yes";
}
if (route == "rout5")
{
fire = "yes";
}
if (route == "rout9")
{
fire = "yes";
}
}
else
{
}
label12.Text = "No. of Ambulance Alerts : " + amb.ToString() + "\nFire service : " + fire + "\nPolice : YES";
///////////////////////////////////////////
}
if (siv == "high" || siv == "very high")
{
system.SendFile();
timer2.Start();
}
asd = 1;
kl = "n";
}
reciver1 obj = new reciver1();
class reciver1
{
IPEndPoint ipEnd;
Socket sock;
public reciver1()
{
IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
ipEnd = new IPEndPoint(ipAddress[0], 1000);
sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
sock.Bind(ipEnd);
}
DateTime DT;
public static string fileName;
public static string receivedPath;
public static string curMsg = "Stopped";
public void StartServer()
{
int filesize = 0;
//try
//{
hj:
curMsg = "Starting...";
sock.Listen(100);
// MessageBox.Show(curMsg);
class system
{
public static string curMsg;
public static void SendFile()
{
string fileName = "E:\\s\\1.txt";
IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");//server ipaddress
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1002);
int filelen = 0;
try
{
}
}
}
else
{
this.BackColor = Color.Red;
}
}
asd++;
}
}
}
SERVER:
namespace receiver
{
public partial class Form1 : Form
{
SqlConnection con = new SqlConnection("Data Source=SHAMEER\\SQLEXPRESS;Initial Catalog=auto;Integrated
Security=True");
public Form1()
{
InitializeComponent();
reciver1.receivedPath = "";
}
public static string mk;
private void Form1_Load(object sender, EventArgs e)
{
reciver1.receivedPath = "D:\\";
mk = "n";
backgroundWorker1.RunWorkerAsync();
}
private void button2_Click(object sender, EventArgs e)
{
backgroundWorker1.RunWorkerAsync();
if(mk =="y")
{
string vehicle, route, speed, accident, time, no_per = "", siv = "";
StreamReader sw = new StreamReader("D:\\sf1.txt");
string k = sw.ReadToEnd();
string[] split = k.Split('\n');
vehicle = split[0].Trim();
route = split[1].Trim();
speed = split[2].Trim();
accident = split[3].Trim();
time = split[4].Trim();
sw.Close();
//con.Close();
con.Open();
// NOTE: concatenating input into SQL is vulnerable to injection; a parameterized query (SqlParameter) would be safer.
SqlCommand cmd = new SqlCommand("select * from vehicle where Vehile = '" + vehicle + "'", con);
SqlDataReader dr = cmd.ExecuteReader();
if (dr.Read())
{
no_per = dr["Noofpersons"].ToString();
}
con.Close();
listBox1.Items.Add(vehicle);
listBox2.Items.Add(route);
listBox3.Items.Add(speed);
listBox4.Items.Add(accident);
listBox5.Items.Add(time);
listBox6.Items.Add(no_per);
listBox7.Items.Add(siv);
FileInfo f1 = new FileInfo("D:\\sf1.txt");
f1.Delete();
StreamWriter sw12 = new StreamWriter("D:\\sf2.txt", false);
sw12.WriteLine(vehicle);
sw12.WriteLine(route);
sw12.WriteLine(speed);
sw12.WriteLine(accident);
sw12.WriteLine(time);
sw12.WriteLine(no_per);
sw12.WriteLine(siv);
sw12.Close();
system.SendFile();
con.Close();
mk = "n";
}
}
Socket sock;
public reciver1()
{
IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1001);
sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
sock.Bind(ipEnd);
}
public static string receivedPath;
public static string curMsg = "Stopped";
public void StartServer()
{
try
{
int i = 0;
s:
curMsg = "Starting...";
sock.Listen(100);
curMsg = "READY";
Socket clientSock = sock.Accept();
bWrite.Close();
clientSock.Close();
curMsg = "File Received";
mk = "y";
goto s;
}
catch (Exception ex)
{
curMsg = "File Receiving error. " + ex;
}
}
}
// Forwards the completed report (sf2.txt) from the Control Unit to the
// emergency services listening on port 1000.
class system
{
    public static string curMsg;

    public static void SendFile()
    {
        try
        {
            string fileName = "D:\\sf2.txt";
            IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
            IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1000);
            Socket clientSock = new Socket(AddressFamily.InterNetwork,
                SocketType.Stream, ProtocolType.IP);

            // Frame: [4-byte name length][file name][file contents].
            byte[] fileNameByte = Encoding.ASCII.GetBytes(Path.GetFileName(fileName));
            byte[] fileData = File.ReadAllBytes(fileName);
            byte[] clientData = new byte[4 + fileNameByte.Length + fileData.Length];
            BitConverter.GetBytes(fileNameByte.Length).CopyTo(clientData, 0);
            fileNameByte.CopyTo(clientData, 4);
            fileData.CopyTo(clientData, 4 + fileNameByte.Length);

            clientSock.Connect(ipEnd);
            clientSock.Send(clientData);
            clientSock.Close();
            curMsg = "File sent.";
        }
        catch (Exception ex)
        {
            curMsg = "File sending failed. " + ex.Message;
        }
    }
}
VEHICLE:
namespace TADR
{
    public partial class Form1 : Form
    {
        public delegate void MyPing(int id);
        public const string tempadd = @"C:\WINDOWS\Temp";
        public DateTime DT = new DateTime();
        reciver1 obj = new reciver1();

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            timer2.Start();
            reciver1.receivedPath = @"E:\s\";
            if (reciver1.receivedPath.Length > 0)
                backgroundWorker1.RunWorkerAsync();   // runs obj.StartServer() in the background
            else
                MessageBox.Show("Please select file receiving path", "Information",
                    MessageBoxButtons.OK, MessageBoxIcon.Information);
            timer1.Start();
            label1.Visible = false;
        }

        // Writes the accident report (one field per line) to sf1.txt and
        // forwards it to the Control Unit. (Handler name is illustrative.)
        private void sendButton_Click(object sender, EventArgs e)
        {
            DT = DateTime.Now;
            StreamWriter sw = new StreamWriter("E:\\s\\sf1.txt", false);
            sw.WriteLine(comboBox1.SelectedItem.ToString());   // vehicle
            sw.WriteLine(comboBox2.SelectedItem.ToString());   // route
            sw.WriteLine(textBox3.Text);                       // speed
            sw.WriteLine(comboBox3.SelectedItem.ToString());   // accident type
            sw.WriteLine(DT.ToString());                       // time of detection
            sw.Close();
            label1.Visible = true;
            system.SendFile();
            label2.Visible = false;
        }
    }

    // Sends the accident report file from the vehicle to the Control Unit.
    class system
    {
        public static string curMsg;

        public static void SendFile()
        {
            try
            {
                string fileName = "E:\\s\\sf1.txt";
                IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1"); // server IP address
                IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1001);
                Socket clientSock = new Socket(AddressFamily.InterNetwork,
                    SocketType.Stream, ProtocolType.IP);

                // Frame: [4-byte name length][file name][file contents].
                byte[] fileNameByte = Encoding.ASCII.GetBytes(Path.GetFileName(fileName));
                byte[] fileData = File.ReadAllBytes(fileName);
                byte[] clientData = new byte[4 + fileNameByte.Length + fileData.Length];
                BitConverter.GetBytes(fileNameByte.Length).CopyTo(clientData, 0);
                fileNameByte.CopyTo(clientData, 4);
                fileData.CopyTo(clientData, 4 + fileNameByte.Length);

                clientSock.Connect(ipEnd);
                clientSock.Send(clientData);
                clientSock.Close();
                curMsg = "File sent.";
            }
            catch (Exception ex)
            {
                curMsg = "File sending failed. " + ex.Message;
            }
        }
    }

    // Listens on port 1002 for data sent back to the vehicle.
    class reciver1
    {
        IPEndPoint ipEnd;
        Socket sock;
        public DateTime DT = new DateTime();
        public static string fileName;
        public static string receivedPath;
        public static string curMsg = "Stopped";

        public reciver1()
        {
            IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
            ipEnd = new IPEndPoint(ipAddress[0], 1002);
            sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
            sock.Bind(ipEnd);
        }

        public void StartServer()
        {
            try
            {
                while (true)
                {
                    curMsg = "Starting...";
                    sock.Listen(100);
                    curMsg = "READY";
                    Socket clientSock = sock.Accept();

                    // Frame: [4-byte name length][file name][file contents].
                    byte[] clientData = new byte[1024 * 5000];
                    int receivedBytesLen = clientSock.Receive(clientData);
                    int fileNameLen = BitConverter.ToInt32(clientData, 0);
                    fileName = Encoding.ASCII.GetString(clientData, 4, fileNameLen);
                    BinaryWriter bWrite = new BinaryWriter(
                        File.Open(receivedPath + fileName, FileMode.Create));
                    bWrite.Write(clientData, 4 + fileNameLen, receivedBytesLen - 4 - fileNameLen);
                    bWrite.Close();
                    clientSock.Close();
                    curMsg = "File Received";
                }
            }
            catch (Exception ex)
            {
                curMsg = "File Receiving error. " + ex;
            }
        }

        // Decides how many pieces a received file should be split into,
        // based on its size.
        public void c()
        {
            int c;
            string inputFile = reciver1.receivedPath + "/" + reciver1.fileName;
            FileStream fs = new FileStream(inputFile, FileMode.Open, FileAccess.Read);
            if (fs.Length > 500)
                c = 10;
            else
                c = 5;
            int numberOfFiles = c;
            int sizeOfEachFile = (int)Math.Ceiling((double)fs.Length / numberOfFiles);
            fs.Close();
        }
    }
}
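The sender and receiver above exchange each report as a single file over a TCP socket. A common framing for such transfers (assumed here, since the listings do not show it explicitly) prefixes the payload with a 4-byte little-endian name length followed by the file name and the file contents. The loopback sketch below, written in Python for brevity, exercises that framing end to end; the port is chosen by the OS rather than the fixed 1000/1001/1002 ports used in the system.

```python
import socket
import struct
import threading

def serve(srv, result):
    # Accept one connection and read until the sender closes the socket.
    conn, _ = srv.accept()
    data = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        data += chunk
    # Decode the frame: 4-byte little-endian name length, name, contents.
    name_len = struct.unpack("<i", data[:4])[0]
    result["name"] = data[4:4 + name_len].decode("ascii")
    result["body"] = data[4 + name_len:]
    conn.close()
    srv.close()

def send_report(host, port, name, body):
    # Build the frame exactly as the sender side would: length, name, payload.
    payload = struct.pack("<i", len(name)) + name.encode("ascii") + body
    cli = socket.socket()
    cli.connect((host, port))
    cli.sendall(payload)
    cli.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
result = {}
t = threading.Thread(target=serve, args=(srv, result))
t.start()
send_report("127.0.0.1", port, "sf1.txt", b"TN01\nRoute5\n80\nCollision\n")
t.join()
print(result["name"])             # sf1.txt
```

The single-`recv`-per-file pattern in the C# listings works only because the reports are tiny; the sketch instead reads until the sender closes the connection, which is the safer idiom for streams of unknown length.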
SCREEN SHOTS:
EMERGENCY SERVICES SCREEN:
CONTROL UNIT:
EMERGENCY SERVICES WAITING TO RECEIVE DATA:
As part of our proposed system, we developed an activity recognition application service that uses
various sensors (accelerometers, temperature sensors, GPS, and so on) to determine responders'
activities during an emergency evacuation. The use of activity recognition applications in areas such
as cognitive assistance, emergency healthcare, and emergency management is likely to increase
significantly in the near future. In these areas, an activity recognition application might require
large-scale sensor data collection, fast activity recognition, and timely delivery of results to
the user (responders and the command center).
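A minimal sketch of the idea, in Python: the paper does not specify the classifier, so the function and thresholds below are illustrative assumptions, classifying a window of accelerometer samples by the variability of its acceleration magnitude.

```python
import math

# Hypothetical thresholds (not from this work): low variability of the
# acceleration magnitude suggests the responder is still, moderate
# variability walking, high variability running.
def classify_activity(samples):
    """samples: list of (x, y, z) accelerations in m/s^2 over one window."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    mean = sum(mags) / len(mags)
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    if std < 0.5:
        return "still"
    elif std < 3.0:
        return "walking"
    else:
        return "running"

# A stationary sensor reads roughly constant gravity:
print(classify_activity([(0, 0, 9.8)] * 10))   # still
```

A deployed service would replace this heuristic with a trained model and add the delivery path to responders and the command center.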