









A widely accepted prediction is that computing is moving to the background, projecting the human user into the foreground. Today computing has become a key component of every technology. If this prediction is to come true, then next-generation computing, which we will call human computing, is about anticipatory user interfaces that should be human-centered: built for humans and based on human models.

In the present paper we discuss how human computing leads to an understanding of human behavior, a number of components of human behavior, how they might be integrated into computers, and how far we are from enabling computers to understand human behavior.


1. Human computing

2. Facial action detection

3. Approaches to measurement:

3.1 Real time facial expression recognition for natural interaction
3.2 Message judgment
3.3 Sign measurement

4. Research challenges

4.1 Scientific challenges
4.2 Technical challenges

5. Conclusion

6. References

1. Human computing:
A widely accepted prediction is that computing is moving to the background, projecting the human user into the foreground. Today computing has become a key component of every technology. If this prediction is to come true, then next-generation computing, which we will call human computing, is about anticipatory user interfaces that should be human-centered: built for humans and based on human models.

In human-human interaction, intentions and action tendencies are often more important than what an individual may be feeling. People may or may not be aware of what they are feeling, and feelings often come about only late in the process.

One way of tackling these problems is to move away from computer-centered designs toward human-centered designs for HCI. The former usually involve the conventional interface devices, keyboard, mouse, and visual displays, and assume that the human will be explicit. A goal of human-centered computing, by contrast, is computer systems that can understand human behavior in unstructured environments and respond appropriately.
Efforts at emotion recognition, however, are inherently flawed unless one recognizes that emotion (intentions, action tendencies, appraisals, and feelings) is not directly observable. Emotion can only be inferred from context, self-report, physiological indicators, and expressive behavior (see the following figure).

Here we will focus on expressive behavior, in particular facial
expression, and approaches to measurement.

2. Facial action detection:

Numerous techniques have been developed for face detection, i.e., identifying all regions in the scene that contain a human face. Tracking is the essential step for human-motion analysis, since it provides the data for recognition of facial, head, and body postures and gestures.

3. Approaches to measurement:
3.1. Real time facial expression recognition for natural interaction
3.2. Message judgment
3.3. Sign measurement
3.1. Real time facial expression recognition for natural interaction:

The basic step here is facial action detection, which proceeds as follows (see the above figure). First, the face image is captured; then the important features that are needed are extracted. These are normalized based on the distance between the eyes. Now let us look at behavioral recognition by real-time facial expression recognition for natural interaction.

Five distances are mainly taken into account: the distance between the right eye and eyebrow, between the left eye and eyelid, between the left eye and the left corner of the mouth, the width of the mouth, and the height of the mouth. The neutral distances are first stored in the database. Each parameter can then take one of three states for each of the emotions: C+, C-, and S.

State C+ means that the value of the parameter has increased with respect to the neutral one; state C- that its value has diminished with respect to the neutral one; and state S that its value has not varied with respect to the neutral one. First, we build a descriptive table of emotions, according to the states of the parameters, like the one in the table below:

Based on the above table we can recognize the six basic emotions of a human. For example, take the first row of the table: distance D1 must diminish, D2 must be neutral or diminished, D3 must increase, D4 must increase, D5 must diminish, and the width/height must decrease or stay neutral relative to the stored value in order to exhibit the emotion joy. The remaining five emotions are read from the table in the same way.
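The table lookup described above can be sketched in code. The tolerance value, the rule rows, and the particular state sets for "joy" and "surprise" below are illustrative assumptions, not the actual table:

```python
def distance_state(current, neutral, tol=0.05):
    """Classify one distance against its stored neutral value:
    C+ (increased), C- (diminished), S (unchanged within tolerance)."""
    if current > neutral * (1 + tol):
        return "C+"
    if current < neutral * (1 - tol):
        return "C-"
    return "S"

# Illustrative rule table: for each emotion, the allowed states of
# D1..D5 (a set means any state in it is acceptable).
RULES = {
    "joy":      [{"C-"}, {"S", "C-"}, {"C+"}, {"C+"}, {"C-"}],
    "surprise": [{"C+"}, {"C+"}, {"S"}, {"C-"}, {"C+"}],
}

def classify(distances, neutrals):
    """Map the five measured distances to an emotion via the rule table."""
    states = [distance_state(d, n) for d, n in zip(distances, neutrals)]
    for emotion, rule in RULES.items():
        if all(s in allowed for s, allowed in zip(states, rule)):
            return emotion
    return "neutral/unknown"
```

With neutral distances all normalized to 1.0, a face where D1 and D5 shrink while D3 and D4 grow would be classified as joy by the first rule row.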

3.2 Message judgment:

Generally, all facial expressions are assigned Action Units (AUs); that is, all facial expressions are fed into the computer with associated AUs. Based on those Action Units, the corresponding expression is recognized by the computer. There are actually about 40 AUs, but recent systems can recognize only 15 to 27 of them.

For each part of a face, Action Units are assigned according to the features at points plotted on the face, the face feature points. Based on these points, the system describes the behavior of a face. The task here is to describe the surface of behavior with the judgment-based approach, that is, making judgments such as "this face is happy" just by seeing a smile on the face, whereas an observer with a sign-based approach would code the facial actions themselves.

There are six "basic emotions": joy, surprise, sadness, disgust, fear, and anger. Examples of facial expressions for the six basic emotions are shown in the figure above.

There are also contradictory emotion expressions; consider, for example, the "masking smile," in which smiling is used to cover up or hide an underlying emotion. Such underlying emotions can be observed with a sign-based approach.

3.3 Sign measurement:

Of the various methods, the Facial Action Coding System (FACS) is the most comprehensive, psychometrically rigorous, and widely used. The most recent version of the Facial Action Coding System specifies:

• 9 Action Units for the upper face
• 18 Action Units for the lower face
• 11 Action Units for head position and movement
• 9 Action Units for eye position and movement
Action Units may occur singly or in combinations. Combinations may be additive or non-additive. In additive combinations, the appearance of each Action Unit is independent, whereas in non-additive combinations the Action Units modify each other's appearance. Non-additive combinations are analogous to co-articulation effects in speech, in which one phoneme modifies the sound of the ones with which it is contiguous.
An example of an additive combination in the Facial Action Coding System is AU 1+2, which often occurs in surprise (along with eye widening, AU 5) and in the brow-flash greeting. The combination of the two Action Units raises the inner (AU 1) and outer (AU 2) corners of the eyebrows and causes horizontal wrinkles to appear across the forehead. The appearance changes associated with AU 1+2 are the product of their joint actions.

Examples of non-additive combinations are AU 1+4 and AU 1+2+4, comparable to co-articulation effects in speech. AU 1+4 often occurs in sadness. When AU 1 occurs alone, the inner eyebrows are pulled upward. When AU 4 occurs alone, they are pulled together and downward. When AU 1 and AU 4 occur together, the downward action of AU 4 is modified: the inner eyebrows are raised and pulled together. This action typically gives an oblique shape to the brows and causes horizontal wrinkles to appear in the center of the forehead, as well as other changes in appearance.
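The additive/non-additive distinction can be made concrete with a small lookup. The appearance strings below paraphrase the descriptions above, and the override table is an illustrative assumption:

```python
# Appearance of single Action Units (paraphrased descriptions).
SINGLE_AU = {
    1: "inner brows raised",
    2: "outer brows raised",
    4: "brows pulled together and down",
}

# Non-additive combinations override the simple union of their parts.
NON_ADDITIVE = {
    frozenset({1, 4}): "inner brows raised and pulled together (oblique shape)",
}

def appearance(aus):
    """Describe an AU combination, honoring non-additive overrides."""
    key = frozenset(aus)
    if key in NON_ADDITIVE:
        return NON_ADDITIVE[key]
    # Additive case: each AU keeps its independent appearance.
    return "; ".join(SINGLE_AU[a] for a in sorted(aus))
```

AU 1+2 falls through to the additive branch, while AU 1+4 hits the override, mirroring the two examples in the text.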


4. Research challenges:

4.1. Scientific challenges:

1. Modalities:
We should know how many, and which, behavioral signals have to be combined for robust and accurate human behavior analysis. The behavioral channels here include the face, the body, and the tone of the voice.

2. Fusion:
The main challenge is to determine at what abstraction level the input modalities have to be fused. We should know whether the input modalities depend on the machine learning techniques used, and whether tight coupling between modalities persists when they are used for human behavior analysis.

4.2. Technical challenges:

For understanding of human behavior we should meet the technical
challenges which are as follows.

1. Robustness:
Most methods for human sensing, context sensing, and human behavior understanding work only in constrained environments; the environment must be very calm, without noise. A noisy environment causes them to fail.

2. Speed:
Many of the methods in the field do not perform fast enough to support interactivity; the signals must reach the destination point quickly enough. Many researchers opt for more sophisticated processing rather than for real-time processing. The main challenge is to find faster hardware and faster processing techniques.


5. Conclusion:

Human behavior understanding is a complex and very difficult problem. It is still far from being solved in a way suitable for anticipatory interfaces and the human computing application domain. In the past two decades there has been significant progress in some parts of the field, such as face recognition and video surveillance, while other parts, such as non-basic affective state recognition and multimodal, multi-aspect context sensing, remain much less developed.

Although significant scientific and technical issues remain to be addressed, we are optimistic about future progress in the field. The main reason is that anticipatory interfaces and their applications are likely to become the single most widespread research topic of the artificial intelligence and human-computer interaction research communities.

There is now a large and steadily growing number of research projects concerned with the interpretation of human behavior at a deeper level.


6. References:

[1] Aarts, E. Ambient intelligence drives open innovation. ACM Interactions, 12, 4 (July/Aug. 2005), 66-68.

[2] Ambady, N. and Rosenthal, R. Thin slices of expressive behavior as predictors of

interpersonal consequences: A meta-analysis. Psychological Bulletin, 111, 2 (Feb.
1992), 256-274.

[3] Ba, S.O. and Odobez, J.M. A probabilistic framework for joint head tracking and
pose estimation. In Proc. Conf. Pattern Recognition, vol. 4, 264-267, 2004.

[4] Bartlett, M.S., Littlewort, G., Frank, M.G., Lainscsek, C.,Fasel, I. and Movellan ,
J. Fully automatic facial action recognition in spontaneous behavior. In Proc. Conf.
Face &Gesture Recognition, 223-230, 2006.

[5] Bicego, M., Cristani, M. and Murino, V. Unsupervised scene analysis: A hidden
Markov model approach. Computer Vision & Image Understanding, 102, 1 (Apr.
2006), 22-41.

[6] Bobick, A.F. Movement, activity and action: The role of knowledge in the
perception of motion. Philosophical Trans. Roy. Soc. London B, 352, 1358 (Aug.
1997), 1257-1265.

[7] Bowyer, K.W., Chang, K. and Flynn, P. A survey of approaches and challenges
in 3D and multimodal 3D+2D face recognition. Computer Vision & Image
Understanding, 101, 1 (Jan. 2006), 1-15.

[8] Buxton, H. Learning and understanding dynamic scene activity: a review.

Image & Vision Computing, 21, 1 (Jan.2003), 125-136.

[9] Cacioppo, J.T., Berntson, G.G., Larsen, J.T., Poehlmann,K.M. and Ito, T.A. The
psychophysiology of emotion. In Handbook of Emotions. Lewis, M. and Haviland-
Jones, J.M.,Eds. Guilford Press, New York, 2000, 173-191.

[10] Chiang, C.C. and Huang, C.J. A robust method for detecting arbitrarily tilted
human faces in color images. Pattern Recognition Letters, 26, 16 (Dec. 2005),

[11] Costa, M., Dinsbach, W., Manstead, A.S.R. and Bitti, P.E.R. Social presence,
embarrassment, and nonverbal behavior.Journal of Nonverbal Behavior, 25, 4 (Dec.
2001), 225-240.

[12] Coulson, M. Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. J. Nonverbal Behavior, 28, 2 (Jun. 2004), 117-.

Facial Recognition Using Biometrics
While humans have had the innate ability to recognize and distinguish
different faces for millions of years, computers are just now catching up. In this
paper, we'll learn how computers are turning your face into computer code so it can
be compared to thousands, if not millions, of other faces. We'll also look at how
facial recognition software is being used in elections, criminal investigations and
to secure your personal computer.

Facial recognition software falls into a larger group of technologies known as

biometrics. Biometrics uses biological information to verify identity. The basic idea
behind biometrics is that our bodies contain unique properties that can be used to
distinguish us from others. Facial recognition methods may vary, but they generally
involve a series of steps that serve to capture, analyze and compare your face to
a database of stored images.

A software company called Visionics developed facial recognition software called FaceIt. The heart of this facial recognition system is the Local Feature Analysis (LFA) algorithm, the mathematical technique the system uses to encode faces. The system maps the face and creates a faceprint, a unique numerical code for that face. Once the system has stored a faceprint, it can compare it to the thousands or millions of faceprints stored in a database. Potential applications include ATM and check-cashing security, law enforcement, security surveillance, and screening a voter database for duplicates. The technology could also be used to secure your computer files: by mounting a webcam on your computer and installing the facial recognition software, your face can become the password you use to get into your computer. By combining this technology with normal password security, you get double security for your valuable data.

People have an amazing ability to recognize and remember thousands of faces.

The Face
Your face is an important part of who you are and how people identify you. Imagine how hard it would be to recognize an individual if all faces looked the same. Except in the case of identical twins, the face is arguably a person's most unique physical characteristic.

Visionics, a company based in New Jersey, is one of many developers of

facial recognition technology. The twist to its particular software, FaceIt, is that it
can pick someone's face out of a crowd, extract that face from the rest of the scene
and compare it to a database full of stored images. In order for this software to
work, it has to know what a basic face looks like.

Facial recognition software can be used to find criminals in a crowd, turning a mass of people into a big line-up.

Facial recognition software is based on the ability first to recognize a face, which is a technological feat in itself, and then to measure the various features of each face. If you look in the mirror, you can see that your face has certain distinguishable landmarks, the peaks and valleys that make up the different facial features. Visionics defines these landmarks as nodal points. There are about 80 nodal points on a human face. Here are a few of the nodal points that are measured by the software:

• Distance between eyes

• Width of nose
• Depth of eye sockets
• Cheekbones
• Jaw line
• Chin
These nodal points are measured to create a numerical code, a string of
numbers that represents the face in a database. This code is called a faceprint. Only
14 to 22 nodal points are needed for the FaceIt software to complete the
recognition process. In the next section, we'll look at how the system goes about
detecting, capturing and storing faces.
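A minimal sketch of how nodal-point measurements could be reduced to a numerical code follows. The point names, the normalization by eye distance, and the quantization are invented for illustration; FaceIt's actual LFA encoding is proprietary:

```python
# Hypothetical nodal-point measurements (a subset of the ~80 points).
NODAL_POINTS = ["eye_distance", "nose_width", "eye_socket_depth",
                "cheekbone_width", "jaw_width", "chin_height"]

def faceprint(measurements, reference="eye_distance"):
    """Normalize each measurement by a reference distance and quantize
    to small integers, yielding a compact numerical code."""
    ref = measurements[reference]
    return [round(measurements[name] / ref * 100) for name in NODAL_POINTS]

def match_score(a, b):
    """Crude similarity: inverse of total quantized difference
    (1.0 means identical faceprints)."""
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))
```

Normalizing by a reference distance makes the code independent of how far the face was from the camera, which is the point of the normalization step described later.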

The Software
Facial recognition software falls into a larger group of technologies known as
biometrics. Biometrics uses biological information to verify identity. The basic idea
behind biometrics is that our bodies contain unique properties that can be used to
distinguish us from others. Besides facial recognition, biometric authentication
methods also include:

• Fingerprint scan
• Retina scan
• Voice identification
Facial recognition methods may vary, but they generally involve a series of steps
that serve to capture, analyze and compare your face to a database of stored
images. Here is the basic process that is used by the FaceIt system to capture and
compare images:

To identify someone, facial recognition software compares newly captured images to databases of stored images.

1. Detection - When the system is attached to a video surveillance system, the recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second. A multi-scale algorithm is used to search for faces in low resolution. (An algorithm is a step-by-step procedure for accomplishing a specific task.) The system switches to a high-resolution search only after a head-like shape is detected.

2. Alignment - Once a face is detected, the system determines the head's position, size and pose. A face needs to be turned at least 35 degrees toward the camera for the system to register it.

3. Normalization - The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose. Normalization is performed regardless of the head's location and distance from the camera. Light does not impact the normalization process.

4. Representation - The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.

5. Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to at least one stored facial representation.

The heart of the FaceIt facial recognition system is the Local Feature Analysis
(LFA) algorithm. This is the mathematical technique the system uses to encode
faces. The system maps the face and creates a faceprint, a unique numerical code
for that face. Once the system has stored a faceprint, it can compare it to the
thousands or millions of faceprints stored in a database. Each faceprint is stored as
an 84-byte file.

The system can match multiple faceprints at a rate of 60 million per minute from memory or 15 million per minute from hard disk. As comparisons are made, the system assigns a value to the comparison using a scale of one to 10. If a score is above a predetermined threshold, a match is declared. The operator then views the two photos that have been declared a match to be certain that the computer is correct.
Facial recognition, like other forms of biometrics, is considered a technology that will have many uses in the near future. In the next section, we will look at how it is being used right now.

The primary users of facial recognition software like FaceIt have been law
enforcement agencies, which use the system to capture random faces in crowds.

These faces are compared to a database of criminal mug shots. In addition to law
enforcement and security surveillance, facial recognition software has several other
uses, including:

• Eliminating voter fraud

• Check-cashing identity verification
• Computer security
One of the most innovative uses of facial recognition is being employed by the
Mexican government, which is using the technology to weed out duplicate voter
registrations. To sway an election, people will register several times under
different names so they can vote more than once. Conventional methods have not
been very successful at catching these people. Using the facial recognition
technology, officials can search through facial images in the voter database for
duplicates at the time of registration. New images are compared to the records
already on file to catch those who attempt to register under aliases.

Potential applications even include ATM and check-cashing security. The software is able to quickly verify a customer's face. After the user consents, the ATM or check-cashing kiosk captures a digital photo of the customer. The facial recognition software then generates a faceprint of the photograph to protect customers against identity theft and fraudulent transactions. By using facial recognition software, there's no need for a picture ID, bank card or personal identification number (PIN) to verify a customer's identity.

Many people who don't use banks use check-cashing machines. Facial
recognition could eliminate possible criminal activity.
This biometric technology could also be used to secure your computer files. By mounting a webcam on your computer and installing the facial recognition software, your face can become the password you use to get into your computer. IBM has incorporated the technology into a screensaver for its A, T and X series ThinkPad laptops.

Webcam and facial recognition software installed.

Facial recognition software can be used to lock your computer.

With the following advantages, and also some drawbacks, we conclude our paper on facial recognition using biometrics. Potential applications are as follows:

• Eliminating voter fraud

• Security law enforcement and Security surveillance
• ATM and Check-cashing identity verification
• Computer security

While facial recognition can be used to protect your private information, it can just as easily be used to invade your privacy by taking your picture when you are entirely unaware of the camera. As with many developing technologies, the incredible potential of facial recognition comes with drawbacks.

But if we combine facial recognition with normal password security, we get an added double security which is more reliable than one-shield security, just as the quote says: "Two heads are better than one."







Mobile Computing: a technology that allows transmission of data via a computer without having to be connected to a fixed physical link.

Mobile computing and communications is a major part of wireless communication technology. Mobile communication today is a de facto standard by itself; it commands the single largest share of the global wireless technologies in the market. The popularity of mobile communications has grown many fold over the past few years and is still growing. Through WAP, the development of mobile computing applications is becoming easy and effective. It has also become a foundation for many wireless LAN applications.

What will computers look like in ten years, in the next century? No wholly accurate prediction can be made, but as a general feature, most computers will certainly be portable. How will users access networks with the help of computers or other communication devices? An ever-increasing number without any wires, i.e., wireless. How will people spend much of their time at work and during vacation? Many people will be mobile, already one of the key characteristics of today's society. Think, for example, of an aircraft with 800 seats. Modern aircraft already offer limited network access to passengers, and aircraft of the next generation will offer easy Internet access. In this scenario, a mobile network moving at high speed above the ground, with a wireless link, will be the only means of transporting data to and from passengers.
There are two kinds of mobility: user mobility and device portability. User
mobility refers to a user who has access to the same or similar telecommunication
services at different places, i.e., the user can be mobile, and the services will follow
him or her.
With device portability the communication device moves (with or without a
user). Many mechanisms in the network and inside the device have to make sure
that communication is still possible while it is moving.

1. Vehicles:
Tomorrow's cars will comprise many wireless communication systems and mobility-aware applications. Music, news, road conditions, weather reports, and other broadcast information are received via digital audio broadcasting (DAB) at 1.5 Mbit/s. For personal communication, a global system for mobile communications (GSM) phone might be available.
Networks with a fixed infrastructure like cellular phones (GSM, UMTS) will be interconnected with trunked radio systems (TETRA) and wireless LANs (WLAN). Additionally, satellite communication links can be used.

2. Business:
Today's typical traveling salesman needs instant access to the company's database, to ensure that the files on his or her laptop reflect the current state of affairs.

A very simple wireless device is represented by a sensor transmitting state
information. An example for such a sensor could be a switch sensing the office
door. If the door is closed, the switch transmits this state to the mobile phone
inside the office and the mobile phone will not accept incoming calls. Thus, without
user interaction the semantics of a closed door is applied to phone calls.
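The closed-door scenario can be sketched as a tiny state machine. The class name and the divert-to-voicemail policy are illustrative assumptions:

```python
class Phone:
    """Office phone that adapts its call policy to the door sensor."""

    def __init__(self):
        self.accept_calls = True

    def on_sensor(self, door_state):
        # Closed door => meeting in progress => reject incoming calls.
        self.accept_calls = (door_state == "open")

    def incoming_call(self):
        return "ring" if self.accept_calls else "divert to voicemail"
```

The point of the example survives the simplification: the semantics of a closed door are applied to phone calls without any user interaction.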


Mobile world meets cyberspace

The mobile Internet is all about Internet access from mobile devices. That is true, but the ground realities are different. No doubt the Internet has grown fast, really fast! But the mobile Internet is poised to grow even faster. The fundamental difference lies in the fact that whereas academics and scientists started the Internet, the force behind mobile Internet access is the cash-rich mobile phone industry. On the equipment side, wireless devices represent the ultimate constrained computing device, with:

• Less powerful CPUs
• Less memory (ROM and RAM)
• Restricted power consumption

The Wireless Application Protocol is the de facto world standard for the presentation and delivery of wireless information and telephony services on mobile phones and other wireless terminals.

Wireless Application Protocol - WAP
There are three essential product components that you need to extend your host applications and data to WAP-enabled devices:
1. WAP micro-browser - residing in the client handheld device
2. WAP gateway - typically on the wireless ISP's network infrastructure
3. WAP server - residing either on the ISP's infrastructure or on the end user organization's infrastructure

WAP micro-browser:
A WAP micro-browser is client software designed to overcome the challenges of mobile handheld devices; it enables wireless access to services such as Internet information in combination with a suitable network server.
Lots of WAP browsers and emulators are available free of cost and can be used to test your WAP pages. Many of these browsers and emulators are specific to particular mobile phones. WAP emulators can be used to see how your site will look on specific phones. As these images show, the same thing can look different on different mobile phones, so the problems that a web developer faces with desktop browsers (Netscape/Internet Explorer) are present here as well. So make sure you test your code on different mobile phones (or simulators).

(Circuit Switched Data, around 9.6 kbps data rate)

Source: WAP for web developers.
A WAP server is simply a combined web server and WAP gateway. WAP devices do not use SSL; instead they use WTLS. Most existing web servers should be able to serve WAP content as well. Some new MIME types need to be added to your web server to enable it to support WAP content. (MIME stands for Multipurpose Internet Mail Extension.)
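The MIME types in question can be added with a few lines of web server configuration. A sketch for Apache's httpd.conf (these are the standard WAP media type registrations; other servers use a different syntax):

```apache
# MIME types for WAP content
AddType text/vnd.wap.wml .wml
AddType application/vnd.wap.wmlc .wmlc
AddType text/vnd.wap.wmlscript .wmls
AddType application/vnd.wap.wmlscriptc .wmlsc
AddType image/vnd.wap.wbmp .wbmp
```

With these in place, the server labels WML decks, WMLScript, and WBMP images correctly so that a WAP gateway or micro-browser will accept them.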


WAP Communication Protocol & its Components

The WAP Protocols cover both the application (WAE), and the underlying transport
layers (WSP and WTP, WTLS, and WDP). WML and WML Script are collectively
known as WAE, the Wireless Application Environment. As described earlier the
'bearer' level of WAP depends on the type of mobile network. It could be CSD, SMS,
CDMA, or any of a large number of possible data carriers. Whichever bearer your
target client is using, the development above remains the same. Although it’s not
absolutely essential for a developer to know the details of the WAP communication
protocols, a brief understanding of the various protocols involved, their significance
and the capabilities can help a lot while looking for specific solutions.

WML and WMLScript are collectively known as WAE, the Wireless Application Environment.

WML is the WAP equivalent of HTML. It is a markup language based on XML (Extensible Markup Language). The WAE specification defines the syntax, variables, and elements used in a valid WML file.
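As an illustration, a minimal WML deck with a single card might look like this (a hand-written sketch following the WML 1.1 DTD):

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- A deck contains one or more cards; the micro-browser
       displays one card at a time. -->
  <card id="home" title="Welcome">
    <p>Hello from WAP!</p>
  </card>
</wml>
```

Unlike an HTML page, a WML file is a deck of cards, which suits the small screens and slow links of WAP devices: several screens travel in one download, and navigation between them happens on the handset.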

WBMP stands for Wireless Bitmap. It is the default picture format for WAP; the current version of WBMP is called type 0. As a rule of thumb, a WBMP should be no wider than 96 pixels and no higher than 48 pixels (at 72 dots per inch). There are also plug-ins available for Paint Shop, Photoshop and Gimp which let you save WBMP files from these programs.

• Teraflops online converter
• pic2wbmp
• WAP Pictus
• WAP Draw
• WBMPconv & UnWired plug-in for Photoshop/PaintShop

The security layer protocol in the WAP architecture is called the Wireless Transport Layer Security, WTLS. WTLS provides an interface for managing secure connections. Its differences from TLS arise from the specific requirements of the WTLS protocol, given the constraints presented by mobile data systems:
• Long round-trip times
• Memory limitations of mobile devices
• The low bandwidth of most of the bearers
• The limited processing power of mobile devices
• The restrictions on exporting and using cryptography

The transport layer protocol in the WAP architecture consists of the Wireless Transaction Protocol (WTP) and the Wireless Datagram Protocol (WDP). The WDP protocol operates above the data-capable bearer services supported by multiple network types. As a general datagram service, WDP offers a consistent service to the upper-layer protocols (security, transaction and session) of WAP and communicates transparently over one of the available bearer services.
Source: WAP WDP specification

On the Internet, a WWW client requests a resource stored on a web server by identifying it with a unique URL, that is, a text string constituting an address to that resource. Standard communication protocols, like HTTP and Transmission Control Protocol/Internet Protocol (TCP/IP), manage these requests and the transfer of data between the two ends. The content transferred can be either static or dynamic.


The strength of WAP (some call it the problem with WAP) lies in the fact that it very closely resembles the Internet model. In order to accommodate wireless access to the information space offered by the WWW, WAP is based on well-known Internet technology that has been optimized to meet the constraints of a wireless environment. Corresponding to HTML, WAP specifies a markup language adapted to the constraints of the low bandwidth available with the usual mobile data bearers and the limited display capabilities of mobile devices: the Wireless Markup Language (WML). WML offers a navigation model designed for devices with small displays and limited input facilities.

Future Outlook For WAP:
The point brought about by many analysts against WAP is that with the emergence
of next generations networks (including GPRS), as the data capabilities of these
networks evolve, it will make possible the delivery of full-motion video images and
high-fidelity sound over mobile networks. Service delivers information at a speed of
9,600 bits of information a second. With GPRS the speed will rise to 100,000.
Mobile commerce is one such application that can open up lots of opportunities for
WAP. By 2010, there could be more than 1500m mobile commerce users. M-
commerce is emerging more rapidly in Europe and in Asia, where mobile services
are relatively advanced, than in the US where mobile telephony has only just begun
to take off.
By allowing the mobile to be in an always-connected state, GPRS (or other services like
CDPD) will bring the Internet closer to the mobile.
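The difference between the two data rates quoted above can be made concrete with a rough transfer-time calculation. This is an idealized sketch: it ignores protocol overhead and radio conditions, and the 1 MB object size is an assumption.

```python
# Rough transfer-time comparison for a 1 MB object at the rates quoted above:
# 9,600 bit/s (GSM circuit-switched data) vs. ~100,000 bit/s (GPRS).
size_bits = 1_000_000 * 8           # 1 MB expressed in bits
gsm_bps, gprs_bps = 9_600, 100_000

gsm_secs = size_bits / gsm_bps
gprs_secs = size_bits / gprs_bps
print(round(gsm_secs), round(gprs_secs))  # roughly 833 vs 80 seconds
```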

WAP Applications
One of the most significant advantages of Internet access from a mobile rather than
from your PC is the ability to instantly identify a user's geographic location. Some of the
interesting applications of WAP (already existing or being worked on) are:
• Computer Sciences Corporation (CSC) and Nokia are working with a Finnish
fashion retailer who plans to send clothing offers direct to mobile telephones
using a combination of cursors.
• In Finland, children already play new versions of competitive games such as
"Battleships", via the cellular networks. In the music world, Virgin Mobile in
the UK offers to download the latest pop hits to customers in a daily offering.
• Nokia says applications that will benefit from WAP include customer care and
provisioning, message notification and call management, e-mail, mapping
and location services, weather and traffic alerts, sports and financial services,
address book and directory services and corporate intranet applications.

Wireless LAN technology constitutes a fast-growing market introducing the
flexibility of wireless access into office, home, or production environments. WLANs
are typically restricted in their diameter to buildings, a campus, single rooms, etc.,
and are operated by individuals, not by large-scale network providers. The overall
goal of WLANs is to replace office cabling and, additionally, to introduce higher
flexibility for ad hoc communication in, e.g., group meetings.

Some advantages of WLANs are:

Only wireless ad hoc networks allow for communication without previous
planning, any wired network needs wiring plans.

Only wireless networks allow for the design of small, independent devices
which can for example be put into a pocket.

Wireless networks can survive disasters.

Some Disadvantages of WLANs:-

Quality of Service:
WLANs typically offer lower quality than their wired counterparts.

Cost:
While, e.g., high-speed Ethernet adapters cost in the range of some €10,
wireless LAN adapters, e.g., as a PC card, still cost some €100.

Proprietary Solutions:
Many vendors add proprietary enhancements on top of standardized functionality;
these extra features typically work only in a homogeneous environment, limiting interoperability.

Safety and Security:
Radio transmissions may interfere with other equipment and are open to
eavesdropping unless encryption and other precautions are used.

Mobile Network Layer

This topic introduces protocols and mechanisms developed for the network
layer to support mobility. The most prominent example is Mobile IP, which adds
mobility support to the Internet network layer protocol IP. While systems like GSM
have been designed with mobility in mind from the very beginning, the Internet
started at a time when no-one had a concept of mobile computers.
Another kind of mobility, or rather portability of equipment, is supported by
DHCP. In former times computers did not change their location often. Today, due
to laptops or notebooks, computers are carried from place to place and must be
reconfigured for each new point of attachment to the network.

The following gives an overview of Mobile IP, the extensions needed for the
Internet to support the mobility of hosts. The following requires some familiarity
with Internet protocols especially IP.

Goals, assumptions, and requirements:

The Internet is the network for global data communication with hundreds of millions

of users. So why not simply use a mobile computer in the Internet?
The reason is quite simple: you will not receive a single packet as soon as
you leave your home network, i.e., the network your computer is configured for,
and reconnect your computer (wireless or wired) at another place. This becomes
clear if you consider routing mechanisms in the Internet. A host
sends an IP packet with a header containing a destination address besides other
fields. The destination address not only determines the receiver of the packet, but
also the physical subnet of the receiver.

Quick ‘Solutions’:
One might think of a quick solution to this problem by assigning the
computer a new, topologically correct IP address. So moving to a new location
would also mean assigning a new address. Now the problem is that nobody knows
of this new address. It is almost impossible to find a (mobile) host in the Internet
which has just changed its address. In particular, the domain name system (DNS)
needs some time before it updates the internal tables necessary for mapping a
logical name to an IP address. This approach does not work if the mobile node
moves quite often.
Furthermore, there is a severe problem with higher layer protocols like TCP
that rely on IP addresses. Changing the IP address while still having a TCP
connection open means breaking the connection. A TCP connection can be
identified by the tuple (source IP address, source port, destination IP address,
destination port), also known as a socket.
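The point about the socket 4-tuple can be demonstrated directly: once a TCP connection is established, both endpoints are fixed, so changing the mobile node's IP address necessarily breaks the identifying tuple. A minimal sketch using a loopback connection:

```python
# A TCP connection is identified by the 4-tuple
# (source IP, source port, destination IP, destination port).
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # let the OS pick a free port
server.listen(1)
dst_ip, dst_port = server.getsockname()

def accept_one():
    conn, _ = server.accept()
    conn.close()

threading.Thread(target=accept_one).start()

client = socket.create_connection((dst_ip, dst_port))
tcp_tuple = client.getsockname() + client.getpeername()
print(tcp_tuple)                     # (src IP, src port, dst IP, dst port)
client.close()
server.close()
```

If the source IP in this tuple changes mid-connection, the peer can no longer associate incoming segments with the open connection, which is exactly why a naive readdressing scheme breaks TCP.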

Mobile IP was therefore designed with the following requirements:

• Compatibility: A new standard cannot require changes for applications or
network protocols already in use.
• Transparency: Mobility should remain ‘invisible’ for many higher layer
protocols and applications. Besides maybe noticing a lower bandwidth and
some interruption in service, higher layers should continue to work even if
the mobile computer changed its point of attachment to the network.
• Scalability and efficiency: Introducing a new mechanism into the Internet
must not jeopardize the efficiency of the network. Enhancing IP for mobility
must not generate many new messages flooding the whole network.

• Security: Mobility poses many security problems. A minimum requirement is
the authentication of all messages related to the management of Mobile IP.


Mobile computing and communications underpin today's wireless networks. The

study of the different generations of systems reveals the differences in mobile computing and
communications, access control, security, etc. The traditional mobile phone only
had a simple black-and-white text display and could send/receive voice or short
messages. Today, however, mobile phones migrate more and more toward PDAs.
Mobile phones with full-color graphic displays and built-in Internet browsers are available.


1. Mobile Communications book by

2. Web Site from MobileComputing.Com

Rajeev Gandhi Memorial College of Engineering and







Cell: 9441422114.
Cell: 9704268488

Generally, data mining is the process of analyzing data from different perspectives
and summarizing it into useful information - information that can be used to
increase revenue, cut costs, or both. Data mining software is one of a number of
analytical tools for analyzing data. It allows users to analyze data from many
different dimensions or angles. Technically, data mining is the process of finding
correlations or patterns among dozens of fields in large relational databases.
Although data mining is a relatively new term, the technology is not.

The objective of this paper is to provide full-fledged information about the
process of data mining, the steps of the mining process, etc. It also covers the
more advantageous techniques like data cleaning, integration, etc., and all schemas
for effective processing and mining.

Data mining is the task of discovering interesting patterns from large
amounts of data, where the data can be stored in databases, data warehouses, or
other information repositories. It is a young interdisciplinary field, drawing from
areas such as database systems, data warehousing, statistics, machine learning,
data visualization, information retrieval, and high-performance computing.

A knowledge discovery process includes data cleaning, data integration,
data selection, data transformation, data mining, pattern evaluation, and
knowledge presentation.

A warehouse is a repository for long-term storage of data from multiple sources,

organized so as to facilitate management decision making. The data are stored under
a unified schema and are typically summarized. Data warehouse systems provide
some data analysis

capabilities, collectively referred to as OLAP (On-Line Analytical Processing).

Figure (1): Architecture of typical data mining system

Data mining - on what kind of data?
Data mining should be applicable to any kind of information
repository. This includes relational databases, data warehouses, transactional
databases, advanced database systems (including object-oriented and object-
relational databases), and specific application-oriented databases such as spatial
databases, text databases, and multimedia databases. The challenges and
techniques of mining may differ for each repository system.

We have to preprocess the data in order to help improve its quality.

Today's real-world databases are highly susceptible to noisy, missing, and
inconsistent data due to their typically huge size and their likely origin from
multiple sources. Low-quality data will lead to low-quality mining results.

To avoid these anomalies we have the following techniques.

DATA CLEANING – Removes noise and corrects inconsistencies in the data.

DATA INTEGRATION – Merges data from multiple sources into a coherent data store.

DATA TRANSFORMATION – Converts the data into appropriate forms for mining.

DATA REDUCTION – Reduces the data size by, e.g., aggregating and eliminating redundancy.

The above techniques are not mutually exclusive, they may work together.

“Data preprocessing techniques, when applied before mining, can
substantially improve the overall quality of the patterns mined and/or the time
required for the actual mining.” Descriptive data summarization provides the
analytical foundation for data preprocessing. The basic statistical measures for data
summarization, such as the mean, weighted mean, median, and mode, are used to
measure the central tendency of data.
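The central-tendency measures just named can be computed directly with the Python standard library; the weighted mean has no stdlib helper, so it is computed by hand. The attribute values and weights below are made-up illustration data.

```python
# Measures of central tendency: mean, weighted mean, median, and mode.
from statistics import mean, median, mode

data = [30, 36, 47, 50, 52]     # hypothetical attribute values
weights = [1, 3, 2, 2, 2]       # e.g., frequency of each value

# weighted mean = sum(value * weight) / sum(weights)
wmean = sum(v * w for v, w in zip(data, weights)) / sum(weights)

print(mean(data))               # arithmetic mean: 43
print(median(data))             # middle value: 47
print(mode([30, 36, 36, 47]))   # most frequent value: 36
print(wmean)                    # weighted mean
```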

Data warehouses provide On-Line Analytical Processing (OLAP) tools for
interactive analysis of multidimensional data at varied granularities, which
facilitates data generalization and data mining. Many other data mining functions,
such as association, classification, prediction, and clustering, can be integrated
with OLAP operations to enhance interactive mining of knowledge at multiple levels
of abstraction.

Data warehouses and OLAP tools are based on a multidimensional data
model. This model allows data to be modeled and viewed in multiple dimensions. It
is defined by dimensions and facts.

“Dimensions are the perspectives or entities with respect to which an
organization wants to keep records.” Each dimension may have a table associated
with it, called a dimension table.

“Facts are the quantities by which we want to analyze relationships
between dimensions.” A fact table contains measures as well as keys to each of
the related dimension tables.

3-d CUBES:
We usually think of cubes as 3-D geometric structures, but in data
warehousing the data cube is n-dimensional. An example of a 3-D cube is shown in
figure (2) below.

Figure (2): Example of a 3-D cube: units sold on 1/1/2003 (one slice along the time dimension).

Location      Soda   Diet soda   Orange soda   Lime soda
California     80       110          60            25
Utah           40        90          50            30
Arizona        70        55          60            35
Washington     75        85          45            45
Colorado       65        45          85            60

If we want to add or view additional data we can go for a 4-D approach: we can
think of a 4-D cube as a series of 3-D cubes. The data cube is a metaphor for
multidimensional data storage.

 Star schema: The most commonly used model. A star schema contains
(1) a large central table (fact table) containing the bulk of the data, with no
redundancy, and (2) a set of smaller attendant tables (dimension tables), one for each
dimension.

 Snowflake schema: The snowflake schema is a variant of the star schema
where some dimension tables are normalized, thereby further splitting the data
into additional tables. The major difference between the snowflake and star schemas
is that the dimension tables of the snowflake model are kept in normalized form to
avoid redundancies.

Such tables can save space.

 However, the snowflake schema may necessitate more joins.

 Fact constellation: Sophisticated applications may require multiple fact
tables to share dimension tables.

 This kind of schema is called a galaxy schema or fact constellation.
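The star schema idea above can be sketched with plain dictionaries: a central fact table holds the measures plus one foreign key per dimension, and each dimension table maps that key to descriptive attributes. All table contents here are made-up illustration data, not from the paper.

```python
# A toy star schema: one fact table, one dimension table per dimension.
dim_time = {1: {"day": "1/1/2003", "quarter": "Q1", "year": 2003}}
dim_item = {10: {"name": "Soda", "brand": "FizzCo"}}          # assumed rows
dim_location = {100: {"state": "Utah", "country": "USA"}}

fact_sales = [  # measures plus keys into each related dimension table
    {"time_key": 1, "item_key": 10, "location_key": 100, "units_sold": 40},
]

# Resolving one fact row through its dimension tables (a star-join in miniature):
row = fact_sales[0]
print(dim_item[row["item_key"]]["name"],
      dim_location[row["location_key"]]["state"],
      row["units_sold"])
```

A snowflake variant would further split, e.g., `dim_location` into separate normalized state and country tables, trading storage for extra joins.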

How to define multidimensional schema?

 DMQL (Data Mining Query Language) contains primitives for defining data
warehouses and data marts:

 define cube (cube_name) [(dimension_list)]: (measure_list)

 define dimension (dimension_name) as ((attribute_or_subdimension_list))

OLAP operations in a Multidimensional Data Model: roll-up, drill-down,
slice and dice, and pivot.
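Two of these operations can be sketched on the 2-D slice from figure (2) (units sold on 1/1/2003), stored as a mapping from (location, product) to units. This is a minimal illustration, not an OLAP engine; only a subset of the figure's cells is used.

```python
# Roll-up and slice on a tiny {(location, product): units} cube.
cube = {
    ("California", "Soda"): 80, ("California", "Diet soda"): 110,
    ("Utah", "Soda"): 40, ("Utah", "Diet soda"): 90,
}

# Roll-up: aggregate away the product dimension, leaving totals per location.
by_location = {}
for (loc, product), units in cube.items():
    by_location[loc] = by_location.get(loc, 0) + units

# Slice: fix product = "Soda", leaving a 1-D view over locations.
soda_slice = {loc: u for (loc, product), u in cube.items() if product == "Soda"}

print(by_location["California"], soda_slice["Utah"])  # 190 40
```

Drill-down is the inverse of roll-up (moving to finer granularity), and pivot merely reorients which dimensions label the rows and columns.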

The Process Of Data Warehouse Design:

 Choose a business process to model

 Choose the grain of the business process.

 Choose the dimensions that will apply to each fact table record.

 Choose the measures that will populate each fact table record

 The spiral method involves the rapid generation of increasingly functional
systems, with short intervals between successive releases.

 This is considered a good choice for data warehouse development, especially
for data marts, because the turnaround time is short, modifications can be
done quickly, and new designs and technologies can be adapted in a timely
manner.
Three-Tier Data Warehouse Architecture:

 The bottom tier is a warehouse database server that is almost always a
relational database system. The middle tier is an OLAP server that is
typically implemented using either ROLAP or MOLAP.

The top tier is a client, which contains query and reporting tools, analysis
tools, and/or data mining tools (e.g., trend analysis, prediction, ...).

Data Warehouse Implementation:

 Efficient computation of data cubes, indexing of OLAP data, efficient
processing of OLAP queries, the metadata repository, and data warehouse
back-end tools and utilities.

 Partition the array into chunks. A chunk is a subcube that is small enough to
fit into the memory available for cube computation. Chunking is a method for
dividing an n-dimensional array into chunks, where each chunk is stored as an
object on disk.

 Compute aggregates by visiting (i.e., accessing the values at) cube cells. The
order in which cells are visited can be optimized so as to minimize the
number of times each cell must be revisited, thereby reducing memory
access and storage costs.

Metadata Repository:

 A description of the structure of the data warehouse, which includes the
warehouse schema, views, dimensions, hierarchies, and derived data
definitions, as well as data mart locations and contents.

 Operational metadata, which include data lineage, currency of data, and
monitoring information.

 The algorithms used for summarization, which include measure and
dimension definition algorithms, data on granularity, partitions, subject
areas, aggregation, summarization, and predefined queries and reports.

 The mapping from the operational environment to the data warehouse,
which includes source databases and their contents, gateway descriptions,
data partitions, data extraction, cleaning, and transformation rules and
defaults, data refresh and purging rules, and security.

 Data related to system performance, which include indices and profiles that
improve data access and retrieval performance, in addition to rules for the
timing and scheduling of refresh, update, and replication cycles.

 Business metadata, which include business terms and definitions, data
ownership information, and charging policies.

Data Warehouse Back-End Tools And Utilities:

 Data Extraction, which typically gathers data from multiple, heterogeneous,
and external sources.

 Data Cleaning, which detects errors in the data and rectifies them when
possible.

 Data Transformation, which converts data from legacy or host format to
warehouse format.

 Load, which sorts, summarizes, consolidates, computes views, checks
integrity, and builds indices and partitions.

 Refresh, which propagates the updates from the data sources to the
warehouse.

[Figure: A typical data warehousing environment. Operational data from branch
databases (Bombay, Delhi, Calcutta; e.g., Oracle, IMS, SAS) and external sources
such as census data are cleaned, merged, and summarized into a relational data
warehouse (e.g., Redbrick), which feeds decision-support tools: OLAP tools (e.g.,
Essbase), query/reporting tools (e.g., Crystal Reports), and mining tools (e.g.,
Intelligent Miner).]


• The value of warehousing and mining lies in effective decision making based on
concrete evidence from old data.

• Challenges of heterogeneity and scale arise in warehouse construction and
maintenance.

• Grades of data analysis tools: straight querying, reporting tools,
multidimensional analysis, and mining.

Before mining the data we have to preprocess it to remove anomalies
such as noise.


1) Data Mining: Concepts and Techniques by Jiawei Han and Micheline
Kamber, Elsevier, second edition.

2) Informatica PowerCenter 8.1.1.

3) (url.)





With the advent of the Internet and the plurality and variety of fancy applications it
brought with it, the demand for more advanced services on cellular phones is
becoming increasingly urgent. Unfortunately, so far the introduction of new
enabling technologies has not succeeded in boosting new services. The adoption of
Internet services has shown to be more difficult due to the difference between the
Internet and the mobile telecommunication system. The goal of this paper is to
examine the characteristics of the mobile system and to clarify the constraints that
are imposed on existing mobile services. The paper will also investigate
successively the enabling technologies and the improvements they brought. Most
importantly, the paper will identify their limitations and capture the fundamental
requirements for future mobile service architectures namely openness, separation
of service logic and content, multi-domain services, personalization, Personal Area
Network (PAN)-based services and collaborative services. The paper also explains
the analysis of current mobile service architecture such as voice communication;
supplementary services with intelligent network, enabling services on SIM with SIM
application tool kit, text services with short message service, internet services with
WAP and dynamic applications on mobile phones with J2ME.
Further, our paper gives information on the challenges of mobile
computing, which include harsh communications environments, connections, bandwidth, and
heterogeneous networks. Under research issues, seamless connectivity over multiple
overlays, scalable mobile processing, wireless communications, mobility, and
portability are discussed.

1. Introduction
With digitalization, the difference between telecommunication and computer networking is
fading, and the same technologies are used in both fields. However, the
convergence does not progress as rapidly as expected. Moving applications and
services from one field to the other has proven to be very difficult or in many cases
impossible. The explanation is that although the technologies in use are rather
similar, there are crucial differences in architecture and concepts. The paper starts
with a study of how mobile services are implemented in mobile telecommunication

systems, and an identification of their limitations with respect to future needs.

2. Analysis of current mobile service architectures

2.1 Voice communication
As indicated by its name, the objective of mobile telecommunications
systems is to provide communication between mobile, distant persons. Initially, these
systems only supported direct voice communication or telephony between two
participants; supplementary services like call forwarding, barring, and voice mail
were added later on.

2.2 Supplementary services with intelligent network: It did not

take long before there was a need for more advanced call control services
like call forwarding, barring, voice mail, premium calls, etc. As shown in Figure 3,
an IN (Intelligent Network [1]) Service Control Point (SCP) is introduced in the
mobile network to allow the implementation of supplementary services.

2.3 Enabling services on the SIM with SIM Application Toolkit

The telecom operators want to have services other than telephony and its
derivatives and turn to the SIM, which is their property. Unfortunately,
although the SIM is a smart card having both the processing and storage
capabilities necessary for new services, the SIM is supposed to be the slave executing
orders from its master, the ME (Mobile Equipment). To remedy this, the SIM Application Toolkit (SAT)
[2] is introduced to allow applications/services residing on the SIM to control
the input and output units. With SAT it is possible to develop applications on
the SIM, but there are many restrictions. First, SAT applications should be small in
size. Secondly, operators, who are reluctant to open access due to
security, control the installation of applications on the SIM.

2.4 Text services with Short Message Service (SMS): The SMS-C is

responsible for storing and forwarding messages to and from the mobile phone
(see Figure 3). In the illustration, the components used for SMS are the client (C)
in the ME; advanced SMS services are implemented by scripts (e.g., Perl scripts).
Provisioning of SMS services requires installation of the above-mentioned application
on an SMS Gateway, or requires the system running the SMS Gateway to act as an SMSC
itself (e.g., a PC using a radio modem), together with the additional identifiers and
parameters for a specific service (the protocol).

2.5 Internet access with WAP: The goal of the Wireless Application Protocol (WAP) [5]
was to provide access to the WWW on handheld terminals. A micro browser is installed
on the terminal, and a WAP gateway is placed between the Internet and the mobile
network to convert Internet protocols to wireless binary protocols, as shown in
Figure 3. One restriction of the technology is that it
is not possible to access ordinary web pages using a WAP browser.

2.6 Dynamic applications on mobile phones with J2ME

(CLDC/MIDP): Unlike on a computer, the functionality of a mobile phone is defined at
manufacture time and it is not possible to install new applications. With the
introduction of J2ME CLDC/MIDP, a vast number of sophisticated
applications, called MIDlets, can be found on the Internet. With J2ME, it is possible
to develop dynamic standalone applications.

3. Advanced Architecture
This section aims at identifying and elucidating the advanced architectural concepts
and hence contributes to the definition of an advanced architecture.

3.1 Separation of service content and logic

Mobility is the ultimate requirement for mobile services. The mobility

properties of a service are dependent on the architecture and particularly on the
location of its components; separating service logic from service content makes the
analysis easier. In early mobile telecom services the service logic was embedded in
dedicated hardware components. This has been a hindrance to the development of
flexible services; such services will by default not be accessible from outside an
operator domain.
To enhance the mobility of services, it is necessary to decouple the
service logic from the system components.

3.2 Multi-domain services

With multi-domain services we can not only access all services provided by
the network, but many users can also work at the same time. Mobile
services will be provided as distributed services where logic residing in
different places cooperates in delivering the end-user service.

3.3 PAN-based Services: Nowadays, each individual uses several
devices like mobile phones, PDAs, digital cameras, GPS receivers, etc. With the
emergence of wireless short-range technologies like Bluetooth and WLAN,
Personal Area Networks can potentially be formed to allow communication
between these devices.
3.4 Collaborative Services: With a multi-domain service, it will be possible

for people not only to collaborate across network boundaries, but also across
terminal boundaries. It is also possible for several people to collaborate by
exchanging information through several channels and devices simultaneously,
such as talking on their phones, showing pictures on digital cameras, and reading
documents on PDAs.

4. Challenges of Mobile Computing:

• Freedom from Collocation

• Harsh communications environment:
The unfavorable communication environment is coupled with lower
bandwidth and higher latency, not good enough for videoconferencing or similarly
demanding applications. It has higher error rates and more frequent disconnections.
Performance depends on the density of nearby users, but the inherent scalability of
the cellular/frequency-reuse architecture helps.

• Connection/Disconnection:
Network failure is a common issue and therefore autonomous operation is
highly desirable. Caching is often a good idea, e.g., a web cache.

• Low Bandwidth
– Orders-of-magnitude differences between wide-area and in-building wireless networks

• Variable Bandwidth
– Applications must adapt to changing quality of connectivity

» High bandwidth, low latency: business as usual

» High bandwidth, high latency: aggressive prefetching
» Low bandwidth, high latency: asynchronous operation, use caches to
hide latency, predict future references/trickle in, etc.

• Heterogeneous Networks
“Vertical Handoff” among collocated wireless networks
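The caching strategy recommended above (e.g., a web cache that hides latency and keeps data usable during disconnection) can be sketched as follows. `fetch()` is a stand-in for a real network access; the URL is hypothetical.

```python
# A cache that hides link latency and survives disconnection.
cache = {}
connected = True

def fetch(url):
    """Stand-in for a slow wireless network access."""
    if not connected:
        raise ConnectionError("link is down")
    return f"<page for {url}>"     # simulated network reply

def get(url):
    if url in cache:               # cache hit: no network round trip
        return cache[url]
    cache[url] = fetch(url)        # cache miss: go to the network once
    return cache[url]

page = get("http://example.com/")  # fetched over the (simulated) link
connected = False                  # the mobile host disconnects
print(get("http://example.com/") == page)  # True: served from the cache
```

The same structure supports the low-bandwidth strategies listed: prefetching simply calls `get()` speculatively while the link is good.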
5. Research Issues
• Seamless connectivity over multiple overlays
– Implementing low latency handoffs
– Exploiting movement tracking and geography
– Performance characterization of channels
– Authentication, security, privacy

• Scalable mobile processing

– Hierarchical and distributed network management, load balancing for
network management
– Integration with local- & wide-area networked servers
– Application support for adaptive connections

• Wireless Communications
– Quality of connectivity and bandwidth limitations

• Mobility
– Location transparency and location dependency

• Portability
– Power limitations; display, processing, and storage limitations

6. Conclusion
This paper presents an analysis of the evolutionary path of mobile services,
from early voice communication services to prospects of future service
possibilities. It is argued that increasing openness can help accelerate the future of
mobile services. Each of the concepts discussed around mobile services in this paper
is a research area in its own right and must be further elaborated in separate
studies. Thus, the discussions in this paper are preliminary and address only the
basic structures; further work will be carried out.

7. References
1. Gunnar Heine, GSM Networks: Protocols, Terminology and Implementation.
2. J. B. Andersen, T. S. Rappaport, S. Yoshida, "Propagation Measurements and
Models for Wireless Communications Channels," IEEE Communications
Magazine, pp. 42-49.
3. G. H. Forman, J. Zahorjan, "The Challenges of Mobile Computing," IEEE
Computer, Vol. 27, No. 4.

Devineni Venkata Ramana & Dr. Hima Sekhar MIC College of Technology

Presented by….

A.Spandana (IIIrd year CSE)

Email id:

M.P.Priyadarshini (IIIrd CSE)

Email id:

The growth of wireless networking has blurred the traditional boundaries between
trusted and untrusted networks and shifted security priorities from the network
perimeter to information security. The need to secure mobile information and
control the wireless environment to prevent unauthorized access must be a
priority for maintaining the integrity of corporate information and systems.
But running a business from home using a home wireless local area network
(WLAN) with your computer may lead to theft of confidential information and
hacker or virus penetration unless proper actions are taken. As WLANs send
information back and forth over radio waves, someone with the right type of
receiver in your immediate area could pick up the transmission, thus
gaining access to your computer.

Deploying a wireless network does not require special expertise. If
a department is eager to expand its network and it can't or doesn't want to wait
for the normal IT process, it can expand the network itself cheaply and easily, just
by plugging an access point into an Ethernet jack and a wireless card
into a laptop.

Basic LAN environment

• Surveillance

There are several approaches to locating a wireless network. The most basic
method is a surveillance attack. You can use this technique on the spur of the
moment, as it requires no special hardware or preparation. Most significantly, it is
difficult, if not impossible, to detect. How is this type of attack launched? You
simply observe the environment around you.

• War driving

The term war driving is borrowed from the 1980s phone hacking tactic
known as war dialing. War dialing involves dialing all the phone numbers in a
given sequence to search for modems. In fact, this method of finding modems
is so effective that it's still in use today by many hackers and security
professionals. Similarly, war driving, which is now in its infancy, will most likely
be used for years to come both to hack and to help secure wireless networks.

• Client-to-client hacking

Clients exist on both wireless and wired networks. A client can be
anything from a Network Attached Storage (NAS) device, to a printer, or
even a server. Because the majority of consumer operating systems are
Microsoft based, and since the majority of users do not know how to secure
their computers, there is plenty of room to play here. An attacker can connect
to the laptop, upon which he could exploit any number of operating system
vulnerabilities, thus gaining root access to the laptop.

• Rogue Access Point

Rogue access points are those connected to a network without planning or

permission from the network administrator. For example, we know one
administrator in Dallas who just did his first wireless security scan (war
driving) on his eight-building office campus. To his surprise, he found over
thirty access points. Worse, only four of them had authorization to be
connected to the network. Needless to say, heads rolled.
• Jamming (Denial of Service)

Denial-of-service (DoS) attacks are those that prevent the proper use of
functions or services. Such attacks can also be extrapolated to wireless
networks.

• Practical WEP cracking

WEP is fundamentally flawed, allowing it to be cracked. Even so, enabling it will
thwart the casual drive-by hacker, and it adds another layer of legal
protection, as laws prohibit the cracking of transmitted, encrypted signals.
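One reason WEP is crackable in practice is its per-packet initialization vector (IV): it is only 24 bits wide, so on a busy network IVs must repeat within hours, handing the attacker keystream-reuse pairs. A rough back-of-the-envelope sketch (the packet rate is an assumption):

```python
# How quickly WEP's 24-bit IV space is exhausted on a busy access point.
iv_space = 2 ** 24                  # 16,777,216 possible IVs
packets_per_second = 500            # assumed traffic on a busy AP

seconds_to_exhaust = iv_space / packets_per_second
hours = seconds_to_exhaust / 3600
print(iv_space)                     # 16777216
print(round(hours, 1))              # about 9.3 hours until IVs must repeat
```

In reality collisions appear far sooner than full exhaustion (birthday effect), which is why tools can recover keys from a few hours of captured traffic.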


For WLANs, the first step in security hardening is to focus on the access
point. Since the AP is the foundation of wireless LAN data transfer, you must
ensure that it is part of the solution, instead of the problem.

• Access Point based security

• WEP: Although WEP is only minimally effective, enabling it can
eliminate 99% of your threat. Similar to a car lock, WEP will protect your
network from passers-by; however, just as a dedicated thief will quickly
bypass the lock by smashing a car window, a dedicated hacker will put
forth the effort to crack WEP if it is the only thing between him and your
data.
• MAC Filtering: Every device on a wireless network, by default, has
a unique address that's used to distinguish one WNIC from another. This
address is called the MAC address. To find the MAC address of a
network card, the user only has to perform a few steps that depend on the
operating system. However, while in theory this is an excellent way to stop
hackers from accessing your WLAN, there is a serious flaw in MAC
filtering: MAC addresses can be spoofed by changing WNIC settings.
• Controlling Radiation zone: When a wireless network is active, it
broadcasts radio frequency (RF) signals. These signals are used to
transmit the wireless data from an access point to the WNIC and back
again. The same signal is also used in ad-hoc networks, or even between
PDAs with 802.11 WNICs. By using antenna management techniques, you
can control the range of your WLAN. In high-rise buildings or apartment
complexes, this can be a serious issue. Interference—and nosy neighbors
—can quickly become a problem. By removing one antenna, reducing the
output power, and adjusting the position of the antenna, you can effectively
keep the signal within a tight range.
• Defensive Security Through a DMZ: A DMZ, or demilitarized zone,
is a concept of protection. A DMZ typically defines where you place
servers that access the Internet. In other words, a Web server or mail
server is often set up in a DMZ. This allows any Internet user to access
the allocated resources on the server, but if the server becomes
compromised, a hacker will not be able to use the "owned" computer to
search out the rest of the network. Technically, a DMZ is actually its own
little network, separate from the internal network, and separate from the
Internet. However, while this type of protection can help protect internal
resources, it will not protect the wireless network users. Therefore, the
DMZ should be just one part of your wireless security plan.

• Third party security methods

• Firewalls: A firewall separates the wireless users from the internal
users. A firewall can do much to eliminate security threats. Depending on
how it is set up and what types of policies are used, a firewall can
effectively block all incoming requests that are not authorized. This creates
a barrier against crackers who might have control over the wireless
network and are trying to breach the internal network.
• VPNs: VPNs create encrypted channels to protect private
communication over existing public networks. A VPN enables you to
establish a secure, encrypted network within a hostile, public network such
as the Internet. VPNs provide several benefits, including the following:

• Facilitate secure and easy inter-office communication

• Provide inexpensive network access for mobile employees
• Provide full network access for telecommuters
VPNs provide secure, encrypted communication in two ways:

• User-to-Network (Remote-Access Model) — In this configuration, remote

clients can connect through a public network such as the Internet. By using
a VPN, the remote client can become part of the company network. This
configuration effectively replaces the remote dial-in or authenticated
firewall access model.
• Network-to-Network (Site-to-Site Model) — In this configuration, one
branch office network can connect through a public network such as the
Internet to another branch office network. This configuration eliminates the
need for an expensive wide-area network (WAN).

Thus, VPNs are secure communication solutions that take advantage of public
networks to lower your costs. However, VPNs have their share of problems.

• Radius: Remote Authentication Dial-In User Service (RADIUS) is a

protocol that is responsible for authenticating remote connections made to a
system, providing authorization to network resources, and logging for
accountability purposes. Although the protocol was developed to help remote
modem users securely connect to and authenticate with corporate networks,
it has now evolved to the point where it can also be used in VPNs and WLANs
to control almost every aspect of a user's connection. There are several
brands of RADIUS servers available. One of the more popular is Funk's Steel-
Belted Radius server, which is often deployed with Lucent WLAN setups.
• Funk’s Steel Belted Radius

Funk’s product is a functional software package that provides a central point of

administration for all remote users, regardless of how they connect.

"Steel-Belted Radius is an award-winning RADIUS/AAA server that lets you
centrally manage all your remote and wireless LAN (WLAN) users and equipment,
and enhance the security of your network."

Steel-Belted Radius earns a second look because it provides extra security for
WLAN users, working with existing access points to ensure only authorized
users are allowed access.
Features of Funk's Steel-Belted Radius:
• Central User Administration: Steel-Belted Radius manages remote and

WLAN users by allowing authentication procedures to be performed from one

database. This relieves you of the need to administer separate authentication
databases for each network access or WLAN access point device on your LAN.
Steel-Belted Radius performs three main functions:

• Authentication— Validates any remote or WLAN user's username and

password against a central security database to ensure that only individuals
with valid credentials will be granted network access.
• Authorization— For each new connection, provides information to the
remote access or WLAN access point device, such as what IP address to
use, session time-limit information, or which type of tunnel to set up.
• Accounting— Logs all remote and WLAN connections, including usernames
and connection duration, for tracking and billing.
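The three functions above can be sketched in a few lines. This is a toy model of the AAA idea, not Steel-Belted Radius's actual interface: the user database, policy attributes, and function names are all hypothetical.

```python
# Minimal AAA (authentication, authorization, accounting) sketch.
import time

USERS = {"alice": "s3cret"}                    # central credential store
POLICY = {"alice": {"session_limit_s": 3600}}  # per-user connection attributes
LOG = []                                       # accounting records

def authenticate(user, password):
    """Validate credentials against the central security database."""
    return USERS.get(user) == password

def authorize(user):
    """Return connection attributes for the access device to enforce."""
    return POLICY.get(user, {})

def account(user, event):
    """Log the connection event for tracking and billing."""
    LOG.append((time.time(), user, event))

if authenticate("alice", "s3cret"):
    attrs = authorize("alice")   # e.g. session time limit
    account("alice", "connect")
```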
• Central Hardware administration: Steel-Belted Radius can
manage the connections of all your remote and wireless users. This
includes the following:

• Dial-up users who connect via remote access servers from 3Com, Cisco,
Lucent, Nortel, and others.
• Internet users who connect via firewalls from Check Point, Cisco, and others.
• Tunnel/VPN users who connect via routers from 3Com, Microsoft, Nortel,
Red Creek, V-One, and others.
• Remote users who connect via outsourced remote access services from
ISPs and other service providers.
• Wireless LAN users who connect via access points from Cisco, 3Com,
Avaya, Ericsson, Nokia and others.
• Users of any other device that supports the RADIUS protocols.

Moreover, Steel-Belted Radius supports a heterogeneous network, interfacing with

remote and wireless access equipment from different vendors simultaneously

• Authentication methods: Steel-Belted Radius not only works with

a wide variety of remote and wireless access equipment, but it also makes it
possible to authenticate remote and WLAN users according to any
authentication method or combination of methods you choose.

In addition to Steel-Belted Radius's native database of users and their passwords,
Steel-Belted Radius supports "pass-through" authentication against external user
databases.

Steel-Belted Radius can simultaneously authenticate many users. If you are
combining authentication methods, you can even specify the order in which each
is checked. The result is streamlined administration, as well as one-stop
security management.

Securing Your Wireless LAN: Steel-Belted Radius plays a
pivotal role in securing WLAN connections. Steel-Belted
Radius provides additional security on a WLAN by:
• Protecting against rogue access points. Steel-Belted Radius ignores
communications from any access point that is not registered with it. This
helps prevent network intrusion from illegally installed or used equipment.
• Supporting time session limits, time-of-day restrictions, and other RADIUS
attributes, which let you impose additional security constraints on WLAN users.

Steel-Belted Radius also makes it possible to manage both wireless LAN and
remote users from a single database and console, greatly reducing your
administrative burden by eliminating the need for two separate authentication
systems.

• WLAN protection enhancement

• TKIP: The Temporal Key Integrity Protocol (TKIP) was designed to correct WEP's
problems. TKIP still uses RC4 as the encryption algorithm, but it removes the
weak-key problem and forces a new key to be generated every 10,000
packets or 10 KB, depending on the source. TKIP also adds a stronger, more
secure method of verifying the integrity of the data. Called the Message
Integrity Check, this part of TKIP closes a hole that would enable a hacker to
inject data into a packet so he can more easily deduce the streaming key
used to encrypt the data.
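The per-packet rekeying idea can be sketched as a simple counter that triggers a fresh key derivation. This is only an illustration of the concept: the interval follows the description above, but the derivation function here is ordinary SHA-256, not the 802.11i key hierarchy.

```python
# Sketch of rekeying: after a fixed number of packets, derive a new key,
# so a key an attacker recovers from captured traffic quickly goes stale.
import hashlib

REKEY_INTERVAL = 10_000  # packets, per the description above

def derive_key(master: bytes, epoch: int) -> bytes:
    return hashlib.sha256(master + epoch.to_bytes(4, "big")).digest()

master = b"pairwise-master-key"  # hypothetical shared secret
packets_sent = 0
epoch = 0
key = derive_key(master, epoch)
first_key = key

def send_packet():
    global packets_sent, epoch, key
    packets_sent += 1
    if packets_sent % REKEY_INTERVAL == 0:
        epoch += 1
        key = derive_key(master, epoch)  # rotate the encryption key

for _ in range(REKEY_INTERVAL):
    send_packet()
assert key != first_key  # key has rotated after 10,000 packets
```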
• AES: Advanced Encryption Standard (AES) is a newer encryption method
that was selected by the U.S. government to replace DES as their standard.
It is quite strong, and is actually under review for the next version of the
wireless 802.11 standard (802.11i). AES allows different sizes of keys,
depending on need. The key size directly reflects the strength of the
encryption, as well as the amount of processing required to encrypt and
decipher the text.
	3.4 × 10^38 possible 128-bit keys

	6.2 × 10^57 possible 192-bit keys

	1.1 × 10^77 possible 256-bit keys

Nevertheless, AES is destined to be the encryption method of future wireless networks.
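The key-space figures above follow directly from the key lengths: an n-bit key has 2^n possible values, which Python's arbitrary-precision integers let us check directly.

```python
# An n-bit key has 2**n possible values; confirm the quoted magnitudes.
assert str(2 ** 128).startswith("34")   # ~3.4 x 10^38 (39 digits)
assert len(str(2 ** 128)) == 39

assert str(2 ** 192).startswith("62")   # ~6.2 x 10^57 (58 digits)
assert len(str(2 ** 192)) == 58

assert str(2 ** 256).startswith("11")   # ~1.1 x 10^77 (78 digits)
assert len(str(2 ** 256)) == 78
```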

• SSL: Secure Sockets Layer is a protocol that uses RC4 to encrypt data

before it is sent over the Internet. This provides a layer of security to any
sensitive data and has been incorporated into almost all facets of online
communication. Everything from Web stores, online banking, Web-based
email sites, and more use SSL to keep data secure. The reason why SSL is so
important is because without encryption, anyone with access to the data
pipeline can sniff and read the information as plaintext. By using a Web
browser with SSL enabled, an end user can make a secure and encrypted
connection to a WLAN authentication server without having to deal with
cumbersome software. As most wireless users will be familiar with using
secure Web sites, the integration of SSL will go unnoticed. Once the
connection is made, the user account information can be passed securely and
transparently.

• IDSs: Intrusion detection systems (IDSs) provide an additional level of
security for your wireless-enabled network. By adding wireless access to your
network, you are dramatically increasing your risk of compromise. To counter
this increased threat, you should also consider adding additional layers of
security for defense in depth. A firewall and VPN (virtual private
network) might no longer be enough. Fortunately, a properly configured
IDS can satisfy your demand for extra security by notifying you of suspected
attacks.
• Log file monitors: The simplest of IDSs, log file
monitors, attempt to detect intrusions by parsing system event logs. This
technology is limited in that it only detects logged events, which attackers
can easily alter. In addition, such a system will miss low-level system
events, because event logging is a relatively high-level operation.
• Integrity Monitors: An integrity monitor watches
key system structures for change. Although limited, integrity monitors can
add an additional layer of protection to other forms of intrusion detection.
The most popular integrity monitor is Tripwire.
• Signature Scanners: The majority of IDSs attempt
to detect attacks based on a database of known attack signatures. When
a hacker attempts a known exploit, the IDS attempts to match the exploit
against its database.
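Signature matching reduces to looking for known byte patterns in traffic. The toy scanner below illustrates the idea only; real IDSs such as Snort use far richer rule languages, and both signatures here are made up.

```python
# Toy signature-based detection: match packets against known attack patterns.
SIGNATURES = {
    "dir-traversal": b"../..",   # hypothetical directory-traversal signature
    "cmd-exec": b"/bin/sh",      # hypothetical shell-execution signature
}

def scan(packet: bytes):
    """Return the names of all signatures found in the packet."""
    return [name for name, pattern in SIGNATURES.items() if pattern in packet]

assert scan(b"GET /../../etc/passwd") == ["dir-traversal"]
assert scan(b"GET /index.html") == []   # clean traffic matches nothing
```

The obvious limitation follows from the design: an exploit with no entry in the database passes unnoticed, which is why signature scanners are combined with integrity monitors and log analysis.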


The encryption and decryption keys are different in public key cryptography. In
public key encryption, each user has a pair of keys: a public key and a private
key. Public key encryption uses a one-way function to scramble the data using the
recipient's public key, and the recipient in turn decrypts the data using his
private key.

• Public key cryptography

In encryption applications, tested and proven mathematical algorithms play a
very important role.
The following are the most commonly used public key algorithms:

Figure: Public key encryption. Plaintext input is scrambled by the encryption
algorithm using the recipient's public key; the decryption algorithm recovers
the plaintext output using the matching private key.

• RSA: This algorithm is named after its three inventors: Ron Rivest,
Adi Shamir, and Leonard Adleman. It is cryptographically strong, based
on the difficulty of factoring large numbers, and is capable of both digital
signature and key exchange operations.
• DSA: The Digital Signature Algorithm was developed by the National
Security Agency of the USA and is used for digital signature operations, not for
data encryption. Its cryptographic strength is based on the difficulty of
calculating discrete logarithms.
• Diffie-Hellman: This algorithm is named after its inventors, Whitfield
Diffie and Martin Hellman, and can be used for key exchange only. The
cryptographic strength of Diffie-Hellman is based on the difficulty of calculating
discrete logarithms in a finite field.
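To make the public/private asymmetry concrete, here is a toy RSA round trip using the textbook primes p = 61 and q = 53. These values are far too small for real use and are purely illustrative: real keys are thousands of bits long.

```python
# Toy RSA: encrypt with the public key (e, n), decrypt with the private (d, n).
p, q = 61, 53
n = p * q                 # 3233, the modulus shared by both keys
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent: modular inverse of e, here 2753

def encrypt(m):           # anyone may use the public key
    return pow(m, e, n)

def decrypt(c):           # only the private-key holder can reverse it
    return pow(c, d, n)

m = 65
c = encrypt(m)            # 2790
assert decrypt(c) == m    # round trip recovers the plaintext
```

Breaking this toy scheme only requires factoring 3233 back into 61 × 53; the security of real RSA rests on that factoring step being infeasible for large n.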

• Digital Signatures: The concept of digital signatures has evolved to
address threats such as spoofing and tampering.
• One-Way Hash Algorithms: Digital signatures rely on a mathematical
function called a one-way hash. A hash utilizes a one-way (irreversible)
mathematical function (a hash algorithm) to transform data into a fixed-
length digest, known as the hash value. Each hash value is effectively unique
to its input, so authentication using the hash value is similar to fingerprinting.
To verify the origin of data, a recipient can decrypt the original hash and
compare it to a second hash generated from the received message.

Two common one-way hash functions are MD5 and SHA-1. MD5 produces a 128-bit
hash value, and is now considered less secure. SHA-1 produces a 160-bit hash
value. In PKI, hashes are used to create digital signatures.
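The digest sizes quoted above are easy to confirm with Python's standard hashlib, which also shows the fixed-length property: any input, however large, produces a digest of the same size.

```python
# Check the MD5 and SHA-1 digest sizes from the standard library.
import hashlib

md5 = hashlib.md5(b"hello").digest()
sha1 = hashlib.sha1(b"hello").digest()
assert len(md5) * 8 == 128    # MD5 produces a 128-bit hash value
assert len(sha1) * 8 == 160   # SHA-1 produces a 160-bit hash value

# Fixed-length: a much larger input still yields a 160-bit digest.
assert len(hashlib.sha1(b"x" * 10_000).digest()) * 8 == 160
```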

Today, most organizations deploying wireless LANs simply haven't put
enough effort into security; it isn't right, but it is true. Just as in the wired
world, organizations only began to take Internet security seriously after there had
been a series of highly visible and financially damaging hacker attacks. Only a
similar series of public wireless disasters will catalyze the change needed for
organizations to take wireless security more seriously.
While there are a number of inherent security problems with the 802.11
technology, there are also many straightforward measures that can be taken to
mitigate them. As with many new technologies, the best way to get started is to
recognize the problems and make a commitment to address the ones that can
reasonably be solved in our environment.
Digital certificates and cryptographic modules are the fundamental
building blocks of strong authentication, and there is no way around that. You can
make the best of it by leveraging that hefty investment for all your security needs.




N. Narmada Chowdary.
R. Alekhya.
2/4 B.TECH C.S.E.
Ph no: 9490907741.


The Internet is in wide use across all fields in the present competitive
world: in education, research, business, and everything else. But providing
security for users' information, transactions, and other data in any of these
fields has become paramount. This paper gives a vivid picture of e-Commerce
and the vulnerabilities it faces in providing a secure system for users; in other
words, how security attacks are made by hackers or intruders, and the ways they
attack and exploit systems for illegitimate ends.

This paper is an overview of security and privacy concerns based on our
experiences as developers of e-Commerce systems. E-commerce is a business
middleware that accelerates the development of any business transaction-oriented
application, from the smallest retailer to the distributor, to the consumer (user).
These transactions may also apply between manufacturers and distributors or
suppliers. Here, the user needs to be assured of the privacy of his or her
information. In this article, we focus on possible attack scenarios in an
e-Commerce system and provide preventive strategies, including security features
that one can implement.

Here we present better ways to defend against attacks and protect your
personal data without depending on the network provider's security, with the
help of personal firewalls and honey pots.


E-Commerce refers to the exchange of goods and services over the Internet. All
major retail brands have an online presence, and many brands have no associated
bricks-and-mortar presence. However, e-Commerce also applies to business-to-
business transactions, for example, between manufacturers and suppliers or
distributors.

E-Commerce provides companies with an integrated platform that runs both their
customer-facing online shopping sites and their internal distributor or supplier
portals, as shown in Figure.

E-Commerce systems are relevant for the services industry. For example,
online banking and brokerage services allow customers to retrieve bank
statements online, transfer funds, pay credit card bills, apply for and receive
approval for a new mortgage, buy and sell securities, and get financial guidance
and information.


A secure system accomplishes its task with no unintended side effects. Using the
analogy of a house to represent the system: you decide to carve out a piece of
your front door to give your pets easy access to the outdoors. However, the hole
is too large, giving access to burglars. You have created an unintended implication
and, therefore, an insecure system. While security features do not guarantee a
secure system, they are necessary to build one. Security features fall into four
categories:

• Authentication: Verifies who you say you are. It enforces that you are the
only one allowed to logon to your Internet banking account.
• Authorization: Allows only you to manipulate your resources in specific
ways. This prevents you from increasing the balance of your account or
deleting a bill.
• Encryption: Deals with information hiding. It ensures you cannot spy on
others during Internet banking transactions.
• Auditing: Keeps a record of operations. Merchants use auditing to prove
that you bought specific merchandise.

The victims and the accused (the players):

In a typical e-Commerce experience, a shopper proceeds to a Web site to browse
a catalog and make a purchase. This simple activity illustrates the four major
players in e-Commerce security. One player is the shopper, who uses his browser
to locate the site. The site is usually operated by a merchant, also a player, whose
business is to sell merchandise to make a profit. As the merchant's business is
selling goods and services, not building software, he usually purchases most of
the software to run his site from third-party software vendors. The software
vendor is the last of the three legitimate players; the attacker is the fourth.

A threat is a possible attack against a system. It does not necessarily mean that
the system is vulnerable to the attack. An attacker can threaten to throw eggs
against your brick house, but it is harmless. A vulnerability is a weakness in the
system, though it is not necessarily known to the attacker. Vulnerabilities exist at
entry and exit points in the system. In a house, the vulnerable points are the
doors and windows.
Points the attacker can target

As mentioned, the vulnerability of a system exists at the entry and exit

points within the system. Figure shows an e-Commerce system with several
points that the attacker can target:

• Shopper
• Shopper's computer
• Network connection between shopper and Web site's server
• Web site's server
• Software vendor

Tricking the shopper: Some of the easiest and most profitable attacks are
based on tricking the shopper, also known as social engineering techniques. These
attacks involve surveillance of the shopper's behavior, gathering information to
use against the shopper. For example, a mother's maiden name is a common
challenge question used by numerous sites. If one of these sites is tricked into
giving away a password once the challenge question is provided, then not only
has this site been compromised, but it is also likely that the shopper used the
same logon ID and password on other sites.

Snooping the shopper's computer: Millions of computers are added to the

Internet every month. Most users' knowledge of security vulnerabilities of their
systems is vague at best. A popular technique for gaining entry into the shopper's
system is to use a tool, such as SATAN, to perform port scans on a computer that
detect entry points into the machine. Based on the opened ports found, the
attacker can use various techniques to gain entry into the user's system. Upon
entry, they scan your file system for personal information, such as passwords. A
user that purchases firewall software to protect his computer may find there are
conflicts with other software on his system. To resolve the conflict, the user
disables enough capabilities to render the firewall software useless.

Sniffing the network: In this scheme, the attacker monitors the data between
the shopper's computer and the server. There are points in the network where
this attack is more practical than others; if the attacker sits in the core of the
Internet rather than near either endpoint, the attack becomes impractical. A
request from the client to the server computer is broken up into small pieces
known as packets as it leaves the client's computer and is reconstructed at the
server. The packets of a request may be sent through different routes, so an
attacker who cannot access all the packets of a request cannot decipher what
message was sent.
Guessing passwords: Another common attack is to guess a user's password.

This style of attack is manual or automated. Manual attacks are laborious, and
only successful if the attacker knows something about the shopper: for example,
that the shopper uses their child's name as the password.

Using server root exploits: Root exploits refer to techniques that gain super-
user access to the server. This is the most coveted type of exploit because the
possibilities are limitless. When you attack a shopper or his computer, you can
only affect one individual. With a root exploit, you gain control of the merchant's
site and all the shoppers' information on it. There are two main types of root
exploits: buffer overflow attacks and executing scripts against a server.


Despite the existence of hackers and crackers, e-Commerce remains a safe and
secure activity. The resources available to large companies involved in e-
Commerce are enormous. These companies will pursue every legal route to
protect their customers. Figure 6 shows a high-level illustration of defenses
available against attacks.
Education: Your system is only as secure as the people who use it. If a shopper
chooses a weak password, or does not keep their password confidential, then an
attacker can pose as that user. Users need to use good judgment when giving out
information, and be educated about possible phishing schemes and other social
engineering attacks.

Personal firewalls: When you connect your computer to a network, it becomes
vulnerable to attack. A personal firewall helps protect your computer by limiting
the types of traffic initiated by and directed to your computer. Without such
protection, an intruder can also scan the hard drive to detect any stored
passwords.

Secure Socket Layer (SSL): Secure Socket Layer (SSL) is a protocol that
encrypts data between the shopper's computer and the site's server. When an
SSL-protected page is requested, the browser identifies the server as a trusted
entity and initiates a handshake to pass encryption key information back and
forth. Now, on subsequent requests to the server, the information flowing back
and forth is encrypted so that a hacker sniffing the network cannot read the
contents.

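The client side of this handshake-and-verify behavior is visible in, for example, Python's standard ssl module. The snippet below only builds a verifying context and contacts no server; the connection code is shown in comments with a placeholder hostname.

```python
# A default TLS context insists on certificate verification and hostname
# checking before any encrypted application data flows.
import ssl

ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # server must present a valid cert
assert ctx.check_hostname                    # cert must match the site's name

# A real connection would then wrap a socket, e.g.:
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...  # handshake done; reads/writes are now encrypted
```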
The SSL certificate is issued to the server by a trusted certificate authority.
When a request is made from the shopper's browser to the site's server using
https://..., the shopper's browser checks whether the site has a certificate it
can recognize. If the site's certificate is not signed by a trusted certificate
authority, the browser issues a warning as shown in Figure.

For example, in Mozilla Firefox:

Figure: Secure icon in Mozilla Firefox

Server firewalls: A firewall is like the moat surrounding a castle. It ensures that
requests can only enter the system from specified ports, and in some cases,
ensures that all accesses are only from certain physical machines. A common
technique is to setup a demilitarized zone (DMZ) using two firewalls. The outer
firewall has ports open that allow ingoing and outgoing HTTP requests. This allows
the client browser to communicate with the server. A second firewall sits behind
the e-Commerce servers. This firewall is heavily fortified, and only requests from
trusted servers on specific ports are allowed through. Both firewalls use intrusion
detection software to detect any unauthorized access attempts. Figure shows the
firewalls and honey pots.
Password policies: Ensure that password policies are enforced for shoppers and
internal users. You may choose to have different policies, guided by federal
information standards, for shoppers versus your internal users. For example, you
may choose to lock out an administrator after 3 failed login attempts instead of 6.
These password policies protect against attacks that attempt to guess the user's
password. They ensure that passwords are sufficiently strong that they cannot be
easily guessed.
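A lockout policy like the one described above is a small piece of state per account. The sketch below is illustrative only: thresholds, return values, and the in-memory store are all made up, and a real system would also persist state and expire lockouts.

```python
# Sketch of a failed-login lockout policy with a configurable threshold.
FAILED = {}  # per-user failed-attempt counter (in-memory for illustration)

def try_login(user, credentials_ok: bool, max_attempts: int = 3) -> str:
    if FAILED.get(user, 0) >= max_attempts:
        return "locked"                         # account disabled for review
    if credentials_ok:
        FAILED[user] = 0                        # success resets the counter
        return "success"
    FAILED[user] = FAILED.get(user, 0) + 1
    return "denied"

# An administrator account with the stricter 3-attempt limit:
assert try_login("admin", False) == "denied"
assert try_login("admin", False) == "denied"
assert try_login("admin", False) == "denied"
assert try_login("admin", True) == "locked"    # guessing is now useless
```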

Site development best practices

There are many established policies and standards for avoiding security issues.
However, they are not required by law. Some of the basic rules include:

• Never store a user's password in plain text or encrypted text on the
system. Instead, use a one-way hashing algorithm to prevent password
extraction.
• Employ external security consultants (ethical hackers) to analyze your
system.
• Follow standards, such as the Federal Information Processing Standard (FIPS),
which describe guidelines for implementing features. For example, FIPS makes
recommendations on password policies.
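The first rule above can be followed with the standard library alone: store only a salted, one-way hash and recompute it at login. PBKDF2 is one common choice; the iteration count here is an example, not a mandated value.

```python
# Store a salted one-way hash of the password, never the password itself.
import hashlib, hmac, os

def hash_password(password: str, salt=None):
    salt = salt or os.urandom(16)               # unique salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                          # this pair is what you store

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("c0113g3")
assert verify("c0113g3", salt, digest)
assert not verify("password", salt, digest)
```

Because the hash is one-way, even an attacker with a root exploit who reads the database obtains no passwords directly; he must guess and re-hash each candidate.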

Security best practices remain largely an art rather than a science, but there are
some good guidelines and standards that all developers of e-Commerce software
should follow.

Using cookies
One of the issues faced by Web site designers is maintaining a secure session with
a client over subsequent requests. Because HTTP is stateless, unless some kind of
session token is passed back and forth on every request, the server has no way to
link together requests made by the same person. Cookies are a popular
mechanism for this. An identifier for the user or session is stored in a cookie and
read on every request. You can use cookies to store user preference information,
such as language and currency. The primary use of cookies is to store
authentication and session information, your information, and your preferences. A
secondary and controversial usage of cookies is to track the activities of users.
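The session-token mechanism can be sketched in a few lines. This is a conceptual model, not a framework's actual API: the token is an unguessable random value issued at logon, returned in a cookie, and looked up on every request.

```python
# Sketch of cookie-based sessions over stateless HTTP.
import secrets

SESSIONS = {}  # server-side session store, keyed by token

def logon(user):
    token = secrets.token_urlsafe(32)   # cryptographically random, unguessable
    SESSIONS[token] = {"user": user}
    return token                        # sent to the browser as a cookie

def handle_request(token):
    """Link this request to a logged-on user, if the token is valid."""
    session = SESSIONS.get(token)
    return session["user"] if session else None

t = logon("alice")
assert handle_request(t) == "alice"          # cookie links requests together
assert handle_request("forged-token") is None  # guessing tokens fails
```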

Using an online security checklist

Use this security checklist to protect yourself as a shopper; some of the checks
are:

• Whenever you logon, register, or enter private information, such as credit
card data, ensure your browser is communicating with the server using SSL.
• Use a password of at least 6 characters, and ensure that it contains some
numeric and special characters (for example, c0113g3).
• Avoid reusing the same user ID and password at multiple Web sites.
• If you are authenticated (logged on) to a site, always logoff after you finish.
• Use a credit card for online purchases. Most credit card companies will help
you with non-existent or damaged products.

Using threat models to prevent exploits: When architecting and developing a

system, it is important to use threat models to identify all possible security
threats on the server. Think of the server like your house. It has doors and
windows to allow for entry and exit. These are the points that a burglar will
attack. A threat model seeks to identify these points in the server and to develop
possible attacks.

Threat models are particularly important when relying on a third-party vendor for
all or part of the site's infrastructure; work with the vendor to ensure that the
suite of threat models is complete and up-to-date.

This article outlined the key players and security attacks and defenses in an e-
Commerce system. Current technology allows for secure site design. It is up to
the development team to be both proactive and reactive in handling security
threats, and up to the shopper to be vigilant when shopping online.





29th December 2007
Presented by:-
Phone 9959276798

Phone 9966829702


Data warehousing is the process by which organizations extract value from
their informational assets through the use of special stores called data
warehouses. In general, a data warehouse is defined to be subject-oriented,
integrated, time-variant, and non-volatile. A data warehouse is created using
data from both internal and external source systems.
Data mining, the extraction of hidden predictive information from large
databases, is a powerful new technology with great potential to help companies
focus on the most important information in their data warehouses.

Data Warehouses are increasingly used by enterprises to increase
efficiency and competitiveness. Using scorecarding, data mining, and on-line
analytical processing (OLAP) analysis, business value can be extracted from Data
Warehouses.

Generating a positive return on investment (ROI) from Data Warehouses requires
a blend of business intuition and technical skills.

This paper presents these strategies and technologies that will enhance the ROI
of Data Warehousing.


Database management systems (DBMSs) are mostly associated with
operational transaction processing systems. While DBMSs help in automating
the day-to-day operations of organizations, data is often locked up within each
transaction processing system and cannot be used effectively for organization-
wide information retrieval and decision support functions. The need for
enterprise-wide integrated information retrieval for decision making is the basis
for data warehousing.

Data warehousing primarily deals with gathering data from multiple transaction
processing systems and external sources, ensuring data quality, organizing the
data warehouse for information processing, and providing information retrieval
and analysis through on-line analytical processing (OLAP), reporting, web-based,
and data mining tools.

Data Mining is the discovery of useful patterns in data. Data mining is
used for prediction analysis and classification; for example, what is the
likelihood that a customer will migrate to a competitor?

Data mining, the extraction of hidden predictive information from large
databases, helps companies focus on the most important information in
their data warehouses. Data mining tools predict future trends and behaviors,
allowing businesses to make proactive, knowledge-driven decisions. The
automated, prospective analyses offered by data mining move beyond the
analyses of past events provided by the retrospective tools typical of decision
support systems.
Data Mining is the automated discovery of patterns in data. Data Mining
can be used for anything from predictive analysis in marketing to comparison
of gene sequences in biotechnology. Often Data Mining is used together with
OLAP for data analysis.

A data warehouse is a combination of data from all types of sources and
has the following characteristics: subject-oriented, integrated, time-variant, and
non-volatile. Data warehouses are specifically designed to maximize performance
on queries run against them for analysis. Data warehousing is open to an almost
limitless range of definitions; simply put, Data Warehouses store an aggregation
of a company's data.
Data Warehouses are an important asset for organizations seeking to
maintain efficiency, profitability, and competitive advantage. Organizations
collect data through many sources: online, call center, sales leads, inventory
management. The data collected have varying degrees of value and business
relevance.

As data is collected, it is passed through a 'conveyor belt' called the Data Life
Cycle. An organization's data life cycle management policy will dictate the data
warehousing design and methodology.

Figure 1. Overview of Data Warehousing Infrastructure

The goal of Data Warehousing is to generate front-end analytics that will support
business executives and operational managers.

Pre-Data Warehouse

The pre-Data Warehouse zone provides the data for data warehousing. Data
Warehouse designers determine which data contains business value for insertion.
OLTP databases are where operational data are stored. OLTP databases can
reside in transactional software applications such as Enterprise Resource
Management (ERP), Supply Chain, Point of Sale, and Customer Service software.
OLTPs are designed for transaction speed and accuracy.

Metadata ensures the sanctity and accuracy of data entering the data
lifecycle process. Metadata ensures that data has the right format and
relevancy. Organizations can take preventive action, reducing cost at the ETL
stage, by having a sound metadata policy. The common shorthand used to
describe metadata is "data about data".

Data Cleansing

Before data enters the data warehouse, the extraction, transformation and
loading (ETL) process ensures that the data passes the data quality threshold.
ETL jobs are also responsible for running scheduled tasks that extract data from
the source systems.
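The extract-transform-load flow described above can be sketched in a few lines of Python. This is a toy illustration, not a real ETL tool: the field names, the quality rule, and the in-memory "warehouse" are all assumptions made for the example.

```python
def extract(oltp_rows):
    """Pull raw rows from a source system (here, an in-memory list)."""
    return list(oltp_rows)

def transform(rows):
    """Apply a simple data-quality threshold: drop rows with missing
    fields and normalize the amount field to a float."""
    clean = []
    for row in rows:
        if row.get("customer_id") is None or row.get("amount") is None:
            continue  # fails the quality threshold, so reject the row
        clean.append({"customer_id": row["customer_id"],
                      "amount": float(row["amount"])})
    return clean

def load(warehouse, rows):
    """Append cleaned rows to the warehouse store."""
    warehouse.extend(rows)
    return warehouse

source = [{"customer_id": 1, "amount": "19.99"},
          {"customer_id": None, "amount": "5.00"},   # rejected by transform
          {"customer_id": 2, "amount": "7.50"}]
warehouse = load([], transform(extract(source)))
print(len(warehouse))  # 2 rows pass the quality threshold
```

In a production system each step would be a scheduled job against real databases; the structure, however, stays the same.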

Data Repositories

The Data Warehouse repository is the database that stores active data of
business value for an organization. The data warehouse modeling design is
optimized for data analysis. There are variants of data warehouses - Data
Marts and ODS. Data Marts are not physically any different from data
warehouses; they can be thought of as smaller data warehouses built on a
departmental rather than a company-wide level. A data warehouse collects
data and is the repository for historical data, so it is not always efficient for
providing up-to-date analysis. This is where the ODS, or Operational Data Store,
comes in: an ODS holds recent data before migration to the data warehouse.

An ODS holds data with a deeper history than the OLTPs. Keeping large
amounts of data in OLTPs can tie down computer resources and slow down
processing - imagine waiting at the ATM for 10 minutes between the prompts for
inputs.

Front-End Analysis

The last and most critical portion of the data warehouse overview is the front-
end applications that business users will use to interact with the data stored in
the warehouse.

Data becomes active as soon as it is of interest to an organization. The data life
cycle begins with a business need for acquiring data. Active data are referenced
on a regular basis during day-to-day business operations. Over time, this data
loses its importance and is accessed less often, gradually losing its business
value and ending with its archival or disposal.

Figure1. Data Life Cycle in Enterprises

Active Data
Active data is of business use to an organization, and ease of access to active
data is an absolute necessity for business users to run an efficient operation.
Understanding how data moves through its life-cycle stages is key to improving
data management. By understanding how data is used and how long it must be
retained, companies can develop a strategy to map usage patterns to the optimal
storage media, thereby minimizing the total cost of storing data over its life
cycle. The challenge of managing and storing relational data is compounded by
the complexities inherent in data relationships.
Inactive Data
Data are put out to pasture once they are no longer active, i.e. no longer
needed for critical business tasks or analysis. Prior to the mid-nineties, most
enterprises archived data on microfilm and tape back-ups. There are now
technologies for data archival such as Storage Area Networks (SAN), Network
Attached Storage (NAS) and Hierarchical Storage Management (HSM). These
storage systems can maintain referential integrity and business context.
What is OLAP?

OLAP allows business users to slice and dice data at will. Normally, data in an
organization is distributed across multiple data sources that are incompatible with
each other. A retail example: point-of-sale data and sales made via the call center
or the Web are stored in different repositories and formats. It would be a
time-consuming process for an executive to obtain OLAP reports.

Figure4. Steps in the OLAP Creation Process

Part of the OLAP implementation process involves extracting data from the
various data repositories and making them compatible. Making data compatible
involves ensuring that the meaning of the data in one repository matches all
other repositories. It is not always necessary to create a data warehouse for
OLAP analysis. Data stored by operational systems, such as point-of-sale, are in
types of databases called OLTPs. OLTP (Online Transaction Processing) databases
do not differ structurally from other databases; the main, and only, difference is
the way in which data is stored. Examples of OLTPs include ERP, CRM, SCM,
point-of-sale applications, and call center software.

OLTPs are designed for optimal transaction speed: when a consumer makes a
purchase online, they expect the transaction to occur instantaneously. Next,
cubes for the OLAP database are built, and finally reports are produced.



Integrated Data Mining and OLAP for MySQL

DAT-A is a data mining and OLAP platform for MySQL. The goals of DAT-A are to
design a high-end analytical product around the needs of the end-user and then
follow through on the technology. Data mining software is noted for its lack of
usability; this, together with the abstract nature of analytics, has meant a
relatively low uptake of data mining in the commercial world. Data mining has a
useful role to play, but very often the total cost of ownership far outweighs any
business benefit. Biotechnology and financial firms find data mining an absolute
part of their competitive advantage, yet even these industries find that the cost
of running data warehouses and performing analytics can be an expensive
burden.

Data collected by businesses are increasing at a rapid rate. Most data have
business relevancy and cannot simply be shunted to archive storage should the
cost of storage increase. This is especially true for analytics, where trends need
to be observed over an extended period.

The data center and data mining solution below was designed for a retail client
with global operations. Retail is a high cost low margin (HCLM) and extremely
competitive industry. The cost of data storage in an active medium was proving
prohibitive for the client, yet for competitive reasons it was not possible to simply
ignore the data and pare down costs. A distributed data center was designed to
replace the mainframe environment that existed previously, presenting
management with an alternative to an expensive server environment: a cluster of
over 200 Linux boxes powered by MySQL databases.

Figure1. Data Cluster Using Linux Boxes

Using commodity Linux boxes offered a tremendous cost saving over servers. As
the amount of data stored exceeded the terabyte range, it was prudent to index
the data and store it in a distributed manner over the data cluster.
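One common way to spread rows over such a cluster is to hash a key and route each row to a node. The sketch below is an assumption about how this could work, not the client's actual scheme; the node count and key choice are illustrative, and CRC32 is used because it is deterministic across runs.

```python
import zlib

NUM_NODES = 4  # illustrative cluster size

def node_for(key, num_nodes=NUM_NODES):
    """Route a row to a node by hashing its key (deterministic CRC32)."""
    return zlib.crc32(str(key).encode()) % num_nodes

# Distribute 1000 hypothetical customer IDs over the cluster.
cluster = [[] for _ in range(NUM_NODES)]
for customer_id in range(1000):
    cluster[node_for(customer_id)].append(customer_id)

# Each shard should hold roughly 1000 / 4 = 250 rows.
print([len(shard) for shard in cluster])
```

A "data spread" view like the one described above is then just a report over the per-node shard sizes and key ranges.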

MySQL was not as obvious a choice as it may have seemed. While extremely
fast and reliable, MySQL does not have many of the sophisticated features
needed for data warehousing and data mining. This hurdle was overcome by
using the database transaction engine from InnoDB. Remote database
management systems were built for the MySQL/Linux cluster that allow data
administrators to visualize the "data spread" over the center - similar to the
table view found in most popular RDBMS products. The data spread diagram
gave the data administrators the ability to manipulate and transfer data from
afar without the need to log on to an individual box. The data spread application
also allowed the user to build data cubes for OLAP analysis. There are two
methods by which users can perform data mining. The first uses a stovepipe
approach that packages the business needs tightly together with the data mining
algorithms; it allowed business users to perform data mining directly, with
limited latitude in choosing the methodologies. The second method gave freedom
to explore the data and choose among a number of algorithms; the users
envisioned for this method are more seasoned data analysts.

Intelligent Data Mining

Open source application for data mining on MySQL

Data mining has had technical constraints placed on it by limitations of software
design and architecture. Most of the algorithms used in data mining have been
mature for over 20 years; the next challenges in data mining are not algorithmic
but lie in software design methodologies. Commonly used data mining algorithms
are freely available, and the processes that optimize data mining computing
speed are well understood.

Data access standards such as OLE-DB, XML for Analysis and JSR will minimize
the challenges of data access. Building user-friendly software interfaces for the
end-user is the next step in the evolution of data mining. A comparable
analogy can be made with the increasing ease of use of OLAP client tools. The
J2EE and .NET software platforms offer a large spectrum of built-in APIs that
enable smarter software applications.

Text Mining

Text mining has been on the radar screen of corporate users since the mid-eighties.
Technical limitations and the overall complexity of utilizing data mining have been a
hurdle to text mining that very few organizations have surmounted. Text mining is
now coming out into the open. Some of the reasons are:

• Storage cost reduction - data are stored in an electronic medium even after being
declared non-active.
• Data volume increase - the exponential growth of data with the lowering of data
transmission costs and increasing usage of the Internet.
• Fraud detection and analysis - there are compelling reasons for organizations to
redress fraud.
• Competitive advantage - text mining is used to better understand the realms of
data in an organization.
Text data is so-called unstructured data. Unstructured data implies that the data are
freely stored in flat files (e.g. Microsoft Word) and are not classified. Structured data
are found in well-designed data warehouses: the meaning of the data is well known,
usually through a metadata description, and analysis can be performed directly on
the data. Unstructured data has to jump an additional hoop before it can be
meaningfully analyzed - information first needs to be extracted from the raw text.
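That "additional hoop" can be as simple as tokenizing raw text into countable features. The sketch below is one minimal way to do it; the stop-word list and the sample sentence are invented for illustration.

```python
import re
from collections import Counter

# Illustrative stop-word list; real systems use much larger ones.
STOP_WORDS = {"the", "a", "of", "was", "and", "to"}

def extract_features(raw_text):
    """Tokenize raw text and count the remaining content words,
    turning unstructured text into a structured feature vector."""
    tokens = re.findall(r"[a-z]+", raw_text.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS)

report = "The claim was filed the day after the policy was issued."
features = extract_features(report)
print(features.most_common(3))
```

Once the text is reduced to a feature vector like this, the ordinary structured data mining machinery can be applied to it.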

Text Mining for Fraud Detection

Creating cost effective data mining solutions for fraud analysis

For fraud detection, the client had a distinct data repository where model
scoring was performed. Based on the model score, reports would be queried
against the data warehouse to produce the claims that were suspect. This
process was inefficient for a number of reasons:

• Fraud detection analysis had to be conducted by a specialist who passed the
scores on to the fraud detection team. The team performed the investigation
and evaluated the merits of the claims. This was a disjointed process whereby
the fraud detection data mining scores were not being improved based on
investigation results.
• Fraud detection was not responsive to sudden changes in claim patterns. For
natural disaster events, such as hurricanes, there would be a spike in similar
claims, and the data mining score would not be able to adapt to such scenarios.
• Data mining was confined to actuarial specialists and not day-to-day business
users.

Developing a Customized Text Mining Solution

The customized solution was developed in three modules. A scripting engine was
designed and developed for the data extraction layer, which pulls reports, either
manually or as a scheduled task, from the data warehouse repositories. The
reports created by the claims examiners are stored in Microsoft Word and follow
the guidelines set by the metadata repository. The scripting language used by
the data extraction module is Perl, which makes the module highly accessible for
changes by end-users.

Figure1. Text Mining Modules

The text mining module contains the data mining scores based on historical
analysis of the likelihood of fraud. The algorithms were custom developed based
on the text entered in the claims examiner's reports and on details of the claim.
The data mining model can give the client a competitive advantage, and its
technical details are kept as a closely guarded corporate secret. Reports are
generated on a web-based application layer. The data produced for the reports are
also fed into the SAP for Insurance ERP application, which is used by the client and
commonly found in most of the larger insurance companies.

Figure 2. Process Chart for Conducting Text Mining


Many text mining applications give users open-ended freedom to explore text for
meaning. Text mining can also be used as a deeper, more penetrative method
that goes beyond escalations of possible search interests to sense the mood of
the written text; i.e. are the articles generally positive on a certain subject?

While such open-ended, undirected data mining may be suitable in some cases of
text mining, the cost associated can be very high and the results will have a lower
confidence of accuracy.

Future of Data Mining

Data mining is the analysis of large data sets to discover patterns of interest.
Data mining has come a long way from its early academic beginnings in the late
seventies. Many of the early data mining software packages were based on a
single algorithm, and until the mid-nineties data mining required considerable
specialized knowledge and was mainly restricted to statisticians. Customer
Relationship Management (CRM) software played a great part in popularizing
data mining among corporate users. Data mining in CRMs is often hidden from
the end users: the algorithms are packaged behind business functionality such as
churn analysis, the process of predicting which customers are most likely to
defect to a competitor.
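A churn scorer of the kind hidden behind CRM functionality can be caricatured in a few lines. The two signals, their weights, and the 0.5 cut-off below are invented for illustration; a real CRM fits such weights from historical defection data.

```python
def churn_score(days_inactive, complaints):
    """Return a 0-1 churn likelihood from two illustrative signals:
    days since last purchase and number of support complaints."""
    score = 0.01 * min(days_inactive, 90) + 0.05 * min(complaints, 5)
    return min(score, 1.0)

# Hypothetical customers: name -> (days_inactive, complaints)
customers = {"alice": (10, 0), "bob": (80, 4)}
at_risk = [name for name, (d, c) in customers.items()
           if churn_score(d, c) > 0.5]
print(at_risk)  # bob scores 0.8 + 0.2, capped at 1.0 -> flagged
```

The business user never sees the formula, only the flagged list; that packaging is exactly what made data mining palatable to corporate users.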

Data mining algorithms are now freely available. Database vendors have started to
incorporate data mining modules. Developers can now access data mining via open
standards such as OLE-DB for data mining on SQL Server 2000. Data mining
functionality can now be added directly to the application source code.


Data warehousing and mining architecture is used to tie together a wide range of
information technology; it also helps to integrate the business purpose and
enterprise architecture of data warehousing solutions. OLAP is best suited to
power IT users and finance/accounting users who like the spreadsheet-based
presentation.

Relationship mining can be a useful tool not only in sales and marketing but also
for law enforcement and scientific research. However, there is a threshold barrier
under which relationship mining is not cost effective.

Text mining has reduced the complexity of utilizing data mining. Security
agencies have a government mandate to intercept data traffic and evaluate it for
items of interest; this also involves electronically intercepting international fax
transmissions and mining them for patterns of interest.

Looking to the future, the complexity of data mining must be hidden from
end-users before it can take true center stage in an organization. Business use
cases can be designed, with tight constraints, around data mining algorithms.




T. Hima Bindu
Ph no: 9490907741


Information and communication technologies can have a key role in helping people
with special educational needs, considering both physical and cognitive disabilities.
Replacing the keyboard or mouse with eye-scanning cameras mounted on computers
has become a necessary tool for people without limbs or those affected by paralysis.
The camera scans the image of the character, allowing users to 'type' on a monitor
as they look at a visual keyboard. The paper describes an input device, based on eye
scanning techniques, that allows people with severe motor disabilities to use gaze for
selecting specific areas on the computer screen. It includes a brief description of the
eye and of the visual key system, giving an overall idea of how the process works,
and also deals with the system architecture, which includes calibration, image
acquisition, segmentation, recognition, and the knowledge base.

The paper mainly covers three algorithms - one for face position
identification, one for eye area identification, and one for pupil identification - all
based on scanning the image to find the concentration of black pixels. To
implement this we use software called Dasher, which is highly appropriate for
computer users who are unable to use a two-handed keyboard; one-handed users
and users with no hands love Dasher. The only ability required is sight. Dasher is
used along with eye tracking devices.

This model is a novel idea and the first of its kind in the making,
reflecting a design effort that left no stone unturned.


'Vis-Key' aims at replacing the conventional hardware keyboard with a 'Visual
Keyboard'. It employs sophisticated scanning and pattern matching algorithms to
achieve this objective, exploiting the eye's natural ability to navigate and spot
familiar patterns. Eye typing research extends over twenty years; however, there is
little research on the design issues. Recent research indicates that the type of
feedback impacts typing speed, error rate, and the user's need to switch gaze
between the visual keyboard and the monitor.

Fig 2.1 shows the horizontal cross section of the human eye.
The eye is nearly a sphere with an average diameter of approximately 20 mm.
Three membranes - the cornea and sclera cover, the choroid layer and the retina -
enclose the eye. When the eye is properly focused, light from an object is imaged on
the retina. Pattern vision is afforded by the distribution of discrete light receptors
over the surface of the retina. There are two classes of receptors - cones and rods.
The cones, typically present in the central portion of the retina called the fovea, are
highly sensitive to color. The number of cones in the human eye ranges from 6 to 7
million. Cones can resolve fine detail because each is connected to its very own
nerve end. Cone vision is also known as photopic or bright-light vision. The rods are
far more numerous than the cones (75-150 million). Several rods are connected to a
single nerve, which reduces the amount of detail discernible by these receptors.
Rods give a general overall picture of the view and are not much inclined towards
color recognition; rod vision is also known as scotopic or dim-light vision. As
illustrated in fig 2.1, the curvature of the anterior surface of the lens is greater than
that of its posterior surface. The shape of the lens is controlled by the tension in the
fibers of the ciliary body. To focus on distant objects, the controlling muscles cause
the lens to be relatively flattened; similarly, to focus on nearer objects, the muscles
allow the lens to become thicker. The distance between the center of the lens and
the retina (the focal length) varies from about 17 mm down to about 14 mm as the
refractive power of the lens increases from its minimum to its maximum.

3. The Vis-Key System

The main goal of our system is to provide users suffering from
severe motor disabilities (who are therefore able to use neither the keyboard nor
the mouse) with a system that allows them to use a personal computer.

Fig 3.1 System Hardware of the Vis-Key System

The Vis-Key system (fig 3.1) comprises a high-resolution
camera that constantly scans the eye in order to capture the character image
formed on it. The camera gives continuous streaming video as output; the idea is
to capture individual frames at regular intervals (say ¼ of a second). These
frames are then compared with the base frames stored in the repository, and if
the probability of a successful match exceeds the threshold value, the
corresponding character is displayed on the screen. The hardware requirements
are simply a personal computer, the Vis-Key layout (chart) and a web cam
connected to the USB port. The system design, at the software level, relies on the
construction, design and implementation of image processing algorithms applied
to the captured images of the user.
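The frame-matching step just described can be caricatured as pixel agreement between a captured binary frame and each stored base frame. The 2×2 "characters" and the 0.75 acceptance threshold below are invented for illustration; the paper does not specify the actual matching metric.

```python
def match_score(frame, template):
    """Fraction of pixels on which two equal-sized binary frames agree."""
    total = agree = 0
    for frame_row, template_row in zip(frame, template):
        for f, t in zip(frame_row, template_row):
            total += 1
            agree += (f == t)
    return agree / total

# Hypothetical base frames for two characters (1 = black pixel).
templates = {"C": [[1, 1], [1, 0]], "L": [[1, 0], [1, 1]]}

frame = [[1, 1], [1, 1]]  # a captured frame to classify
best = max(templates, key=lambda ch: match_score(frame, templates[ch]))
score = match_score(frame, templates[best])
print(best if score >= 0.75 else None)  # accept only above the threshold
```

Real frames would be far larger and the comparison more robust, but the accept-only-above-threshold structure is the same.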
4. System Architecture:
4.1. Calibration

The calibration procedure aims at initializing the system.
The first algorithm, whose goal is to identify the face position, is applied only to
the first image, and the result is used for processing the successive images in
order to speed up the process. This choice is acceptable since the user is
supposed to make only minor movements. If the background is completely black
(easy to obtain), the user's face appears as a white spot, and the borders can be
obtained where the number of black pixels decreases. The camera is positioned
below the PC monitor; if it were above, when the user looks at the bottom of the
screen the iris would be partially covered by the eyelid, making identification of
the pupil very difficult. The user should not be distant from the camera, so that
the image does not contain much besides his/her face. The algorithms that
identify the face, the eye and the pupil are based on scanning the image for the
concentration of black pixels: the more complex the image, the slower the
algorithm, and the lower the effective image resolution. The suggested distance
is about 30 cm. The user's face should also be very well illuminated, so two
lamps were placed on either side of the computer screen; since the identification
algorithms work on black and white images, shadows should not be present on
the user's face.
4.2. Image Acquisition:
Camera image acquisition is implemented via the functions of
the AviCap window class that is part of the Video for Windows (VFW) functions.
The entire image of the problem domain is scanned every 1/30 of a second. The
output of the camera is fed to an analog-to-digital converter (digitizer), which
digitizes it; individual frames can then be extracted from the motion picture for
further analysis and processing.
4.3. Filtering of the eye component:
The chosen algorithms work on a binary (black and white) image,
and are based on extracting the concentration of black pixels. Three algorithms
are applied to the first acquired image, while from the second image on, only the
third one is applied.

Fig 4.3.1 Algorithm 1 - Face Positioning

The first algorithm, whose goal is to identify the face position, is applied only to
the first image, and the result is used for processing the successive images in
order to speed up the process. This choice is acceptable since the user is
supposed to make only minor movements. The face algorithm converts the image
to black and white, and zooms it to obtain an image that contains only the user's
face. This is done by scanning the original image and identifying the top, bottom,
left and right borders of the face (Fig 4.3.1). Starting from the resulting image,
the second algorithm extracts information about the eye position (both left and
right): the horizontal band with the highest concentration of black pixels is the
one that contains the eyes. The algorithm uses this information to determine the
top and bottom borders of the eye area (Fig 4.3.2), so that it can be extracted
from the image. The new image is then analyzed to identify each eye: the
algorithm finds the right and left borders, and generates new images
containing the left and right eyes independently
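The border-finding idea used by these algorithms - scan rows and columns and keep those whose black-pixel count clears a threshold - can be sketched on a tiny binary image. The image and threshold below are illustrative, not the paper's actual data.

```python
def bounding_box(image, threshold=1):
    """Return (top, bottom, left, right) of the rows/columns whose
    black-pixel count (1 = black) meets the threshold."""
    row_counts = [sum(row) for row in image]
    col_counts = [sum(col) for col in zip(*image)]
    hits = lambda counts: [i for i, n in enumerate(counts) if n >= threshold]
    rows, cols = hits(row_counts), hits(col_counts)
    return rows[0], rows[-1], cols[0], cols[-1]

# Toy binary frame: a dark 2x3 blob on a white background.
face = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(bounding_box(face))  # (top, bottom, left, right) = (1, 2, 1, 3)
```

Applied first to the whole frame and then to the cropped face, the same scan yields the face borders and then the eye-band borders.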

The procedure described up to now is applied only to the first image of the
sequence, and the data related to the right eye position are stored in a buffer and
used also for the following images. This is done to speed up the process, and is
acceptable if the user makes only minor head movements. The third algorithm
extracts the position of the center of the pupil from the right eye image. The iris
identification procedure uses the same approach as the previous algorithm. First
of all, the left and right borders of the iris are extracted; finding the top and
bottom borders would be less precise due to the presence of the eyelid, so a
square area is built that slides over the image. The chosen area, which represents
the iris, is the one with the highest concentration of black pixels, and its center
also represents the center of the pupil. The result of this phase is the coordinates
of the center of the pupil for each image in the sequence.
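The sliding-square step can be sketched directly: try every window position, keep the one with the most black pixels, and report its centre as the pupil. The toy eye image and window size below are illustrative assumptions.

```python
def find_pupil(image, size):
    """Slide a size x size window over a binary image (1 = black) and
    return the centre (row, col) of the darkest window."""
    h, w = len(image), len(image[0])
    best, best_pos = -1, (0, 0)
    for top in range(h - size + 1):
        for left in range(w - size + 1):
            black = sum(image[top + r][left + c]
                        for r in range(size) for c in range(size))
            if black > best:
                best, best_pos = black, (top, left)
    top, left = best_pos
    return top + size // 2, left + size // 2  # centre pixel of the window

eye = [[0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1],
       [0, 0, 1, 1, 1],
       [0, 0, 1, 1, 1]]
print(find_pupil(eye, 3))  # (2, 3): centre of the all-black 3x3 window
```

The brute-force scan is quadratic in the image size, which is why the earlier stages crop down to a small eye image before this step runs.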

4.4. Preprocessing:
The key function of preprocessing is to improve the image in ways that increase
the chances for success of the other processes.
Here preprocessing deals with 4 important techniques:
• To enhance the contrast of the image.
• To eliminate/minimize the effect of noise on the image.
• To isolate regions whose texture indicates a likelihood of containing
alphanumeric characters.
• To provide equalization for the image.
4.5. Segmentation:
Segmentation broadly defines the partitioning of an input image into its
constituent parts or objects. In general, autonomous segmentation is one of the
most difficult tasks in digital image processing; a rugged segmentation procedure
brings the process a long way towards a successful solution of the image
problem. In terms of character recognition, the key role of segmentation is to
extract individual characters from the problem domain. The output of the
segmentation stage is raw pixel data, constituting either the boundary of a region
or all points in the region itself. In either case, converting the data into a form
suitable for computer processing is necessary. The first decision is whether the
data should be represented as a boundary or as a complete region. Boundary
representation is appropriate when the focus is on external shape characteristics
like corners and inflections; regional representation is appropriate when the
focus is on internal shape characteristics such as texture and skeletal shape.
Description, also called feature selection, deals with extracting features that
result in some quantitative information of interest or that are basic for
differentiating one class of objects from another.

4.6. Recognition and Interpretation:

Recognition is the process that assigns a label to an object based on
the information provided by its descriptors; this allows us to cognitively
recognize characters using the knowledge base. Interpretation attempts to
assign meaning to an ensemble of labeled entities. For example, to identify the
character 'C', we need to associate the descriptors for that character with the
label 'C'.

4.7. Knowledge Base:
Knowledge about a particular problem domain can be coded into
an image processing system in the form of a knowledge database. The knowledge
may be as simple as detailing regions of an image where the information of
interest is known to be, thus limiting the search for that information, or it can be
quite complex, such as a database of high-resolution image entries. The key
distinction of this knowledge base is that, in addition to guiding the operation of
the various components, it facilitates feedback between the modules of the
system. The depiction in Fig 4.1 indicates that communication between
processing modules is based on prior knowledge of what a result should be.
5. Design Constraints:
Though this model is thought provoking, we need to address the design constraints
as well.
 R & D constraints severely hamper the cause for a full-fledged working model
of the Vis-Key system.
 The need for a very high resolution camera calls for a high initial investment.
 The accuracy and the processing capabilities of the algorithms depend heavily
on the quality of the input.
6. Alternatives/Related References:
The approaches to date have centered only on the eye tracking
theory, which lays more emphasis on the use of the eye as a cursor rather than as
a data input device. An eye tracking device lets users select letters from a screen.
Dasher, a prototype program, taps into the natural gaze of the eye and makes
predictable words and phrases simpler to write.

Dasher calculates the probability of one letter coming after another, then presents
the letters required as if contained on infinitely expanding bookshelves.
Researchers say people will be able to write up to 25 words per minute with
Dasher, compared with on-screen keyboards, which they say average about 15
words per minute. Eye-tracking devices are still problematic: "They need
re-calibrating each time you look away from the computer," says Willis, who
controls Dasher using a trackball.
This approach opens a new dimension in how we perceive the world and
should prove to be a critical technological breakthrough, considering that there
has not been sufficient research in this field of eye scanning. If implemented, it
will be one of the awe-inspiring technologies to hit the market.


• eyetyping.php
• Ward, D. J. & MacKay, D. J. C. "Fast hands-free writing by gaze direction."
Nature, 418, 838 (2002).
• Daisheng Luo, "Pattern Recognition and Image Processing", Horwood Series in
Engineering Sciences.


DVR & Dr. HS MIC College of Technology




Srikanth M Pradeep S




Interest in digital image processing methods stems from two principal application
areas: improvement of pictorial information for human interpretation, and
processing of image data for storage, transmission, and representation for
autonomous machine perception. Digital image processing refers to processing
digital images by means of a digital computer. Note that a digital image is
composed of a finite number of elements, each of which has a particular location
and value.

There will be disturbances in every field, and digital image processing is no
exception: it suffers from various disturbances, from external sources and
others, which are discussed here.

Since images can arrive as analog signals, there is a need to convert these
signals to digital form, which can be done by plotting the image using different
transfer functions, explained hereunder. A transfer function maps the pixel values
from the CCD (charge-coupled device) to the available brightness values in the
imaging software; all the images so far have been plotted using linear transfer
functions.

Filter masks and other manipulations are also discussed, in order to filter the
image and obtain a clear-cut form of it.


What is digital image processing?

A digital image is a picture which is divided into a grid of "pixels" (picture
elements). Each pixel is defined by three numbers (x, y, z) and displayed on a
computer screen.

The first two numbers give the x and y coordinates of the pixel, and the third
gives its intensity, relative to all the other pixels in the image. The intensity is a
relative measure of the number of photons collected at that photosite on the CCD,
relative to all the others, for that exposure.

The clarity of a digital image depends on the number of “bits” the computer uses
to represent each pixel. The most common type of representation in popular usage
today is the “8-bit image”, in which the computer uses 8 bits, or 1 byte, to represent
each pixel.

This yields 2^8, or 256, brightness levels within a given image. These
brightness levels can be used to create a black-and-white image with shades of
gray between black (0) and white (255), or assigned to relative weights of red,
green, and blue values to create a color image.

The range of intensity values in an image also depends on the way in which a
particular CCD handles its analog-to-digital (A/D) conversion. 12-bit A/D
conversion means that each image is capable of 2^12 (4096) intensity values.
If the image processing program only handles 2^8 (256) brightness levels, these
must be divided among the total range of intensity values in a given image.

The histogram below shows the number of pixels in a 12-bit image that have the
same intensity, from 0 to 4095.

Suppose you have software that only handles 8-bit information. You assign black
and white limits, so that all pixels with values to the left of the lower limit are set to 0,
while all those to the right of the upper limit are set to 255. This allows you to look at
details within a given intensity range.
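That windowing operation can be sketched in a few lines of Python (an illustrative helper; the name `window_to_8bit` and the sample limits are ours, not from the text):

```python
def window_to_8bit(pixels_12bit, lo, hi):
    """Map 12-bit intensities (0..4095) to 8-bit brightness (0..255).

    Values at or below `lo` clip to black (0), values at or above `hi`
    clip to white (255); everything in between is stretched linearly
    across the 256 available brightness levels.
    """
    out = []
    for v in pixels_12bit:
        if v <= lo:
            out.append(0)
        elif v >= hi:
            out.append(255)
        else:
            out.append(round((v - lo) * 255 / (hi - lo)))
    return out

# Stretch the 1000..3000 intensity band across the full 8-bit range:
print(window_to_8bit([500, 1000, 2000, 3000, 4095], 1000, 3000))
# → [0, 0, 128, 255, 255]
```

Everything darker than the lower limit is crushed to black, everything brighter than the upper limit saturates to white, and the in-between band gets all 256 levels of detail.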

So, a digital image is a 2-dimensional array of numbers, where each number
represents the amount of light collected at one photosite, relative to all the other
photosites on the CCD chip.
It might look something like….
98 107 145 126 67 93 154 223 155 180 232 250 242 207 201
72 159 159 131 76 99 245 211 165 219 222 181 161 144 131
157 138 97 106 55 131 245 202 167 217 173 127 126 136 129
156 110 114 91 70 128 321 296 208 193 191 145 422 135 138
By this we can guess the brightest pixel in the “image”…
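Scanning such an array for the brightest photosite is a two-line loop; a minimal sketch in Python (illustrative code with a small sample array, not part of the original):

```python
# A digital image as a 2-D array (a list of rows) of intensity values;
# the sample values here are illustrative.
image = [
    [98, 107, 145, 126, 67],
    [72, 159, 159, 131, 76],
    [157, 138, 97, 106, 55],
    [156, 110, 114, 91, 70],
]

def brightest_pixel(img):
    """Return (row, col, value) of the highest-intensity photosite."""
    best = (0, 0, img[0][0])
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if v > best[2]:
                best = (r, c, v)
    return best

print(brightest_pixel(image))  # → (1, 1, 159)
```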

Correcting the raw image

Every intensity value contains both signal and noise. Your job is to extract the
signal and eliminate the noise!
But before that, the sources of noise should be known. They include:

a) The Dark Current:

Since electrons in motion through a metal or semiconductor create a current, these
thermally agitated electrons are called the dark current.
So, to eliminate thermal electrons, the CCD must be COOLED as much as
possible. The cooler one can make one’s CCD, the less dark current one will
generate. In fact, the dark current decreases roughly by a factor of 2 for every 7 °C
drop in the temperature of the chip. At −100 °C the dark current is negligible.
When you process an image correctly, you must account for this dark current and
subtract it out from the image. This is done by taking a “closed shutter” image of a
dark background, and then subtracting this dark image from the “raw” image you are
processing. The exposure time of the dark image should match that of the image of the
object or starfield you are viewing.
In fact, those who regularly take CCD images keep files of dark-current
exposures that match typical exposure times of images they are likely to take, such
as 10, 20, 45, 60 or 300 seconds, which are updated regularly, if not nightly.
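As a quick arithmetic check of that halves-every-7-degrees rule of thumb (a hypothetical helper, not from the text):

```python
def dark_current_factor(delta_t_celsius):
    """Relative dark current after cooling the chip by delta_t degrees C,
    using the rule of thumb that it halves for every 7 C drop."""
    return 0.5 ** (delta_t_celsius / 7)

# Cooling the chip by 35 C cuts the dark current by 2**5 = 32x:
print(dark_current_factor(35))  # → 0.03125
```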

b) The Bias Correction

CCD cameras typically add a bias value to each image they record. If you know that
the same specific bias value has been added to each pixel, you can correct for this by
subtracting a constant from your sky image.

c) Pixel – to – pixel Sensitivity variation

Another source of noise is the inherent variation in the response of each pixel to
incident radiation. Ideally, if your CCD is functioning properly, there should be no
variation in pixel value when you measure a uniformly–illuminated background.
However, nothing is perfect, and there usually is some slight variation in the sensitivity
of each photosite, even if the incident radiation is totally uniform.
This can be accounted for by taking a picture of a uniformly bright field, and
dividing the raw image by this “flat” field, a process called flat fielding. The length of
time to expose the flat image should be enough to saturate the pixels to the 50% or
75% level.
One must take 4 pictures before beginning to process the image. One needs 4
images to create a “noiseless” image of the sky.
1) The original;
2) A dark exposure of the same integration time as your original;
3) A flat exposure;

4) And another dark exposure, of the same integration time as your flat exposure!
Final image = (raw image - dark_raw) / (flat - dark_flat)

(Don’t forget to subtract your bias correction from each image.)

So, to correct a raw image taken with any CCD, one must:
• Subtract the bias correction from each image;
• Subtract the dark-current image from the raw image;
• Subtract the dark-current image from the flat-field image;
• Divide the dark-subtracted raw image by the dark-subtracted flat image.
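The four-image correction recipe above can be sketched as one hypothetical Python routine (the frame layout and the mean-normalisation of the flat are our assumptions; many pipelines normalise the flat this way so the output keeps the raw frame's intensity scale):

```python
def calibrate(raw, dark_raw, flat, dark_flat, bias):
    """Bias-subtract, dark-subtract, and flat-field a raw CCD frame.

    All frames are equal-sized 2-D lists; `bias` is the constant the
    camera adds to every pixel. Computes (raw - dark) / (flat - dark),
    with the dark-subtracted flat normalised to a mean of 1 so the
    output keeps the raw frame's intensity scale.
    """
    rows, cols = len(raw), len(raw[0])
    sub = lambda a, b: [[a[i][j] - b for j in range(cols)] for i in range(rows)]
    r, dr = sub(raw, bias), sub(dark_raw, bias)
    f, df = sub(flat, bias), sub(dark_flat, bias)
    # Dark-subtracted flat and its mean, used to normalise the division
    flat_ds = [[f[i][j] - df[i][j] for j in range(cols)] for i in range(rows)]
    mean = sum(sum(row) for row in flat_ds) / (rows * cols)
    return [[(r[i][j] - dr[i][j]) * mean / flat_ds[i][j]
             for j in range(cols)] for i in range(rows)]

# A scene of uniform true brightness seen through a vignetting flat:
# both pixels come out equal once the flat is divided out.
print(calibrate([[110, 60]], [[10, 10]], [[210, 110]], [[10, 10]], bias=0))
# → [[75.0, 75.0]]
```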
3 ways of displaying your image to Enhance Certain Features

Once you have dark-subtracted and flat-fielded your image, there are many
techniques you can use to enhance your signal. These manipulations fall into two
basic categories.

The first category comprises visualization methods. These are basically mapping
routines, and include:
• Limiting the visualization thresholds within the histogram
• Plotting the image using different transfer functions
• Histogram equalization

The second category comprises mathematical manipulations. These employ various
matrix multiplications, Fourier transformations, and convolutions, and we will
address them in the next section.

Limiting the visualization thresholds within the histogram

We already saw that the histogram function shows you the distribution of
brightness values in an image, and the number of pixels within each brightness
range. In the histogram shown below, most of the useful information is contained
between the user-defined limits. The peak of intensities at the lower end could
possibly be some faint feature, which could be enhanced in a variety of ways.

By changing the visualization limits in the histogram, the user can pre-define
the black and white levels of the image, thus increasing the level of detail available in
the mid-ranges of the intensities in a given image.

Some examples show histogram limitation being used to examine different features.

Plotting the image using different transfer functions

A transfer function maps the pixel values from the CCD to the available brightness
values in the imaging software. All the images so far have been plotted using linear
transfer functions, but you can also use non-linear scaling.
Human eyes see a wide range of intensities because our vision responds
logarithmically. When you plot digital images logarithmically, it allows you to see a
broader range of intensities, and can give a more “natural” look, as if you could see
the object with your naked eyes.
…or, you could use a Power Law scaling
Fractional powers enhance low intensity features, while powers greater than 1 enhance
high intensity features.
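The three transfer functions can be sketched as simple mappings from 12-bit pixel values to 8-bit brightness (illustrative helpers; the 4095 maximum assumes the 12-bit A/D conversion discussed earlier):

```python
import math

def linear(v, vmax=4095):        # brightness proportional to intensity
    return 255 * v / vmax

def logarithmic(v, vmax=4095):   # compresses highlights, lifts faint detail
    return 255 * math.log1p(v) / math.log1p(vmax)

def power_law(v, gamma, vmax=4095):   # gamma < 1 lifts faint features,
    return 255 * (v / vmax) ** gamma  # gamma > 1 favours bright ones

# A faint pixel (100 out of 4095) under each mapping:
print(round(linear(100)), round(logarithmic(100)), round(power_law(100, 0.5)))
# → 6 141 40
```

The same faint pixel is nearly invisible under a linear mapping but clearly lifted by the logarithmic and fractional-power mappings, which is exactly the behaviour the text describes.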

Histogram equalization

This is a means of flattening your histogram by putting equal numbers of pixels in each
“bin”; it serves to enhance mid-range features in an image with a wide range of intensity
values. When you equalize your histogram, you distribute the 4096 intensity values from
your CCD equally among the intensity values available in your software. This can be
particularly useful for bringing out features that lie close to the sky background, which
would otherwise be lost.
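Histogram equalization can be sketched as mapping each pixel through the normalised cumulative histogram (an illustrative pure-Python version operating on a flat list of pixel values):

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of integer pixel values by mapping
    each value through the normalised cumulative histogram, spreading
    the output intensities evenly over `levels` brightness levels."""
    n = len(pixels)
    hist = {}
    for v in pixels:
        hist[v] = hist.get(v, 0) + 1
    cdf, running = {}, 0
    for v in sorted(hist):           # cumulative distribution function
        running += hist[v]
        cdf[v] = running / n
    return [round((levels - 1) * cdf[v]) for v in pixels]

print(equalize([0, 0, 1, 2]))  # → [128, 128, 191, 255]
```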

After you have corrected your raw image, so that you are confident
that what you are seeing really comes from incident photons and not
from the electronic noise of your CCD, you may still have unwanted
components in your image.
It is time now to perform mathematical operations on your signal which
will enhance certain features, remove unwanted noise, smooth rough
edges, or emphasize certain boundaries.
...and this brings us to our last topic for this module:

Filter masks & other Mathematical Manipulations

The rhyme and reason

Basically, any signal contains information of varying frequencies
and phases. In digital signal enhancement, we attempt to accentuate the
components of that signal which carry the information we want, and
reduce to insignificance those components which carry the noise.
Audio equipment, such as your stereo or CD player, has filters
which do this for the one-dimensional, time-varying audio signal.
In digital image analysis we extend these techniques to two-
dimensional signals which are spatially varying.
In any case, the basic idea is the same:
Get the most out of your data
For the least amount of hassle!

Here’s how it Works:

You create an “n X n” matrix of numbers, such as 3 X 3 or 5 X 5, and
you move this across your image, like a little moving window, starting at
the upper left corner (that’s 0,0, recall).
You “matrix multiply” this with the pixel values in the image directly
below it to get a new value for the center pixel.
You move the window across your image, one pixel at a time, and repeat
the operation, until you have changed the appearance of the entire
image.
Here’s an example of one kind of filter:

1 2 1
2 4 2
1 2 1

If you move this matrix across an image and matrix-multiply along, you
will end up replacing the center pixel in the window with the weighted
average intensity of all the points located inside the window.

I11 I12 I13 I14 I15 …
I21 I22 I23 I24 I25 …
I31 I32 I33 I34 I35 …
I41 I42 I43 I44 I45 …
I51 I52 I53 I54 I55 …

You plop this window down over a 3 X 3 section of your image, do a
matrix multiplication on the center pixel, I22 in this example, and the
new value for I22 which is returned is

(1·I11 + 2·I12 + 1·I13 + 2·I21 + 4·I22 + 2·I23 + 1·I31 + 2·I32 + 1·I33) / 16

where 16 is the sum of the weights (1+2+1+2+4+2+1+2+1).
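The windowed weighted average described above can be sketched as a small convolution routine (an illustrative pure-Python version; the border handling, which simply keeps the original edge pixels, is our assumption since the text does not specify it):

```python
def convolve(image, kernel):
    """Slide an n x n kernel window across the image; each interior pixel
    becomes the kernel-weighted average of its neighbourhood (weights are
    normalised by the kernel sum). Border pixels, where the window would
    fall off the image, are left unchanged here."""
    k = len(kernel)
    off = k // 2
    norm = sum(sum(row) for row in kernel) or 1   # avoid division by zero
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(off, rows - off):
        for j in range(off, cols - off):
            acc = 0
            for di in range(k):
                for dj in range(k):
                    acc += kernel[di][dj] * image[i - off + di][j - off + dj]
            out[i][j] = acc / norm
    return out

smooth = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

# A lone bright spike is averaged down by the smoothing window:
print(convolve([[0, 0, 0], [0, 16, 0], [0, 0, 0]], smooth)[1][1])  # → 4.0
```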

Imagine your image in 3 dimensions, where the intensity is

plotted as height. Large scale features will appear as hills and valleys,
while small bright objects like stars will appear as sharp spikes. Some
features will have steep gradients, while others shallow. Some features
may have jagged edges, while others are smooth.
You can design little n X n windows to investigate and enhance
these kinds of features of your image. In the slides that follow, we will
show examples of low-pass, high-pass, edge-detection, gradient-
detection, sharpening, blurring, and bias filtering of your image.
Low-pass filters enhance the larger-scale features in your image.

High pass filters enhance the short period features in your image, giving
it a sharper look.
Some examples of high pass filters:
0 -1 0 0 -1 0 -1 -1 -1 -1 -1 -1
-1 20 -1 -1 10 -1 -1 10 -1 -1 16 -1
0 -1 0 0 -1 0 -1 -1 -1 -1 -1 -1

Edge detection filters are used to locate the boundaries between

regions of different intensities.
The “bias filter” makes an image look like a bas relief with shadows. This
can be useful for examining certain details

You can also combine processes, such as low pass, high pass, and image
subtraction, in a process called UNSHARP MASKING.

Unsharp masking consists of a 3 – step process:

1. Make a copy of the image where each pixel is the average of the
group of pixels surrounding it, so that the large features are not
disturbed, but the small ones are blurred (this is the unsharp mask).
2. The pixel values of the original image are multiplied by a
constant (“A”), and then the pixel values of the unsharp mask are
subtracted from this one or more times (“B”). In this way, the large
features are not changed by much, but the small ones are enhanced.
3. Finally, a low-pass filter is applied to the result.
You can also create your own filter to smooth or otherwise operate on
your image.
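The three steps can be sketched in one dimension (a toy version: a 3-point moving average serves as the blur, and the final low-pass step is omitted for brevity):

```python
def unsharp_mask_1d(signal, a=2.0, b=1.0):
    """Toy one-dimensional unsharp masking: blur with a 3-point moving
    average (the 'unsharp mask'), then return a*original - b*mask, so
    large smooth features are roughly preserved while fine detail is
    boosted. a and b play the roles of the constants "A" and "B" above."""
    n = len(signal)
    mask = []
    for i in range(n):
        window = signal[max(0, i - 1):min(n, i + 2)]
        mask.append(sum(window) / len(window))
    return [a * s - b * m for s, m in zip(signal, mask)]

print(unsharp_mask_1d([5, 5, 5]))     # flat region unchanged: [5.0, 5.0, 5.0]
print(unsharp_mask_1d([0, 9, 0])[1])  # the small spike 9 is boosted to 15.0
```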

1. Digital Image Processing, Rafael C. Gonzalez and Richard E. Woods.


Presented By,

Abstract. One is happy when one’s desires are fulfilled. The highest ideal of
Ubicomp is to make a computer so imbedded, so fitting, so natural, that we use it
without even thinking about it. Pervasive computing is referred to as Ubiquitous
computing throughout the paper. One of the goals of ubiquitous computing is to
enable devices to sense changes in their environment and to automatically adapt
and act based on these changes,

based on user needs and preferences. The technology required for ubiquitous
computing comes in three parts: cheap, low-power computers that include equally
convenient displays; a network that ties them all together; and software systems
implementing ubiquitous applications. Current trends suggest that the first
requirement will easily be met.

Our preliminary approach: activate the world. Provide hundreds of wireless
computing devices per person per office, of all scales. This has required new work
in operating systems, user interfaces, networks, wireless, displays, and many other
areas. We call our work “ubiquitous computing”. This is different from PDAs,
dynabooks, or information at your fingertips. It is invisible, everywhere computing
that does not live on a personal device of any sort, but is in the woodwork
everywhere.

Single-room networks based on infrared or newer electromagnetic technologies
have enough channel capacity for ubiquitous computers, but they can only work
indoors.

Cryptographic techniques already exist to secure messages from one ubiquitous
computer to another and to safeguard private information stored in networked
systems. We suggest using a cell-phone device available in the market for Ubicomp
as well, i.e., the handheld device will be used both for Ubicomp and as a cell
phone.

How Ubiquitous Networking will work

Ubicomp integrates computation into the environment, rather than having
computers which are distinct objects. Another term for this ubicomp is PERVASIVE
COMPUTING. Ubicomp is roughly the opposite of virtual reality. Where virtual
reality puts people inside a computer-generated world, ubicomp forces the
computer to live out here in the world with people. Ubiquitous computing
encompasses a wide range of research topics, including distributed computing,
mobile computing, sensor networks, human-computer interaction, and artificial
intelligence.

By using small radio transmitters and a building full of special sensors, your
desktop can be anywhere you are. At

the press of a button, the computer closest to you in any room becomes your
computer for as long as you need it.

In the Zone

In order for a computer program to track its user, a system should be developed
that could locate both people and devices, i.e., an ultrasonic location system. This
location tracking system has three parts:

Bats :- Small ultrasonic transmitters, worn by users.

Receivers :- Ultrasonic signal detectors embedded in the ceiling.

Central Controller :- Co-ordinates the bats and receiver chains.

Fig. 1. The ‘Bat’

Users within the system will wear a bat, a small device that transmits a 48-bit
code to the receivers in the ceiling. Bats also have an embedded transmitter,
which allows them to communicate with the central controller using a
bi-directional 433-MHz radio link. Bats are about the size of a pager. These small
devices are powered by a single 3.6-volt lithium thionyl chloride battery, which has
a lifetime of six months. The devices also contain two

buttons, two light-emitting diodes and a piezoelectric speaker, allowing them to be
used as ubiquitous input and output devices, and a voltage monitor to check the
battery status.

A bat will transmit an ultrasonic signal, which will be detected by receivers
located in the ceiling approximately 4 feet apart in a square grid. If a bat needs to
be located, the central controller sends the bat’s ID over a radio link. The bat will
detect its ID and send out an ultrasonic pulse, and the central controller measures
the time it takes for that pulse to reach the receivers. Since the speed of sound in
air is known, the bat’s position can be calculated from the times at which the
ultrasonic pulse reached three or more sensors.

By finding the positions of two or more bats attached to an object, the system
can determine the object’s orientation. The central controller can also determine
which way a person is facing by analyzing the pattern of receivers that detected
the ultrasonic signal and the strength of the signal.

The central controller creates a zone around every person and object within the
location system. The computer uses a spatial monitor to detect if a user’s zone
overlaps with the zone of a device.
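The time-of-flight positioning described above can be sketched as a toy 2-D trilateration (illustrative only: the receiver coordinates, units, and the exact three-receiver solution are our assumptions; the real system works in 3-D with many receivers and a statistical fit):

```python
SPEED_OF_SOUND = 343.0   # metres per second in room-temperature air

def locate(receivers, times):
    """Estimate a bat's (x, y) position from the times its ultrasonic
    pulse took to reach three ceiling receivers at known positions."""
    d = [SPEED_OF_SOUND * t for t in times]   # pulse travel distances
    (x1, y1), (x2, y2), (x3, y3) = receivers
    # Subtracting the circle equations pairwise leaves two linear ones:
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d[0] ** 2 - d[1] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d[0] ** 2 - d[2] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a1 * b2 - a2 * b1                    # solve by Cramer's rule
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receivers on a square ceiling grid; a bat at (1, 2) is recovered from
# the three pulse arrival times:
truth = (1.0, 2.0)
times = [((truth[0] - rx) ** 2 + (truth[1] - ry) ** 2) ** 0.5 / SPEED_OF_SOUND
         for rx, ry in [(0, 0), (4, 0), (0, 4)]]
print(locate([(0, 0), (4, 0), (0, 4)], times))  # ≈ (1.0, 2.0)
```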

Computer desktops can be created that actually follow their owners anywhere
within the system. Just by approaching any computer display in the building, the
bat can enable the virtual network computing desktop to appear on that display.

Here, in contrast, Ubi-Finger is the gesture-input device, which is simple,
compact and optimized for mobile use. Using our systems, a user can detect a
target device by pointing with his/her index finger, and then control it flexibly by
performing natural gestures of fingers (Fig. 2).

Fig. 2. An example to control Home Appliances: by pointing at a light and making
a gesture like “push a switch”, the light will turn on!

As shown in Fig. 3, Ubi-Finger consists of three sensors to detect gestures of
fingers, an infrared transmitter to select a target device in the real world, and a
microcomputer to control these sensors and communicate with a host computer.
Each sensor generates the information of motions as follows: (1) a bending degree
of the index finger, (2) tilt angles of the wrist, (3) operations of touch sensors by a
thumb. We use (1) and (2) for recognition of gestures, and use (3) for the trigger
mechanism to start and stop gesture recognition.

Fig. 3.

Mouse Field

Although various interaction technologies for handling information in the
ubiquitous computing environment have been proposed, some technologies are too
simple for performing rich interaction, and others require special expensive
equipment to be installed everywhere, and cannot soon be available in our everyday

environment. Here there is a new simple and versatile input device called the
Mouse Field, which enables users to control various applications easily without a
huge amount of cost.

A Mouse Field consists of an ID recognizer and motion sensors that can detect
an object and its movement after the object is placed on it. The system can
interpret the user’s action as a command to control the flow of information.

“Placing” (detecting an object)    “Moving” (detecting its movement)

Fig. 4. Basic concept of Mouse Field

Mouse Field is a device which combines an ID reader and motion-sensing devices
into one package. Fig. 4 shows an implementation of Mouse Field, which

consists of two motion sensors and an RFID reader hidden under the surface. The
RFID reader and the two optical mice are connected to a PC through a USB cable,
and they can detect the ID and the motion of the object put on the device. When a
user puts an object with an RFID on the Mouse Field, it first detects what was
placed on the RFID reader. When the user moves or rotates the object, the motion
sensors detect the direction and rotation of the object.

Front view    Back view

Fig. 5. Implementation of a Mouse Field Device.

Fig. 5 shows how a user can enjoy music using a Mouse Field and CD jackets
which represent the music in the CD. All the music in the CD is saved in a music
server, and an RFID tag is attached to each jacket.

These can be used to control various parameters without special equipment.
Information Hoppers and Smart Posters
Once these zones are setup, computers on the network will have some
interesting capabilities. The system will help to store and retrieve data in an
Information hopper. This is a timeline of information that keeps track of
when data is created. The hopper knows who created it, where they were
and who they were with.
Another application that will come out of this ultrasonic location system is the
smart poster.
A conventional computer interface requires you to click on a button on your
computer screen. In this new system, a button can be placed anywhere in
your workplace, not just on the computer display. The idea behind smart
posters is that a button can be a piece of paper that is printed out and stuck
on a wall.
Smart posters will be used to control any device that is plugged into the
network. The poster will know where to send a file and a user’s preferences.
Smart posters could also be used in advertising new services. To press a
button on a smart poster, a user will simply place his or her bat in the smart
poster button and click the bat. The system automatically knows who is
pressing the poster’s button. Posters can be created with several buttons on them.
Ultrasonic location systems will require us to think outside of the box.
Traditionally, we have used our files, and we may back up these files on a
network server. This ubiquitous network will enable all computers in a
building to transfer ownership and store all our files in a central timeline.
Moving towards a future of Ubiquitous Computing
We suggest a new method to carry all of your personal media with you in a
convenient pocket form factor, and have wireless access to it when standing in
front of a PC, kiosk, or large display, anywhere in the world, which might
significantly improve your mobile computing experience.
Intel researchers are developing a new class of mobile device that leverages
advances in processing, storage, and communications technologies to provide
ubiquitous access to personal information and applications through the existing
fixed infrastructure. The device, called a personal server is a small, lightweight
computer with high-density data storage capability. It requires no display, so it
can be smaller than a typical PDA. A wireless interface enables the user to
access content stored in the device through whatever displays are available in
the local environment. For example, in the digital home, the personal server
could wirelessly stream audio and video stored on the device to a PC or digital
home TV.
The technology to enable these scenarios and more is now being explored.
We are moving toward a future in which computing will be ubiquitous, woven
seamlessly into the fabric of everyday life. Researchers are engaged in several
projects to explore technologies and usage models for everyday uses of
computing. In their research, they are addressing fundamental issues that
must be resolved in order to enable “anytime, anywhere” computing.
To make ubiquitous computing a reality will require the collaboration of
researchers in a broad range of disciplines, within computer science and beyond.







3rd year CSE 3rd year CSE

The present century has been one of many scientific

discoveries and technological advancements. With the advent of technology
came the issue of security. As computing systems became more complicated,
there was an increasing need for security.
This paper deals with cryptography, which is one of the
methods to provide security. It is needed to make sure that information is
hidden from anyone for whom it is not intended. It involves the use of a
cryptographic algorithm used in the encryption and decryption process. It
works in combination with the key to encrypt the plain text. Public key
cryptography provides a method to involve digital signatures, which provide
authentication and data integrity. To simplify this process an improvement is
the addition of hash functions.
The main focus of this paper is on quantum cryptography,
which has the advantage that the exchange of information can be shown to
be secure in a very strong sense, without making assumptions about the
intractability of certain mathematical problems. It is an approach of securing
communications based on certain phenomena of quantum physics. There are
two bases to represent data by this method depending on bit values. There
are ways of eavesdropping even on this protocol, including the Man-in-the-
Middle attack. Quantum computers could do some really phenomenal
things for cryptography if the practical difficulties can be overcome.
Encryption and decryption:
Data that can be read and understood without any special measures is
called plaintext or clear text. The method of disguising plaintext in such a
way as to hide its substance is called encryption. Encrypting plaintext results
in unreadable gibberish called cipher text. You use encryption to make sure
that information is hidden from anyone for whom it is not intended, even
those who can see the encrypted data. The process of reverting ciphertext to
its original plaintext is called decryption.

Figure 1-1. Encryption and decryption

Strong cryptography:
Cryptography can be strong or weak.
Cryptographic strength is measured in the time and resources it would
require to recover the plaintext. The result of strong cryptography is
ciphertext that is very difficult to decipher without possession of the
appropriate decoding tool. How difficult? Given all of today’s computing
power and available time—even a billion computers doing a billion checks a
second—it is not possible to decipher the result of strong cryptography
before the end of the universe.
How does cryptography work?
A cryptographic algorithm, or cipher, is a mathematical function used
in the encryption and decryption process. A cryptographic algorithm works in
combination with a key—a word, number, or phrase—to encrypt the
plaintext. The same plaintext encrypts to different ciphertext with different
keys. The security of encrypted data is entirely dependent on two things: the
strength of the cryptographic algorithm and the secrecy of the key.
A cryptographic algorithm, plus all possible keys and all the protocols
that make it work, comprise a cryptosystem. PGP is a cryptosystem.

Conventional cryptography:
In conventional cryptography, also called secret-key or symmetric-key
encryption, one key is used both for encryption and decryption. The Data
Encryption Standard (DES) is an example of a conventional cryptosystem
that is widely employed by the U.S. government.

Public key cryptography

The problems of key distribution are solved by public key
cryptography. Public key cryptography is an asymmetric scheme that uses a
pair of keys for encryption: a public key, which encrypts data, and a
corresponding private key (secret key) for decryption.
It is computationally infeasible to deduce the private key from the
public key. Anyone who has a public key can encrypt information but cannot
decrypt it. Only the person who has the corresponding private key can
decrypt the information.
The primary benefit of public key cryptography is that it allows people
who have no preexisting security arrangement to exchange messages
securely. The need for sender and receiver to share secret keys via some
secure channel is eliminated; all communications involve only public keys,
and no private key is ever transmitted or shared. Some examples of public-
key cryptosystems are Elgamal, RSA, Diffie-Hellman and DSA, the Digital
Signature Algorithm.
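As a sketch of how such an asymmetric key pair behaves, here is a toy RSA in Python with textbook-sized primes (illustrative only and utterly insecure; real systems use keys of 1024 bits and up, as noted below; the modular-inverse form of `pow` needs Python 3.8+):

```python
# Toy RSA with tiny textbook primes -- illustration only, utterly insecure.
p, q = 61, 53
n = p * q                    # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: (e * d) % phi == 1

def encrypt(m, public=(e, n)):    # anyone may encrypt with the public key
    return pow(m, public[0], public[1])

def decrypt(c, private=(d, n)):   # only the private-key holder can decrypt
    return pow(c, private[0], private[1])

m = 65
c = encrypt(m)
print(c, decrypt(c))  # → 2790 65
```

The public pair (e, n) can be handed to anyone; recovering d from it requires factoring n, which is easy for 3233 but computationally infeasible at real key sizes.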

Keys:
A key is a value that works with a cryptographic algorithm to produce
a specific ciphertext. Keys are basically really, really, really big numbers. Key
size is measured in bits; the number representing a 2048-bit key is huge. In
public-key cryptography, the bigger the key, the more secure the ciphertext.
However, public key size and conventional cryptography’s secret key size are
totally unrelated. A conventional 80-bit key has the equivalent strength of a
1024-bit public key. A conventional 128-bit key is equivalent to a 3000-bit
public key. Again, the bigger the key, the more secure, but the algorithms
used for each type of cryptography are very different.
While the public and private keys are mathematically related, it’s very
difficult to derive the private key given only the public key; however, deriving
the private key is always possible given enough time and computing power.
This makes it very important to pick keys of the right size; large enough to
be secure, but small enough to be applied fairly quickly.

Larger keys will be cryptographically secure for a longer period of

time. Keys are stored in encrypted form. PGP stores the keys in two files on
your hard disk; one for public keys and one for private keys. These files are
called keyrings.
If you lose your private keyring you will be unable to decrypt any
information encrypted to keys on that ring.

Digital signatures:
A major benefit of public key cryptography is that it provides a method
for employing digital signatures. Digital signatures let the recipient of
information verify the authenticity of the information’s origin, and also verify
that the information was not altered while in transit. Thus, public key digital
signatures provide authentication and data integrity. A digital signature also
provides non-repudiation, which means that it prevents the sender from
claiming that he or she did not actually send the information. These features
are every bit as fundamental to cryptography as privacy, if not more.
A digital signature serves the same purpose as a handwritten
signature. However, a handwritten signature is easy to counterfeit. A digital
signature is superior to a handwritten signature in that it is nearly impossible
to counterfeit, plus it attests to the contents of the information as well as to
the identity of the signer.
Some people tend to use signatures more than they use encryption.
Instead of encrypting information using someone else’s public key, you
encrypt it with your private key. If the information can be decrypted with
your public key, then it must have originated with you.
Hash functions:
The system described above has some problems. It is slow, and it
produces an enormous volume of data—at least double the size of the
original information. An improvement on the above scheme is the addition of
a one-way hash function in the process. A one-way hash function takes
variable-length input—in this case, a message of any length, even thousands
or millions of bits—and produces a fixed-length output, say, 160 bits. The
hash function ensures that, if the information is changed in any way—even
by just one bit—an entirely different output value is produced.
PGP uses a cryptographically strong hash function on the plaintext the
user is signing. This generates a fixed-length data item known as a message
digest. Then PGP uses the digest and the private key to create the
“signature.” PGP transmits the signature and the plaintext together. Upon
receipt of the message, the recipient uses PGP to recompute the digest, thus
verifying the signature. PGP can encrypt the plaintext or not; signing
plaintext is useful if some of the recipients are not interested in or capable of
verifying the signature.
As long as a secure hash function is used, there is no way to take
someone’s signature from one document and attach it to another, or to alter
a signed message in any way. The slightest change to a signed document will
cause the digital signature verification process to fail. Digital signatures play
a major role in authenticating and validating the keys of other PGP users.
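The digest behaviour described above can be demonstrated with Python's standard hashlib (SHA-256 is our choice of hash for illustration; it is the digest a private key would actually sign in a scheme like this):

```python
import hashlib

def digest(message: bytes) -> int:
    """Fixed-length message digest (SHA-256 here), as a big integer.
    In a hash-then-sign scheme, this is the value the private key signs."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big")

original = b"Pay Alice 100 rupees"
tampered = b"Pay Alice 900 rupees"

# A one-character change produces a completely different digest, so a
# signature computed over the original digest no longer verifies:
print(digest(original) == digest(original))   # → True
print(digest(original) == digest(tampered))   # → False
```

Signing the short fixed-length digest instead of the whole message is what removes the speed and size problems described above.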


Quantum cryptography:
• Ability to detect eavesdropping.
• Detection works only after the information was taken.
• Usually requires a classical information channel for effective operation.
• Primarily used for key exchange for classical cryptography.
• The key doesn’t have any information value.
• The receiver knows if any parts of the key are intercepted.

The BB84 protocol:
• The first protocol for Quantum Cryptography.
• Introduced by Charles H. Bennett from IBM NY and Gilles Brassard from the
University of Montreal in 1984.
• The protocol uses both classical and quantum channels.
• There are many variations of this protocol.

Two basis are used:
• Vertical

• Diagonal

Depending the bit value the direction on the basis is chosen.

• The Sequence of events:
- A generates random key and encoding basis.
- A sends the polarized photons to B.
- A announces the polarization for each bit.
- B generates random encoding basis.
- B measures photons with random basis.
- B announces which basis are the same as A’s.
• Finally, the matching bits are used as the key for a classical channel.

• Privacy amplification is used to generate the final key.
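The sequence above is easy to simulate classically. The sketch below is a toy model, not real quantum optics; it shows that about half the bits survive sifting and, under an intercept-resend eavesdropper, roughly a quarter of the sifted bits disagree:

```python
import random

random.seed(1)                      # reproducible toy run
N = 4000

def rand_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def measure(bits, send_bases, recv_bases):
    # A photon read in the sender's basis gives the correct bit;
    # read in the other basis, the outcome is random.
    return [b if s == r else random.randint(0, 1)
            for b, s, r in zip(bits, send_bases, recv_bases)]

a_bits, a_bases = rand_bits(N), rand_bits(N)  # Alice's key and encoding bases
b_bases = rand_bits(N)                        # Bob's measurement bases

# Without an eavesdropper: sift, then compare (no errors expected).
b_bits = measure(a_bits, a_bases, b_bases)
keep = [i for i in range(N) if a_bases[i] == b_bases[i]]
errs = sum(a_bits[i] != b_bits[i] for i in keep)
print(f"no Eve: {len(keep)} sifted bits (about N/2), {errs} errors")

# Intercept-resend attack: Eve measures in random bases and resends.
e_bases = rand_bits(N)
e_bits = measure(a_bits, a_bases, e_bases)
b_bits = measure(e_bits, e_bases, b_bases)
errs = sum(a_bits[i] != b_bits[i] for i in keep)
print(f"with Eve: {errs}/{len(keep)} sifted bits wrong (about 25%)")
```

The 25% figure arises because Eve guesses the wrong basis half the time, and each wrong guess randomizes Bob's result with probability 1/2.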



Eavesdropping on the quantum channel requires measuring the photons,
therefore perturbing the system. The eavesdropper is then forced to resend
the photons at random polarizations, so the receiver ends up with errors in
about 25% of the sifted key bits, revealing the intrusion.


Man-in-the-middle attack:
Requires the attacker to take over both the classical and quantum channels.
Can be prevented by authenticating the messages on the classical channel.


Advantages:
• Security is based on natural quantum laws rather than on computational
complexity assumptions, which can expire.

• There is no expiration date on the security of QC messages.

• Perfect for public communication.

• Easy to detect an eavesdropper.

Limitations:
• Severely limited by technology.

• Practical systems are limited by distance.

• Photon emitters and detectors are far from perfect, causing a lot of errors.
• Most protocols require a classical channel.


Progress in quantum optics has resulted in new photon sources, new

photo-detectors, and better optical fibers; the components which have the
potential for exhibiting the relevant quantum phenomena over much larger
distances. This holds out a reasonable prospect for implementation of a
secure key distribution system on a large local area network, transmitting
at about 20k bits per second with present technology.


References:

i. William Stallings, “Cryptography and Network Security: Principles and
Practice”, Third Edition.
ii. W. Diffie and M. Hellman, “Multiuser Cryptographic Techniques”.
iii. R. Rivest, A. Shamir and L. Adleman, “A Method for Obtaining Digital
Signatures and Public-Key Cryptosystems”.

 Presented by:

II/IV –B-Tech
ID: 06091A0582
Ph: 9908538218

II/IV –B-Tech
ID: 06091A0577
Ph: 9441190808


Abstract:

Today’s work force is increasingly mobile, yet the speed at

which business operates demands that mobile workers stay in
constant communication with their customers and colleagues. To
provide them with instant access to enterprise, personal and
Internet information, many corporations are designing and
deploying ‘Mobile computing’ solutions.

These solutions allow corporations to reap a significant

return on investment by

1. Increasing worker productivity.

2. Eliminating task duplication.
3. Improving customer service.
4. Providing point of service revenue opportunities.

Mobile computing is an entirely new paradigm of computing

and communication. Mobile users want to use different devices and
have information formed appropriately for each. Hence wireless
solutions need to address the unique needs of mobile workers. Unlike
their wired counterparts, software for mobile devices must be designed
with resource limitations, battery power and display size in mind.
Consequently, new software and hardware techniques must be developed.

Holding records of all the latest modifications done to a
landmark is very difficult for a stationary record holder. To overcome
this disadvantage, mobile record holding came into existence.

In this particular paper we deal with location-based intelligence

technology, which can provide spatial data and allows personnel to
react and respond to emergency situations. It ensures homeland
security and public safety. We also deal with MapInfo.

Mobile MapInfo helps field workers obtain critical information
on natural calamities.


Wireless networking technology has engendered a new era of

computing, called mobile computing. Mobile Computing is an umbrella
term used to describe technologies that enable people to access network
services any place, anytime, and anywhere.

Ubiquitous computing and nomadic computing are synonymous

with mobile computing. Mobile computing helps users to be productive
immediately by reducing the training requirements associated with
traditional automated data collection methods and provides a higher level of
portability than keyboard-based systems.

Field-based users can access any information available from the

system at any time to make critical business decisions. This information is
available at the point of use, wherever and whenever they need it. Portable
devices like laptop and palm top computers give mobile users access to
diverse sources of global information anywhere and at any time.

Wireless refers to the method of transferring information between

computing devices, such as a personal data assistant (PDA), and a data
source, such as an agency database server, without a physical connection.
Not all wireless communications technologies are mobile. For example,
lasers are used in wireless data transfer between buildings, but cannot be
used in mobile communications at this time.

Mobile simply describes a computing device that is not restricted to a

desktop. A mobile device may be a PDA, a "smart" cell phone or Web
phone, a laptop computer, or any one of numerous other devices that allow
the user to complete computing tasks without being tethered, or connected,
to a network. Mobile computing does not necessarily require wireless
communication. In fact, it may not require communication between devices
at all.

Mobile devices

Here we have several different types of mobile devices: laptop

computers, PDAs, handheld PCs, pagers, smartphones, cellular phones,
bar-code scanners, Bluetooth-enabled devices, and so on.

Challenges in mobile computing

Wireless and mobile environments bring different challenges to users

and service providers when compared to fixed, wired networks. Physical
constraints become much more important, such as device weight, battery
power, screen size, portability, quality of radio transmission, error rates.
The major challenges in mobile computing are described including:
low bandwidth, high error rate, power restrictions, security, limited
capabilities, disconnection and problems due to client mobility.

Security and privacy are of specific concerns in wireless
communication because of the ease of connecting to the wireless link
anonymously. Common problems are impersonation, denial of service and
tapping. The main technique used is encryption. In addition, personal
profiles of users are used to restrict access to the mobile units.


MapInfo Government Grant Program Aids Homeland Security …..

MapInfo Corporation has announced its Government Grant Program to

assist communities with a population of less than 150,000 in the
development and deployment of homeland security and continuity initiatives.

The company’s location-based intelligence technology will enable

spatial data on the Internet to be shared across departments, such as public
works, public utilities, police and fire departments, allowing personnel to
immediately react and respond to emergency situations.

Field workers equipped with a laptop or handheld device will be able

to exchange critical location information regarding flood areas, storm
patterns, earthquake regions and homeland security practices.

MapInfo software easily integrates with a municipality’s existing IT

infrastructure, eliminating the need for additional technology investments.
Both internal and external databases can be accessed with MapInfo, so
multiple government organizations can share data in different formats.

With the ability to convert any address or landmark into a point on a

map, MapInfo’s homeland security solutions enable government
organizations to make better-informed decisions to protect their citizens and
assets, the company said.

To qualify, municipalities must submit a homeland security, public

safety or continuity government plan for the use of MapInfo software,
which includes:
 MapInfo Professional, a location-intelligence solution for
performing advanced and detailed data analysis and data
creation to plan logistics and prepare for emergency response.
 MapInfo Discovery, an enterprise-wide solution that enables

users to share interactive location analysis reports and maps via

the Internet or intranet.
 MapInfo StreetPro Display County, a database that contains

addressed street segments for up-to-date analysis and timely

emergency response practices.
 MapInfo MapMarker Plus County, a geocoding engine that

adds geographic coordinates to every record in a database,

enabling users to map, analyze and share homeland security data.


Mobile computers are also characterised as ubiquitous computers.

Ubiquity is the quality or state of being everywhere.

Some of the uses for the mobile computers can be:

 For Estate Agents

Estate agents can work either at home or out in the field. With
mobile computers they can be more productive. They can obtain
current real estate information by accessing multiple listing services,
which they can do from home, office or car when out with clients. They
can provide clients with immediate feedback regarding specific homes
or neighborhoods, and with faster loan approvals, since applications
can be submitted on the spot. Therefore, mobile computers allow them
to devote more time to clients.
 In courts

Defense counsels can take mobile computers in court. When the

opposing counsel references a case with which they are not familiar, they
can use the computer to get direct, real-time access to on-line legal
database services, where they can gather information on the case and
related precedents. Therefore mobile computers allow immediate
access to a wealth of information, making people better informed and better prepared.

 In companies

Managers can use mobile computers in, say, critical

presentations to major customers. They can access the latest market
share information. At a small recess, they can revise the presentation
to take advantage of this information. They can communicate with the
office about possible new offers and call meetings for discussing
responses to the new proposals. Therefore, mobile computers can
leverage competitive advantages.

 Government:
Applications center around assessments, inspections, and work
orders. Most of these applications involve auditing some sort of
facility or process (food service, restaurant, nursing home, child
care, schools, commercial and residential buildings).

 Healthcare:
The focus in this industry has been on automating patient
records, medication dispension, and sample collection. A common
goal is to leverage mobile computing in the implementation of
positive patient identification.
Uses like the above are endless. People find one that serves their needs so
more and more are subscribing for mobile computers.


Mobile computers are something like the opposite to virtual reality.

Where virtual reality puts people inside a computer-generated world, mobile
computing forces the computer to live out here in the world with people.


With rapid technological advancements in Artificial Intelligence

(AI), Integrated Circuitry and increases in computer processing
speeds, the future of mobile computing looks increasingly exciting.

First phase started with large main frames -shared by many

users, and then came the personal computers that allowed single
user processing tasks and now the trend is of mobile computers,
where a single user can use many computers.

Use of AI may allow mobile units to be the ultimate personal

secretaries, which can receive emails and paging messages,
understand what they are about, and change the individual’s personal
schedule according to the message. The individual can then use this to
plan his/her day.

The introduction of 3G and other advanced technologies will lead

to many applications, which are easily accessible and easy to use.
Mobile Computing is an emerging technology with most promising
features like high speed data transfers, availability and accessibility of
data from remote locations, and so on.

In this paper, we briefly described some of the most

important technologies for Mobile Computing. Mainly the map
information that is described in this context is a milestone in the
present packed world. WAP will take a major role in future mobile
applications. We need to develop technologies that suite for mobile
devices by considering factors like resource limitation, bandwidth
availability and accessibility problems. Requirements need to be
reviewed and studied very carefully by all the involved actors.

Our analysis, as presented in this paper, shows that the

technologies and issues involved in mobile computing deployment and
provision cover a very wide spectrum including operating system
capabilities, user interface design, positioning techniques, terminal
technologies, network capabilities, etc.

The meticulous mapping of these technical aspects to the

identified requirements is a critical success factor for mobile computing.
Stay tuned … mobile computing is the way the world is heading.


References:

1. Interview with Mr Eleftherios Koudounas, Assistant Commercial Services
Manager, Cyprus Telecommunications Authority.
2. Interview with Dr Leonidas Leonidou, Mobile Services, Cyprus
Telecommunications Authority.
3. Interview with Dr Zinonas Ioannou, Mobile Services, Cyprus
Telecommunications Authority.
4. M. Flack & M. Gronow, “Cellular Communications for Data Transmission”.
5. “Visions of a cellular future”.
6. “An overview of cellular technology”.
7. John Gallant, Technical Editor, PCSI, “The CDPD Network”.


Submitted by

Roll.No.660751011 Roll.No.660751017
3rd year CSE 3rd year CSE





Nowadays we are facing a majority of crimes related to security

issues, and these arise due to the leakage of passwords or illegal
authentication. At one end there is a continuous and tremendous
improvement in the lifestyle of humans, while at the other end
technological crimes are increasing rapidly. As there is a problem, there
must be a solution: the need for a robust technology which can be adopted
is highly imperative. Technologies capable of identifying each person
uniquely need to be developed. The only powerful solution to the problem
of illegal authentication is Biometrics.
This paper provides an overall idea of Biometrics , the typical
Biometric Model, an overview of the Biometric techniques and focuses mainly
on Keystroke Biometrics which is easy to implement and can provide fool
proof security based on the effectiveness of the algorithm.


As the saying goes, “NECESSITY IS THE MOTHER OF
INVENTION”, and the need for a new type of identification and authentication
technique has led to the development of Biometrics. “Biometrics is an
automated method of recognizing a person based on a physiological or
behavioral characteristic.”

Biometric-based solutions are able to provide for confidential

financial transactions and personal data privacy. Most systems make use of a
personal identification code in order to authenticate the user. In these
systems, the possibility of malicious user gaining access to the code cannot
be ruled out. However, combining the personal identification code with
biometrics provides for robust user authentication system. Biometrics is of
two kinds: One deals with the physical traits of the user (Retinal scanning,
Fingerprint scanning, DNA testing etc.,) and the other deals with the
behavioral traits of the user (Voice recognition, Keystroke dynamics, etc.)
.Utilized alone or integrated with other technologies such as smart cards,
encryption keys and digital signatures, biometrics is set to pervade nearly
all aspects of the economy and our daily lives.
THE BIOMETRIC MODEL: The biometric authentication system consists of:
the user interface or biometric reader, a communication subsystem, the
controlling software, and data storage.

[Figure: The biometric model: biometric capture (data collection),
feature extraction, template creation and enrollment, followed by
matching-score generation against the stored template.]

Biometric system works by taking a number of samples of

physiological or behavioral characteristics to produce a reliable template of
the user information. The user is verified against a template in the memory,
which he claims to be himself and the user is authenticated if the biometric
pattern of the user matches with the template. The biometric sample of the
person is not stored in the host computer or the controller. So there is no
possibility of the others getting it. Moreover, the biometric template of
person is stored in the form of a dynamic binary template with suitable
encryption to provide utmost security.


 Fingerprint Verification: This is one of the oldest forms of biometric

techniques which involves mapping of the pattern of the fingerprint of
the individual and then comparing the ridges, furrows, within the
template. The fingerprint given to the device is first searched at the
coarse level in the database and then finer comparisons are made to
get the result.
 Iris Recognition: In Iris and Retinal scanning, the iris and the retina
are scanned by a low intensity light source and the image is compared
with the stored patterns in the database template. They are among the fastest
and most secure forms of biometry.
 Facial Scanning: Facial scanning involves scanning of the entire face
and checking of critical points and areas in the face with the template.
This method is not completely reliable and so it is used in association
with another biometric technique.
 Hand and Finger geometry: This method uses the data such as
length, shape, distance between the fingers, overall dimensions of the
hand and also the relative angle between the fingers. Modern systems
use this technique in association with fingerprint scanning.
 Voice Biometry: It is proved that the frequency, stress and accent of
speech differ from person to person. Voice biometry uses this concept
to solve the problem of illegal user.
 Signature Verification: This technology uses the dynamic analysis of
a signature to authenticate a person. This technology is based on
measuring speed, pressure and angle used by the person when a
signature is produced.
 Keystroke dynamics: In this technique, the system analyses the
rhythm in which the password is typed.

“The keystroke biometrics makes use of the inter-stroke gap
that exists between consecutive characters of the user identification
code.” When a user types his authentication code, there exists a particular
rhythm or fashion in typing the code. If there does not exist any abrupt
change in this rhythmic manner, this uniqueness can be used as an
additional security constraint. It has been proved experimentally that the
manner of typing the same code varies from user to user. Thus this can be
used as a suitable biometric. Further, if the user knows beforehand about
the existence of this mechanism, he can intentionally introduce a rhythm to
suit his needs.

As the user logs onto the system for the first time, a database entry is
created for the user. He is then put through a training period, which consists
of 15-20 iterations. During this time, one obtains the inter-stroke timings of
all the keys of the identification code. The inter stroke interval between the
keys is measured in milliseconds. The systems’ delay routine can be used to
serve the purpose. The delay routine measures in milliseconds and the
amount of delay incurred between successive strokes can be used as a
counter to record this time interval.
The mean and standard deviation of the code are calculated. This is
done in order to provide some leverage to the user typing the code. The
reference level that we chose is the mean of the training period and the
rounded standard deviation is used as the leverage allotted per user. These
values are fed into the database of the user. These details can also be
incorporated onto the system’s password files in order to save the additional
overhead incurred.
The mean and the standard deviation can be determined by
using the relationship given below.

Mean = (1/n) × Σ X(i)

Standard deviation = √[ Σ (X(i) − Mean)² / n ]

Once the database entry has been allotted for the user, this can be used in
all further references to the user. The next time the user tries to login, one
would obtain the entered inter-stroke timing along with the password. A
combination of all these metrics is used as a security check of the user. The
algorithm given below gives the details of obtaining the authorization for a
particular user. The algorithm assumes that the database already exists in
the system and that a system delay routine is available.
While considering any system for authenticity, one needs to
consider the false acceptance rate (FAR) and the false rejection rate (FRR).

The FAR is the percentage of unauthorized users accepted by the system.

The FRR is the percentage of authorized users not accepted by the system.

An increase in one of these metrics decreases the other and vice

versa. The level of error must be controlled in the authentication system by
the use of a suitable threshold such that only the required users are selected
and the others who are not authorized are rejected by the system. In this
paper, standard deviation of the user’s training period entry is used as a
threshold. The correct establishment of the threshold is important since too
strong a threshold would lead to a lot of difficulty in entry even for the legal
user, while a lax threshold would allow non-authorized entry. Thus a balance
would have to be established taking both the factors into consideration
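The trade-off can be illustrated with made-up deviation scores; the numbers below are hypothetical, chosen only to show how moving the threshold shifts FAR against FRR:

```python
# Made-up deviation scores for illustration; lower means closer to the
# stored typing template (the reference mean).
genuine  = [0.4, 0.6, 0.9, 1.1, 1.3, 1.8]   # attempts by the legal user
impostor = [1.0, 1.5, 2.0, 2.4, 3.0, 3.6]   # attempts by other users

def far_frr(threshold):
    """FAR: fraction of impostor attempts accepted; FRR: fraction of
    genuine attempts rejected, at a given deviation threshold."""
    far = sum(s <= threshold for s in impostor) / len(impostor)
    frr = sum(s > threshold for s in genuine) / len(genuine)
    return far, frr

for t in (0.5, 1.0, 1.5, 2.0):
    far, frr = far_frr(t)
    print(f"threshold {t}: FAR = {far:.2f}, FRR = {frr:.2f}")
```

Tightening the threshold drives FAR toward zero but inflates FRR, and loosening it does the opposite, which is exactly the balance the text describes.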


Input: User name, User_id, Password.

Output: Registration of a new user (or) Acceptance of a user if registered

(or) Rejection of an unregistered user.

main ()
{
  if (User == New)
  {
    read (User);             // Get the user name, user_id and password
    read (Inter-stroke gap); // Time interval between consecutive keys
    Add user (database);     // Add the user to the database
    User count = 1;
  }
  else if (User == Training)
  {
    read (User);
    read (Inter-stroke gap);
    if (Check (User, Password))
    {
      if (User count < 15)
      {
        update (User count); // User count = User count + 1
        add (Inter-stroke gap);
      }
      else if (User count == 15)
      {
        update (User count);
        add (Inter-stroke gap);
        Calculate Mean (M), Standard deviation (S.D);
      }
    }
  }
  else if (User == Existing)
  {
    read (User);
    read (deviation);
    if (Check (User, Password, deviation))
      accept the user;
    else
      reject the user;
  }
}
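The enrollment and verification steps can be sketched in Python. The training samples below are invented inter-stroke timings, and a per-gap window of mean plus or minus k standard deviations stands in for the paper's threshold rule:

```python
import statistics

# Hypothetical enrollment data: inter-stroke gaps (in ms) collected while
# the user typed a 6-character code (5 gaps per attempt). The paper uses
# 15-20 training iterations; three are shown to keep the sketch short.
training = [
    [120, 95, 140, 110, 130],
    [118, 99, 138, 112, 127],
    [125, 92, 145, 108, 133],
]

# Per-gap reference level (mean) and leverage (standard deviation).
means  = [statistics.mean(col)   for col in zip(*training)]
stdevs = [statistics.pstdev(col) for col in zip(*training)]

def verify(attempt, k=2.0):
    """Accept only if every inter-stroke gap lies within mean +/- k*stdev."""
    return all(abs(t - m) <= k * s + 1e-9
               for t, m, s in zip(attempt, means, stdevs))

print(verify([121, 96, 141, 110, 129]))  # rhythm matches the template: True
print(verify([300, 40, 500, 20, 400]))   # right code, wrong rhythm: False
```

Even with the correct password, the second attempt is rejected because its rhythm falls outside every reference window.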

Analysis of inter-keystroke timing of user code:

A graph is plotted between keystrokes and keystroke timing. The
‘X’ axis indicates the number of inter-keystrokes and the negative ‘Y’ axis
indicates the inter-keystroke timing in milliseconds.
User accepted:
Graph I shows the inter-keystroke timing analysis when the user is
accepted. Here it can be easily seen that when the user is authentic or when
he types in his normal rhythm, the user automatically comes into the
predefined ranges. The current inter-keystroke timing lies around the
database inter-keystroke timing, staying within the predefined ranges.
FAR and FRR can thereby be reduced to a great extent, so that only the
legal user gets access to the system. The +R and -R boundaries give the
desired range within which the legal user gets access.

[Graph I: Inter-keystroke timing when the user is accepted. Legend:
db = database timing, c = current timing, +R = +ve boundary,
-R = -ve boundary.]
In the graph, the line (L3) indicates the current pattern of typing the access
code on the keyboard; the line (L2) indicates the keystroke pattern according
to the reference level; and the lines (L1) and (L4) indicate the positive and
negative ranges. The ranges can be decided by the standard deviation
method, which is used here for the analysis, or by any other adaptive method.


User not accepted:

Graph II indicates inter-keystroke timing when the user is not
legal or not following his rhythmic behavior of typing the access code. It can
be easily noticed that when the user is not legal, his typing pattern for the
access code is not at all within the predefined ranges.

[Graph II: Inter-keystroke timing when the user is not legal or not following
his rhythmic behaviour. Legend: db = database timing, c = current timing,
+R = +ve boundary, -R = -ve boundary.]

A biometric system which relies only on a single biometric
identifier is often not able to meet the desired performance requirements.


Identification based on multiple biometrics represents an emerging trend.

This system takes the advantage of the capabilities of each individual
biometric and overcomes the limitations of individual biometric. This multi
biometric system operates with an admissible response time.
EXAMPLE (A Multibiometric system):


[Figure: A multibiometric system. Face: a face locator and face extractor
feed an eigenspace comparison (using eigenspace projection and HMM
training performed during enrollment). Fingerprint: a minutiae extractor
feeds minutiae matching. Voice: a cepstral analyzer feeds HMM scoring.
The three matching scores enter a decision-fusion module, which outputs
Accept/Reject.]
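The decision-fusion stage can be illustrated with a simple weighted-sum rule over normalized matcher scores. The weights and threshold below are made-up values, one of many possible fusion schemes:

```python
# Made-up weights and threshold for a weighted-sum fusion rule; each
# matcher is assumed to return a normalized similarity score in [0, 1].
WEIGHTS = {"face": 0.3, "fingerprint": 0.5, "voice": 0.2}
THRESHOLD = 0.6

def fuse(scores):
    """Weighted-sum decision fusion over the individual matcher scores."""
    total = sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)
    return "Accept" if total >= THRESHOLD else "Reject"

# A strong fingerprint match compensates for a weaker face match:
print(fuse({"face": 0.4, "fingerprint": 0.9, "voice": 0.7}))  # Accept
print(fuse({"face": 0.2, "fingerprint": 0.3, "voice": 0.4}))  # Reject
```

This is how a multibiometric system can overcome the limitations of any single identifier: no one matcher has to be decisive on its own.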



BIOMETRIC BANKING: Banks have been experimenting with keystroke Biometrics

for ATM machine use and to counteract the credit card frauds. The smart card or
the credit card may be incorporated with the biometric information. When a user
inserts his card for verification, the biometric sample of the person can be verified
precisely and if it is identical the person is authenticated. The advantage of this
system is that the user can enjoy the facilities offered by the Bank along with
utmost security.

INTERNET SECURITY: If the password is leaked out, the computer or the web server
will not be able to identify whether the original user is operating the computer. PCs
fitted with biometric sensors can sense the biometric template and transmit it to the
remote computer so that the remote server is sure about the user in the computer.

BIOMETRIC SMART CARDS: Biometric technologies are used with smart cards for ID
systems applications specifically due to their ability to identify people with minimal
ambiguity. A biometric based ID allows for the verification of “who you claim to be”
(information about the card holder stored in the card) based on “who you are” (the
biometric information stored in the smart card), instead of, or possibly in addition to,
checking “what you know” (such as password).


A question that arises with any technology is that “Does this technology have any
constraints?” The answer to this question is that, “It purely depends upon its
implementation mechanism”. In keystroke biometrics, the person being authenticated
must have registered their bio-identity before they can be authenticated. Registration
processes can be extremely complicated and very inconvenient for users. This is
particularly true if the user being registered is not familiar with what is happening. The
problem for the operator is that the right person will be rejected occasionally by what
might be presented as a ‘foolproof’ system. Both the FAR and the FRR depend to some
extent on the deviation allowed from the reference level and on the number of
characters in the identification code (Password). It has been observed that providing a
small deviation lowers the FAR to almost NIL but at the same time tends to increase
the FRR. This is due to the fact that the typing rhythm of the user depends to some
extent on the mental state of the user. So, a balance would have to be established
taking both the factors into consideration.


The performance measure of Keystroke biometrics purely depends on User psychology,

i.e., the user’s particular temperament; understanding and current state of mind can
have a dramatic impact on real system performance. If a user is not happy about using
the biometric device, he is unlikely to be consistent in using it, potentially producing a
much larger than average error rate. Conversely, if a user is intrigued and enthusiastic
about using the device, he is likely to use it as intended, be more consistent and enjoy
relatively low error rates. Since this is the case, clearly we should aim for well
educated (in terms of the system) users who have good quality reference templates
and are happy with the overall system concept and its benefits.


Keystroke Biometrics offers a valuable approach to current security

technologies that make it far harder for fraud to take place by preventing ready
impersonation of the authorized user. Even if the unauthorized user discovers the
access code, he cannot get access to the system until and unless he also knows the
rhythm. Also, the typing rhythm can be self-tuned by the user to suit his needs. As the
keyboard has duplicate keys, the typing rhythm also depends whether the user is a
left-handed person or a right-handed person. Keystroke biometrics may well replace
traditional security systems in the future.

[1] S. Singh, “The Code Book”, Doubleday, 1999.

[2] Neil F. Johnson and Sushil Jajodia, “Exploring Steganography: Seeing the
Unseen”, George Mason University.


Presented by ---

K.Balanjaneyulu S.Vijay Raghavendra

(05001A0540) (05001A0539)
ph: 9966645162

Email id:

Find where your kids have been! Verify employee driving routes! Review family
members driving habits! Watch large shipment routes! Know where anything or
anyone has been! All this can be done merely by sitting at your own desk!

Finding your way across the land is an ancient art and science. The stars, the compass,
and good memory for landmarks helped you get from here to there. Even advice from
someone along the way came into play. But, landmarks change, stars shift position,
and compasses are affected by magnets and weather. And if you've ever sought
directions from a local, you know it can just add to the confusion. The situation has
never been perfect. This has led to the search for new technologies all over the world.
The outcome is THE GLOBAL POSITIONING SYSTEM. Focusing on the application and
usefulness of GPS for the age-old challenge of finding routes, this paper describes
the Global Positioning System, starting with an introduction, the basic idea, and
applications of GPS in the real world.


The Global Positioning System (GPS) is a worldwide radio-navigation system

formed from a constellation of 24 satellites and their ground stations. Global
Positioning Systems (GPS) are space-based radio positioning systems that provide 24
hour three-dimensional position, velocity and time information to suitably equipped
users anywhere on or near the surface of the Earth (and sometimes off the earth).

Global Navigation Satellite Systems (GNSS) are extended GPS systems, providing
users with sufficient accuracy and integrity information to be useable for critical
navigation applications. The NAVSTAR system, operated by the U.S. Department of
Defense, is the first GPS system widely available to civilian users. The Russian GPS
system, GLONASS, is similar in operation and may prove complementary to the
NAVSTAR system.

These systems promise radical improvements to many systems that impact all people.
By combining GPS with current and future computer mapping techniques, we will be
better able to identify and manage our natural resources. Intelligent vehicle location and
navigation systems will let us avoid congested freeways and find more efficient routes to
our destinations, saving millions of dollars in gasoline and tons of air pollution. Travel
aboard ships and aircraft will be safer in all weather conditions. Businesses with large
amounts of outside plant (railroads, utilities) will be able to manage their resources
more efficiently, reducing consumer costs. However, before all these changes can take
place, people have to know what GPS can do.

What does it do?

GPS uses the "man-made stars" as reference points to calculate positions accurate to a
matter of meters. In fact, with advanced forms of GPS you can make measurements to
better than a centimeter! In a sense it's like giving every square meter on the planet a
unique address.

GPS receivers have been miniaturized to just a few integrated circuits and so are
becoming very economical. And that makes the technology accessible to virtually
everyone. These days GPS is finding its way into cars, boats, planes, construction
equipment, movie making gear, farm machinery, even laptop computers. Soon GPS
will become almost as basic as the telephone.

How GPS works

Each satellite is equipped with a computer, an atomic clock and a radio. These
enable the satellite to continuously monitor and broadcast its changing position
and time. Each satellite also reports daily to ground stations on Earth, and the
user’s receiver figures its own position from its distances to the satellites.

The GPS receiver on Earth determines its own position by receiving signals from
several satellites. The results are provided in longitude and latitude. If the receiver is
equipped with a computer that has a map, the position will be shown on the map. If you
are moving, a receiver may also tell you your speed, direction of travel and estimated
time of arrival at a destination.

Here's how GPS works in five logical steps:

1. The basis of GPS is "triangulation" from satellites.
2. To "triangulate," a GPS receiver measures distance using the travel time of radio
signals.
3. To measure travel time, GPS needs very accurate timing, which it achieves with
some tricks.
4. Along with distance, you need to know exactly where the satellites are in space.
High orbits and careful monitoring are the secret.
5. Finally, you must correct for any delays the signal experiences as it travels
through the atmosphere.

Improbable as it may seem, the whole idea behind GPS is to use satellites in space as
reference points for locations here on earth. That's right, by very, very accurately
measuring our distance from three satellites we can "triangulate" our position
anywhere on earth.

The Big Idea Geometrically:

Suppose we measure our distance from a satellite and find it to be 11,000 miles.
Knowing that we're 11,000 miles from a particular satellite narrows down all the
possible locations we could be in the whole universe to the surface of a sphere that is
centered on this satellite and has a radius of 11,000 miles.

Next, say we measure our distance to a second satellite and find out that it's 12,000
miles away. That tells us that we're not only on the first sphere but also on a sphere
that's 12,000 miles from the second satellite. In other words, we're somewhere on the
circle where these two spheres intersect. If we then make a measurement from a third
satellite and find that we're 13,000 miles from that one, that narrows our position down
even further, to the two points where the 13,000-mile sphere cuts through the circle
that's the intersection of the first two spheres. So by ranging from three satellites we
can narrow our position to just two points in space. To decide which one is our true
location we could make a fourth measurement. But usually one of the two points is a
ridiculous answer (either too far from Earth or moving at an impossible velocity) and
can be rejected without a measurement.
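The sphere-intersection reasoning above can be sketched in two dimensions, where three range circles pin down a single point. The "satellite" coordinates and receiver position below are made-up illustrative numbers, not real orbital data:

```python
import math

# 2-D sketch of trilateration: three known "satellite" positions plus three
# measured ranges pin down the receiver's position. Subtracting the circle
# equations pairwise turns the problem into two linear equations.

def trilaterate_2d(sats, ranges):
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = ranges
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1   # nonzero as long as the satellites aren't collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

sats = [(0.0, 10.0), (10.0, 0.0), (-8.0, -6.0)]   # made-up positions
true_pos = (3.0, 4.0)
ranges = [math.dist(true_pos, s) for s in sats]    # simulated range measurements
print(trilaterate_2d(sats, ranges))                # approximately (3.0, 4.0)
```

In three dimensions the same pairwise subtraction works with one more satellite, which is why three ranges leave two candidate points and a fourth measurement (or common sense) settles it.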

But how can you measure the distance to something that's floating around in space?
We do it by timing how long it takes for a signal sent from the satellite to arrive at our
receiver.

The Big Idea Mathematically

In a sense, the whole thing boils down to those "velocity times travel time" math
problems we did in high school. Remember the old: "If a car goes 60 miles per hour for
two hours, how far does it travel?"

Velocity (60 mph) x Time (2 hours) = Distance (120 miles)

In the case of GPS we're measuring a radio signal so the velocity is going to be the
speed of light or roughly 186,000 miles per second. The problem is measuring the
travel time.

The timing problem is tricky. First, the times are going to be awfully short. If a satellite
were right overhead the travel time would be something like 0.06 seconds. So we're
going to need some really precise clocks.

But assuming we have precise clocks, how do we measure travel time? To explain it
let's use a goofy analogy:

Suppose there was a way to get both the satellite and the receiver to start playing
"The Star Spangled Banner" at precisely 12 noon. If sound could reach us from space
(which, of course, is ridiculous) then standing at the receiver we'd hear two versions of
the Star Spangled Banner, one from our receiver and one from the satellite. These two
versions would be out of sync. The version coming from the satellite would be a little
delayed because it had to travel more than 11,000 miles. If we wanted to see just how
delayed the satellite's version was, we could start delaying the receiver's version until
they fell into perfect sync. The amount we have to shift back the receiver's version is
equal to the travel time of the satellite's version. So we just multiply that time times
the speed of light and BINGO! We’ve got our distance to the satellite.
That's basically how GPS works.

Only instead of the Star Spangled Banner the satellites and receivers use something
called a "Pseudo Random Code" - which is probably easier to sing than the Star
Spangled Banner.
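The "delay our copy until the two versions fall into sync" trick can be sketched with a toy pseudo-random code. The code length and the delay below are arbitrary illustrative values, not real GPS parameters:

```python
import random

# Toy PRC sync: slide the receiver's copy of the code against the received
# (delayed) copy and pick the shift where the correlation peaks. That shift
# is the signal's travel time, measured in code "chips".
random.seed(1)
code = [random.choice((-1, 1)) for _ in range(256)]   # toy pseudo-random code

true_delay = 37
received = code[-true_delay:] + code[:-true_delay]    # delayed (rotated) copy

def best_shift(local, rx):
    n = len(local)
    def correlation(shift):
        return sum(local[(i - shift) % n] * rx[i] for i in range(n))
    return max(range(n), key=correlation)

print(best_shift(code, received))   # recovers the 37-chip delay
```

Because a pseudo-random code correlates strongly only with a perfectly aligned copy of itself, the peak is unambiguous, which is also why a stray signal or another satellite's code won't fool the receiver.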

A Random Code?

The Pseudo Random Code (PRC) is a fundamental part of GPS. Physically it's just a
very complicated digital code; in other words, a complicated sequence of "on" and
"off" pulses. The signal is so complicated that it almost looks like random electrical
noise. Hence the name "Pseudo-Random."

There are several good reasons for that complexity: First, the complex pattern helps
make sure that the receiver doesn't accidentally sync up to some other signal. The
patterns are so complex that it's highly unlikely that a stray signal will have exactly the
same shape.

Since each satellite has its own unique Pseudo-Random Code this complexity also
guarantees that the receiver won't accidentally pick up another satellite's signal. So all
the satellites can use the same frequency without jamming each other. And it makes it
more difficult for a hostile force to jam the system. In fact the Pseudo Random Code
gives the DoD a way to control access to the system.

But there's another reason for the complexity of the Pseudo Random Code, a reason
that's crucial to making GPS economical. The codes make it possible to use
"information theory" to "amplify" the GPS signal. And that's why GPS receivers don't
need big satellite dishes to receive the GPS signals.

The goofy Star-Spangled Banner analogy assumes that we can guarantee that both the
satellite and the receiver start generating their codes at exactly the same time. But
how do we make sure everybody is perfectly synced?

If measuring the travel time of a radio signal is the key to GPS, then our stopwatches
had better be darn good, because if their timing is off by just a thousandth of a
second, at the speed of light, that translates into almost 200 miles of error! On the
satellite side, timing is almost perfect because they have incredibly precise atomic
clocks on board.

But what about our receivers here on the ground?

Both the satellite and the receiver need to be able to precisely synchronize their
pseudo-random codes to make the system work. If our receivers needed atomic clocks
(which cost upwards of $50K to $100K) GPS would be a lame duck technology. Nobody
could afford it.

Luckily the designers of GPS came up with a brilliant little trick that lets us get by with
much less accurate clocks in our receivers. This trick is one of the key elements of GPS
and as an added side benefit it means that every GPS receiver is essentially an atomic-
accuracy clock.

The secret to perfect timing is to make an extra satellite measurement.

That's right, if three perfect measurements can locate a point in 3-dimensional space,
then four imperfect measurements can do the same thing.

This idea is fundamental to the working of GPS

Extra Measurement Cures Timing Offset

If our receiver's clocks were perfect, then all our satellite ranges would intersect at a
single point (which is our position). But with imperfect clocks, a fourth measurement,
done as a cross-check, will NOT intersect with the first three. So the receiver's
computer says "Uh-oh! there is a discrepancy in my measurements. I must not be
perfectly synced with universal time." Since any offset from universal time will affect
all of our measurements, the receiver looks for a single correction factor that it can
subtract from all its timing measurements that would cause them all to intersect at a
single point.

That correction brings the receiver's clock back into sync with universal time, and
bingo! - you've got atomic accuracy time right in the palm of your hand.

Once it has that correction, the receiver applies it to all the rest of its measurements,
and now we've got precise positioning. One consequence of this principle is that any decent GPS
receiver will need to have at least four channels so that it can make the four
measurements simultaneously. With the pseudo-random code as a rock solid timing
sync pulse, and this extra measurement trick to get us perfectly synced to universal
time, we have got everything we need to measure our distance to a satellite in space.
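The "single correction factor" idea can be shown with a deliberately simplified one-dimensional toy: every pseudorange carries the same clock bias, so one extra measurement lets the receiver solve for the bias and its position together. All numbers below are made up for illustration:

```python
# 1-D toy of the extra-measurement trick. The receiver sits between two
# "satellites" on a line; its clock bias B inflates every measured range
# by the same amount (expressed here directly in miles).
s1, s2 = -11000.0, 12000.0            # made-up satellite positions (miles)
x_true, B = 3000.0, 150.0             # true receiver position and clock bias

pr1 = (x_true - s1) + B               # measured pseudoranges = true range + bias
pr2 = (s2 - x_true) + B

# The true ranges must sum to the satellite separation, so any excess is 2B.
B_est = (pr1 + pr2 - (s2 - s1)) / 2   # the single correction factor
x_est = s1 + (pr1 - B_est)            # corrected range gives the position

print(B_est, x_est)                   # 150.0 3000.0
```

In the real 3-D problem there are four unknowns (x, y, z and the clock bias), which is exactly why four satellite measurements are needed.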

But for the triangulation to work we not only need to know distance, we also need to
know exactly where the satellites are.

But how do we know exactly where they are? After all they're floating around 11,000
miles up in space.

A high satellite gathers no moss

That 11,000 mile altitude is actually a benefit in this case, because something that
high is well clear of the atmosphere. And that means it will orbit according to very
simple mathematics. The Air Force has injected each GPS satellite into a very precise
orbit, according to the GPS master plan. On the ground, all GPS receivers have an
almanac programmed into their computers that tells them where in the sky each
satellite is, moment by moment.

The basic orbits are quite exact but just to make things perfect the GPS satellites are
constantly monitored by the Department of Defense. They use very precise radar to
check each satellite's exact altitude, position and speed. The errors they're checking
for are called "ephemeris errors" because they affect the satellite's orbit or
"ephemeris." These errors are caused by gravitational pulls from the moon and sun
and by the pressure of solar radiation on the satellites. The errors are usually very
slight but if you want great accuracy they must be taken into account.

Getting the message out

Once the DoD has measured a satellite's exact position, they relay that information
back up to the satellite itself. The satellite then includes this new corrected position
information in the timing signals it's broadcasting. So a GPS signal is more than just
pseudo-random code for timing purposes. It also contains a navigation message with
ephemeris information as well.

With perfect timing and the satellite's exact position you'd think we'd be ready to make
perfect position calculations. But there's trouble afoot.

Up to now we've been treating the calculations that go into GPS very abstractly, as if
the whole thing were happening in a vacuum. But in the real world there are lots of
things that can happen to a GPS signal that will make its life less than mathematically
perfect. To get the most out of the system, a good GPS receiver needs to take a wide
variety of possible errors into account. Here's what they've got to deal with.

First, one of the basic assumptions we've been using throughout this paper is not
exactly true. We've been saying that you calculate distance to a satellite by multiplying
a signal's travel time by the speed of light. But the speed of light is only constant in a
vacuum. As a GPS signal passes through the charged particles of the ionosphere and
then through the water vapor in the troposphere it gets slowed down a bit, and this
creates the same kind of error as bad clocks.

There are a couple of ways to minimize this kind of error. For one thing we can predict
what a typical delay might be on a typical day. This is called modeling and it helps but,
of course, atmospheric conditions are rarely exactly typical.
Another way to get a handle on these atmosphere-induced errors is to compare the
relative speeds of two different signals. This "dual frequency" measurement is very
sophisticated and is only possible with advanced receivers.
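The dual-frequency trick works because ionospheric delay scales roughly as 1/f², so combining ranges measured on GPS's two carrier frequencies (L1 and L2) cancels the common delay term. The range and delay magnitudes below are made-up illustrative values:

```python
# Ionosphere-free combination: if delay_i = K / f_i**2, then
# (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2) removes the K term entirely.
f1, f2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 carrier frequencies (Hz)

true_range = 20_000_000.0            # meters (illustrative)
K = 5.0 * f1**2                      # chosen so the L1 delay is 5 m (made up)

P1 = true_range + K / f1**2          # biased measurement on L1
P2 = true_range + K / f2**2          # biased (larger) measurement on L2

P_free = (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)
print(P_free)                        # ~20000000.0: the delay cancels
```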

Trouble for the GPS signal doesn't end when it gets down to the ground. The signal
may bounce off various local obstructions before it gets to our receiver. This is called
multipath error and is similar to the ghosting you might see on a TV. Good receivers
use sophisticated signal rejection techniques to minimize this problem.

Problems at the satellite

Even though the satellites are very sophisticated they do account for some tiny errors
in the system. The atomic clocks they use are very, very precise but they're not
perfect. Minute discrepancies can occur, and these translate into travel time
measurement errors. And even though the satellites positions are constantly
monitored, they can't be watched every second. So slight position or "ephemeris"
errors can sneak in between monitoring times. Basic geometry itself can magnify these
other errors with a principle called "Geometric Dilution of Precision" or GDOP. It sounds
complicated but the principle is quite simple.

There are usually more satellites available than a receiver needs to fix a position, so
the receiver picks a few and ignores the rest. If it picks satellites that are close
together in the sky the intersecting circles that define a position will cross at very
shallow angles. That increases the gray area or error margin around a position. If it
picks satellites that are widely separated the circles intersect at almost right angles
and that minimizes the error region. Good receivers determine which satellites will give
the lowest GDOP.

GPS technology has matured into a resource that goes far beyond its original design
goals. These days scientists, sportsmen, farmers, soldiers, pilots, surveyors, hikers,
delivery drivers, sailors, dispatchers, lumberjacks, fire-fighters, and people from many
other walks of life are using GPS in ways that make their work more productive, safer,
and sometimes even easier.

In this section you will see a few examples of real-world applications of GPS. These
applications fall into five broad categories.

• Location - determining a basic position

• Navigation - getting from one location to another
• Tracking - monitoring the movement of people and things
• Mapping - creating maps of the world
• Timing - bringing precise timing to the world

An application of the GPS--Track Stick:

What is TrackStick?

Simply put, the Track-Stick is a Personal GPS - Global Positioning System with a USB

The GPS Track Stick records its own location, time, date, speed, heading and altitude
at preset intervals. With over 1Mb of memory, it can store months of travel
information. All recorded history can be output in the following formats:

RTF (text file with .html map links)

XLS (Microsoft Excel spreadsheet)

HTML (web page with graphic maps)

KML (Google Earth file)

Track Stick works around the planet!

The Track Stick GPS system outputs .KML files for compatibility with Google Earth. By
exporting to Google Earth's .KML file format, each travel location can be pinpointed
using 3D mapping technology.
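A .KML file is just XML that Google Earth understands. A minimal placemark for one logged point might look like the sketch below; the helper name and the coordinate values are invented for illustration, and note that KML orders coordinates as longitude,latitude,altitude:

```python
# Build a minimal KML document for a single recorded point. This is a
# hand-rolled sketch, not the Track Stick's actual export code.
def placemark_kml(name, lon, lat, alt=0.0):
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        '  <Placemark>\n'
        f'    <name>{name}</name>\n'
        '    <Point>\n'
        f'      <coordinates>{lon},{lat},{alt}</coordinates>\n'
        '    </Point>\n'
        '  </Placemark>\n'
        '</kml>\n'
    )

print(placemark_kml("Hollywood, CA", -118.33, 34.10))
```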
View 3D images of actual recordings of the Track Stick revealing where it has been.

Track Stick comes with its own HTML GPS tracking software.

Map location histories can also be exported to Microsoft Streets and Trips, Encarta,
and many other third-party mapping programs.

How it works

The Track Stick receives GPS signals from the twenty-four satellites orbiting the
earth. With this information, the Track Stick GPS system can precisely calculate its
own position anywhere on the planet to within fifteen meters.

Where it works

The Track Stick will work anywhere on the planet Earth!

Using the latest GPS mapping technologies, your exact location can be shown on
graphical maps and 3D satellite images. The Track Stick's microcomputer contains
special mathematical algorithms that can calculate how long you have been indoors.
While visiting family, friends or even shopping, the Track Stick can accurately time
and map each and every place you have been. Global positioning for home or business
has never been so easy!

Screenshots of actual Track Stick recordings (omitted here) show exactly how the
Track Stick Global Positioning System reveals where it has been, with examples from a
Middle East freeway, Cabo San Lucas, Hollywood (CA), Italy, the North Pole, Sydney
(Australia), and a shopping mall.

To conclude, the Global Positioning System is improving day by day. It is the simplest
way of replacing the traditional way of finding routes with new technology. This is
surely a striking development, and let us hope for the best from it in the coming
years.












Grid computing, emerging as a new paradigm for next-generation computing,
enables the sharing, selection, and aggregation of geographically distributed
heterogeneous resources for solving large-scale problems in science, engineering, and
commerce. The resources in the Grid are heterogeneous and geographically
distributed. Availability, usage and cost policies vary depending on the particular user,
time, priorities and goals. Grid computing enables the regulation of supply and demand
for resources; it provides an incentive for resource owners to participate in the Grid;
and it motivates users to trade off between deadline, budget, and the required level of
quality of service. The thesis demonstrates the capability of economic-based systems
for wide-area parallel and distributed computing by developing scheduling strategies,
algorithms, and systems based on users' quality-of-service requirements. It
demonstrates their effectiveness by performing scheduling experiments on the
World-Wide Grid for solving parameter sweep (task and data parallel) applications.

This paper focuses on the introduction, the definition of the grid, and its evolution. It
covers grid characteristics, types of grids, and an example describing a community
grid model. It gives an overview of grid tools, various components, and advantages,
followed by the conclusion and bibliography.

The Grid unites servers and storage into a single system that acts as a single
computer: all your applications tap into all your computing power. Hardware resources
are fully utilized, and spikes in demand are met with ease.

The Grid is the computing and data management infrastructure that will provide
the electronic underpinning for a global society in business, government, research,
science and entertainment. It integrates networking, communication, computation and
information to provide a virtual platform for computation and data management, in the
same way that the Internet integrates resources to form a virtual platform for
information. Grid infrastructure will provide us with the ability to dynamically
link together resources as an ensemble to support the execution of large-scale,
resource-intensive, and distributed applications.
Grid is a type of parallel and distributed system that enables the sharing, selection,
and aggregation of geographically distributed "autonomous" resources dynamically at
runtime, depending on their availability, capability, performance, cost, and users'
quality-of-service requirements.
Parallel computing in the 1980s focused researchers’ efforts on the
development of algorithms, programs and architectures that supported simultaneity.
During the 1980s and 1990s, software for parallel computers focused on providing
powerful mechanisms for managing communication between processors, and
development and execution environments for parallel machines. Successful application
paradigms were developed to leverage the immense potential of shared and distributed
memory architectures. Initially it was thought that the Grid would be most useful in
extending parallel computing paradigms from tightly coupled clusters to geographically
distributed systems. However, in practice, the Grid has been utilized more as a
platform for the integration of loosely coupled applications – some components of
which might be running in parallel on a low-latency parallel machine – and for linking
disparate resources (storage, computation, visualization, instruments). Coordination
and distribution are two fundamental concepts in Grid computing.
The first modern Grid is generally considered to be the Information Wide Area
Year (I-WAY). Developing infrastructure and applications for the I-WAY provided a
seminal and powerful experience for the first generation of modern Grid researchers
and projects. Grid research focuses on addressing the problems of integration and
management of software.
An enterprise-computing grid is characterized by three primary features -
• Diversity;
• Decentralization; and
A typical computing grid consists of many hundreds of managed resources of
various kinds including servers, storage, Database Servers, Application Servers,
Enterprise Applications, and system services like Directory Services, Security and
Identity Management Services, and others. Managing these resources and their life
cycle is a complex challenge.
Traditional distributed systems have typically been managed from a central
administration point. A computing grid further compounds these challenges
since the resources can be even more decentralized and may be geographically
distributed across many different data centers within an enterprise.
Grid computing can be used in a variety of ways to address various kinds of application
requirements. Often, grids are categorized by the type of solutions that they best
address. The three primary types of grids are
Computational grid
A computational grid is focused on setting aside resources specifically for
computing power. In this type of grid, most of the machines are high-performance
servers.

Scavenging grid
A scavenging grid is most commonly used with large numbers of desktop
machines. Machines are scavenged for available CPU cycles and other resources.
Owners of the desktop machines are usually given control over when their resources
are available to participate in the grid.
Data grid
A data grid is responsible for housing and providing access to data across
multiple organizations. Users are not concerned with where this data is located as long
as they have access to the data. For example, you may have two universities doing life
science research, each with unique data. A data grid would allow them to share their
data, manage the data, and manage security issues such as who has access to what
data.
Infrastructure components include file systems, schedulers and resource
managers, messaging systems, security applications, certificate authorities, and file
transfer mechanisms like Grid FTP.
• Directory services. Systems on a grid must be capable of discovering what
services are available to them. In short, Grid systems must be able to define
(and monitor) a grid’s topology in order to share and collaborate. Many Grid
directory services implementations are based on past successful models, such as
LDAP, DNS, network management protocols, and indexing services.
• Schedulers and load balancers. One of the main benefits of a grid is maximizing
efficiency. Schedulers and load balancers provide this function and more.
Schedulers ensure that jobs are completed in some order (priority, deadline,
urgency, for instance) and load balancers distribute tasks and data management
across systems to decrease the chance of bottlenecks.
• Developer tools. Every arena of computing endeavor requires tools that allow
developers to succeed. Tools for grid developers focus on different niches (file
transfer, communications, environment control), and range from utilities to full-
blown APIs.
• Security. Security in a grid environment can mean authentication and
authorization -- in other words, controlling who or what can access a grid's resources
-- but it can mean a lot more. For instance, message integrity and message
confidentiality are crucial to financial and healthcare environments.
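To make the scheduler and load-balancer roles above concrete, here is a toy sketch that dispatches jobs in priority order and hands each one to the least-loaded node. The job names, priorities and node count are invented for illustration; real grid schedulers are far more sophisticated:

```python
import heapq

# Toy grid scheduler: a priority queue orders the jobs, and a second heap
# acts as a load balancer that always picks the least-loaded node.
def schedule(jobs, n_nodes):
    queue = list(jobs)                         # (priority, name); lower runs sooner
    heapq.heapify(queue)
    loads = [(0, node) for node in range(n_nodes)]
    heapq.heapify(loads)
    placement = []
    while queue:
        _, name = heapq.heappop(queue)         # most urgent job first
        load, node = heapq.heappop(loads)      # least-loaded node
        placement.append((name, node))
        heapq.heappush(loads, (load + 1, node))
    return placement

print(schedule([(2, "render"), (1, "payroll"), (3, "backup")], 2))
# [('payroll', 0), ('render', 1), ('backup', 0)]
```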
Depending on the grid design and its expected use, some of these components
may or may not be required, and in some cases they may be combined to form a
hybrid component.
Portal/user interface
Just as a consumer sees the power grid as a receptacle in the wall, a grid user
should not see all of the complexities of the computing grid. Although the user
interface can come in many forms and be application-specific, a grid portal typically
provides the interface for a user to launch applications that will use the resources and
services provided by the grid. From this perspective, the user sees the grid as a virtual
computing resource, just as the consumer of power sees the receptacle as an interface
to a virtual generator.
Figure 2: Possible user view of a grid
A major requirement for Grid computing is security. At the base of any grid
environment, there must be mechanisms to provide security, including authentication,
authorization, data encryption, and so on. The Grid Security Infrastructure (GSI)
component of the Globus Toolkit provides robust security mechanisms. The GSI
includes an OpenSSL implementation. It also provides a single sign-on mechanism, so
that once a user is authenticated, a proxy certificate is created and used when
performing actions within the grid. When designing your grid environment, you may
use the GSI sign-in to grant access to the portal, or you may have your own security
for the portal.

Figure 3: Security in a grid environment

Once authenticated, the user will be launching an application. Based on the
application, and possibly on other parameters provided by the user, the next step is to
identify the available and appropriate resources to use within the grid. This task could
be carried out by a broker function. Although there is no broker implementation
provided by Globus, there is an LDAP-based information service. This service is called
the Grid Information Service (GIS), or more commonly the Monitoring and Discovery
Service (MDS). This service provides information about the available resources within
the grid and their status. A broker service could be developed that utilizes MDS.

Figure 4: Broker service

Once the resources have been identified, the next logical step is to schedule the
individual jobs to run on them. If sets of stand-alone jobs are to be executed with no
interdependencies, then a specialized scheduler may not be required. However, if you
want to reserve a specific resource or ensure that different jobs within the application
run concurrently, then a job scheduler should be used to coordinate the execution of
the jobs. It should also be noted that there could be different levels of schedulers
within a grid environment. For instance, a cluster could be represented as a single
resource. The cluster may have its own scheduler to help manage the nodes it
contains. A higher-level scheduler (sometimes called a meta scheduler) might be used
to schedule work to be done on a cluster, while the cluster's scheduler would handle
the actual scheduling of work on the cluster's individual nodes.

Figure 5: Job scheduling
With all the other facilities we have just discussed in place, we now get to the
core set of services that help perform actual work in a grid environment. The Grid
Resource Allocation Manager (GRAM) provides the services to actually launch a job on
a particular resource, check its status, and retrieve its results when it is complete.

Figure 7: GRAM
Job flow in a grid environment
When enabling an application for a grid environment, it is important to keep in mind
these components and how they relate and interact with one another. Depending on
your grid implementation and application requirements, there are many ways in which
these pieces can be put together to create a solution.
Grid computing is about getting computers to work together. Almost every
organization is sitting on top of enormous, unused computing capacity, widely
distributed. Mainframes are idle 40% of the time. With Grid computing, businesses can
optimize computing and data resources, pool them for large capacity workloads, share
them across networks, and enable collaboration. Many consider Grid computing the
next logical step in the evolution of the Internet, and maturing standards and a drop in
the cost of bandwidth are fueling the momentum we're experiencing today.
A word of caution should be given to the overly enthusiastic. The grid is not a
silver bullet that can take any application and run it 1,000 times faster without the
need for buying any more machines or software. Not every application is suitable or
enabled for running on a grid. Some kinds of applications simply cannot be
parallelized. For others, it can take a large amount of work to modify them to achieve
faster throughput. The configuration of a grid can greatly affect the performance,
reliability, and security of an organization's computing infrastructure. For all of these
reasons, it is important for us to understand how far the grid has evolved today and
which features are coming tomorrow or in the more distant future.
Grid computing introduces a new concept to IT infrastructures because it
supports distributed computing over a network of heterogeneous resources and is
enabled by open standards. Grid computing works to optimize underutilized resources,
decrease capital expenditures, and reduce the total cost of ownership. This solution
extends beyond data processing and into information management as well.
Information in this context covers data in databases, files, and storage devices. In this
article, we outline potential problems and the means of solving them in a distributed
environment.

[2] Foster, I. and Kesselman, C. (eds) (1999) The Grid: Blueprint for a New
Computing Infrastructure. San Francisco, CA: Morgan Kaufmann.
[3] Berman, F., Fox, G. and Hey, T. (2003) Grid Computing: Making the Global
Infrastructure a Reality. Chichester: John Wiley & Sons.
[4] Web site associated with the book Grid Computing: Making the Global
Infrastructure a Reality.
E-Learning: Learning Through Internet

(Major area - Web Technologies)


S.Sindhuri & K.Preethi

CSE, 2nd year.

G. Narayanamma Institute Of Technology & Science

(For Women)
Shaikpet, Hyderabad-500008
Contact: 040-23565648 (exn.313)
Phone: 9441606667
E-mail –

This paper presents an approach for integrating e-learning with the traditional
education system. A conceptual map is then created for this integration, leading to a
functional model for open and flexible learning. In the proposed integration,
convergence of CD-based, class-based and web-based education is recommended,
and an architecture to achieve this convergence is presented. In order to transform
existing schools, colleges and universities into digital campuses, an inclusive system
architecture is designed for the digital campus. A case study is given of an actual
implementation in a conventional school. Integration of e-learning with traditional
education is not only possible but also highly effective with the proposed model.

E-learning is part of the Web Technologies. What is e-learning? Advances in
Information and Communication Technologies (ICT) have significantly transformed the
way we live and work today. This transformation is so ubiquitous and pervasive that
we often talk about the emergence of the "knowledge economy", "knowledge-based
society" and "knowledge era".
ICT has had three fundamental effects on human society. First, the barrier of
geography has been dissolved; "distance is dead" is the new metaphor. Second, the
concept of time itself has undergone change: one can now interact synchronously as
well as asynchronously, events can be simulated, and activities can take place in
cyberspace on pre-defined time scales. Lastly, ICT allows the experience to be
personalized to the individual and the environment in which he is immersed. No
wonder advances in ICT have impacted human civilization more than anything else in
history.

E-learning should be defined as learning opportunities delivered and facilitated
by electronic means. In this mode, educational content is delivered by
electronic technology. E-learning differs from the traditional system in the
way it is delivered. E-learning imparts all three main components of learning,
namely content, learning methodologies, and teaching or tutoring methodologies, on
the one hand, and can supplement and complement the traditional system on
the other. It is an alternative to the traditional education system.
This paper mainly focuses on the difference between the traditional education
system and e-learning. In e-learning, three different forms of computer-based systems
have evolved: 1) CD-based education (CDE), 2) classroom-based education (CEBE), and
3) web-based education (WBE), each with its own advantages and limitations.

Experiments done at the ETH Research Lab show that the success of e-learning
strategies depends on how best we can combine all three learning modes. This is
the best method for integrating e-learning with the traditional teaching process.
E-learning technology can create an open and flexible learning environment with the
convergence of CD-based, class-based and web-based education.

Conceptual Model:
Traditionally our education system has two distinct processes, namely the
learning process and the learning administration or support process. The learning
system caters to the development of skills and competencies in the learners through
personal learning, group learning in class, learning from teachers and experts,
and learning from the experiences of self and others.
The e-learning system architecture must address both the learning process and
the learning support process. In e-learning, support processes are equally important,
as content and services have to be provided by educational service providers (ESPs),
just as experts normally do on campuses.

The ETH Research Lab has advanced an open and flexible e-learning system
architecture, as shown in Fig 1, providing convergence of formal, non-formal and
informal education and bringing educational service providers, activity providers and
learning resources onto one single platform for achieving mass personalized education.
The conceptual model has all the required subsystems, namely the learning system,
learning support system, delivery system, maintenance-development-production (MDP)
system, and total administration and QA system.
The learning system of the traditional education system will be enhanced with the
availability and accessibility of virtual learning resources.

Fig 1:E-learning conceptual model

The learning support system addresses the technology for facilitation of the
learning system. It will also include learning from remote tutors through the virtual
class mode, lecture on demand, and formative feedback for assessment.
The delivery system delivers the content in electronic form to the learners and
distributed classrooms.
Services management (maintenance, development and production) will help in
collaborating with the service providers. This enhances the knowledge repository as
compared to the traditional education system.
The total management and QA system monitors and manages quality education
delivery to the learners.
Functional model:
The conceptual architecture best describes the convergence of WBE, CBE
and CEBE. With field trials and experiments, we have observed that there is a need
for operational support services and mini-ERP services for managing
educational institutions. With technology we can make the whole administration and
management of education more effective. Our architecture aims at creating a
full-fledged digital campus driven by ICT.

Fig 2: E-learning functional model

The conceptual architecture of e-learning described in the previous section can
be enhanced with these services for managing the institution.


The functional models for the learning system and learning support systems are
implemented as a Learning Management System (LMS) and a Learning Content
Management System (LCMS). A content authoring tool is used for creation of
content. With this, teachers can create their own content and publish it. Content
created to the SCORM (Sharable Content Object Reference Model) standard is
interchangeable across LMSs.
Content assembly tools play an important role in packaging the content prepared
with the authoring tool and publishing it. Our architecture complies with the IMS
Content Packaging standard for the content assembly tool, thus giving interoperability.
Learning objects published with the content assembly tools are categorized with
Learning Object Metadata and stored in the Catalog Manager.

Fig 3:E-learning LMS system

The course repository stores the course structures, which integrate the learning
objects and sequence them.
The learning planner assists the learner with appropriate alternative plans
corresponding to the learner's learning objective. Finally, it sends the user
information to the User Repository.

Personalized delivery of the course to the learner is managed with the delivery
engine, in both LAN and WAN versions.
For individual learning and group learning, various collaborative services such as
special interest groups, discussion forums, virtual classrooms and various enrichment
contents can be delivered through the collaboration engine. When we collaborate with
educational service providers, we need to provide a facility for monitoring and
control of the services provided by them. Operational Support Services (OSS) help in
service collaboration, launching, billing, support and maintenance.
At the ETH Research Lab, we have approached this architecture with open-source
technologies, independent of platform and databases. Support for multilingual
technologies is also provided. An XML-based integration layer helps in data exchange
and collaboration across the systems.

Service oriented architecture (SOA):

Researchers are implementing the ETH mission in a phased manner. They have
deployed the computer literacy program in collaboration with a university, and we are
now extending it through schools and colleges. With the digital campus, we will digitize
schools and colleges within the campus. Here e-learning will be complementary to the
traditional education process.
All the digital campuses should collaborate on one platform in order to build the
content and knowledge repository, which can be delivered to the students
electronically. This collaborative approach will be driven through the service-oriented
architecture (SOA). A schematic representation of the SOA implementation is shown
in fig4.
Service-oriented architecture can be applied to the networked digital campus
as expressed in the above architecture diagram. We have used the J2EE framework to
implement SOA. SOA is a collection of services on a network that communicate with
one another. The services are loosely coupled, have well-defined, platform-independent
interfaces, and are reusable. SOA is a higher level of application
development. Web services use WSDL, UDDI and SOAP. Services provide information.
Service providers describe their services in a WSDL document, and the URL for the
WSDL is given in UDDI.
With this in place, the learning system of one campus can be accessed by another
campus through a UDDI search, based on the agreed terms and conditions for service
sharing. UDDI locates the WSDL file, which locates the service on the
network. This will deliver the desired learning resources to the learners on single sign-on.
The common platform is a portal, which can be used for convergence of
educational services from the educational service providers. These services can be
easily accessible to various digital campuses as learning and teaching resources.
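The publish-and-discover step described above can be sketched with a toy, in-memory registry standing in for UDDI. The service name and WSDL URL are invented for illustration; a real deployment would query an actual UDDI registry and then parse the WSDL document it points to (e.g. with JAX-WS in the J2EE setting the paper uses).

```python
class ServiceRegistry:
    """A toy stand-in for UDDI: providers publish the URL of their
    WSDL description, and a consumer campus discovers it by name."""

    def __init__(self):
        self._services = {}

    def publish(self, name, wsdl_url):
        # A provider registers its service description.
        self._services[name] = wsdl_url

    def find(self, name):
        # A consumer locates the WSDL URL for a named service,
        # or gets None when no such service is registered.
        return self._services.get(name)

registry = ServiceRegistry()
registry.publish("lecture-on-demand", "http://campus-a.example/lod?wsdl")
wsdl = registry.find("lecture-on-demand")
```

The point of the indirection is that the consumer only ever holds the service's name; the registry resolves it to a current WSDL location, so the provider can move without breaking consumers.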


Fig 4: Schematic Representation of use of SOA

Case Study:
Lectures can be enhanced with electronic media and projected to the
students in the classrooms. Interactive CDs help learners visualize and understand
the matter, and teachers can use computers to make lectures more lively
and effective through multimedia. A document management system can help in the
issue and management of bonafide certificates, leaving certificates and other statutory
certificates for students and staff. A workflow management system allows management
of processes like lecture scheduling and leave management, while library management
helps in the issue and procurement of books.
The concept of the digital school has been implemented where we have
digitalized the operations of the schools, with CD-based content of the school syllabus
for teachers and students and web-based support and enrichment material. It has got
wide acceptance from the teachers and learners as it reflects the processes and
methodologies they have been following in the school.
This system is now helping in the admission process, scheduling of lectures,
managing conventional and digital library resources, lecture on demand, student
notes, group learning through special interest groups, pre- and post-examination
processes, accounts and finance management, budgetary control, and asset
management of the school, along with day-to-day administration of attendance and
payroll, and documentation management of statutory certificates for students.
Currently the challenge lies in getting schools into the e-learning framework,
orienting teachers, and making teachers use the system on their own. Other
challenges are the installation and maintenance of infrastructure, as well as organizing
budgets for the digital campus initiative in each institution.

With the Digital Campus program, schools, colleges, teachers, students and learning
resources will be able to collaborate on one platform. The convergence model will have
to be supported with quality educational services and activities. The best teachers are
made available across the campus and across other institutions through virtual
classrooms.

After going through the survey work of implementing e-learning in
several schools and colleges with the Open and Flexible Learning Environment
architecture, we feel that success lies in the co-existence and integration of traditional
and e-learning strategies.
The current barriers to the success of these convergence models are mindset, budgets
for infrastructure, preparedness of teachers, and local support. However, with the
marked reduction in the cost of PCs, laptops, networking and servers, the introduction
of IT subjects in school syllabi, the spread of affordable internet and growing awareness
of the benefits of IT in education, the perceived barriers are dissolving.
However, there is a great challenge in quality content creation. Content must be
compliant with emerging international standards such as SCORM/IMS to be
reusable and modifiable. Our objective in this paper was to present an implementable
architecture for the integration of e-learning with traditional education in schools,
colleges and universities. We have also presented the results of our implementations
as a case study, proving the validity of our architecture and strategies.



Presented by ---

K.Siddu Avinash R.K.Pavan

(05001A0528) (05001A0513)
ph: 9908966866 ph: 9966537591
Email id:

Data warehousing provides architectures and tools for business
executives to systematically organize, understand, and use their data to make
strategic decisions. Data warehouses are arguably among the best resources a
modern company owns. As enterprises operate day in and day out, the data warehouse
is updated with a myriad of business process and transactional information:
orders, invoices, customers, shipments and other data together form the corporate
operations archive.
As the volume of data in the warehouse continues to grow, so does the time it takes
to mine (extract) the required data. Individual queries, as well as loading data, can
consume enormous amounts of processing power and time, impeding other data
warehouse activity, and customers experience slow response times while the
information technology (IT) budget shrinks. This is the data warehouse dilemma.
Ideal solutions, which perform all the work speedily and without
cost, are obviously impractical to consider. The near-ideal solution would 1) help
reduce load process time, and 2) optimize available resources for analysis. To
achieve these two tasks we would normally need to invest in additional compute resources.

There is a solution, however, that enables us to gain compute resources
without purchasing additional hardware: Grid computing.

Grid computing provides a novel approach to harnessing distributed resources,
including applications, computing platforms, databases and file systems. Applying
Grid computing can drive significant benefits to the business by improving information
access and responsiveness.
The Grid-enabled application layer dispatches jobs in parallel to
multiple compute nodes; this parallelization of previously serial tasks across
multiple CPUs is where the Grid gets its power. Grids can benefit from sharing
existing resources and from adding dedicated resources such as clusters to
improve throughput.
In short, a Grid computing solution enables us to gain compute resources
without purchasing additional hardware.
Data warehouse:
“A data warehouse is a subject-oriented, integrated, time-variant, and
nonvolatile collection of data in support of management's decision-making process.”
Subject-oriented: a data warehouse is organized around major subjects, such as
customer, supplier, product, and sales.
Integrated: a data warehouse is usually constructed by integrating multiple
heterogeneous sources, such as relational databases, flat files, and on-line transaction
records.
Time-variant: data are stored to provide information from a historical
perspective (e.g., the past 5-10 years).
A data warehouse is a copy of transaction data specifically
structured for querying, analysis and reporting. A data warehouse is typically organized
as a star or snowflake schema, in which the data is viewed as a multidimensional cube.
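The multidimensional view can be illustrated by rolling up a tiny fact table along chosen dimensions, the way an OLAP operation collapses one face of the data cube. The fact rows and dimension values below are invented for illustration.

```python
from collections import defaultdict

# A minimal fact table: each row carries dimension values
# (product, region, quarter) plus a measure (sales amount).
fact_table = [
    {"product": "laptop", "region": "AP", "quarter": "Q1", "sales": 120},
    {"product": "laptop", "region": "AP", "quarter": "Q2", "sales": 90},
    {"product": "phone",  "region": "TN", "quarter": "Q1", "sales": 200},
]

def roll_up(rows, *dims):
    """Aggregate the sales measure along the chosen dimensions,
    collapsing the cube's remaining dimensions."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row["sales"]
    return dict(totals)

by_product = roll_up(fact_table, "product")           # one-dimensional view
by_region_qtr = roll_up(fact_table, "region", "quarter")  # two-dimensional slice
```

Distributing exactly such pre-aggregated subsets is what the data cubes discussed below amount to.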

Grid Technology:
Grid technology is a form of distributed computing that involves
coordinating and sharing computing, application, data, storage, or network resources
across dynamic and geographically dispersed organizations. Grid technologies promise
to change the way organizations tackle complex computational problems.
With a Grid, networked resources (desktops, servers, storage,
databases, even scientific instruments) can be combined to deploy massive computing
power wherever and whenever it is needed most. Users can find resources quickly, use
them efficiently, and scale them seamlessly. There are various types of grids,
viz. cluster grids, campus grids, global grids, etc.

(Figure: an example of a global grid)


A data warehouse is comprised of two processes: loading
(input) and analysis (output). Loading a data warehouse is often straightforward. Data
is collected from multiple systems of record (extraction), cleansed and normalized to
meet the data integrity standards of the warehouse (transformation), and then
brought into the warehouse as new records (load): a process referred to as ETL
(Extract, Transform and Load). Once the data is loaded, it is ready for analysis.
Although analysis can be performed by querying the data warehouse directly, most
often smaller data subsets, referred to as data cubes, are created and distributed.
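The three ETL steps just named can be sketched in a few lines. The "systems of record" are stand-in lists, and the cleansing rule (drop rows with blank names, normalize case and types) is invented for illustration.

```python
def extract(sources):
    """Gather raw records from every system of record."""
    for source in sources:
        yield from source

def transform(records):
    """Cleanse and normalize records to the warehouse's standards."""
    for rec in records:
        name = rec.get("customer", "").strip()
        if not name:          # drop rows that fail the integrity check
            continue
        yield {"customer": name.title(), "amount": float(rec["amount"])}

def load(records, warehouse):
    """Append the transformed rows to the warehouse table."""
    warehouse.extend(records)

warehouse = []
crm = [{"customer": "  alice ", "amount": "10.5"}]
billing = [{"customer": "", "amount": "3"}, {"customer": "BOB", "amount": "2"}]
load(transform(extract([crm, billing])), warehouse)
```

Note that transform is the only step doing real computation here, which is why, as argued below, it is the step worth parallelizing.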

Third-party tools are used to analyze the data within the cubes separately from the
warehouse. Depending on the analysis tool, the data cube may need to be built to
multiple specifications, requiring the data to go through a second ETL process. It may
be easier to throw more processing power at both processes, but this is not always a
cost-effective solution. Although the load process is fundamental to the warehouse,
value is derived only during analysis. Therefore, we can target the non-value-added
load step and seek to reduce its impact on warehouse resources.
The near-ideal solution:
Ideal solutions, which perform all the work speedily and
without cost, are obviously impractical to consider. The near-ideal solution would 1)
help reduce load process time, and 2) optimize available resources for the analysis
process at a cost lower than upgrading hardware. Focusing first on reducing the load
process, the computing burden is primarily the result of the transformation step
(cleansing and normalization). The addition of compute resources can accelerate the
transformation step; however, it must be more cost-effective than upgrading the
warehouse hardware.
During transformation, a given set of rules is applied to a set of data.
Often the data is discrete, not dependent on relationships to external data (e.g., when
processing market data, each market territory is typically treated with the same rules
but processed independently). Without dependencies between units of work, changing
the serial model to a parallel model can accelerate processing.
Processing multiple work units at once reduces the time required by the
transformation step and thereby meets the first criterion of helping reduce the load
process. Moving to a parallel model requires the addition of compute resources, which
would correspondingly satisfy the second criterion of optimizing resource availability for
analysis, although the cost of adding compute resources must be lower than
upgrading the data warehouse hardware. There is a solution, however, enabling us
to gain compute resources without purchasing additional hardware: Grid computing.
Off-loading the ETL process to a Grid environment can lower the processing
cost, reduce the cycle time, and enable the reallocation of critical data warehouse
resources for use by data mining and reporting tools. Providers may eventually use
information Grids to enable their customers to access the data in a virtual data
warehouse environment as if it were stored locally.

Harnessing distributed resources with Grid computing:
Grid computing enables the virtualization of distributed computing
over a network of heterogeneous resources (such as processing, network bandwidth
and storage capacity), giving users and applications seamless, on-demand access to
vast IT capabilities. Using the power of open standards, it enables computing capacity
to be dynamically procured and released based on variations in peak loads, offering
business value in day-to-day operations as well as great response and recovery.
Grid computing provides a novel approach to harnessing distributed
resources, including applications, computing platforms, databases and file systems.
Applying Grid computing can drive significant benefits to the business by improving
information access and responsiveness and adding flexibility, all crucial components of
solving the data warehouse dilemma. By placing CPUs into a virtual resource pool,
more power can be applied where needed. A change at the application layer is required
to take advantage of the new architecture: a task often performed in days.
The Grid-enabled application layer dispatches jobs in parallel to
multiple compute nodes; this parallelization of previously serial tasks across multiple
CPUs is where the Grid gets its power. The ability to execute transformation tasks
independently enables the load process to be broken into multiple subprocesses, each
of which can be sent to a different node in the virtual pool. For example, to find the
number of people with age > 18 in A.P. (supposing the A.P. census data is maintained
in a data warehouse), approximately 7 crore denormalised records would need to be
processed, which may take a single CPU (server) some hours. If the A.P. census
department has 28 computers that can be combined into a grid, we can distribute
about 25 lakh records to each CPU, and the result may be achieved within minutes.
The same applies, for example, in a medical system to obtain the details of patients
who used a specific drug.
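The census example can be sketched with the standard library, a thread pool standing in for the grid's compute nodes. This shows only the split-and-combine pattern; real grid middleware would dispatch each chunk to a separate machine, and CPU-bound work would use processes rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def count_adults(chunk):
    """One 'grid node' counts the people with age > 18 in its slice."""
    return sum(1 for age in chunk if age > 18)

def parallel_count(ages, nodes=4):
    """Split the records into per-node chunks and combine the partial
    counts, mirroring how the grid parallelizes one serial scan."""
    size = (len(ages) + nodes - 1) // nodes
    chunks = [ages[i:i + size] for i in range(0, len(ages), size)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(count_adults, chunks))

ages = [5, 20, 17, 42, 19, 18, 65, 12]
adults = parallel_count(ages)  # same answer as a serial scan over all rows
```

The decomposition works only because, as noted above, the units of work are independent: no chunk's count depends on another chunk's data.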
(Using a Sun cluster grid, the Durham University cosmology engine
performs 465 billion arithmetic operations per second. Sun runs a grid of over
7,500 total CPUs across 3 US sites, with over 98% CPU utilization, executing over
50,000 EDA jobs a day.)

Further, to guard against downtime and failures, each subprocess
can be sent to multiple nodes, each competing to return the results first, eliminating
the single point of failure while guaranteeing the fastest response time. The nodes that
form the virtual pool may already exist as underutilized servers within the
organization. Grids can benefit not only from sharing existing resources, but also from
adding dedicated resources such as clusters to improve throughput. This helps
improve predictability in completing a job governed by service-level agreements.
By adding new hardware to the Grid, resource utilization can be optimized
versus dedicating the hardware to one particular process. Best of all, Grid is a mature
technology that has already been scrutinized and proven successful.
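The compete-and-take-the-first-result idea can be sketched in the same way, with simulated node delays standing in for slow or flaky machines (the delay values are invented for illustration).

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import time

def run_on_node(node_delay, job):
    """Simulate one node executing the job; the sleep stands in for
    differing node speeds or transient slowness."""
    time.sleep(node_delay)
    return job()

def redundant_dispatch(job, node_delays):
    """Send the same subprocess to several nodes and accept whichever
    result comes back first, eliminating the single point of failure."""
    with ThreadPoolExecutor(max_workers=len(node_delays)) as pool:
        futures = [pool.submit(run_on_node, d, job) for d in node_delays]
        done, _pending = wait(futures, return_when=FIRST_COMPLETED)
        # Note: on exit the pool still drains the slower duplicates;
        # real middleware would cancel them instead.
        return next(iter(done)).result()

result = redundant_dispatch(lambda: 2 + 2, [0.2, 0.01, 0.5])
```

The trade-off is wasted duplicate work in exchange for predictable latency, which is exactly what service-level agreements buy with this pattern.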
New technology security concerns:
As expected, Grid computing addresses universal concerns regarding
new technology, including security, scalability and reliability. Security is managed at
multiple levels:
•At the user level, the Grid middleware server performs rights management,
abstracting the user from the Grid resources.
•At the data level, data is encrypted during transmission between Grid nodes and the
Grid middleware server.
•At the process level, the Grid middleware operates in a virtually tamper-proof
“sandbox” to help prevent viewing or modifying the data or process.
Scalability is one of the inherent capabilities of Grid computing. As a
virtual pool of resources, scalability is governed only by the size of the data
being transferred and the available bandwidth; there are no inherent restrictions on
the size of a Grid. Reliability is addressed through multiple approaches. Tasks can
be constructed so that a node failure prompts the Grid middleware to resubmit the job
to another node. Alternatively, jobs can be submitted to multiple nodes, accepting the
first response, thus helping to eliminate the concern of a single point of failure while
providing the fastest response time.
Building on the concept of virtualizing computing power into resource
pools, information Grids treat data silos as information pools with nearby storage pools.
Data required by the warehouse can be accessed via the information Grid, where
domain security management is handled automatically. Data transformations can be
performed at the remote client and the results brought back across the network,
encrypted for privacy.
Retrieved data is cached to improve performance, with expiration
mechanisms to verify that only the most recent data is available. The resulting solution
is not a replacement for guaranteed-delivery messaging solutions. However, for less
time-dependent business intelligence, the solution is designed to provide the needed
functionality at the lowest possible cost by leveraging the existing infrastructure. As
an added benefit, the information Grid provides a new IT service, enabling remote silos
to access each other. This opens up an entirely new realm of time-independent
opportunities such as data replication and inherent resiliency.
Making it happen:
To determine whether your data warehouse can benefit from Grid
technologies, it is necessary to institute a process of investigation. A good place to
start is by inventorying all the ETL processes and evaluating the big hitters. Another
good, low-risk place to start is by engaging a consultant with Grid experience.

Conclusion:
Grid computing introduces a new concept to IT infrastructures because it
supports distributed computing over a network of heterogeneous resources and is
enabled by open standards. Grid computing, which helps optimize underutilized
resources, decrease expenses and reduce costs, has helped organizations accelerate
business processes, enable more innovative applications, enhance productivity, and
improve the resiliency of IT infrastructure.
Finally, applying Grid technology to a data warehouse can
reduce cost without purchasing additional hardware.

References:
1. G. M. Marakas, Modern Data Warehousing, Mining, and Visualization.
2. W. H. Inmon, Building the Data Warehouse.

Network security and protocols



G.Ramya Krishna. B. Durga Raja Sena.

ID: 06091A0573 ID: 06091A0524



Affiliated to J.N.T.U, HYDERABAD. Accredited


Network security is a complicated subject, historically tackled only by
well-trained and experienced experts. However, as more and more people become
"wired", an increasing number of people need to understand the basics of security in
a networked world. In this document we discuss some of the network security issues
in TCP/IP.

The Transmission Control Protocol/Internet Protocol (TCP/IP) suite is a
very widely used technique employed to interconnect computing facilities in
modern network environments. TCP/IP is the "language" of the internet: anything that
can learn to "speak TCP/IP" can play on the internet.

However, there exist several security vulnerabilities in the TCP specification and
additional weaknesses in a number of widely available implementations of TCP.
These vulnerabilities may enable an intruder to "attack" TCP-based systems,
allowing him or her to "hijack" a TCP connection or cause denial of service to
legitimate users. We discuss some of the flaws present in the TCP implementations of
many widely used operating systems and provide recommendations to improve the
security state of a TCP-based system, e.g., incorporation of a "timer escape route"
from every TCP state.

Keywords and phrases:

Network security, TCP, IP, Vulnerability analysis, state transitions


Internetworking is an approach that allows dissimilar computers on dissimilar
networks to communicate with one another in a seamless fashion by hiding the
details of the underlying network hardware. The most widely used form of
internetworking is provided by the Transmission Control Protocol/Internet Protocol
(TCP/IP) suite.
There are some inherent security problems in the TCP/IP suite which
make the situation conducive to intruders. TCP sequence number prediction, IP
address spoofing, misuse of IP's source routing principle, and use of Internet Control
Message Protocol (ICMP) messages for denial of service are some methods of
exploiting the network's vulnerabilities. Considering that most important application
programs, such as the Simple Mail Transfer Protocol (SMTP), Telnet, the
r-commands (rlogin, rsh, etc.) and the File Transfer Protocol (FTP), have TCP as their
transport layer, security flaws in TCP can prove very hazardous for the network.
The objectives of this paper are to identify and analyze the vulnerabilities of TCP/IP
and to develop security enhancements to overcome those flaws. Our work is based on
analyzing the state transition diagram of TCP and determining the security relevance
of some of the "improperly-defined" transitions between different states in the state
transition diagrams of many widely used TCP code implementations.

The TCP protocol uses a simple method to establish communications between
computers. The originating computer (the one trying to start a communications
"session") sends an initial packet called a "SYN" to the computer or other
device with which it needs to communicate. The "target" computer answers the
originating computer with a "SYN-ACK" packet, an acknowledgement. The originating
computer then returns an "ACK" packet to the "target" computer.
This process is referred to as the SYN-ACK "handshake" or the "three-way handshake"
and is characteristic of all TCP/IP communications. This process is illustrated in Figure
B.6-1.


Figure B.6-1: The TCP/IP "handshake" (the workstation sends a request; the web
server accepts, processes the request and sends its response; the workstation
acknowledges)
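The handshake can be simulated as a toy exchange of sequence numbers. The dictionaries below are simplified stand-ins for TCP segments, not real packets, and the fixed ISNs in the usage example are illustrative.

```python
import random

def three_way_handshake(client_isn=None, server_isn=None):
    """Simulate SYN, SYN-ACK, ACK. Each side picks an initial
    sequence number (ISN) and acknowledges the other's ISN + 1."""
    if client_isn is None:
        client_isn = random.randrange(2**32)
    if server_isn is None:
        server_isn = random.randrange(2**32)

    syn = {"flag": "SYN", "seq": client_isn}
    syn_ack = {"flag": "SYN-ACK", "seq": server_isn, "ack": syn["seq"] + 1}
    ack = {"flag": "ACK", "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}

    # The connection opens only if the final ACK acknowledges the
    # server's ISN + 1: the very value a spoofing attacker must guess.
    established = ack["ack"] == server_isn + 1
    return syn, syn_ack, ack, established
```

Keeping this exchange in mind makes both attacks discussed below easier to follow: SYN flooding abandons the exchange after the first step, and IP spoofing must forge the third.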

Security Issues
SYN Flood Attack
TCP/IP uses a "best effort" method of delivering packets, which means that
it has no real way to guarantee the delivery of a packet to its destination. (It is
interesting to consider that the post office uses this method as well.) One of the
consequences is that message latency and packet loss are not uncommon,
which can result in messages arriving late or out of order. The TCP
part of TCP/IP uses sequence numbers to identify packets and to help ensure
that data is given to the user in the correct order regardless of when it is
actually received. These sequence numbers are initially established during the
opening phase of a TCP connection, in the three-way handshake illustrated in
Figure B.6-1.

One way that "hackers" exploit TCP/IP is to launch what is called a SYN attack
(sometimes called "SYN flooding"), which takes advantage of how hosts implement
the three-way handshake. When the "target" computer (illustrated in B.6-1)
receives a SYN request from the originating computer, it must keep track of the
partially opened connection in what is called a "listen queue" for at least 75
seconds. This was built into TCP/IP to allow successful connections even with long
network delays.

The problem is that TCP/IP is sometimes configured so that it
can only keep track of a limited number of connections (many implementations
default to 5, although some can track up to 1024). A person with malicious intent
can exploit the small size of the listen queue by sending multiple SYN requests
to a computer or host but never replying to the SYN-ACK the "target" sends
back. By doing this, the host's listen queue quickly fills up, and it will stop
accepting new connections until a partially opened connection in the queue is
completed or times out.
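The effect of a full listen queue can be modeled with a toy queue whose small capacity mirrors the default backlogs mentioned above; the client names are invented for illustration.

```python
from collections import deque

class ListenQueue:
    """Toy model of a host's half-open connection backlog."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.half_open = deque()

    def receive_syn(self, client):
        """Queue a half-open connection; refuse when the queue is full."""
        if len(self.half_open) >= self.capacity:
            return "refused"
        self.half_open.append(client)
        return "syn-ack sent"

    def receive_ack(self, client):
        """The final ACK completes the handshake, freeing a slot."""
        self.half_open.remove(client)

host = ListenQueue(capacity=5)
# The attacker sends SYNs but never ACKs, filling every slot...
for i in range(5):
    host.receive_syn(f"spoofed-{i}")
# ...so a legitimate client is now turned away.
status = host.receive_syn("legitimate")
```

The model omits the 75-second timeout that eventually frees slots, which is why a real flood must keep resending SYNs to hold the queue full.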

The classic SYN flood was introduced in 1993 by members of the CompuServe
"Internet chat" community as a method of removing "undesirables" from chat
rooms or networks. The first UNIX programs to utilize this method were
synflood.c (circa 1993) and Satanic Mechanic's nuke.c (circa 1992/1993). This
ability to effectively remove a host from the network for at least 75 seconds can
be used solely as a denial-of-service attack, or it can be used as a tool to
implement other attacks, like IP spoofing.
IP Spoofing
The Internet Protocol (IP) portion of TCP/IP is the part that carries the
information describing where the packet is coming from and where it is going:
the "IP address." "IP spoofing" is an attack in which an
attacker pretends to be sending data from an IP address other than his own.
TCP/IP assumes that the source address on any IP packet it receives is the
address of the system that actually sent the packet (a vulnerability
of TCP/IP, since it incorporates no authentication). Many higher-level protocols
and applications also make this assumption, so anyone able to fake (or "forge")
the source address of an IP packet (called "spoofing" an address) could gain the
privileges of an authorized user while being unauthorized.

There are two disadvantages to this spoofing technique. The first is that all
communication is likely to be one-way. The remote host will send all replies to
the spoofed source address, not to the host actually doing the spoofing, so an
attacker using IP spoofing is unlikely to see output from the remote system
unless he has some other method of eavesdropping on the network between the
other two hosts. Additional information is available in the IT white paper,
Intrusion Prevention & Detection. The second disadvantage is that an attacker
needs to use the correct sequence numbers if he plans on establishing a TCP
connection with the compromised host. Many common applications and services
that run on many operating systems, like Telnet and FTP, use TCP. The final ACK
in the three-way handshake must contain the other host's first sequence
number, known as the initial sequence number or ISN. If TCP/IP does not "see"
this ISN in the ACK, the connection cannot be completed. Because the SYN+ACK
packet carrying the ISN is sent to the real host, an attacker must "steal" the
ISN by some technical method; if he could eavesdrop on the packets sent from
the other host (using an IDS or protocol analyzer), he could see the ISN.
Unfortunately, attackers have developed ways to overcome both of these
challenges to IP spoofing.
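
The role of the ISN can be illustrated with a toy model of the server side of the handshake (the class and method names here are our own invention; real TCP is far more involved):

```python
import random

class ToyTcpServer:
    """Minimal model of the server side of the three-way handshake."""
    def __init__(self):
        self.isn = random.randrange(2**32)   # server's initial sequence number
        self.established = False

    def on_syn(self, client_isn: int):
        """SYN+ACK: return our ISN and acknowledge client_isn + 1.
        This reply goes to the (possibly spoofed) source address."""
        return self.isn, client_isn + 1

    def on_ack(self, ack_number: int) -> bool:
        """The final ACK must acknowledge our ISN + 1."""
        self.established = (ack_number == self.isn + 1)
        return self.established

server = ToyTcpServer()
server_isn, _ = server.on_syn(client_isn=1000)

# A spoofer who never saw the SYN+ACK must guess the ISN...
print(server.on_ack(random.randrange(2**32)))  # almost certainly False
# ...while a host that actually received it completes the handshake.
print(server.on_ack(server_isn + 1))           # True
```

The guess has roughly a one-in-four-billion chance of succeeding per attempt, which is exactly why attackers historically exploited predictable ISN generators rather than guessing blindly.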
Source Routing
Another way to do IP spoofing makes use of a rarely used IP option called
"source routing." Source routing allows the originating host to specify the
path (route) that the receiver should use to reply to it. An attacker can take
advantage of this by specifying a route that bypasses the real host and instead
directs replies along a path he can monitor (probably to himself). Although
simple, this attack may not be as successful now, as most routers are commonly
configured to drop packets with source routing enabled.

Connection or “Session” Hijacking

Yet another way to accomplish IP spoofing is for a host to insert itself in the
middle of a connection between two hosts. This is called "connection hijacking"
or "session hijacking." IP spoofing alone may not bypass additional security,
such as an authentication measure that we have added or enforced on our
operating system, but with this attack, an attacker can allow normal
authentication to proceed between the two hosts and then seize control of the
session.

Session hijacking exploits a "desynchronized state" in TCP communication. This
happens when the sequence number in a received packet is not the same as the
expected sequence number; the connection is then said to be "desynchronized."
Depending on the actual numerical value of the received sequence number, TCP
may either discard the packet or store it in its queue. TCP has this choice
because it uses a "sliding window" protocol for efficient communication even
when the network has extensive packet loss. If the received packet is not the
one TCP expected but is within the current window, the packet is saved and TCP
will expect it later. If the received packet is outside of the current window,
it is discarded. The result is that when two hosts are desynchronized enough,
they will discard (or ignore) packets from each other. The attacker can then
inject forged packets with the correct sequence numbers (and potentially modify
or add commands to the packets).
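
The store-or-discard decision can be sketched as a single function (a simplified model; real TCP window management also involves acknowledgments and window scaling):

```python
def classify_segment(seq: int, expected: int, window: int,
                     modulus: int = 2**32) -> str:
    """Decide what a toy receiver does with sequence number `seq` when it
    expects `expected` and advertises a window of `window` bytes.
    Sequence numbers wrap at 2**32, so compare modulo that value."""
    offset = (seq - expected) % modulus
    if offset == 0:
        return "deliver"    # exactly the expected segment
    if offset < window:
        return "queue"      # in-window but out of order: store for later
    return "discard"        # outside the window (may still trigger an ACK)

print(classify_segment(seq=1000, expected=1000, window=4096))  # deliver
print(classify_segment(seq=1500, expected=1000, window=4096))  # queue
print(classify_segment(seq=9000, expected=1000, window=4096))  # discard
```

An attacker who knows `expected` lands forged data in the "deliver" case, while the legitimate host's packets, now far from what the receiver expects, fall into the "discard" case.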
This exploit is usually perpetrated by an "insider," because it requires the
attacker to be located in the communication path between the two hosts to
listen in on and replicate the packets being sent. The key to this attack is
creating the desynchronized state.

Note that "ignored" or discarded packets may actually generate ACKs rather
than being completely ignored. When the other end receives packets with wrong
sequence numbers, it sends back an ACK packet with the sequence number it is
expecting. The receiver of these ACKs discards them, since they have the wrong
sequence numbers, and then sends its own ACK to notify the sender. In this way
a large number of ACKs are generated, forming an "ACK storm." This is a
classic signature employed by Intrusion Detection Systems (IDS) to detect
session hijacking. Additional information on intrusion detection is available
in the IT white paper, Intrusion Prevention & Detection.

ICMP Attack
The Internet Control Message Protocol (ICMP) is used in networking to send a
one-way informational message to a host or device. The most common use of
ICMP is the "PING" utility. This application sends an ICMP "echo request" to a
host and waits for that host to send back an ICMP "echo reply" message. This
utility is very small and has no method of authentication, and is consequently
used as a tool by an attacker to intercept packets or cause a denial of
service.

Attackers almost always use either the "time exceeded" or the "destination
unreachable" message that can be generated by ICMP to carry out the attack.
The "time exceeded" message indicates that a packet's "time-to-live," the
hop count allocated for a packet to exist as it travels around the network,
has run out; this is normally caused by trying to reach a host that is
extremely distant. "Destination unreachable" indicates that packets cannot
successfully be sent to the desired host. Both of these ICMP messages can
cause a host to immediately drop a connection (which is what you want if the
ICMP message is legitimate). An attacker can make use of this by forging one
of these messages and sending it to one or both of the communicating hosts,
which breaks their connection.
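
As an illustration of how little is required, a forged "destination unreachable" message is just a few bytes with a valid checksum. The sketch below (illustrative only; the function names are our own) builds an ICMP type 3, code 1 ("host unreachable") message, whose body carries the IP header plus the first 8 bytes of the packet being "reported":

```python
import struct

def checksum16(data: bytes) -> int:
    """RFC 1071 one's-complement checksum, as used by ICMP."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_dest_unreachable(original: bytes) -> bytes:
    """Forge an ICMP type 3 ("destination unreachable"), code 1 ("host
    unreachable") message. The quoted IP header + 8 bytes is how the
    victim matches the error to one of its connections."""
    header = struct.pack("!BBHI", 3, 1, 0, 0)   # type, code, csum=0, unused
    csum = checksum16(header + original[:28])
    return struct.pack("!BBHI", 3, 1, csum, 0) + original[:28]

msg = build_dest_unreachable(b"\x45\x00" + b"\x00" * 26)
print(msg[0], msg[1])   # 3 1
```

Because nothing in the message authenticates its origin, a receiver that honors it will tear down the referenced connection no matter who actually sent it.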

These are just some of the more common attacks that can occur because of the
nature and design of the TCP/IP protocol. As we mentioned earlier, we must
employ additional technology and practices to compensate for this shortfall in
TCP/IP. These measures are characterized as Internet security.

Internet Security Measures

The most common technology applied to Internet security today is the firewall.
The first firewalls were originally "packet filters." Since this filtering can
be accomplished on dedicated computers placed in the network or on the router
at the boundary of the network, these "first generation" firewalls are often
referred to as screening routers. Packet filtering uses access control lists,
or "rules," which tell the firewall what to let into the network and what to
reject. These rules are applied sequentially to each packet that is received.
The packet filter firewall also examines each incoming packet's source and
destination IP addresses to be sure that the packet is directed to a
legitimate or "real" computer and that the sender is not on the inside of our
network. The latter is a symptom of a potential compromise, since packets
destined for a computer on the safe side of the firewall wouldn't need to go
through it to the unsafe side and back again. An example of a packet filter
firewall is the PIX firewall from Cisco.
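
First-match rule processing of this kind can be sketched in a few lines (the rule set and addresses below are invented for illustration):

```python
import ipaddress

# Each rule: (action, source network, destination network). Rules are
# applied in order; the first match wins, and the default is to deny.
RULES = [
    ("deny",  "0.0.0.0/0",      "192.168.1.25/32"),  # block one host outright
    ("allow", "203.0.113.0/24", "192.168.1.0/24"),   # a trusted partner net
    ("deny",  "0.0.0.0/0",      "0.0.0.0/0"),        # default deny
]

def filter_packet(src: str, dst: str) -> str:
    """Return the action of the first rule matching this packet."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for action, src_net, dst_net in RULES:
        if (src_ip in ipaddress.ip_network(src_net)
                and dst_ip in ipaddress.ip_network(dst_net)):
            return action
    return "deny"

print(filter_packet("203.0.113.7", "192.168.1.10"))  # allow
print(filter_packet("203.0.113.7", "192.168.1.25"))  # deny (first rule wins)
```

Rule order matters: placing the specific deny rule first ensures the broader allow rule never overrides it, which is why real firewall rule sets are read top-down with a default deny at the bottom.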

Firewalls are extremely effective in preventing IP spoofing attacks, primarily
because of their packet filtering abilities. The most effective ability
firewalls have in terms of preventing spoofing attacks is that they clearly
separate the outside, or "untrusted," side of the firewall from the inside, or
"trusted," side. They force everything inside to go through the "inside"
interface or port on the firewall, and everything outside must come in through
the "outside" interface. This means the packet filtering done in the firewall
can drop suspicious packets. If the firewall detects a packet coming from the
outside that has a source address inside the firewall, the packet is probably
spoofed and should be dropped: a packet originating inside our network could
be destined for an address outside, but it would not be trying to get back in
from the outside of our firewall. Likewise, if a packet attempts to exit the
firewall with a source address from anywhere other than inside our firewall,
it can be dropped immediately because it is almost surely spoofed. Packet
filtering also segments the Internet into smaller zones, which then cannot
spoof each other. However, even with a firewall, not all spoofing within our
network can be prevented.
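
The ingress/egress logic just described can be sketched as follows (the "inside" network is an assumed example):

```python
import ipaddress

INSIDE = ipaddress.ip_network("192.168.1.0/24")   # our "trusted" network

def spoof_check(src: str, arriving_on: str) -> bool:
    """Return True if the packet should be dropped as spoofed.
    Ingress: a packet from the outside must not claim an inside source.
    Egress: a packet from the inside must not claim an outside source."""
    inside_src = ipaddress.ip_address(src) in INSIDE
    if arriving_on == "outside" and inside_src:
        return True      # ingress spoof: outsider claiming to be us
    if arriving_on == "inside" and not inside_src:
        return True      # egress spoof: insider claiming to be someone else
    return False

print(spoof_check("192.168.1.40", arriving_on="outside"))  # True: drop
print(spoof_check("198.51.100.9", arriving_on="outside"))  # False: pass
```

Note the check keys entirely on which interface the packet arrived on; that is why the firewall's clean separation of trusted and untrusted sides is what makes anti-spoofing filtering possible at all.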

Figure B.6-2 illustrates a typical Internet firewall configuration. In this
example, the wide area network router is directly connected to the Internet
provider, and the "outside" or Internet-facing port of the firewall is directly
connected to this router. The "inside" interface of the firewall is connected
to a network switch that in turn serves as the distribution point for the rest
of the network.

[Figure B.6-2: The boundary firewall's inside interface connects to a network
switch, which serves the application, database, and e-mail servers.]

• Spyware - a computer program which can be installed on personal computers
(usually without permission from the owner) and has as its sole purpose the
collecting of personal information and sending it back to another source -
usually an Internet marketing firm, but possibly a hacker. A similar type of
application, known as "Adware," behaves in a similar manner but almost always
asks permission to be installed beforehand.
[Figure: The SSL process. An authorized user accesses a "secure" Web site, and
the Web server (running the SSL protocol) "downloads" a certificate to the
user's browser. We can have the certificate server (authentication) check the
user's identity against our directory server (user identity) to be sure she is
supposed to have access to the information accessible through the Web site.
The browser then maintains an authenticated, encrypted path from the laptop to
the Web site.]

Client-based remote access is the other method for providing secure, remote access.
In this case, we do not rely on the web browser on our laptop or remote access
computer, as we described in the SSL section, but on another piece of software
typically referred to as the client or VPN client. VPN, as we noted earlier, stands for
virtual private network. Additional information on VPNs is available in the IT white
paper, Boundary Security.

As we have discussed in Ohio IT Policy ITP-B.2, "Boundary Security," a VPN is
"a private communications network used by many organizations and agencies to
communicate confidentially over a public networked environment." This can be
thought of as a "tunnel" that allows our traffic to travel through public
networks without exposing our agency's IP packet information to the outside
world. Typically, VPNs will use encryption to provide the confidentiality,
sender authentication and message integrity needed to achieve the necessary
privacy.
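
Two of those three properties, sender authentication and message integrity, can be illustrated with a keyed hash (a toy sketch using Python's hmac module; a real VPN such as IPsec also encrypts the payload for confidentiality, which is omitted here):

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)   # shared secret between the two VPN endpoints

def protect(payload: bytes) -> bytes:
    """Append an HMAC tag: anyone without KEY can neither forge a packet
    (sender authentication) nor alter one undetected (integrity)."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def verify(packet: bytes):
    """Return the payload if the tag checks out, else None."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

pkt = protect(b"inner IP packet")
print(verify(pkt))                              # b'inner IP packet'
tampered = pkt[:-1] + bytes([pkt[-1] ^ 1])      # flip one bit of the tag
print(verify(tampered))                         # None: rejected
```

The `hmac.compare_digest` call is a constant-time comparison, the standard defense against timing attacks when checking authentication tags.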

For remote access configurations, remote access client software is installed on the
remote computing device and this software will create the VPN “tunnel” from the
remote computing device through the firewall to the VPN hardware providing the
secure connection.

This process is illustrated in figure B.6-5. Note that before the VPN establishes the
secure “session” or transmission, proper authentication must take place. If we have
determined that our remote access requires the security of a VPN, then we will usually
require at least two forms of authentication. This is known as “two-factor”
authentication and provides a higher level of security than a user name and password.
Additional information on two-factor authentication is available in the IT white paper,
Password and PIN Security.


Conclusion

The main objective of this paper was to identify and analyze some new
vulnerabilities of TCP/IP. We have discussed different attacks that can be
launched by an intruder who manipulates the security flaws in the TCP/IP
specification as well as its implementations. We have analyzed the TCP source
code and identified spurious state transitions present in the implementation
of TCP in several operating systems. We have analyzed how TCP behaves for
various combinations of input packets. Finally, we provide several
recommendations to plug some of these flaws in TCP and its implementations.

References

J. Postel, "Transmission Control Protocol," RFC 793.
CERT Advisory, "IP Spoofing Attacks and Hijacked Terminal Connections."
A. S. Tanenbaum, Computer Networks.