1. Internet
Objectives:
The Internet was conceptualized with two main objectives. They were:
Sharing of information among computer users across the globe
Interconnection of a wide variety of computers across the world
Encoding of Data
A character encoding such as ASCII translates each letter or symbol into a number; these numbers can then be sent to another machine to be read and interpreted as the letters and symbols they were originally written as. Other data encodings must be used to send image, audio, and video data. The data must be readable by any machine using the appropriate decoding process, regardless of the underlying architecture.
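The letters-to-numbers scheme described above can be sketched in a few lines of JavaScript. The function names are our own; the mapping shown is the standard ASCII/Unicode code-point assignment:

```javascript
// Encode text as a list of numbers (code points), as a sending machine
// would, and decode the numbers back into text on the receiving end.
function encode(text) {
  var codes = [];
  for (var i = 0; i < text.length; i++) {
    codes.push(text.charCodeAt(i)); // e.g. "A" -> 65
  }
  return codes;
}

function decode(codes) {
  return String.fromCharCode.apply(null, codes);
}

var sent = encode("Hi");      // [72, 105]
var received = decode(sent);  // "Hi"
```

Any machine that agrees on the code-point table can decode the numbers, whatever its underlying architecture.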
Storage/Retrieval of Data
Having data alone is usually not enough for most users. In order to
organize and deal with large volumes of data or metadata, it is necessary
to have programs that arrange the data in some structured manner. In
most cases, some form of database does this job very effectively. The role
of the database is as an intermediary between some user or program and
the data. It provides organized storage and methods of retrieval that
allow the data to be accessed in many different ways. Writing specialized
queries can retrieve specific subsets of the data. This is mostly limited to text-based data, however; most non-text data is simply treated as a file.
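The idea of a specialized query retrieving a subset of the data can be illustrated with a toy in-memory "table". The records and field names below are invented for illustration; a real database would use a query language such as SQL:

```javascript
// A toy stand-in for a database table of user records.
var users = [
  { name: "alice", country: "US", lastLogin: 1998 },
  { name: "bob",   country: "JP", lastLogin: 1999 },
  { name: "carol", country: "US", lastLogin: 1999 }
];

// A "query" retrieves only the records matching a condition,
// much as a specialized database query would.
function query(table, predicate) {
  return table.filter(predicate);
}

var recentUS = query(users, function (u) {
  return u.country === "US" && u.lastLogin >= 1999;
});
// recentUS contains only carol's record
```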
Data Manipulation
Another aspect of data management is data manipulation. Computer
scientists have been devising schemes to manipulate the bits and bytes of
data in a computer since before the first assembly language was ever
written. The idea is to use the data as input to a program and produce
some kind of output. In the context of the Internet, much of the data
manipulation by web languages is done to generate dynamic information
and content. Each language and class of languages behaves somewhat
differently in how they take input and what kind of output they can
produce. An often overlooked aspect of data manipulation is the need to control not only the data, but also the network and the network resources themselves that make up the backbone of communication the data passes through.
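The input-to-output idea can be sketched in JavaScript: a program takes data as input and produces dynamic content (here, an HTML fragment) as output. The function name and sample data are our own:

```javascript
// Data manipulation in the web sense: input data in, dynamic
// content out. Here a list of items becomes an HTML list.
function toHtmlList(items) {
  var out = "<ul>";
  for (var i = 0; i < items.length; i++) {
    out += "<li>" + items[i] + "</li>";
  }
  return out + "</ul>";
}

var html = toHtmlList(["news", "mail"]);
// "<ul><li>news</li><li>mail</li></ul>"
```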
Transferring Data
One of the most difficult, albeit important, aspects of data management is
the matter of how to relay the data from one computer to another across
the network. The data must pass from one machine to another
regardless of the architecture of the network(s) that the data may pass
through to reach its destination. Protocols for communication are necessarily used by the client and server, and by any other machine, no matter what network model is used. The client/server model is the dominant network model on the Internet, and the highest priority of the protocols is to allow the client and server to communicate effectively.
The protocols define how data is sent between machines. They can range
in complexity from very simple to extremely sophisticated, but they are a
necessary ingredient for intercommunication.
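The role of a protocol as an agreed-upon message format can be sketched with an invented, deliberately simple text protocol; real protocols such as HTTP or SMTP are far richer, but rest on the same idea:

```javascript
// Both sides of a connection agree that every message has the form
// "VERB argument". This parser is one half of that agreement; the
// protocol itself is invented for illustration.
function parseMessage(line) {
  var space = line.indexOf(" ");
  if (space < 0) {
    return { verb: line, arg: "" }; // verb-only message, e.g. "QUIT"
  }
  return {
    verb: line.slice(0, space),
    arg: line.slice(space + 1)
  };
}

var msg = parseMessage("GET /index.html");
// msg.verb === "GET", msg.arg === "/index.html"
```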
The ease of copying and transferring data on the Internet applies to any kind of data. Infringements of copyright are difficult to track, and illegal pirating of commercial software and any other kind of data becomes increasingly easy. In countries where certain modes of expression and information are prohibited from being viewed by the public, it becomes difficult to restrict the inflow of information from the Internet. It is nearly impossible to ensure that export-controlled data does not cross country "borders", and impossible to stop the wrong kinds of information from getting into the wrong hands. When people say, "You can find anything on the Internet," they do mean anything. Detailed
instructions on how to make explosives, and adult-oriented mature
content which most families would not want their children to see are a
couple of examples of the kind of things that can be found on the
Internet. The Internet provides the perfect setting for people across the
world with similar interests to unite and pool together their knowledge
and resources.
2. Networks
"Man is only man at the surface. Remove his skin, dissect, and
immediately you come to machinery."
--Valery
2.1 Introduction:
Each of the past three centuries has been dominated by a single technology. The 18th century was the era in which mechanical systems drove the Industrial Revolution; the 19th century was the age of the steam engine; and the 20th century was the age of information.
The gathering, processing, and distribution of information created a continual demand for better means of handling it. Starting from the telephone wires, the radio, and the television, the biggest jump was accomplished by the invention of the computer and of satellites.
The rapid progress of technology shortens distances and brings the world new measures and policies. The birth of entities -organizations, companies, institutions- with a worldwide presence would not be possible without the progress of computers merging with communication.
The computer network, which means the interconnection of a collection of autonomous computers, considered independently from the other aspects of distributed systems, is the subject of our concern. A step further, the fast development of the computer ended up generating the monster that is the Internet, which introduces new aspects of networking and new issues.
2.2 Hardware:
Well, the first thing is to create the physical connection between computers. On an individual scale this may consist of a single user using a phone line and modem to call another computer, which is either the final destination the user wants, or a computer connected to a local area network (LAN) that contains the destination the user wants to reach.
On a much larger scale, say a corporation with headquarters in both the United States and Japan that wants to link the two networks together, a little more hardware is needed: most likely some sort of satellite receiver/transmitter station at each network. This would then allow the VP of the US branch to connect directly with the VP at the branch in Japan. Basically, the hardware is just a way to tie the existing LANs, MANs, and WANs together.
As you can see just the physical connection starts out fairly small, but
as more and more users start to become connected, the job of the servers
and routers to route the information to the correct computer becomes
more and more of a problem. This is also when efficiency starts to
become a problem. Here's an example of what I mean. Suppose user "A" wants to communicate with user "B". It seems fairly straightforward: just go from this computer to that computer in this order. However, what is to stop the network from routing user "A's" traffic all the way around the network before it comes to little old user "B"? Although there is a physical connection to other computers along many different paths, there is no guarantee that the data will go the most efficient way.
The one nice thing about this model, though, is that if one connection gets severed, the whole network will not go down. This is a key element in creating the Internet: it has to be redundant.
2.3 Software:
This is just the start of a network. Now that we have a physical
connection between computers we will need to define some sort of
standard in communication.
There are so many different types of machines on this network that there has to be a common way for each one to communicate. That way, data that represents an "A" on one computer will also represent an "A" when it comes across the network to another computer. Likewise, if a picture is sent from one computer, the same picture should come out on a computer running something totally different.
We think of this as a standard that everyone on the Internet has to follow: the data that actually gets sent on the network will follow a certain pattern. For our purposes we will say the network is able to send binary (1's and 0's). So for anyone to be able to send information on the network, they will first have to break their information down into the correct representation. This can be dealt with in two ways.
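The break-down into binary can be sketched as follows; the 8-bit padding is our own illustrative choice:

```javascript
// Break text down into the binary (1's and 0's) representation the
// network carries: each character becomes a number, and each number
// a string of bits, padded to 8 bits here for illustration.
function toBits(text) {
  var bits = [];
  for (var i = 0; i < text.length; i++) {
    var b = text.charCodeAt(i).toString(2);
    while (b.length < 8) {
      b = "0" + b; // pad on the left to a full byte
    }
    bits.push(b);
  }
  return bits.join(" ");
}

var wire = toBits("A"); // "01000001"
```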
2.5 Protocols:
At some point we would like to send data from one point to another, and regardless of the hardware, many other constraints interfere with establishing and completing the communication, among them the varying encodings in use. At this level, the standardization of some measures -such as the ISO reference model- to govern communications and regulate the transformation of data from one network to another was a necessity. A packet of data issued to the transmission medium may need reformatting at some point when it leaves the local network before joining another, say a packet going from a Token Bus network to 10BaseT Ethernet. The device responsible for handling this conversion is a bridge (a multiport bridge is commonly called a switch).
Many expect that real-time multimedia will be the driving application for high-bandwidth networks. Some examples of this are tele-immersive virtual reality, 3D interactive gaming, distance education, and telemedicine. If these applications do become the dominant IP applications, then there is no question that there will be a demand for large-bandwidth networks.
2.7 Conclusion:
There are now three guarantees in life: death, taxes, and the demand for
more bandwidth. The increased consumption of bandwidth by businesses
and individuals is not a new revelation. But the rate at which demand for
this commodity is growing is unprecedented and changing the entire
complexion of the telecommunications infrastructure.
It's amazing to think that less than 20 years ago, the majority of
communications traffic was voice-related and carried over analog
transmission facilities. Electro-mechanical switching systems routed the
majority of analog voice traffic. As the telecommunications industry grew
and evolved in the 1980s, digital telephony and lightwave systems began
to play significant roles in the advancing transmission and switching
systems.
3. Web Languages
"The tie of language is perhaps the strongest and the most
durable that can unite mankind."
--Alexis de Tocqueville
3.1 Introduction:
One of the most recurrent ideas we have mentioned in our writings about
the internet is the central theme of communication. The reason the
internet was first developed was to speed up and improve communication
in the scientific community. Since the dawn of human intelligence,
language has been our primary means of communication. Language in
the context of computer science, and specifically in the context of the
internet is somewhat different, but is intrinsically the same. Its base
purpose is to convey information. Computer languages are primarily
useful for manipulating data to perform some computation and return
some output. Similarly, web languages are used for the expression of
computer programs that display information, automate web pages, create
dynamic information, and link web pages to other applications. Their
usefulness for web designers and programmers is infinite.
In the following sections, material will be presented which should help
clarify the usefulness of web languages. Several principles will be introduced which should govern not only the web languages that exist today, but also those of the future. These principles serve as a base for what kind of functionality should be inherent in any programming language used with the Internet. Since languages have so many different uses on the Internet, we will define a level of abstraction which should generalize all languages to fit into one of the following classifications: Markup Languages, Scripting Languages, and (compiled) Programming Languages.
How they interact with the user and system: server-side programs cannot manipulate any information on the client browser; they can rewrite a browser image, but they cannot modify an existing one (although they can change the contents of graphic files). In contrast, client programs interact easily with the user and, in the case of DHTML, can move, modify, add, or replace virtually anything in the browser.
XML: the newest markup language. Unlike HTML, which is concerned with appearances only, XML (eXtensible Markup Language) is concerned almost wholly with content as opposed to form. Thus the tags identify information that is relevant to a particular discipline. For example, the following XML might appear in medical applications (XML tag names may not contain spaces, so multi-word names use underscores):
<patient>
blah blah blah
<birthdate> blah </birthdate>
<sex> blah </sex>
<drug_history>
blah blah blah
<drug_allergies>
blah blah blah
</drug_allergies>
</drug_history>
<admissions>
blah blah blah
</admissions>
</patient>
3.2.3 DHTML
Here is a more complete description of DHTML. DHTML stands for
"Dynamic HTML". It is currently a leading edge development and, as
such, suffers from a lack of standards and very different
implementations. Its goal is to allow you to manipulate every element of
an HTML page through scripting languages, such as JavaScript.
The coming HTML 4.0 standard should solidify matters so that the differences between browsers are not so large and the elements are stable. HTML used to be a very simple language, but no longer: one publication's reference guide for HTML is now 670 pages long.
There are several ways to write DHTML that is compatible with both
browsers:
You can use the "Navigator" object to detect the type of browser and
then branch to different pages depending on which browser and
version it is
You can switch to different sections within a page depending on
which browser it is
You can develop an application programming interface (API) which fields generic calls and then selects the implementation based on which browser has been selected. Here is a snippet of code taken from "Dynamic HTML"
by Danny Goodman that does this.
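Since the Goodman snippet is not reproduced here, the following is a separate, simplified sketch of the detection idea only. In a browser the string would come from navigator.userAgent; the function takes it as a parameter so the logic is visible, and the substring checks are our own simplification:

```javascript
// Classify a browser from its user-agent string. IE's user agent
// also contains "Mozilla" (e.g. "Mozilla/4.0 (compatible; MSIE 4.0;
// ...)"), so the "MSIE" check must come first.
function detectBrowser(userAgent) {
  if (userAgent.indexOf("MSIE") !== -1) {
    return "IE";
  }
  if (userAgent.indexOf("Mozilla") !== -1) {
    return "Navigator";
  }
  return "unknown";
}

// In a page one could then branch, e.g.:
// if (detectBrowser(navigator.userAgent) === "IE") { /* IE path */ }
```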
Because Internet Explorer (IE) is much more developed in this area and
is much closer to the emerging standard, we will only show DHTML for
IE.
Style sheets let you define a set of characteristics for a group of HTML
elements, such as color, font, position, and visibility. There are a large
number of style sheet elements, a number of ways of incorporating Style
Sheets, and a huge number of ways of applying them. For our purposes
we will confine Style Sheets to the format and content as shown in the
following example:
<HTML>
<HEAD>
<STYLE TYPE="text/css">
#imageA {position:absolute; left:50px;
top:150px; width:120px; z-index:100}
#mytext {border:solid blue 5px;
color:green; background-color:coral}
#otherImage {position:absolute;
visibility:hidden}
</STYLE>
</HEAD>
...
</HTML>
...
<DIV ID="mytext">
Now is the time for all good men to come
to the aid of their party.
<IMG SRC="my.gif" >
</DIV >
For example,
var my_image = window.event.srcElement
If an event occurs, such as a mouse passing over an image or a piece of text, this returns the object activated by the event (here, a GIF image).
obj = my_image.parentElement.style
This returns the container of the object activated by the event. For
example, if a GIF image named "my_image" is contained in a DIV
element with an ID = XYZ, then a pointer to XYZ is returned by this
command.
if (obj == document.all.imageA.style)
dbg.document.write("Selected Container for imageA")
This tests to see if "obj" is the style object of "document.all.imageA".
Examples of DHTML
Populate the classes and objects with variables and functions (called
"methods") which describe their behavior
The following analogy might be helpful. Consider a large sheet of dough,
and suppose you have a number of cookie cutters in various shapes, such
as a Christmas tree, Donald Duck, a gingerbread man, and so on. Each
cookie cutter represents a class, and each time you stamp out a shape
with the cookie cutter you generate a new object of that class. All of the
objects in a class (all the pieces of dough stamped out with the same
cookie cutter) have similar properties, in that they share the shape of the
cookie cutter, but might also have individual properties, such as green
sprinkles versus blue sprinkles. So objects in a class share certain major
characteristics in common, but allow for individual variation as well.
You determine the classes you use based on your purposes. For example,
suppose you are developing a web-based system to keep track of people
in your department, and suppose, for the purposes of the program, that
students, staff, and faculty must be treated in very different ways. Then it
makes sense to set up your classes to reflect this natural division, as
follows:
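The original example is not shown here; the following is one way the division might look, sketched with the constructor functions JavaScript provided at the time (all field names are invented for illustration):

```javascript
// One cookie cutter per kind of person: each constructor is a class,
// and each "new" stamps out an object of that class.
function Person(name) {
  this.name = name;           // all people share a name
}

function Student(name, year) {
  Person.call(this, name);
  this.year = year;           // students are tracked by year
}

function Staff(name, role) {
  Person.call(this, name);
  this.role = role;           // staff by job role
}

function Faculty(name, dept) {
  Person.call(this, name);
  this.dept = dept;           // faculty by department
}

var s = new Student("Ann", 2);
// s.name === "Ann", s.year === 2
```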
C and C++ are also widely used but are compiled languages and lack the
regular expressions and "Here documents" that are so useful in Perl.
Invocation: you can invoke them through a URL, and can pass parameters to them through the URL followed by "?p1=value&p2=value&p3=value" (the GET method). The most usual way is by pressing the submit button associated with a form, which transfers control to the CGI program along with the parameters in the form. You can also invoke them through the #exec Server Side Include function in HTML scripts whose file extension reads .shtml as opposed to just .html.
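The "?p1=value&p2=value" parameter format mentioned above can be decoded in a few lines. A real CGI program would read the string from the QUERY_STRING environment variable; the parsing itself is the point of this sketch:

```javascript
// Decode a GET query string such as "p1=hello&p2=world" into an
// object of parameter names and values.
function parseQuery(queryString) {
  var params = {};
  var pairs = queryString.split("&");
  for (var i = 0; i < pairs.length; i++) {
    var eq = pairs[i].indexOf("=");
    if (eq < 0) continue;                  // skip malformed pairs
    var key = decodeURIComponent(pairs[i].slice(0, eq));
    var val = decodeURIComponent(pairs[i].slice(eq + 1));
    params[key] = val;
  }
  return params;
}

var p = parseQuery("p1=hello&p2=world");
// p.p1 === "hello", p.p2 === "world"
```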
Source: 1999 UW Computing & Communications & NMT Dept CS
Prepared by V.Radha
Post Graduate Program in Banking Technology @IDRBT Session 9 - Financial Networks
Mixing client and server side scripts: you can fairly easily combine client and server (CGI) scripts to work together. Here is an example of a JavaScript program that writes HTML which lets you specify the locations of people, but which calls a CGI program to make those changes permanent.
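Since the example itself is not included here, the following is a hedged sketch of the pattern only: a client-side script generates the form's HTML, while the form's ACTION points at a server-side CGI program that makes the changes permanent. The URL and all names below are hypothetical:

```javascript
// Client side: build an HTML form with one input per person. The
// ACTION URL is a hypothetical CGI program on the server that would
// record the submitted locations permanently.
function buildLocationForm(names) {
  var html = '<FORM METHOD="POST" ACTION="/cgi-bin/set_location.cgi">';
  for (var i = 0; i < names.length; i++) {
    html += '<INPUT NAME="loc_' + names[i] + '">';
  }
  html += '<INPUT TYPE="SUBMIT"></FORM>';
  return html;
}

// In a page this would be emitted with:
//   document.write(buildLocationForm(["alice", "bob"]));
var form = buildLocationForm(["alice"]);
```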