
Chapter 1: Introduction to Web Krupali Shah

History of the Internet:

The internet started as the ARPANET and connected mainframe computers on dedicated
connections. The second stage involved adding desktop PCs which connected through the
telephone wires. The third stage was adding wireless connections to laptop computers.
And currently, the internet is evolving to allow ubiquitous mobile phone
connectivity over cellular networks.

In its early days, the ARPANET was limited to the US military for
communication purposes. It then spread to universities and government
offices, which led to the rapid growth of the Internet among the general public.

Many smaller companies developed their own regional networks to bring the internet
into homes. Most of the networks responsible for distributing Internet connectivity
worldwide are owned and operated by communications corporations.

Web system Architecture:

Some entities related to the web (client-server) architecture can be explained as follows:

1. Client: A client can be defined as a requester of a particular service.

The client is a process (program) that sends a message to a server process
(program), requesting that the server perform a task (service). Client programs
usually manage the user-interface portion of the application, validate data entered
by the user, dispatch requests to server programs, and sometimes execute business
logic. The client-based process is the front-end of the application that the user
sees and interacts with. The client process contains solution-specific logic and
provides the interface between the user and the rest of the application system. The
client process also manages the local resources that the user interacts with such
as the monitor, keyboard, workstation CPU and peripherals. One of the key elements
of a client workstation is the graphical user interface (GUI). The window
manager, normally part of the operating system, detects user actions, manages
the windows on the display, and displays the data in the windows.

2. Server: A server can be defined as the provider of the service.

A server process (program) fulfills the client request by performing the task
requested. Server programs generally receive requests from client programs, execute
database retrieval and updates, manage data integrity and dispatch responses to
client requests. Sometimes server programs execute common or complex business
logic. The server-based process may run on another machine on the network. This
server could be the host operating system or a network file server; the server
then provides both file system services and application services. Or, in some cases,
another desktop machine provides the application services. The server process acts
as a software engine that manages shared resources such as databases, printers,
communication links, or high powered-processors. The server process performs the
back-end tasks that are common to similar applications.
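The request/response exchange between a client process and a server process can be sketched with plain TCP sockets. This is an illustrative toy, not a real application protocol: the function names, the message format, and the "SERVED:" reply prefix are invented for the example.

```python
import socket
import threading

def run_server(host="127.0.0.1", port=0):
    """Back-end process: listens for one request and returns a response."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 lets the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve_once():
        conn, _ = srv.accept()
        request = conn.recv(1024).decode()           # receive the client's request
        conn.sendall(f"SERVED: {request}".encode())  # dispatch the response
        conn.close()
        srv.close()

    threading.Thread(target=serve_once, daemon=True).start()
    return port

def client_request(port, message):
    """Front-end process: sends a request and waits for the server's reply."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.sendall(message.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply

port = run_server()
reply = client_request(port, "get balance")
print(reply)  # SERVED: get balance
```

The client manages the user-facing side (here, just composing the message), while the server owns the shared resource and performs the requested task; only the messages cross the network.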

Client/server is a computational architecture that involves client processes
requesting service from server processes.

The basic characteristics of client/server architectures are:

1. Combination of a client or front-end portion that interacts with the user, and a
server or back-end portion that interacts with the shared resource. The client
process contains solution-specific logic and provides the interface between the
user and the rest of the application system. The server process acts as a software
engine that manages shared resources such as databases, printers, modems, or high
powered processors.

2. The front-end task and back-end task have fundamentally different requirements for
computing resources such as processor speeds, memory, disk speeds and capacities,
and input/output devices.

3. The environment is typically heterogeneous and multivendor. The hardware
platform and operating system of client and server are not usually the same.
Client and server processes communicate through a well-defined set of standard
application program interfaces (APIs).

4. An important characteristic of client-server systems is scalability. They can be
scaled horizontally or vertically. Horizontal scaling means adding or removing
client workstations with only a slight performance impact. Vertical scaling means
migrating to a larger and faster server machine or to multiple servers.


There are basically two types of web architecture:

1. Two-Tier Architecture: Two-tier architecture is where a client talks directly
to a server, with no intervening server. It is typically used in small
environments (fewer than 50 users).

A common error in client/server development is to prototype an application in a
small, two-tier environment, and then scale up by simply adding more users to
the server. This approach will usually result in an ineffective system, as the
server becomes overwhelmed. To properly scale to hundreds or thousands of
users, it is usually necessary to move to a three-tier architecture.

2. Three-Tier Architecture: Three-tier architecture introduces a server (or an
"agent") between the client and the server. The role of the agent is
manifold. It can provide translation services (as in adapting a legacy
application on a mainframe to a client/server environment), metering services
(as in acting as a transaction monitor to limit the number of simultaneous
requests to a given server), or intelligent agent services (as in mapping a
request to a number of different servers, collating the results, and
returning a single response to the client).

URL (Uniform Resource Locator):

To identify web pages, an addressing scheme is needed. Basically, a web page is given
an address called a Uniform Resource Locator (URL). At the application layer level,
this URL provides the unique address for a web page, which can be treated as an
internet resource.

The general form of a URL is as follows:

protocol://domain_name:port/directory/resource


protocol defines the protocol being used, for example:
http: Hypertext Transfer Protocol
https: secure Hypertext Transfer Protocol
ftp: File Transfer Protocol
domain_name defines the domain name of the destination computer
port defines the port number of the connection
directory defines the corresponding directory on the server
resource identifies the particular file or document being requested

The scheme and domain name in a URL are not case sensitive, i.e.
HTTP://EN.WIKIPEDIA.ORG/wiki/URL and http://en.wikipedia.org/wiki/URL refer to
the same web page address (the path portion, however, may be case sensitive
depending on the server).
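The parts of the general URL form above can be pulled apart with Python's standard urllib.parse module; note that it normalizes the case-insensitive scheme and host to lowercase for you.

```python
from urllib.parse import urlsplit

# Split a URL into the parts named above:
# protocol, domain_name, port, and directory/resource.
parts = urlsplit("http://en.wikipedia.org:80/wiki/URL")
print(parts.scheme)    # http              -> protocol
print(parts.hostname)  # en.wikipedia.org  -> domain_name
print(parts.port)      # 80                -> port
print(parts.path)      # /wiki/URL         -> directory/resource

# The scheme and host are case-insensitive, so these address the same page;
# urlsplit lowercases both, making the comparison direct:
a = urlsplit("HTTP://EN.WIKIPEDIA.ORG/wiki/URL")
b = urlsplit("http://en.wikipedia.org/wiki/URL")
print(a.scheme == b.scheme and a.hostname == b.hostname)  # True
```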

DNS (Domain Name Server):

A DNS, as its name suggests, is a server that translates human-readable domain
names into the IP addresses of the machines that host web pages. The Domain Name
System makes it possible to assign domain names to groups of Internet users in a
meaningful way, independent of each user's physical location.

It translates human-meaningful domain names into the numerical (binary)
identifiers associated with networking equipment for the purpose of locating and
addressing these devices worldwide. An often-used analogy to explain the Domain
Name System is that it serves as the "phone book" for the Internet, translating
human-friendly computer hostnames into IP addresses.
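The "phone book" lookup is available from Python's standard socket module. The example below resolves localhost (from the local hosts file) so it works without network access; resolving a real site works the same way.

```python
import socket

# socket.gethostbyname asks the resolver (ultimately a DNS server) to
# translate a hostname into its numeric IPv4 address.
ip = socket.gethostbyname("localhost")  # resolved from the local hosts file
print(ip)  # 127.0.0.1

# A real hostname resolves the same way, but needs network access:
#   socket.gethostbyname("en.wikipedia.org")  -> one of the site's IP addresses
```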

HTTP (Hyper Text Transfer Protocol):

Hypertext Transfer Protocol (HTTP) is a communications protocol. Its use for
retrieving inter-linked text documents (hypertext) led to the establishment of
the World Wide Web.

HTTP development was coordinated by the World Wide Web Consortium (W3C) and the
Internet Engineering Task Force (IETF).


HTTP is a request/response standard between a client and a server. A client is
the end-user; the server is the web site. The client making an HTTP request,
using a web browser, spider, or other end-user tool, is referred to as the user
agent. The responding server, which stores or creates resources such as HTML
files and images, is called the origin server. In between the user agent and
origin server may be several intermediaries, such as proxies, gateways, and
tunnels.

HTTP stands for Hypertext Transfer Protocol. It is the network protocol used to
deliver virtually all files and other data (collectively called resources) on
the World Wide Web, whether they are HTML files, image files, query results, or
anything else. Usually, HTTP takes place through TCP/IP sockets (and this
discussion ignores other possibilities).

A browser is an HTTP client because it sends requests to an HTTP server (Web
server), which then sends responses back to the client. The standard (and
default) port for HTTP servers to listen on is 80, though they can use any port.

• It is a stateless protocol, i.e. neither the client nor the server stores
information about the state of the other side of an ongoing connection.
• HTTP sets up a new connection for each request (in HTTP/1.0; HTTP/1.1 added
persistent connections that can carry several requests).
• When a client issues a request to a server and the server returns the
response, the request is specified in ASCII format, whereas the response is
specified in MIME (Multipurpose Internet Mail Extensions) format.
• MIME defines various types of content such as text, image and audio.

HTTP request:
The general form of client request is,
Request_method Resource_address HTTP/version_number

Types of Request_method:

Method Name Description

GET It gets or retrieves a web page.
HEAD It requests the header information of the web page. In other words,
the response is the same as that of GET but with the body or the
contents of the page removed.
POST It posts additional data to the server in the HTTP request message.
The additional data is attached after the headers.
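The request line and the GET/HEAD methods above can be exercised against a small local origin server, so the example needs no network access. This is a sketch using Python's standard http.server and http.client modules; the page content and handler class are invented for the demo.

```python
import http.client
import http.server
import threading

class Hello(http.server.BaseHTTPRequestHandler):
    """Minimal origin server: one fixed HTML page."""
    def _respond(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        return body
    def do_GET(self):            # GET: headers followed by the body
        self.wfile.write(self._respond())
    def do_HEAD(self):           # HEAD: same headers, body omitted
        self._respond()
    def log_message(self, *args):  # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)

conn.request("GET", "/")   # sends: Request_method Resource_address HTTP/version
get_resp = conn.getresponse()
get_body = get_resp.read()
print(get_resp.status, len(get_body))    # 200 with a 31-byte body

conn.request("HEAD", "/")  # same headers as GET, but no body in the response
head_resp = conn.getresponse()
head_body = head_resp.read()
print(head_resp.status, len(head_body))  # 200 with an empty body
conn.close()
server.shutdown()
```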

Generation of dynamic web pages:

Static web pages do not change every time the page is loaded into the browser,
nor do they change when a user clicks a button. The only change you will see in
static pages is their loading and unloading, as happens when you click on a
hyperlink.

Static web pages (normal pages you build) always look the same, and the content
never changes unless you load a new page or you change the page yourself and
upload the new version onto the server. Dynamic pages, by contrast, are pages
that change dynamically. They can change every time they are loaded (without
you having to make those changes), and they can change their content based on
what the user does, such as clicking on some text or an image. One of the most
common types of dynamic web page is the database-driven type. This means that
you have a web page that grabs information from a database (the web page is
connected to the database by programming) and inserts that information into the
web page each time it is loaded. If the information stored in the database
changes, the web page connected to the database will also change accordingly
and automatically, without human intervention.
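The database-driven idea can be sketched with an in-memory SQLite database standing in for the bank's back-end store. The table, account name, and balances here are invented for the illustration.

```python
import sqlite3

# A toy database of account balances stands in for the bank's back-end store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (user TEXT, balance REAL)")
db.execute("INSERT INTO accounts VALUES ('alice', 1250.75)")
db.commit()

def render_page(user):
    """Build the page from the database each time it is requested, so the
    page changes automatically whenever the stored balance changes."""
    balance = db.execute(
        "SELECT balance FROM accounts WHERE user = ?", (user,)
    ).fetchone()[0]
    return f"<html><body>Balance for {user}: {balance:.2f}</body></html>"

print(render_page("alice"))  # page shows 1250.75

# Updating the database changes the page, with no edit to the page itself:
db.execute("UPDATE accounts SET balance = 900.00 WHERE user = 'alice'")
db.commit()
print(render_page("alice"))  # the same page now shows 900.00
```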

This is commonly seen on online banking sites, where you can log in (by
entering your user name and password) and check your bank account balance. Your
account information is stored in a database that has been connected to the web
page with programming, enabling you to see your banking information. A
database-driven site is the right choice when your information changes very
often, just as it does on a banking site.


Database driven sites can be built using several competing technologies, each with its
own advantages.


Some of those technologies/tools include:

• Cold Fusion.

There are two ways to create this kind of interactivity:

1. Using client-side scripting to change interface behaviors within a specific
web page, in response to mouse or keyboard actions or at specified timing
events. In this case the dynamic behavior occurs within the presentation.
2. Using server-side scripting to change the supplied page source between pages,
adjusting the sequence or reload of the web pages or web content supplied to the
browser. Server responses may be determined by such conditions as data in a
posted HTML form, parameters in the URL, the type of browser being used, the
passage of time, or a database or server state.

The result of either technique is described as a dynamic web page, and both may
be used simultaneously.

Cookies are a way for a server (or a servlet, as part of a server) to send some
information to a client to store, and for the server to later retrieve that
data from the client. Servlets send cookies to clients by adding fields to HTTP
response headers. Clients automatically return cookies by adding fields to HTTP
request headers.

Each HTTP request and response header is named and has a single value. For example, a
cookie could be a header named BookToBuy with a value 304qty1, indicating to the
calling application that the user wants to buy one copy of the book with stock number
304. (Cookies and their values are application-specific.)
Multiple cookies can have the same name. For example, a servlet could send two cookies
with headers named BookToBuy; one could have the value shown previously, 304qty1,
while the other could have a value 301qty3. These cookies would indicate that the user
wants to buy one copy of the book with stock number 304, and three copies of the book
with stock number 301.
In addition to a name and a value, you can also provide optional attributes such as
comments. Current web browsers do not always treat the optional attributes correctly,
so you should not rely on them.
A server can provide one or more cookies to a client. Client software, such as
a web browser, is expected to support twenty cookies per host, of at least four
kilobytes each.

When you send a cookie to a client, standard HTTP/1.0 caches will not cache the
page. Currently, the javax.servlet.http.Cookie class does not support HTTP/1.1
cache control headers.
Cookies that a client stores for a server are returned by the client to that server
and only that server. A server can contain multiple servlets; the Duke's Bookstore
example is made up of multiple servlets running within a single server. Because
cookies are returned to a server, servlets running within a server share cookies. The
examples in this section illustrate this by showing the CatalogServlet and ShowCart
servlet working with the same cookies.
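The servlet examples above are Java, but the header mechanics are the same everywhere. A sketch with Python's standard http.cookies module, reusing the BookToBuy name and 304qty1 value from the text:

```python
from http.cookies import SimpleCookie

# Server side: attach a cookie to the response headers.
response_cookie = SimpleCookie()
response_cookie["BookToBuy"] = "304qty1"  # name and value are application-specific
header_line = response_cookie.output()    # the Set-Cookie response header field
print(header_line)  # Set-Cookie: BookToBuy=304qty1

# Client side: the browser stores the cookie and returns it on later requests
# in a Cookie request header, which the server parses back out.
request_cookie = SimpleCookie()
request_cookie.load("BookToBuy=304qty1")
print(request_cookie["BookToBuy"].value)  # 304qty1
```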


How the Web Works?

Every computer that is connected to the Internet is given a unique address made
up of a series of four numbers between 0 and 255, separated by periods. These
numbers are known as IP addresses. IP (Internet Protocol) is the standard for
how data is passed between machines on the Internet. When you connect to the
Internet using an ISP you will be allocated an IP address, and you will often
be allocated a new IP address each time you connect.

Every Web site, meanwhile, sits on a computer known as a Web server (often
shortened to server). When you register a Web address, also known as a domain
name, you have to specify the IP address of the computer that will host the
site. When you visit a Web site, you are actually requesting pages from a
machine at an IP address, but rather than having to learn that computer's
numeric IP address, you use the site's domain name.

When you enter a Web address in your browser, the request goes to one of many
special computers on the Internet known as domain name servers (or name
servers, for short). These servers keep tables of machine names and their IP
addresses, so when you type in the name of the Google Web site, for instance,
it gets translated into a number that identifies the computers that serve that
site to you.
When you want to view any page on the Web, you must initiate the activity by
requesting a page using your browser. The browser asks a domain name server to
translate the domain name you requested into an IP address. The browser then sends a
request to that server for the page you want, using a standard called Hypertext
Transfer Protocol or HTTP (hence the http:// you see at the start of many Web
addresses). The server should constantly be connected to the Internet—ready to serve
pages to visitors. When it receives a request, it looks for the requested document and
returns it. When a request is made, the server usually logs the client’s IP address,
the document requested, and the date and time it was requested. An average Web
page actually requires the Web browser to request more than one file from the
Web server: not just the XHTML page, but also any images, style sheets, and
other resources used in the page. Each of these files, including the main page,
needs a URL (uniform resource locator) to identify it. A URL is a unique
address on the Web where that page, picture, or other resource can be found,
and is made up of the domain name, the name of the folder or folders on the Web
server that the file lives in (also known as directories on a server), and the
name of the file itself. For example, the Yahoo logo on the home page of the
Yahoo Web site has its own unique address, distinct from the address of the
main page itself. After the browser acquires the files, it inserts the images
and other resources in the appropriate places to display the page.
