
Client/Server computing is a technology that yields solutions to many data management problems
faced by modern organizations. The term Client/Server describes a computing model for the
development of computerized systems.

A Client is any process that requests services from the server process.

A Server is a process that provides the requested services to the Client.
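The request/response relationship between these two processes can be sketched with Python's standard library (a minimal illustration: a socketpair stands in for a real network connection, and the upper-casing "service" is invented for the example):

```python
import socket
import threading

def server(conn):
    """Server process: provides the requested service (here, upper-casing text)."""
    request = conn.recv(1024).decode()
    response = request.upper()          # perform the service
    conn.sendall(response.encode())
    conn.close()

# A socketpair stands in for a network connection between client and server.
client_sock, server_sock = socket.socketpair()
t = threading.Thread(target=server, args=(server_sock,))
t.start()

# Client process: requests a service from the server.
client_sock.sendall(b"hello server")
reply = client_sock.recv(1024).decode()
print(reply)  # HELLO SERVER
t.join()
client_sock.close()
```

In a real deployment the two processes would run on different machines and communicate over TCP; the division of roles is the same.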

1.1.1 A Server for Every Client


1. File Server

All the files reside on the server machine. A File Server provides clients with access to records within
files on the server machine. File Servers are useful for sharing files across a network among the
different client processes requesting its services.

2. Application Server

This machine manages access to centralized application software; for example, a shared database.
When the user requests information from the database, the application server processes the
request and returns the result of the process to the user.

3. Mail Server

This machine manages the flow of electronic mail, messaging, and communication with
mainframe systems on large-scale networks.

4. Fax Server

Provides the facility to send and receive faxes through a single network connection. The Fax
Server can be a workstation with an installed fax board and special software, or a specialized
device dedicated to and designed for fax services.

5. Directory Services Server

It is found on large-scale systems with data that is distributed throughout multiple servers. This
machine functions as an organization manager, keeping track of what is stored where, enabling
fast and reliable access to data in various locations.

6. Database Server

Data resides on the server in the form of a SQL database. The Database Server provides clients with
access to the data in response to SQL requests. It shares the data residing in the database across a
network.

7. Transaction Server

The data and remote procedures reside on the server. The Server provides access to high-level
functions and implements efficient transaction processing. It shares data and high-level functions
across a network.

8. Groupware Server

Groupware Servers store semi-structured information such as text, images, mail, bulletin boards, and
workflow. A Groupware Server provides services that put people in contact with other people; because
"groupware" is an ill-defined classification, these services differ from product to product.

9. Object Application Server

Communicating distributed objects reside on the server. The object server primarily provides access to
those objects from the designated client objects. Object Application Servers are responsible for sharing
distributed objects across the network.
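The database server model can be illustrated with Python's built-in sqlite3 module standing in for a networked DBMS (a sketch only: the table and values are invented, and in a real deployment the SQL requests would travel over the network to a dedicated database server process):

```python
import sqlite3

# An in-memory database stands in for the data residing on the server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts (balance) VALUES (100.0)")

# A client's SQL request; the "server" executes it and returns result rows.
rows = conn.execute("SELECT id, balance FROM accounts").fetchall()
print(rows)  # [(1, 100.0)]
conn.close()
```

The essential point is the division of labor: the client formulates SQL, and the server executes it against the shared data and returns only the results.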

1.1.2 Client/Server: Fat or Thin


A thin client is one that performs a minimum of processing on the client side, while a fat client is
one that carries a relatively larger proportion of the processing load.

Fat Clients: This architecture places more application functionality on the client machine(s). It is used
in traditional Client/Server models.

Fat Servers: This architecture places more application functionality on the server machine(s). Typically, the
server provides more abstract, higher-level services.

A stateless server is a server that treats each request as an independent transaction that is unrelated to
any previous request.

A stateless server keeps no state information. With a stateless file server, the client must specify the
complete file name in each request, specify the location for reading or writing, and re-authenticate on
each request.

A stateful server remembers client data (state) from one request to the next.
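The contrast can be sketched as follows (a toy in-memory "file system" and invented client IDs, purely for illustration; early NFS, for example, was designed around the stateless model):

```python
# Hypothetical in-memory "file system" used to contrast the two designs.
FILES = {"notes.txt": b"alpha beta gamma"}

def stateless_read(filename, offset, length):
    """Stateless: every request carries the full context (name and position)."""
    return FILES[filename][offset:offset + length]

class StatefulServer:
    """Stateful: the server remembers each client's open file and position."""
    def __init__(self):
        self.position = {}          # client_id -> (filename, offset)

    def open(self, client_id, filename):
        self.position[client_id] = (filename, 0)

    def read(self, client_id, length):
        filename, offset = self.position[client_id]
        data = FILES[filename][offset:offset + length]
        self.position[client_id] = (filename, offset + length)  # remember progress
        return data

print(stateless_read("notes.txt", 6, 4))   # b'beta'

s = StatefulServer()
s.open("c1", "notes.txt")
print(s.read("c1", 5), s.read("c1", 5))    # b'alpha' b' beta'
```

The stateless design makes each request self-contained and the server easy to restart; the stateful design saves the client from resending context but obliges the server to track every open session.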
1.1.4 Servers and Mainframes
Mainframes are massive computer systems that deal mainly with business functions. Technically, these
giant machines run operating systems such as MVS, together with subsystems such as IMS and file
access methods such as VSAM.

• Application development: Mainframe systems are overstructured and time-consuming, and they
create application backlogs.
• Data manipulation: Mainframe systems have very limited data manipulation capabilities,
whereas these techniques are very flexible in Client/Server systems.
• System management: Mainframe systems are known to be integrated systems, but in the
case of Client/Server systems only a few tools are available for system management.
• Security: Mainframe systems are highly centralized, whereas Client/Server systems are
relaxed or decentralized.
• End user platform: Mainframe systems comprise dumb terminals that are character based,
single-task oriented, and of limited productivity.

1.1.6 Client/Server Topologies


A Client/Server topology refers to the physical layout of the Client/Server network in which all the clients
and servers are connected to each other.

• Single client, single server: This topology is shown in Fig. 1.2 below. In this
topology, one client is directly connected to one server.
• Multiple clients, single server: This topology is shown in Fig. 1.3 below. In this
topology, several clients are directly connected to only one server.
• Multiple clients, multiple servers: This topology is shown in Fig. 1.4 below. In this
topology, several clients are connected to several servers.
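The "multiple clients, single server" topology can be sketched with Python's socketserver module (a minimal echo service on the loopback interface; the handler and message contents are invented for the example):

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    """One server serving each connected client (multiple clients, single server)."""
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(b"echo:" + data)

# Port 0 lets the OS choose a free port; one thread per client connection.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

def client(msg):
    """A client directly connected to the single server."""
    with socket.create_connection((host, port)) as s:
        s.sendall(msg)
        return s.recv(1024)

# Several clients, one server.
replies = [client(b"a"), client(b"b"), client(b"c")]
print(replies)  # [b'echo:a', b'echo:b', b'echo:c']
server.shutdown()
server.server_close()
```

The multiple-servers topology extends this by giving each client a list of server addresses to choose among.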

1.1.7 Integration with Distributed Computing


Distributed computing is the term used for the implementation of technologies across heterogeneous
environments.

1.2 CLASSIFICATION OF CLIENT/SERVER SYSTEMS

1. First tier (client tier): The main responsibility of this tier is to receive user events and to control
the user interface and presentation of data.
2. Second tier (application-server tier): The complex application logic is loaded here and is available
to the client tier on request.
3. Third tier (database-server tier): This tier is responsible for data storage. This server mostly
operates on a relational database.
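The three-tier separation above can be sketched in miniature (assumed names and data, all in one process for illustration; in practice each tier runs on a different machine):

```python
# Third tier: data storage (a dict stands in for the relational database).
DATABASE = {"alice": 42, "bob": 17}

def data_tier(key):
    return DATABASE[key]

# Second tier: complex application logic, invoked on request from the client tier.
def application_tier(user):
    score = data_tier(user)
    return {"user": user, "score": score, "passed": score >= 40}

# First tier: user interface and presentation of data.
def client_tier(user):
    result = application_tier(user)
    return f"{result['user']}: {'PASS' if result['passed'] else 'FAIL'}"

print(client_tier("alice"))  # alice: PASS
print(client_tier("bob"))    # bob: FAIL
```

Note that the client tier never touches the data store directly; every access passes through the application tier, which is what makes the logic centrally maintainable.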
1.2.2.1 Transaction Processing Monitors
A Transaction Processing Monitor is a program that controls data transfer between client and server in
order to provide a consistent environment, particularly for online transaction processing (OLTP).

Transaction Routing: The TP Monitor can increase scalability by directing transactions to specific DBMSs.

Managing Distributed Transaction: The TP Monitor can manage transactions that require access to data
held in multiple, possibly heterogeneous, DBMSs.

Load balancing: The TP Monitor can balance client requests across multiple DBMSs on one or more
computers by directing client service calls to the least loaded server.

Funneling: In an environment with a large number of users, it may sometimes be difficult for all users to
be logged on to the DBMS simultaneously; the TP Monitor can funnel the users' requests through a
smaller number of shared connections to the DBMS.

Increased reliability: The TP Monitor acts as a transaction manager, performing the actions necessary to
maintain the consistency of the database, with the DBMS acting as a resource manager.

A TP Monitor is typically used in environments with a very heavy volume of transactions, where the TP
Monitor can be used to offload processes from the DBMS server.
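The load-balancing role described above can be sketched as a toy TP Monitor that routes each incoming request to the least-loaded of several servers (all names and the load metric are invented for the illustration):

```python
class Server:
    """A stand-in for a DBMS server, tracking how many requests it has handled."""
    def __init__(self, name):
        self.name = name
        self.load = 0

    def execute(self, request):
        self.load += 1
        return f"{self.name} handled {request}"

class TPMonitor:
    """Routes each client request to the least-loaded server."""
    def __init__(self, servers):
        self.servers = servers

    def submit(self, request):
        target = min(self.servers, key=lambda s: s.load)  # least loaded wins
        return target.execute(request)

monitor = TPMonitor([Server("db1"), Server("db2")])
results = [monitor.submit(f"txn{i}") for i in range(4)]
print(results)
# ['db1 handled txn0', 'db2 handled txn1', 'db1 handled txn2', 'db2 handled txn3']
```

A production TP Monitor would measure real load (open connections, response times) rather than a request counter, but the routing decision sits in the same place: between the clients and the DBMSs.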

Messaging is another way to implement the three-tier architecture.

N-tier computing obliges developers to design components according to a business schema that represents
entities, relationships, activities, roles, and rules. This enables them to distribute functionality across
logical and physical tiers, allowing better utilization of hardware and platform resources, as well as sharing
of those resources and the components that they support, to serve several large applications at the same
time.

A fat client embeds business logic in the application at the client level.

Driving Forces Behind Client/Server Computing

There are various forces that drive the move to client/server computing. Some of them are:
1. The changing business environment.
2. Globalization: The world as a market.
3. The growing need for enterprise data access.
4. The demand for end user productivity gains based on the efficient use of data resources.
5. Technological advances that have made client/server computing practical like
microprocessor technology, data communication and Internet, Database systems,
Operating Systems and Graphical User Interface, PC-based and end user application
software.
6. Growing cost and performance advantages of PC-based platforms.
7. Enterprise network management.

DRIVING FORCES
1. Business perspective.
2. Technology perspective.

Business Perspective

• For increased productivity.


• Superior quality.
• Improved responsiveness.
• Focus on core business.
The effective factors that govern the driving forces are given below:
1. The changing business environment: Business process engineering has become necessary
for competitiveness in the market, forcing organizations to find new ways to manage their
business despite fewer personnel, more outsourcing, a market-driven orientation, and rapid
product obsolescence.
2. Globalization: Conceptually, the world has begun to be treated as a market. Information
Technology plays an important role in bringing all trade onto a single platform by
eliminating the barriers.
3. The growing need for enterprise data access: One of the major MIS functions is to provide
quick and accurate data access for decision-making at many organizational levels.
4. The demand for end user productivity gains based on the efficient use of data resources:
The growth of personal computers is a direct result of the productivity gains experienced
by end users at all business levels.
Technology Perspective
Technological advances have made Client/Server computing practical through the proper use of the
following:
• Intelligent desktop devices.
• Computer network architectures.
• Technical advances such as microprocessor technology, data communication and the
Internet, database systems, operating systems, and graphical user interfaces.
• Trends in computer usage like:
1) Standardization: Trend towards open systems and adaptation of industry
standards, which includes:
- de facto standard: protocol or interface that is made public & widely
accepted. (e.g., SNA, TCP/IP, VGA)
- de jure standard: protocol or interface specified by a formal standard
making body. (e.g., ISO’s OSI, ANSI C)
2) Human-Computer Interaction (HCI): trend towards GUI, user Control.
3) Information dissemination: trend towards data warehousing, data mining.

Development Phases
There are six main phases in Client/Server system development.
1. Information System Infrastructure Self-study
2. Client/Server Infrastructure Definition
3. Selecting a Window of Opportunity
4. Management Commitment
5. Implementation
6. Review and Evaluation

CLIENT/SERVER SECURITY
A security threat is defined as a circumstance, condition, or event with the potential to cause
economic hardship to data or network resources in the form of destruction.

1. Physical security holes: These result when an individual gains unauthorized access to a
computer by obtaining some user's password.
2. Software security holes: These result from a bug in the software, through which the
system may be compromised into performing incorrectly.
3. Inconsistent usage holes: These may result when two different usages of a system
contradict each other over a security point.
