
MIDDLE-WARE TECHNOLOGIES

UNIT I
NOTES
CHAPTER - 1
CLIENT / SERVER ARCHITECTURE

1.1 INTRODUCTION
The actual client/server model started gaining acceptance in the late 1980s. The term
client/server was first used in the 1980s in reference to personal computers (PCs) on a
network. The client/server software architecture provides a versatile, message-based and
modular infrastructure that is intended to improve usability, flexibility, interoperability and
scalability as compared to centralized, mainframe, time-sharing computing.
This unit introduces the reader to the client / server architecture. It explains the usage
and functionality of the different types of servers: file server, database server, group server
and, more recently, the object server. The unit also covers the different types of
client / server models, illustrated with examples.
1.2 LEARNING OBJECTIVES
At the end of this unit, the reader must be familiar with the following concepts:
- Evolution of client / server architecture
- Client / server architecture
- Characteristics of the client / server model
- Different types of servers
- Client / server on the Internet
- Different types of client / server models
1.3 EVOLUTION OF CLIENT / SERVER ARCHITECTURE
Before the advent of client / server architecture, computing environments consisted of
mainframes connected to dumb terminals, with all processing done at the mainframe. In mainframe
software architectures all intelligence (processing and data) was within the central host computer.
Users interacted with the host through terminals that captured keystrokes and sent that
information to the host. As the number of users increased, the power of the mainframes
also had to increase to cope with the increased processing requirements and user connectivity.
This era saw the development of very powerful mainframe computers capable of immense
processing power and providing support to hundreds of users. Computers were generally
large, costly systems owned by large corporations, universities, government agencies and
similar-sized institutions.
The advent of Personal Computers (PCs) saw drastic changes in the computing
scenario. Personal computers are normally operated by one user at a time to perform
general-purpose tasks such as word processing and data analysis using spreadsheet software.
PCs were also widely used for multimedia applications and other entertainment like games.
The software industry provided a wide range of products for use in personal computers,
targeted at both the expert and the non-expert user. Like the telephone, automobile, and

1 ANNA UNIVERSITY CHENNAI


DMC 1754 / 1945

television before it, the PC changed the way people communicate, shop, retrieve information
and entertain themselves.
Personal computers started to replace these dumb terminals, but the processing
continued to be done on the mainframe. The improved capacity of personal computers was
largely ignored or used only at an individual level. With so much computing power idle, many
organizations started thinking about sharing, or splitting, some of the processing demands
between the mainframe and the PC.
Client/server technology evolved out of this movement for greater computing control
and more computing value. Client/server refers to the way in which software components
interact to form a system that can be designed for multiple users. This technology is a
computing architecture that forms a composite system allowing distributed computation,
analysis, and presentation between PCs and one or more larger computers on a network.
Each function of an application resides on the computer most capable of managing that
particular function. There is no requirement that the client and server must reside on the
same machine. In practice, it is quite common to place a server at one site in a local area
network (LAN) and the clients at the other sites. The client, a PC or workstation, is the
requesting machine and the server, a LAN file server, mini or mainframe, is the supplying
machine. Clients may be running on heterogeneous operating systems and networks to
make queries to the server(s).
1.4 CLIENT / SERVER MODEL
Client/server describes the relationship between two computer programs in which one
program, the client, makes a service request from another program, the server, which fulfills
the request. The client/server idea can be used by programs within a single computer, but
it finds greater application in a network, where the client/server model provides a
convenient way to interconnect programs that are distributed efficiently across
different locations.
Businesses of various sizes have various computing needs. Larger businesses may
therefore need to use more computers than smaller businesses do. This type of architecture
provides a division of labor for the computing functions required by a large business. Under
the structure of the client-server architecture, a business’s computer network will have a
server computer, which functions as the “brains” of the organization, and a group of client
computers, which are commonly called “workstations”. The server part of the client-server
architecture will be a large-capacity computer, perhaps even a mainframe, which supports
multiple users and usually has a large amount of data and functionality stored on it. The client
portions of the client-server architecture are smaller computers that employees use to perform
their computer-based responsibilities.
Servers commonly contain data files and applications that can be accessed across the
network by workstations or employee computers. The server can be used to store the
organization’s data which could be accessed by the client computers. For example, client
requests for files from servers may be implemented using the File Transfer Protocol (FTP).
The server is not used for storage alone. Many networks have a client-server
architecture in which the server acts as a processing power source as well. In this scenario,
the client computers are virtually “plugged in” to the server and gain their processing power
from it. In this way, a client computer can simulate the greater processing power of a server
without having the requisite processor stored within its framework. Alternatively, the clients


may also access applications available in the server. Examples of such applications are
numerous; among the most popular are word processors and spreadsheets.
The client/server model has become one of the central ideas of network computing. In
a true client / server environment, both clients and servers must share in the business
processing. Most business applications being written today use the client/server model. In
the usual client/server model, one server is developed and activated to await client requests.
Typically, multiple client programs share the services of a common server program. Both
client programs and server programs are often part of a larger program or application.
1.5 CHARACTERISTICS OF CLIENT / SERVER MODEL
A client is defined as a requester of services and a server is defined as the provider of
services. Figure 1.1 illustrates a simple client-server architecture.

Figure 1.1 Client / Server Model (the client initiates a request; the server services the request)


A Server is simply a computer that is running software that enables it to serve specific
requests from other computers, called “clients”. The server is normally “dedicated” because
it is optimized to serve requests from the “client” computers quickly.
Any normal desktop or personal computer can act as a server; servers can range in
size from PCs to mainframes. However, standard server hardware normally includes:
- Support for large amounts of RAM
- Fast input and output
- Fast network cards
- Ability to support multiple processors
- Support for fault tolerance
Some of the important characteristics of a client / server architecture are:
- Asymmetrical protocols: there is a many-to-one relationship between clients and a
server. Clients always initiate a dialog by requesting a service. Servers wait passively
for requests from clients.

- Encapsulation of services: the server is a specialist that determines how a
client request has to be serviced. Servers can be upgraded without affecting clients
as long as the published message interface used by both is unchanged.
- Location transparency: the server is a process that can reside on the same machine
as a client or on a different machine across a network. Client/server software
usually hides the location of a server from clients by redirecting service requests. A
program can be a client, a server, or both.
- Message-based exchanges: clients and servers are loosely coupled processes that
can exchange service requests and replies using messages.
- Modular, extensible design: the modular design of a client/server application enables
that application to be fault-tolerant. In a fault-tolerant system, failures may occur
without causing a shutdown of the entire application. In a fault-tolerant client/server
application, one or more servers may fail without stopping the whole system as long
as the services offered on the failed servers are available on servers that are still
active. Another advantage of modularity is that a client/server application can respond
automatically to increasing or decreasing system loads by adding or shutting down
one or more services or servers.
- Platform independence: the ideal client/server software is independent of hardware
or operating system platforms, allowing you to mix client and server platforms.
Clients and servers can be deployed on different hardware using different operating
systems, optimizing the type of work each performs.
- Scalability: client/server systems can be scaled horizontally or vertically. Horizontal
scaling means adding or removing client workstations with only a slight performance
impact. Vertical scaling means migrating to a larger and faster server machine or
adding server machines.
- Separation of client/server functionality: client/server is a relationship between
processes running on the same or separate machines. A server process is a provider
of services. A client is a consumer of services. Client/server provides a clean
separation of functions.
- Shared resources: one server can provide services for many clients at the same
time, and regulate their access to shared resources.
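The characteristics above (clients initiate, servers wait passively, exchanges are message-based) can be sketched with plain TCP sockets. The following minimal Python illustration is not from the course material; the `service` function is an invented stand-in for whatever request handling a real server encapsulates.

```python
import socket
import threading

def service(request: str) -> str:
    # Encapsulation of services: the client never sees how the reply is produced.
    return request.upper()

def run_server() -> int:
    # The server binds, listens, and then waits passively for a client.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen()
    port = srv.getsockname()[1]

    def serve_one():
        conn, _addr = srv.accept()      # blocks until a client initiates a dialog
        request = conn.recv(1024).decode()
        conn.sendall(service(request).encode())  # message-based reply
        conn.close()
        srv.close()

    threading.Thread(target=serve_one, daemon=True).start()
    return port

def run_client(port: int, message: str) -> str:
    # The client always initiates the dialog by sending a request.
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(message.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply
```

Here one server thread handles a single request for brevity; a real server loops on accept and typically services many clients concurrently.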
The server can additionally provide many benefits, including:
- Optimization: server hardware is designed to service client requests quickly and
efficiently
- Centralization: files and data are available in a central location for common and easy
access by multiple clients; this also results in cheaper maintenance
- Data Integrity: since the data is centralized, it facilitates maintenance of data integrity
(accuracy and completeness)
- Security: multiple levels of permission and security of access by different clients
- Back-up: data and files can be backed up and restored quickly in case of problems
Client/server networking focuses primarily on the applications rather than the hardware.
The same machine can function as both a client and a server, depending on the software
configuration.
Computer transactions using the client/server model are very common. To be a true
client/server environment, both client and server must share in the business transaction
processing. A typical client transaction could be to check the balance of a bank account. A
program in the client computer would forward the request of the user to the main server at


the bank. The request would be serviced by a server program in the main server and the
information returned to the client, which displays it to the user.
Consider a more complex scenario, as illustrated in Figure 1.2:

Figure 1.2 Client / Server Transaction Processing (the user's client, the bank's main server and a database server)


1. The client program in the user PC could forward the user’s request to the bank’s
main server
2. The server program at the bank’s main server would in turn forward the request to
its own client program that sends a request to a database server at another computer
to retrieve the account balance
3. The server program at the database server would service the client request from
the bank’s main server and return the account balance to the bank’s main server
client
4. The bank’s main server program in turn serves it back to the client in the user’s
personal computer, which displays the information.
In the above example, the bank's main server acts as both client and server to process
the user's request for the bank account balance.
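The four steps above can be traced in a few lines of code. This in-process Python sketch uses ordinary function calls in place of network messages; the account identifier and balance are invented sample data.

```python
def database_server(account_id: str) -> float:
    # Step 3: the database server services the main server's request.
    accounts = {"A-100": 2500.0}        # invented sample data
    return accounts[account_id]

def main_server(request: dict) -> dict:
    # Steps 2 and 4: the main server is a server to the user's PC and,
    # at the same time, a client of the database server.
    balance = database_server(request["account"])
    return {"account": request["account"], "balance": balance}

def user_client(account: str) -> str:
    # Step 1: the client program forwards the user's request.
    reply = main_server({"account": account})
    return f"Balance of {reply['account']}: {reply['balance']}"
```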
The functions of the client and server are summarized below:
Client
- Initiates requests
- Waits for and receives replies
- Can connect to several servers at the same time
- Typically interacts directly with end-users using a graphical user interface
Server
- Waits for requests from clients

- Upon receipt of requests, processes them and then serves replies
- Usually accepts connections from a large number of clients
- Typically does not interact directly with end-users
1.6 CLIENT/SERVER ARCHITECTURE IN THE WEB
The World Wide Web (WWW), or simply the Web, revolves around the client/server
architecture. The client computer system uses browsers like Internet Explorer,
Netscape Navigator and Mozilla to interact with Internet servers using protocols.
These protocols help in the accurate transfer of data through requests from a browser and
responses from the server. There are many protocols available on the Internet. Commonly
used protocols on the web are:
- HTTP (Hyper Text Transfer Protocol), used to transfer web pages and the files contained
in web pages, such as images.
- FTP (File Transfer Protocol), used for transferring files from one computer to another.
- SMTP (Simple Mail Transfer Protocol), used for email.
Three models have been used to examine client/server communication on the web using
HTTP. Figure 1.3 shows the client / server architecture for delivering static HTML pages on
the Web.

Figure 1.3 Client / Server Architecture – Static HTML pages

Figure 1.4 Client / Server Architecture – Dynamic HTML pages


The client program (browser) requests an HTML file stored on the remote server. The
server program, which processes the client request, locates this file and passes it to the
client. The client program then displays the file on the client machine. In this case, the
HTML page is static. Static pages do not change until the developer modifies them.
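What the server program does for a static page can be reduced to "map the requested path to a file and return its bytes". The following Python sketch is illustrative only; real web servers also handle headers, MIME types and security checks.

```python
from pathlib import Path

def serve_static(doc_root: str, request_path: str) -> bytes:
    # Map the requested URL path onto a file under the document root.
    target = Path(doc_root) / request_path.lstrip("/")
    if not target.is_file():
        return b"404 Not Found"         # simplified error handling
    return target.read_bytes()          # the file is returned unchanged
```

Static here means the same bytes are returned for every request until the developer changes the file.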
Figure 1.4 shows the client / server architecture for delivering dynamic web pages using
CGI script. The content of such web pages will depend on the user input.
The Common Gateway Interface (CGI) is a standard for interfacing external
applications with information servers, such as HTTP or Web servers. A plain HTML
document that the Web daemon (server program) retrieves is static: it exists in a constant
state, a text file that doesn't change. A CGI program, on the other hand, is executed in
real time, so it can output dynamic information. A CGI program can be used to generate
dynamic web pages. For example, suppose a web page has a search option to look up
information and the word 'computers' is typed in as the search query. The client browser
will send the request to the server. The server checks the headers, locates the necessary
CGI program and passes it the data from the request, including the search query
"computers". The CGI program processes this data and returns the results to the server.
The server then sends the results, formatted in HTML, to the browser, which in turn
displays the HTML page. Thus the CGI program generates a dynamic HTML page. The
contents of the dynamic page depend on the query passed to the CGI program.
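The search example can be sketched as a CGI-style handler: the server hands the program the query string, and the program writes back headers, a blank line, and the generated body. This Python sketch is a simplified stand-in; a real CGI program would build the result list from an actual data store.

```python
from urllib.parse import parse_qs

def cgi_respond(query_string: str) -> str:
    # The server passes the request data (here just the query string) to the CGI program.
    params = parse_qs(query_string)
    term = params.get("q", [""])[0]
    # The page is built at request time, so the output is dynamic.
    body = f"<html><body>Results for: {term}</body></html>"
    # CGI output: headers, a blank line, then the document body.
    return "Content-Type: text/html\r\n\r\n" + body
```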

Figure 1.5 Client / Server Architecture – Server-side Scripting


The third model, illustrated in Figure 1.5, also involves a dynamic response to the user's request.
In this model, the dynamic response is generated by the use of server-side technologies.
Server-side scripting is a web server technology in which a user's request is fulfilled by
running a script directly on the web server to generate dynamic HTML pages. It is usually
used to provide interactive web sites that interface to databases or other data stores. The
web page is generated using data retrieved from such data stores as per the user's query. There
are many technologies available for server-side scripting. These include Active Server
Pages (ASP) and ASP.NET from Microsoft; JavaServer Pages (JSP), a Java-based system
for embedding Java-related code in HTML pages; PHP, a widely used general-purpose
scripting language for embedding code into HTML pages; and other commercial systems like
ColdFusion.
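In Python, server-side scripting is commonly exposed through the WSGI gateway standard rather than the page-embedding style of ASP, JSP or PHP, but the idea is the same: a script on the server builds the HTML for each request. A minimal sketch follows; the naive query-string handling is for illustration only.

```python
def app(environ, start_response):
    # A minimal WSGI application: the web server calls this for every request.
    query = environ.get("QUERY_STRING", "")
    # Naive parsing for illustration; real code would use urllib.parse.parse_qs.
    name = query[len("name="):] if query.startswith("name=") else "world"
    page = f"<html><body>Hello, {name}!</body></html>".encode()
    start_response("200 OK", [("Content-Type", "text/html")])
    return [page]           # the dynamic HTML body, built per request
```

Any WSGI-capable web server can host this application; the server handles the HTTP mechanics while the script supplies the dynamic content.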

1.7 TYPES OF SERVERS
1.7.1 File Server
A File Server is a high-speed computer in a network that stores the programs and
data files shared by users. It acts like a remote disk drive. The client / server
architecture for a File Server is given in Figure 1.6.

Figure 1.6 Client / Server Architecture – File Server


The objectives of a File Server are:
- To promote sharing of files (computer programs and/or data)
- To encourage indirect or implicit (via programs) use of remote computers
- To shield a user from variations in file storage systems among hosts
- To transfer data reliably and efficiently
File Transfer Protocol (FTP) for example uses client/server interactions to exchange
files between systems. An FTP client requests a file that resides on another system. An FTP
server on the system where the file resides handles the client’s request. The server gets
access to the file and sends the file back to the client’s system.
1.7.2 Database Server
A database server is a computer in a LAN dedicated to database storage and retrieval. The database server
is a key component in a client/server environment. It holds the Database Management
System (DBMS) and the databases. Upon requests from the client machines, it searches
the database for selected records and passes them back over the network. A database
server and a file server may be one and the same, because a file server often provides database
services. However, the term implies that the system is dedicated to database use only and is
not a central storage facility for applications and files.
An example illustrating the use of a database server is given in Figure 1.7. A
team of software professionals developing an information system will share a common
database to create their schemas, programs and documents. The workgroup members use
PCs that are linked by way of a local area network (LAN). The database is managed
by a computer, called the database server, which is part of the network. Data is retrieved from
the database by the database server as per user requests.


Depending on the size of the organization, either a single database server or multiple
database servers could be used to service users' requests. Depending on the business
requirement, individual database servers could be used for each department. For a marketing
department, the database would store data concerning customers, orders, sales persons and so on.
Such departmental database servers would be linked to enable access to data stored in any
of the servers from anywhere in the organization.

Figure 1.7 Client / Server Architecture – Database Server
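The marketing-department example can be sketched with Python's built-in sqlite3 module standing in for a departmental database server. (sqlite is an embedded engine rather than a networked server, so this only illustrates the query-and-result flow; the table and names are invented.)

```python
import sqlite3

def make_marketing_db() -> sqlite3.Connection:
    # Stand-in for the marketing department's database (in memory here).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (name TEXT, region TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)",
                   [("Asha", "south"), ("Ravi", "north"), ("Mala", "south")])
    return db

def fetch_customers(db: sqlite3.Connection, region: str) -> list:
    # The server side: search for the selected records and pass only those back.
    cur = db.execute(
        "SELECT name FROM customers WHERE region = ? ORDER BY name", (region,))
    return [name for (name,) in cur.fetchall()]
```

In a real deployment the client would send the query over the network and receive only the matching rows, not the whole database.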


1.7.3 Application Server
The difference between a file / database server and an application server is that the
file server stores the programs and data, while the application server runs the programs and
processes the data.
In a two-tier client/server environment, which is most common, the user’s machine
performs the business logic as well as the user interface, and the server provides the database
processing. In a three-tier environment, a separate computer (application server) performs
the business logic and some data access, although some part may still be handled by the
user’s machine. Application servers are typically used for complex transaction-based
applications. An application server in a three-tier client/server environment provides middle
tier processing between the user’s machine and the database management system (DBMS).
Two-tier and three-tier architectures are explained in greater detail in section 1.8.
1.7.4 Web Server
A Web server is a computer system that delivers Web pages to browsers, and other
files to applications, via the HTTP protocol. It includes the hardware, operating system, Web
server software, communication protocols such as TCP/IP, and site content (Web pages
and other files), as shown in Figure 1.8.
The Web server's traditional function has been to serve static HTML and, more recently,
XML. A web server serves web pages to clients across the Internet or an intranet. If the
Web server is used internally and not by the public, it may be called an "intranet server."
The web server hosts the pages, scripts, programs, and multimedia files and serves them
using HTTP.



Figure 1.8 Client / Server Architecture – Web Server


A Uniform Resource Locator (URL) is a string of characters that represents a "pointer"
to a resource available on the Web. A common way to get to a Web site is to enter the URL
of its home page file in the Web browser's address line. For example, if the URL
http://www.servername.com/index.html is requested by a client, this would be translated into a
connection to www.servername.com using the HTTP protocol. The web server will locate the
file index.html and send it to the client.
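The translation of a URL into a host to contact and a file to locate can be sketched with the standard library's URL parser. (Treating index.html as the default page is an assumption; which file a server serves as the home page is configurable.)

```python
from urllib.parse import urlsplit

def resolve_request(url: str):
    # Split the URL into the host to connect to and the file to locate.
    parts = urlsplit(url)
    filename = parts.path.lstrip("/") or "index.html"  # assumed default page
    return parts.hostname, filename
```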
Any computer can be turned into a Web server by installing server software and
connecting the machine to the Internet. Many different servers are in use on the Internet.
Some of the more popular ones are:
- Apache, a free open source server
- Internet Information Server (IIS), Microsoft's web server
- Netscape Enterprise Server, a popular commercial server
Web servers also have the capability of logging information like client requests and
server responses. The log files can be analyzed to collect statistics on client requests.
Many public web servers also implement features such as:
- Verifying the identity of the client before allowing access to web content
- Support for both static and dynamic content delivery by supporting one or more
standard interfaces like CGI, PHP, ASP, etc.
- Security to allow the web contents to be encrypted and sent to the clients using
protocols such as HTTPS (HTTP over Secure Socket Layer (SSL))
- Content compression to reduce the size of the responses for lower bandwidth usage
- Bandwidth throttling to limit the speed of the responses in order not to saturate the
network and to be able to serve more clients
1.7.5 Object Server
Business objects are intelligent components that encapsulate the data and business
logic needed to carry out a business function. Business objects provide a way to describe

application-independent objects like a customer, order, payment or patient. Object Servers
are used to make business logic available to different kinds of clients in a distributed
environment. Object servers support distributed objects, and these technologies
support interoperability across languages and platforms, as well as enhancing the maintainability
and adaptability of the system. There are currently two prominent distributed object
technologies: the Common Object Request Broker Architecture (CORBA) and COM/DCOM
(Component Object Model and Distributed COM). These distributed object technologies
provide a range of component services, from services promoting component
integration on a single platform to component interaction across heterogeneous networks.
The distributed/collaborative enterprise architecture that emerged in 1993 is a software
architecture based on Object Request Broker (ORB) technology. It uses shared, reusable
business models (not just objects) on an enterprise-wide scale. The benefit of this architectural
approach is that standardized business object models and distributed object computing are
combined to give an organization flexibility to improve effectiveness organizationally,
operationally, and technologically. An enterprise is defined here as a system comprised of
multiple business systems or subsystems. More details about CORBA and ORB are given in
later sections of this courseware.
1.7.6 Other Types of Servers
There are a number of other servers emerging in today’s scenario. To list a few of
such servers:
Chat Servers
Chat servers enable a large number of users to exchange information in an environment
similar to Internet newsgroups that offer real-time discussion capabilities.
Fax Servers
A fax server is an ideal solution for organizations looking to reduce incoming and
outgoing telephone resources but that need to fax actual documents.
FTP Servers
One of the oldest of the Internet services, File Transfer Protocol makes it possible to
move one or more files securely between computers while providing file security and
organization as well as transfer control.
Groupware Servers
A groupware server is software designed to enable users to collaborate, regardless of
location, via the Internet or a corporate intranet, and to work together in a virtual atmosphere.
List Servers
List servers offer a way to better manage mailing lists, whether they are interactive
discussions open to the public or one-way lists that deliver announcements, newsletters, or
advertising.
Mail Servers
Mail servers move and store mail over corporate networks via LANs and WANs and
across the Internet.
News Servers
News servers act as a distribution and delivery source for the thousands of public
news groups currently accessible over the USENET news network. USENET is a worldwide
bulletin board system that can be accessed through the Internet or through many online
services The USENET contains more than 14,000 forums called newsgroups that cover
every imaginable interest group. It is used daily by millions of people around the world.

Proxy Servers
Proxy servers sit between a client program (typically a Web browser) and an external
server (typically another server on the Web) to filter requests, improve performance, and
share connections.
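One way a proxy improves performance is by caching: repeated requests are answered from a local copy, and the external server is contacted only on a miss. A schematic Python sketch, where the `fetch` callable stands in for the real connection to the origin server:

```python
def proxy(url: str, cache: dict, fetch) -> str:
    # Serve from the cache when possible; forward to the origin otherwise.
    if url not in cache:
        cache[url] = fetch(url)   # only cache misses reach the external server
    return cache[url]
```

Real proxies also expire cached entries and apply filtering rules before forwarding; both are omitted here.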
Telnet Servers
A Telnet server enables users to log on to a host computer and perform tasks as if
they’re working on the remote computer itself.
1.8 TYPES OF CLIENT/SERVER MODELS
Every client/server application contains three functional units:
- Presentation logic or user interface (for example, ATM machines)
- Business logic (for example, software that enables a customer to request an account
balance)
- Data (for example, records of customer accounts)
These functional units can reside on either the client or on one or more servers in the
application. Depending on the way the application is split, many possible variations of the
client / server architecture can occur. Middleware is the set of software used to communicate
between the tiers.
Client/server architecture can be deployed to meet the processing needs in different
situations.
- Small Business: the client, the middleware software, and most of the business
services operate on the same computer system, which may be a desktop or even a
laptop. Examples of such usage could be small shops, a doctor's office, a dentist's
office, a home office and a business traveler who frequently works on a laptop
computer.
- Small to Medium Business and Corporate departments: a LAN-based single-server
deployment may be adequate to meet the information needs. Users of this type of
application include small businesses, such as a medical practice with several doctors,
a multi-department corporation, or a bank with several branch offices. In this type
of application, multiple clients talk to a local server. Administration is simple and
failures can be detected easily.
- Large enterprises: multiple servers that offer diverse functionality may be required.
Multiple servers can reside on the Internet, intranets, and corporate networks, all of
which are highly scalable. Servers can be partitioned by function, resources, or
databases, and can be replicated for increased fault tolerance or enhanced
performance. This model provides a great amount of power and flexibility. The
development of the application, and the partitioning of work among the various servers,
is critical for the successful deployment of this client/server model.
1.8.1 Two-tier Model
The two-tier model is the basic client / server model. In a two-tier model, the client
talks directly to the server. The user (client) machine commonly performs the business
logic as well as the user interface, while the server is commonly a database or file server
that provides the data needed for the business processing. Alternatively, the business
logic can be divided between the client and server, as shown in Figure 1.9. File servers and
database servers with stored procedures are examples of two-tier architecture.


The two-tier client/server architecture is a good solution for distributed computing
when work groups are defined with a small number of users, limited to say about 100 people
interacting on a LAN simultaneously. It does have a number of limitations: when the number
of users exceeds the LAN limit, performance begins to deteriorate.
In the Internet processing environment, the first tier, the client, generally operates in a
web browser environment. The server side is where the functionality of the
information service is supported; the information service provides data and responds to user
queries. The client / server architecture for this type of environment is given in Figure 1.10.

Figure 1.9 Two-Tier Client / Server Architecture


The server side is implemented as a Web Server (Internet Information Server (IIS),
Apache, Tomcat, Java Web Server (JWS)) operating on different environments (Windows,
UNIX, Sun, HP). In this model the server delivers static HTML pages over HTTP to an
unlimited number of users.

Figure 1.10 Two-Tier Client / Server Architecture on the Internet (the client sends requests to the web server, which returns responses)

1.8.2 Three-tier Model
NOTES The three tier architecture emerged to overcome the limitations of the two tier
architecture. In the three tier architecture, a middle tier was added between the user system
interface client environment and the database management server environment. The business
logic can reside in the middle tier, separate from the data and user interface. The client /
server with business logic in a separate tier is given in Figure 1.11. In this way, processes
can be managed and deployed separately from the user interface and the database. Also,
3-tier systems can integrate data from multiple sources.

Figure 1.11 Three -Tier Client / Server Architecture


There are a variety of other ways of implementing this middle tier, such as transaction
processing monitors, message servers, or application servers. The middle tier can perform
queuing, application execution, and database staging.
For example, if the middle tier provides queuing, the client can deliver its request to the
middle layer and disengage because the middle tier will access the data and return the
answer to the client. In addition the middle layer adds scheduling and prioritization for work
in progress.
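The queuing behaviour described above can be sketched in a few lines. This is a hypothetical, in-process model (the data, queries, and thread structure are invented for the example): the client drops a prioritized request into the middle tier's queue and disengages; the middle tier accesses the data tier on the client's behalf and delivers the answer later.

```python
import queue
import threading

# Hypothetical sketch of a queuing middle tier: the client enqueues a request
# and is immediately free; the middle tier fetches the data and replies later.

DATA = {"q1": "result-1", "q2": "result-2"}           # stands in for the database tier

requests = queue.PriorityQueue()                      # lower number = higher priority

def middle_tier():
    while True:
        priority, query, reply_box = requests.get()
        if query is None:                             # shutdown sentinel
            break
        reply_box["answer"] = DATA.get(query)         # third-tier access for the client
        reply_box["done"].set()

worker = threading.Thread(target=middle_tier)
worker.start()

# The client drops off its request and disengages.
box = {"done": threading.Event()}
requests.put((1, "q2", box))

box["done"].wait()                                    # pick up the answer when ready
print(box["answer"])                                  # -> result-2
requests.put((99, None, None))                        # stop the middle tier
worker.join()
```

The priority number in each queued tuple is what gives the middle tier its scheduling and prioritization role.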
The three tier client/server architecture has been shown to improve performance for
groups with a large number of users (in the thousands) and improves flexibility when compared
to the two tier approach. Flexibility in partitioning can be as simple as “dragging and dropping”
application code modules onto different computers in some three tier architectures.
Three tier architecture with transaction processing monitor technology:
One way of implementing the three tier architecture is by having a middle layer consisting
of Transaction Processing (TP) monitor technology. The TP monitor technology is a type of
message queuing, transaction scheduling, and prioritization service where the client connects
to the TP monitor (middle tier) instead of the database server. The transaction is accepted
by the monitor, which queues it and then takes responsibility for managing it to completion,
thus freeing up the client. When the capability is provided by third party middleware vendors
it is referred to as “TP Heavy” because it can service thousands of users. When it is

embedded in the DBMS (and could be considered a two tier architecture), it is referred to
as “TP Lite” because experience has shown performance degradation when over 100 clients
are connected. TP monitor technology also provides
 the ability to update multiple different DBMS in a single transaction
 connectivity to a variety of data sources including flat files, non-relational DBMS,
and the mainframe
 the ability to attach priorities to transactions
 robust security
Using a three tier client/server architecture with TP monitor technology results in an
environment that is considerably more scalable than a two tier architecture with direct client
to server connection. For systems with thousands of users, TP monitor technology (not
embedded in the DBMS) has been reported as one of the most effective solutions.
Three tier with message server
Messaging is another way to implement three tier architectures. Messages are prioritized
and processed asynchronously. Messages consist of headers that contain priority information,
and the address and identification number. The message server connects to the relational
DBMS and other data sources. The difference between TP monitor technology and message
server is that the message server architecture focuses on intelligent messages, whereas the
TP Monitor environment has the intelligence in the monitor, and treats transactions as dumb
data packets. Messaging systems are good solutions for wireless infrastructures.
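The idea of an "intelligent message" carrying its own header can be illustrated as follows. This is a hypothetical sketch (the message fields, destinations, and ids are invented): each message carries a header with priority information, a destination address, and an identification number, and the message server simply orders its work by that header.

```python
import heapq

# Hypothetical message format: the header carries priority, destination
# address, and id; the message server processes messages in priority order.

def make_message(priority, dest, msg_id, body):
    return {"header": {"priority": priority, "dest": dest, "id": msg_id},
            "body": body}

inbox = []
for m in (make_message(5, "ordersDB", 101, "low-priority report"),
          make_message(1, "ordersDB", 102, "urgent trade"),
          make_message(3, "auditlog", 103, "routine entry")):
    heapq.heappush(inbox, (m["header"]["priority"], m["header"]["id"], m))

# Messages are picked up asynchronously, highest priority (lowest number) first.
processed = [heapq.heappop(inbox)[2]["header"]["id"] for _ in range(len(inbox))]
print(processed)   # -> [102, 103, 101]
```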
Three tier with an application server
The three tier application server architecture allocates the main body of an application
to run on a shared host rather than in the user system interface client environment. The
application server does not drive the GUIs; rather it shares business logic, computations,
and a data retrieval engine. Advantages are that with less software on the client there is less
security to worry about, applications are more scalable, and support and installation costs
are less on a single server than maintaining each on a desktop client. The application server
design should be used when security, scalability, and cost are major considerations.
Three tier with an Object Server Architecture
The middle tier can be designed to be an Object Server that clients can interface to
access application objects for business processing. The server objects provide an integrated
model of the disparate data sources and back-end applications. The client objects can be
insulated from the need to know about stored procedures and databases that are present in
the third tier. The server objects communicate with the third tier to process and deliver the
client requests.
Figure 1.12 illustrates a three-tier model in the Internet. The first tier is the web client
(browser), the second tier is the Web server and the third tier is the Application server.
1.8.3 Multi-tier Model
Multi-tier models have four or more tiers. Figures 1.13, 1.14 and 1.15 illustrate different
types of four tier model over the Internet.


Figure 1.12 Client / Web / Database Server

Figure 1.13 Client / Web / Application / Database Servers

Figure 1.14 Client / Web / Transaction / Database Servers


Figure 1.15 Client / Web / Application / Transaction / Database Servers


1.9 ADVANTAGES / DISADVANTAGES OF CLIENT/SERVER MODEL
1.9.1 Advantages
 In most cases, a client-server architecture enables the roles and responsibilities of
a computing system to be distributed among several independent computers that
are known to each other only through a network. This creates an additional advantage
to this architecture: greater ease of maintenance. For example, it is possible to
replace, repair, upgrade, or even relocate a server while its clients remain both
unaware and unaffected by that change.
 All the data are stored on the servers, which generally have far greater security
controls than most clients. Servers can better control access and resources, to
guarantee that only those clients with the appropriate permissions may access and
change data.
 Since data storage is centralized, updates to those data are far easier to administer
than would be possible when data is distributed over several “peers”, each of which
is an independently managed computer system. Data updates may need to be
distributed and applied to each “peer” in the network, which is both time-consuming
and error-prone, as there can be thousands or even millions of peers.
 Many mature client-server technologies are already available which were designed
to ensure security, ‘friendliness’ of the user interface, and ease of use.
 A server can function with multiple clients of different capabilities.
1.9.2 Disadvantages
 Traffic congestion on the network has been an issue since the inception of the
client-server paradigm. As the number of simultaneous client requests to a given
server increases, the server can become severely overloaded.
 Under client-server, should a critical server fail, clients’ requests cannot be fulfilled.
Hence, lack of robustness is a cause for concern.


1.10 CONCLUSION
In a client / server architecture, clients request information or a service from a server
and that server responds to the client by acting on that request and returning the results. The
client and server could be on the same computer system or more generally the client and
server applications reside on different computer systems accessed over a network. The
client and server are two separate devices which can work over a LAN, long-distance
WANs or the Internet.
This approach to networking has proven to be a cost-effective way to share data
between tens or hundreds of clients. Client/server is just one approach to distributed
computing. The client/server model has been popular for a long time, but peer-to-peer
networking and grid technology have emerged as viable alternatives for distributed computing.
HAVE YOU UNDERSTOOD QUESTIONS?
a) What is a client / server model?
b) How did the client / server architecture evolve?
c) What are the roles and functions of client and server?
d) What are the important characteristics of the client / server architecture?
e) What are the different types of servers and what are the functionalities provided by
each type of server?
f) What are the different types of client/server models?
g) What are the advantages and disadvantages of client/server architecture?
SUMMARY
 In a client/server model (also known as client/server architecture), processing is
shared between the client and server
 The client issues the request and the server services the request
 The client/server technology has evolved from mainframe computing environment.
The advent of PC (of low cost and with processing power) saw the replacement of
dumb terminals of mainframes with PC’s as clients capable of sharing the processing
load.
 The client and server can be on the same computer or different computer systems.
Typically, the client and server are connected over a LAN or WAN using standard
communication protocols.
 The important characteristics of a client / server environment includes location
transparency of files, data and application, message-based exchanges and modular
extensible designs that can be scaled to support numerous clients and multiple servers.
 Servers can be classified based on the type of service they provide: File servers,
Database servers, Application servers, Object servers etc.
 Different types of client / server models can be deployed to match the processing
needs of an organization or Institution. These are typically 2-tier, 3-tier and multi-
tier models.
 Some of key advantages of client / server architecture include sharing of processing
load, sharing of resources between multiple users, easy maintenance and security
of access.


 One of the key disadvantages of a client / server environment is the traffic congestion
that can be caused in the network due to heavy and simultaneous client requests to
the servers
 Middleware is the software layer that connects the client and server over a network.
EXERCISES
Part I
1. In mainframe environment processing & data handling take place at
a) Central host computer b) Local host
c) Local Computer d) None
2. Server can interact simultaneously with how many clients
a) Only one b) Only Two
c) Maximum of three d) Many
3. Server commonly contains
a) Only system programs b) Data files, applications
c) Only application programs d) System software & application programs
4. Client computer system interacts with the Internet servers using the program
a) Complier b) Operating System
c) Browser d) Machine Language Program
5. Web server is accessed using its
a) Index.html b) main.html
c) URL d) Ethernet address
6. Network protocol used to access web server
a) TCP/IP b) Application Server
c) Operating System d) none
7. Client/Server architecture delivers dynamic web pages using
a) ASP&CGI b) HTML
c) C Language d) Operating System
8. In a client/Server model, business processing is done at
a) Client b) Server
c) Web Server d) Client & Server
9. For dynamic webpage creation, script is executed at
a) Client b) Server
c) middle-Tier d) Network
10. Which of the following is not a browser software
a) Internet explorer b) Mozilla
c) JSP d) Netscape navigator


Part II
11. What are the various functions of Client/Server?
12. What is the difference between file and database servers?
13. What are the objectives of a file server?
14. What are the features of public web servers?
15. Explain the role played by middle tier architecture?
Part III
16. What are the benefits provided by the server in client/server model?
17. What are the advantages / disadvantages of client / server model?
18. List out and explain the various types of server?
19. Explain the important characteristics of Client/Server architecture?
20. Explain three-tier architecture and the functions of each tier in detail with examples?
Part I
Answers:
1) a 2) d 3) b 4) c 5) c 6) a 7) a 8) b 9) b 10) d
REFERENCES
1. Websites: http://www.sei.cmu.edu/str/descriptions/clientserver_body.html,
http://www.webdevelopersnotes.com/basics/client_server_architecture.php3,
wikipedia.org, http://faqs.org/faqs/client-server-faq/


CHAPTER - 2
REMOTE PROCEDURE CALL / PEER-TO-PEER

2.1 INTRODUCTION
Remote Procedure Call (RPC) is a client/server infrastructure that increases the
interoperability, portability, and flexibility of an application by allowing the application to
be distributed over multiple heterogeneous platforms. It reduces the complexity of developing
applications that span multiple operating systems and network protocols by insulating the
application developer from the details of the various operating system and network interfaces.
Peer-to-peer, also known by the acronym P2P describes a type of network in which
each workstation (peer or computer system) has equivalent capabilities and responsibilities.
This differs from client/server architectures, in which some computers are dedicated as
servers to service client request.
RPC and Peer-to-Peer networking have been explained in this unit. The comparison
of peer-to-peer and client /server architecture has been discussed.
2.2 LEARNING OBJECTIVES
 Overview of Remote Procedure Call
 How RPC works?
 RPC Implementation Issues
 RPC Usage Considerations
 Peer-to-Peer Networking
 Common Peer-to-Peer Applications
 Comparison of Client/Server & PEER-TO-PEER
2.3 REMOTE PROCEDURE CALL
2.3.1 RPC Overview
Remote Procedure Call (RPC) is a powerful technique for constructing distributed,
client-server based applications. The idea of RPC is quite simple. It is based on the observation
that procedure calls are a well-known and well understood mechanism for transfer of control
and data within a program running on a single computer to another program. RPC extends
this mechanism to provide for transfer of control and data across a communication network.
Hence, the called procedure need not exist in the same address space as the calling procedure.
The two processes may be on the same system, or they may be on different systems with a
network connecting them. By using RPC, programmers of distributed applications avoid the
details of the interface with the network. The transport independence of RPC isolates the
application from the physical and logical elements of the data communications mechanism
and allows the application to use a variety of transports.
Remote Procedure Call is implemented as a protocol (is a set of rules) that one program
(client) can use to request a service from a program (server) located in another computer in
a network without having to understand network details. A procedure call is also sometimes
known as a function call or a subroutine call. RPC uses the client/server model of distributed
computing. An RPC is initiated by the client sending a request message to a known remote
server in order to execute a specified procedure using supplied parameters. A response is

returned to the client where the application continues along with its process. There can be
many variations and subtleties in various implementations of RPC resulting in a variety of
different (incompatible) RPC protocols.
Some of the well-known and commonly used RPC analogues include:
 Java Remote Method Invocation (Java RMI) API provides similar functionality to
standard UNIX RPC methods
 XML-RPC is an RPC protocol which uses XML to encode its calls and HTTP as a
transport mechanism
 Microsoft .NET Remoting offers RPC facilities for distributed systems implemented
on the Windows platform
 RPyC (Remote python call) implements RPC mechanisms in Python, with support
for asynchronous calls.
 Routix-RPC, a technology based on TCP transport and XML packets (messages)
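Of the analogues above, XML-RPC is the easiest to demonstrate, because Python ships client and server support in its standard library. The sketch below (host, port choice, and the `add` procedure are illustrative) shows the key RPC property: the remote call reads like a local function call, with the XML encoding and HTTP transport hidden by the proxy.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A minimal XML-RPC round trip using only the standard library.
# Port 0 asks the OS for any free port, keeping the example self-contained.

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")          # expose the procedure by name
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)                      # looks local, runs on the server
server.shutdown()
print(result)                                 # -> 5
```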
By using RPC, the complexity involved in the development of distributed processing is
reduced by keeping the semantics of a remote call the same whether or not the client and
server are co-located on the same system. Like a regular or local procedure call, an RPC is
a synchronous operation requiring the requesting program to be suspended until the results
of the remote procedure are returned. However, the use of lightweight processes or threads
that share the same address space allows multiple RPCs to be performed concurrently.
RPC facilitates communication between client and server processes by allowing a
client component of an application to employ a function call to access a server on a remote
system. RPC allows the remote component to be accessed without knowledge of the network
address or any other lower-level information.
2.3.2 How RPC Works
An RPC is analogous to a function call. Like a function call, when an RPC is made, the
calling arguments are passed to the remote procedure and the caller waits for a response to
be returned from the remote procedure.
Most RPC implementations use a synchronous, request-reply (sometimes referred to as
“call/wait”) protocol which involves blocking of the client until the server fulfills its request.
Asynchronous (“call/no wait”) implementation of RPC is also available.
Figure 2.1 shows the flow of activity that takes place during a synchronous RPC call
between two networked systems. In synchronous RPC the thread that issued the RPC call
blocks the client until the RPC call is complete.
1. Call is issued by the application.
2. The RPC runtime sends the call to the server, on behalf of the client. Meanwhile,
the client thread that issued the RPC call is stuck in the RPC runtime waiting for
the call to complete.
3. The call is dispatched by the server side RPC runtime. The server application then
executes the remote call.
4. Control returns back to the server RPC runtime.
5. Server RPC runtime sends the reply to client.
6. The client thread unblocks. The RPC call is complete.
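The six steps can be simulated in-process with two queues standing in for the client-side and server-side runtimes. All names here are illustrative; the point is that the client thread blocks inside `rpc_call` until the server's reply arrives.

```python
import queue
import threading

# Toy simulation of the synchronous RPC steps listed above.

to_server, to_client = queue.Queue(), queue.Queue()

def server_runtime():
    proc, args = to_server.get()      # step 3: the call is dispatched
    reply = proc(*args)               # the server application executes it
    to_client.put(reply)              # steps 4-5: the reply is sent back

def rpc_call(proc, *args):
    to_server.put((proc, args))       # steps 1-2: runtime sends the call
    return to_client.get()            # client blocks here; step 6 unblocks it

threading.Thread(target=server_runtime).start()
result = rpc_call(lambda x, y: x * y, 6, 7)
print(result)                         # -> 42
```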
Figure 2.1 Synchronous RPC

Asynchronous Remote Procedure Call (RPC) is an extension of the synchronous RPC
mechanism. Asynchronous RPC allows the thread that issued the call to continue
execution and pick up the results at a later time. Similarly, on an asynchronous RPC
server, the logical RPC call can continue even after the dispatched call returns from the
application code into the RPC runtime (manager routine), as shown in Figure 2.2.
1. The client submits Asynchronous RPC call. Control returns back to the client thread.
The thread is free to continue with other work.
2. RPC runtime sends request to the server, on behalf of the client.
3. Request is dispatched to the server. The server application begins execution of the
remote call. Control returns back to the server RPC runtime, but the server side call
is not complete.
4. Server completes the call.
5. Reply is sent back to the client.
6. Client is notified that reply has arrived.
7. Client calls back into the RPC runtime and picks up the reply. At this point, the
Asynchronous RPC is complete.

Figure 2.2 Asynchronous RPC


It is useful to use Asynchronous RPC on the client when the remote procedure call
takes a while to complete and the client can do other work on the thread before it needs the
results of this RPC. The client can also make simultaneous calls to one or more servers.
For example, if a client wants to make simultaneous synchronous RPC calls to four servers,
it cannot do so with one thread: it has to spin off at least three threads and make an RPC
call in each thread. However, if it is using Asynchronous RPC, it can make all four calls on
the same thread and then wait for all of them.
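The four-server case above can be sketched with `concurrent.futures`, where the executor stands in for the RPC runtime and `server_call` is a hypothetical remote procedure: the client thread submits all four calls without blocking, then collects the replies afterwards.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Sketch of four simultaneous asynchronous calls issued from one client thread.
# server_call is a stand-in for a remote procedure on four different servers.

def server_call(server_id):
    time.sleep(0.05)                      # pretend network + processing delay
    return f"reply-from-server-{server_id}"

with ThreadPoolExecutor(max_workers=4) as runtime:
    # submit() returns immediately; the calling thread is free to continue.
    pending = [runtime.submit(server_call, n) for n in range(1, 5)]
    replies = [f.result() for f in pending]   # pick up all four replies later

print(replies)
# -> ['reply-from-server-1', 'reply-from-server-2',
#     'reply-from-server-3', 'reply-from-server-4']
```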
It is useful to use Asynchronous RPC on the server when the processing of the call will
take a long time to complete. Instead of processing the call inline as with synchronous
RPC, the server can add it to a work queue and process it later. If synchronous RPC is
used, the server has to start a thread for every RPC call. The server application can
notify the client on completion of the task.
2.3.3 ONC RPC Protocol
The Open Network Computing (ONC) Remote Procedure Call (RPC) protocol is
documented in RFC 1831. It is based on the remote procedure call model. One thread of
control logically winds through two processes: the caller’s (client) process, and a server’s
process. The caller process first sends a call message to the server process and waits
(blocks) for a reply message. The call message includes the procedure’s parameters, and
the reply message includes the procedure’s results. Once the reply message is received,
the results of the procedure are extracted, and caller’s execution is resumed. On the server
side, a process is dormant awaiting the arrival of a call message. When one arrives, the
server process extracts the procedure’s parameters, computes the results, sends a reply
message and then awaits the next call message.
However, this model is only given as an example. The ONC RPC protocol makes no
restrictions on the concurrency model implemented and others are possible. For example,
an implementation may choose to have RPC calls be asynchronous, so that the client may
do useful work while waiting for the reply from the server. Another possibility is to have the
server create a separate task to process an incoming call, so that the original server can be
free to receive other requests.
There are a few important ways in which remote procedure calls differ from local
procedure calls:
 Error handling: failures of the remote server or network must be handled when
using remote procedure calls
 Global variables and side-effects: since the server does not have access to the
client’s address space, hidden arguments cannot be passed as global variables or
returned as side effects.
 Performance : remote procedures usually operate one or more orders of magnitude
slower than local procedure call
 Authentication: since remote procedure calls can be transported over unsecured
networks, authentication may be necessary. Authentication prevents one entity
from masquerading as some other entity.
The RPC protocol can be implemented on several different transport protocols like
TCP or UDP. The RPC protocol does not care how a message is passed from one process
to another, but only with specification and interpretation of messages. However, the application
may wish to obtain information about (and perhaps control over) the transport layer through
an interface not specified in this document. For example, the transport protocol may impose
a restriction on the maximum size of RPC messages, or it may be stream-oriented like TCP
with no size limit. The client and server must agree on their transport protocol choices.


It is important to point out that RPC does not try to implement any kind of reliability and
that the application may need to be aware of the type of transport protocol underneath
RPC. If it knows it is running on top of a reliable transport such as TCP then most of the
work is already done for it. On the other hand, if it is running on top of an unreliable
transport such as UDP, it must implement its own time-out, retransmission, and duplicate
detection policies as the RPC protocol does not provide these services.
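These application-level policies can be sketched in-process; the "network" below is simulated rather than a real UDP socket, and all names are illustrative. The client retransmits after a lost reply, and the server detects the duplicate by transaction id and replays its cached reply instead of executing the procedure a second time.

```python
# Sketch of what an application must add on top of an unreliable transport:
# retransmission with a retry limit (client) and duplicate detection by
# transaction id (server). Reply loss is simulated deterministically.

class Server:
    def __init__(self):
        self.seen = {}                    # xid -> cached reply (duplicate detection)
        self.executions = 0

    def handle(self, xid, x):
        if xid in self.seen:              # retransmission: replay the cached reply,
            return self.seen[xid]         # do NOT execute the procedure again
        self.executions += 1
        self.seen[xid] = x + 1
        return self.seen[xid]

def lossy_send(server, xid, x, drop):
    """Deliver the request, but 'drop' the reply to simulate an unreliable network."""
    reply = server.handle(xid, x)
    return None if drop else reply

def call_with_retry(server, xid, x, drops=1, max_tries=3):
    for attempt in range(max_tries):      # client-side time-out + retransmission
        reply = lossy_send(server, xid, x, drop=attempt < drops)
        if reply is not None:
            return reply
    raise TimeoutError("no reply after retries")

srv = Server()
result = call_with_retry(srv, xid=7, x=41)
print(result, srv.executions)             # -> 42 1  (executed once despite the retry)
```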
The RPC protocol provides the fields necessary for a client to identify itself to a service,
and vice-versa, in each call and reply message. Security and access control mechanisms
can be built on top of this message authentication. Several different authentication protocols
can be supported. A field in the RPC header indicates which protocol is being used.
To summarize RPC protocol implementations must provide for the following:
 Unique specification of a procedure to be called.
 Provisions for matching response messages to request messages.
 Provisions for authenticating the caller to service and vice-versa.
Besides these requirements, features that detect the following are worth supporting
because of protocol roll-over errors, implementation bugs, user error, and network
administration:
 RPC protocol mismatches
 Remote program protocol version mismatches.
 Protocol errors (such as misspecification of a procedure’s parameters).
 Reasons why remote authentication failed.
 Any other reasons why the desired procedure was not called.
2.3.4 RPC Implementation Issues
RPC provides a simple means for an application programmer to construct distributed
programs because it abstracts away from the details of communication and transmission.
However, the achievement of true transparency is a problem which needs to be resolved.
The following issues regarding the properties of remote procedure calls need to be considered
in the design of an RPC system if the distributed system is to achieve transparency.
Messages
The semantics of RPC are the same as those of a local procedure call. The calling
process calls and passes arguments to the procedure and it blocks while the procedure
executes.
The ONC RPC message protocol consists of two distinct structures: the call message
and the reply message. A client makes a remote procedure call to a network server and
receives a reply containing the results of the procedure’s execution. By providing a unique
specification for the remote procedure, RPC can match a reply message to each call (or
request) message. The RPC message protocol is defined using the eXternal Data
Representation (XDR) data description, which includes structures, enumerations, and unions.
The initial structure of an RPC message is as follows:
struct rpc_msg {
    unsigned int xid;
    union switch (enum msg_type mtype) {
    case CALL:
        call_body cbody;
    case REPLY:
        reply_body rbody;
    } body;
};
All RPC call and reply messages start with a transaction identifier, xid, which is
followed by a two-armed discriminated union. The union’s discriminant is msg_type, which
switches to one of the following message types: CALL or REPLY. The msg_type has the
following enumeration:
enum msg_type {
    CALL = 0,
    REPLY = 1
};
The xid parameter is used by clients matching a reply message to a call message or
by servers detecting retransmissions. The initial structure of an RPC message is followed
by the body of the message. The body of a call message has one form. The body of a reply
message, however, takes one of two forms, depending on whether a call is accepted or
rejected by the server.
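Since XDR encodes unsigned integers as 4-byte big-endian quantities, the first eight bytes of every message, the xid and the msg_type discriminant, can be packed and unpacked as below. This is a sketch of the header fields only (no call or reply body follows).

```python
import struct

# Pack/unpack the leading xid and msg_type fields of an RPC message.
# ">I" means a big-endian unsigned 32-bit integer, matching XDR encoding.

CALL, REPLY = 0, 1

def pack_header(xid, mtype):
    return struct.pack(">II", xid, mtype)

def unpack_header(data):
    return struct.unpack(">II", data[:8])

wire = pack_header(0x1234, CALL)
print(wire.hex())              # -> 0000123400000000
print(unpack_header(wire))     # -> (4660, 0)
```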
The RPC protocol for a reply message varies depending on whether the call message
is accepted or rejected by the network server. A call message can be rejected by the server
for two reasons: either the server is not running a compatible version of the RPC protocol,
or there is an authentication failure. The reply message to a request contains information to
distinguish the following conditions:
 RPC executed the call message successfully.
 The remote program is not available on the remote system.
 The remote program does not support the requested version number. The lowest
and highest supported remote program version numbers are returned.
 The requested procedure number does not exist. This is usually a caller-side protocol
or programming error.
Communication transparency
The users should be unaware that the procedure they are calling is remote. The three
difficulties when attempting to achieve transparency are:
 the detection and correction of errors due to communication and site failures
 the passing of parameters
 Exception handling
Communication and site failures can result in inconsistent data because of partially
completed processes. The solution to this problem is often left to the application programmer.
Parameter passing in most systems is restricted to the use of value parameters. Exception
handling is a problem also associated with heterogeneity. The exceptions available in different
languages vary and have to be limited to the lowest common denominator.
For example, if a request is sent, but no response is received, what should the requestor
do?
 If the request is blindly retransmitted, the remote procedure might be executed
twice (or more)
 If the request is not retransmitted, the remote procedure might not be executed at
all
It may be possible for some remote procedures to be safely executed twice. Such
procedures are said to be idempotent. It is essential that remote procedures must execute
with desired behavior.
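The distinction can be made concrete with a hypothetical account (the data and operations are invented for the example): reading a balance is idempotent, so executing it twice after a lost reply is harmless, while a debit is not, so a blind retransmission corrupts the state.

```python
# Idempotent vs non-idempotent remote procedures, illustrated locally.

account = {"balance": 100, "log": []}

def get_balance():                 # idempotent: repeat execution is harmless
    return account["balance"]

def debit(amount):                 # NOT idempotent: each execution changes state
    account["balance"] -= amount
    account["log"].append(amount)
    return account["balance"]

# A blind retransmission executes each procedure twice:
get_balance(); get_balance()       # balance still 100 -- no harm done
debit(10); debit(10)               # the single intended debit happened twice
print(account["balance"])          # -> 80, not the intended 90
```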
Location of services
In a distributed environment, servers need to advertise their services and clients need
to identify compatible servers. Hence some type of Directory or Registry service
must be implemented for registration and location of available services as shown in Figure
2.4. The RPC runtime can access the Directory Service to locate the server.

Figure 2.4 Location of services


Binding
Binding provides a connection between the name used by the calling process and the
location of the remote procedure. Binding can be implemented, at the operating system
level, using a static or dynamic linker extension which binds the procedure name with its
location on another machine. Another method is to use procedure variables which contain a
value which is linked to the procedure location.
The act of binding a particular client to a particular service and transport parameters is
NOT part of ONC RPC protocol specification. Both TCP and UDP rely on well-known
port numbers to perform service rendezvous, that is connect to a particular service on the
server. For example, the TCP FTP service is available on port 21, TELNET on port 23, SMTP
on port 25, and so on. Connecting to TCP and UDP services simply requires connecting to
the right port number. RPC introduces another step in this process, to divorce services from
being tied to a given port number. It does so using a special RPC service called
PORTMAPPER or RPCBIND. These binding protocols, documented in RFC 1833 and
often referred to as the portmapper, are unique among RPC services since they have an
assigned port of their own (port 111). Other RPC services, running on any port number, can
register themselves using an RPC call to port 111. The portmapper offers other RPC calls to
permit service lookup. The most important consequence of this design is that the portmapper
must be the first RPC program started, and must remain in constant operation.
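The registration-and-lookup behaviour can be sketched as a toy registry. This is an illustration only: the method names echo the portmapper's set/getport procedures, but the program number and port below are invented, and a real portmapper is itself reached via RPC on port 111 rather than by direct method calls.

```python
# Toy sketch of the portmapper idea: services on arbitrary ports register with
# a well-known registry; clients look the port up by (program, version).

class Portmapper:
    WELL_KNOWN_PORT = 111              # the one port every client knows in advance

    def __init__(self):
        self._map = {}                 # (program, version) -> port

    def set(self, program, version, port):
        self._map[(program, version)] = port

    def getport(self, program, version):
        return self._map.get((program, version), 0)   # 0 = not registered

pm = Portmapper()
pm.set(program=0x20000099, version=1, port=51342)     # a service registers itself
print(pm.getport(0x20000099, 1))                      # -> 51342
print(pm.getport(0x20000099, 2))                      # -> 0 (version not registered)
```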
Concurrency
Concurrency mechanisms should not interfere with communication mechanisms. Single
threaded clients and servers, when blocked while waiting for the results from a RPC, can
cause significant delays. These delays can be exacerbated by further remote procedure

calls made in the server. Lightweight processes allow the server to execute calls from more
than one client concurrently.
Heterogeneity
Different machines may have different data representations, the machines may be
running different operating system or the remote procedure may have been written using a
different language. Static interface declarations of remote procedures serve to establish
agreement between the communicating processes on argument types, exception types (if
included), type checking and automatic conversion from one data representation to another,
where required.
Service Interface
In order to allow servers to be accessed by differing clients, a number of standardized
RPC systems have been created. Most of these use an interface description language
(IDL) to allow various platforms to call the RPC.
IDL also known as Interface Definition Language is a specification language used to
describe a software component’s interface. IDLs describe an interface in a language-neutral
way, enabling communication between software components that do not share a language –
for example, between components written in C++ and components written in Java.
IDLs are commonly used in remote procedure call software. In these cases the
machines at either end of the “link” may be using different operating systems and computer
languages. IDLs offer a bridge between the two different systems. The IDL files can then
be used to generate code to interface between the client and server. A common tool used
for this is RPCGEN.
Hence the requirements for effective RPC implementation can be summarized as
follows:
 Resolve differences in data representation
 Support a variety of execution semantics
 Support multi-threaded programming
 Provide good reliability
 Provide independence from transport protocols
 Ensure high degree of security
 Locate required services across networks
2.3.5 Other features of RPC
RPC protocol also supports other features which include:
 Batching calls
 Broadcasting calls
 Callback procedures
 Authentication
Batching calls
Batching allows a client to send an arbitrarily large sequence of call messages to a
server. Batching typically uses reliable byte stream protocols, such as TCP/IP, for its transport.
When batching, the client never waits for a reply from the server, and the server does not
send replies to batched requests.

The RPC architecture is designed so that clients send a call message and then wait for
servers to reply that the call succeeded. This implies that clients do not compute while NOTES
servers are processing a call. However, the client may not want or need an acknowledgment
for every message sent. Therefore, clients can use RPC batch facilities to continue computing
while they wait for a response.
Batching can be thought of as placing RPC messages in a pipeline of calls to a desired
server. Batching assumes the following:
 Each remote procedure call in the pipeline requires no response from the server,
and the server does not send a response message.
 The pipeline of calls is transported on a reliable byte stream transport such as TCP/
IP.
Because the server sends no message, the clients are not notified of any failures that
occur. Therefore, clients must handle their own errors. Because the server does not respond
to every call, the client can generate new calls that run parallel to the server’s execution of
previous calls. Furthermore, the TCP/IP implementation can buffer many call messages,
and send them to the server with one write subroutine. This overlapped execution decreases
the inter-process communication overhead of the client and server processes as well as the
total elapsed time of a series of calls. Batched calls are buffered, so the client should eventually
perform a non-batched remote procedure call to flush the pipeline with positive
acknowledgment.
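The batching pattern can be sketched over a plain TCP connection. This is a toy model, not an RPC library: the client pipelines several one-way records without waiting for replies, then issues a final acknowledged request ("FLUSH" here, an illustrative name) to flush the pipeline.

```python
import socket
import threading

def start_server():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        buf, count = b"", 0
        while True:
            buf += conn.recv(1024)
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                if line == b"FLUSH":                 # non-batched call: reply sent
                    conn.sendall(str(count).encode())
                    conn.close()
                    srv.close()
                    return
                count += 1                           # batched call: no reply sent
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

port = start_server()
with socket.create_connection(("127.0.0.1", port), timeout=5.0) as s:
    for record in (b"a", b"b", b"c"):
        s.sendall(record + b"\n")   # batched: the client does not wait for replies
    s.sendall(b"FLUSH\n")           # flush the pipeline with an acknowledged call
    reply = s.recv(64)
print(reply)   # b'3'
```

Because TCP buffers the writes, the three batched records may travel to the server in a single segment, which is exactly the overhead reduction the text describes.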
Broadcasting Calls
In broadcast RPC-based protocols, the client sends a broadcast packet to the network
and waits for numerous replies. Broadcast RPC uses only packet-based protocols, such as
User Datagram Protocol/Internet Protocol (UDP/IP), for its transports. Servers that support
broadcast protocols respond only when the request is successfully processed and remain
silent when errors occur. Broadcast RPC requires the RPC port map service to achieve its
semantics. The port map daemon converts RPC program numbers into Internet protocol
port numbers. The main differences between broadcast RPC and normal RPC are as follows:
 Normal RPC expects only one answer, while broadcast RPC expects one or more
answers from each responding machine.
 The implementation of broadcast RPC treats unsuccessful responses as garbage
by filtering them out. Therefore, if there is a version mismatch between the
broadcaster and a remote service, the user of broadcast RPC may never know.
 All broadcast messages are sent to the port-mapping port. As a result, only services
that register themselves with their port mapper are accessible through the broadcast
RPC mechanism.
 Broadcast requests are limited in size to the maximum transfer unit (MTU) of the
local network. For the Ethernet system, the MTU is 1500 bytes.
 Broadcast RPC is supported only by packet-oriented (connectionless) transport
protocols such as UDP/IP.
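The reply convention of broadcast RPC (answer only on success, stay silent on errors) can be sketched over loopback UDP. This is an illustrative model, not real broadcast addressing; the "PING"/"PONG" messages are made up.

```python
import socket
import threading

def start_udp_service():
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))
    def serve():
        data, addr = srv.recvfrom(1500)   # requests are limited to the network MTU
        if data == b"PING":               # reply only if the request is processed
            srv.sendto(b"PONG", addr)
        srv.close()                       # anything else: stay silent
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

port = start_udp_service()
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(5.0)                       # a broadcaster collects replies until timeout
cli.sendto(b"PING", ("127.0.0.1", port))
reply, _ = cli.recvfrom(1500)
print(reply)   # b'PONG'
cli.close()
```

A real broadcast client would send one datagram to the broadcast address and then loop on `recvfrom` with a timeout, gathering one reply per responding machine.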
Call-back Procedures
Occasionally, the server may need to become a client by making an RPC callback to
the client’s process. To make an RPC callback, the user needs a program number on which
to make the call. The program number is dynamically generated.
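The role reversal involved in a callback can be sketched with plain sockets: the client opens a listening socket of its own and passes its port number in the request, so the server can turn around and call the client back. This is an illustrative model; in real RPC the client registers a dynamically generated program number instead of a raw port.

```python
import socket
import threading

def server_side(srv):
    conn, _ = srv.accept()
    cb_port = int(conn.recv(64))                    # client says where to call back
    with socket.create_connection(("127.0.0.1", cb_port), timeout=5.0) as back:
        back.sendall(b"callback!")                  # here the server acts as a client
    conn.close()
    srv.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=server_side, args=(srv,), daemon=True).start()

cb = socket.socket()                                # the client's own listening socket
cb.bind(("127.0.0.1", 0))
cb.listen(1)
cb.settimeout(5.0)
with socket.create_connection(("127.0.0.1", srv.getsockname()[1]), timeout=5.0) as s:
    s.sendall(str(cb.getsockname()[1]).encode())

conn, _ = cb.accept()                               # receive the server's callback
msg = conn.recv(64)
print(msg)   # b'callback!'
conn.close()
cb.close()
```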
Authentication
The server may require the client to identify itself before being allowed access to
services. Remote Procedure Call (RPC) authentication provides a certain degree of security.

RPC deals only with authentication and not with access control of individual services.
Each service must implement its own access control policy and reflect this policy as return
statuses in its protocol. The programmer can build additional security and access controls on
top of the message authentication. The authentication subsystem of the RPC package is
open-ended. Different forms of authentication can be associated with RPC clients. That is,
multiple types of authentication are easily supported at one time. Examples of authentication
types include UNIX®, DES, and NULL. The default authentication type is none.
The RPC protocol provisions for authentication of the caller to the server, and vice
versa, are provided as part of the RPC protocol. Every remote procedure call is authenticated
by the RPC package on the server. Similarly, the RPC client package generates and sends
authentication parameters. The call message has two authentication fields: credentials and
verifier. The reply message has one authentication field: response verifier.
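The credentials/verifier idea can be sketched with a keyed hash. This toy example is not one of the real ONC RPC authentication flavors (AUTH_NONE, UNIX, DES); the shared key and field names are illustrative.

```python
import hashlib
import hmac

# Assumption for this sketch: client and server share a secret key out of band.
SHARED_KEY = b"demo-key"

def make_call(user, body):
    # Credentials identify the caller; the verifier proves the message is genuine.
    credentials = {"flavor": "TOY", "user": user}
    verifier = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"credentials": credentials, "verifier": verifier, "body": body}

def server_accepts(call):
    # The server recomputes the verifier and compares in constant time.
    expected = hmac.new(SHARED_KEY, call["body"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, call["verifier"])

call = make_call("alice", b"read /etc/motd")
print(server_accepts(call))          # True
call["body"] = b"read /etc/shadow"   # a tampered request fails verification
print(server_accepts(call))          # False
```

Note that, as the text says, this only authenticates the call; deciding whether "alice" may actually read the file is access control, which each service must implement itself.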
2.3.6 RPC Application Development
RPC is typically implemented in one of two ways:
 within a broader, more encompassing proprietary product
 by a programmer using a proprietary tool to create client/server RPC stubs
For example, a client/server application can be developed to lookup a database located
on a remote machine. A server has to be established on the remote machine that can
respond to queries. The client can retrieve information by sending a query to the remote
server for processing and obtaining the reply.
To develop an RPC application, therefore the following steps are needed:
 Specify the protocol for client server communication
 Develop the client program
 Develop the server program
The programs will be compiled separately. The communication protocol is achieved by
generated stubs and these stubs and RPC (and other libraries) will need to be linked in.
When program statements that use RPC are compiled into an executable program, a stub is
included in the compiled code to act as the representative of the remote procedure code.
When the program is run and the procedure call is issued, the stub receives the request and
forwards it to a client runtime program in the local computer. The client runtime program
has the knowledge of how to address the remote computer and server application and sends
the message across the network that requests the remote procedure. Similarly, the server
includes a runtime program and stub that interface with the remote procedure itself. Results
are returned the same way.
Some of the terms and definitions associated with RPC are:
Client: A process such as a program or task that requests a service provided by another
program. The client process uses the requested service without having to “deal” with many
working details about the other program or the service.
Server: A process, such as a program or task, that responds to requests from a client.
Endpoint: The name, port, or group of ports on a host system that is monitored by a server
program for incoming client requests. The endpoint is a network-specific address of a server
process for remote procedure calls. The name of the endpoint depends on the protocol
sequence being used.
Endpoint Mapper (EPM): Part of the RPC subsystem that resolves dynamic endpoints in
response to client requests and, in some configurations, dynamically assigns endpoints to
servers.
Client Stub: Module within a client application containing all of the functions necessary for
the client to make remote procedure calls using the model of a traditional function call in a
standalone application. The client stub is responsible for invoking the marshalling engine and
some of the RPC application programming interfaces (APIs).
Server Stub: Module within a server application or service that contains all of the functions
necessary for the server to handle remote requests using local procedure calls.
The sequence of steps of a client / server interchange is depicted in Figure 2.3.

[Figure: the client process with its client stub, the server process with its server stub, and the kernel network routines on each machine; numbered arrows (1-10) trace the path of a call and its reply across the network]
Figure 2.3: Functional steps in a remote procedure call


1. The client calls a local procedure, called the client stub. To the client process, it
appears that this is the actual procedure. The client stub packages the arguments to
the remote procedure and builds one or more network messages. These complex
data structures have to be converted into a format suitable for transmission. The
conversion to a standard format and packaging of arguments into a network message
is called marshaling.
2. Network messages are sent by the client stub to the remote system (via a system
call to the local kernel).
3. Network messages are transferred by the kernel to the remote system via some
communication protocol (either connectionless or connection-oriented).
4. A server stub procedure on the server receives the messages. It unmarshals the
arguments (the reverse of marshaling) from the messages and possibly converts them
from a standard form into a machine-specific form.
5. The server stub executes a local procedure call to the actual server function, passing
it the arguments that it received from the client.
6. When the server is finished, it returns to the server stub with its return values.
7. The server stub converts the return values (if necessary) and marshals them into
one or more network messages to send to the client stub.
8. Messages get sent back across the network to the client stub.

9. The client stub reads the messages from the local kernel.
10. It then returns the results to the client function (possibly converting them first).
The client code then continues its execution in the normal manner.
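The ten steps above can be sketched end to end in miniature. The following is a hedged toy implementation: JSON stands in for a real marshaling format such as XDR, and the `add` procedure and its stub are illustrative names.

```python
import json
import socket
import threading

def add(a, b):                      # the actual server function (step 5)
    return a + b

def start_server():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        request = json.loads(conn.recv(4096))             # server stub unmarshals (step 4)
        result = {"add": add}[request["proc"]](*request["args"])
        conn.sendall(json.dumps({"result": result}).encode())  # marshal reply (step 7)
        conn.close()
        srv.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def add_stub(a, b, port):
    # Client stub: looks like a local call (step 1), marshals the arguments,
    # sends the message (steps 2-3), and unmarshals the reply (steps 9-10).
    msg = json.dumps({"proc": "add", "args": [a, b]}).encode()
    with socket.create_connection(("127.0.0.1", port), timeout=5.0) as s:
        s.sendall(msg)
        return json.loads(s.recv(4096))["result"]

port = start_server()
result = add_stub(2, 3, port)
print(result)   # 5
```

To the caller, `add_stub(2, 3, port)` is indistinguishable from a local function call; everything between the stub and the reply is the machinery the figure describes.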
2.3.7 RPC Usage Considerations
RPC is appropriate for client/server applications in which the client can issue a request
and wait for the server’s response before continuing its own processing. Because most
RPC implementations do not support peer-to-peer, or asynchronous, client/server interaction,
RPC is not well-suited for applications involving distributed objects or object-oriented
programming.
Asynchronous and synchronous mechanisms each have strengths and weaknesses
that should be considered when designing any specific application. In contrast to asynchronous
mechanisms employed by Message-Oriented Middleware, the use of a synchronous request-
reply mechanism in RPC requires that the client and server are always available and
functioning (i.e., the client or server is not blocked).
When utilizing RPC over a distributed network, the performance (or load) of the network
should be considered. One of the strengths of RPC is that the synchronous, blocking
mechanism of RPC guards against overloading a network, unlike the asynchronous mechanism
of Message-Oriented Middleware (MOM). However, when recovery mechanisms, such as
retransmissions, are employed by an RPC application, the resulting load on a network may
increase, making the application inappropriate for a congested network. Also, because RPC
uses static routing tables established at compile-time, the ability to perform load balancing
across a network is difficult and should be considered when designing an RPC-based
application.

2.3.8 XML-RPC
XML-RPC is a specification and a set of implementations that allow software running on
disparate operating systems and in different environments to make procedure calls over the
Internet. It is a remote procedure call protocol which uses HTTP as the transport and XML
(Extensible Markup Language) to encode the calls as shown in Figure 2.8.

Figure 2.8 XML-RPC

XML-RPC is designed to be as simple as possible, while allowing complex data structures
to be transmitted, processed and returned. XML-RPC was first created by Dave Winer of
UserLand Software in 1998, in collaboration with Microsoft. As new functionality was
introduced, the standard evolved into what is now SOAP (Simple Object Access Protocol).
SOAP is a lightweight protocol for exchange of information in a decentralized, distributed
environment. It is an XML based protocol that consists of three parts: an envelope that
defines a framework for describing what is in a message and how to process it, a set of
encoding rules for expressing instances of application-defined data types, and a convention
for representing remote procedure calls and responses.
Within the world of XML there are two main ways to implement a Remote Procedure
Call (RPC): XML-RPC and SOAP. SOAP tries to pick up where XML-RPC left off by
implementing user defined data types, the ability to specify the recipient, message specific
processing control, and other features.
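Python's standard library ships a small XML-RPC implementation, which makes the protocol easy to demonstrate. In this sketch the `multiply` procedure, the loopback address, and the OS-assigned port are all illustrative choices.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# The server registers a function; the client invokes it through a proxy as if
# it were local. Arguments and the result travel as XML inside HTTP requests.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a * b, "multiply")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy(f"http://127.0.0.1:{port}")
answer = proxy.multiply(6, 7)
print(answer)   # 42
server.shutdown()
```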
2.4 PEER-TO-PEER
The term “peer-to-peer” (P2P) refers to a class of systems and applications that employ
distributed resources to perform a critical function in a decentralized manner. With the
pervasive deployment of computers, P2P is increasingly receiving attention in research,
product development, and investment circles. P2P is a way to leverage vast amounts of
computing power, storage, and connectivity from personal computers distributed around the
world. P2P is about sharing: giving to and obtaining from the peer community. A peer gives
some resources and obtains other resources in return. Some of the benefits of a P2P
approach include: improving scalability by avoiding dependency on centralized points;
eliminating the need for costly infrastructure by enabling direct communication among clients;
and enabling resource aggregation.
2.4.1 Peer-to-Peer Network
A pure peer-to-peer network does not have the notion of clients or servers, but only
equal peer nodes that simultaneously function as both “clients” and “servers” to the other
nodes on the network. This model of network arrangement differs from the client-server
model where communication is usually to and from a central server as shown in Figure 2.8.
A peer-to-peer (P2P) computer network uses diverse connectivity between participants
in a network and the cumulative bandwidth of network participants rather than conventional
centralized resources where a relatively low number of servers provide the core value to a
service or application. Peer-to-peer networks are typically used for connecting nodes via
largely ad hoc connections. Such networks are useful for many purposes. Sharing content
files (see file sharing) containing audio, video, data or anything in digital format is very
common, and realtime data, such as telephony traffic, is also passed using P2P technology.
The earliest peer-to-peer network in widespread use was the Usenet news server
system, in which peers communicated with one another to propagate Usenet news articles
over the entire Usenet network. Particularly in the earlier days of Usenet, UUCP (UNIX -
to-UNIX Copy) was used to extend even beyond the Internet. However, the news server
system also acted in a client-server form when individual users accessed a local news
server to read and post articles.
P2P however gained greater visibility with Napster’s support for music sharing on the
Web. Other popular applications include Freenet (distributed data store), Gnutella (file
sharing) and Kazaa (free music download). SETI@home, another example of P2P, is a
scientific experiment that uses Internet-connected computers in the Search for Extraterrestrial
Intelligence (SETI). You can participate by running a free program that downloads and
analyzes radio telescope data. However, it is increasingly becoming an important technique
in various areas, such as distributed and collaborative computing both on the Web and in ad-
hoc networks.
Although Peer-to-Peer networking is still an emerging area, some Peer-to-Peer concepts
are already applied successfully in different contexts. Good examples are Internet routers,
which deliver IP packets along paths that are considered efficient. These routers form a
decentralized, hierarchical network. They consider each other as peers, which collaborate
in the routing process and in updating each other. Unlike centralized networks, they can
compensate node failures and remain functional as a network. But, unlike a typical P2P
application, a router by itself does not change how resources within the network are shared.
Peer-to-Peer as it has evolved today takes these concepts from the network to the application
layer, where software defines purpose and algorithms of virtual (non-physical) Peer-to-
Peer networks.
There also exist countless hybrid peer-to-peer systems. Such systems normally have a
central server that keeps information on peers and responds to requests for that information.
Peers are responsible for hosting available resources (as the central server does not have
them), for letting the central server know what resources they want to share, and for making
their shareable resources available to peers that request them.
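A hybrid system of this kind can be sketched in a few lines. This is a toy model: the `index` dictionary stands in for the central server, the resource name "song.mp3" is made up, and the data itself is transferred peer-to-peer.

```python
import socket
import threading

# Central index: resource name -> (host, port) of the peer that shares it.
# Only the lookup is centralized; the download itself is peer-to-peer.
index = {}

def start_peer(resources):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    host, port = srv.getsockname()
    for name in resources:
        index[name] = (host, port)      # tell the central index what we share
    def serve():
        conn, _ = srv.accept()
        name = conn.recv(256).decode()
        conn.sendall(resources[name])   # serve the resource directly to the peer
        conn.close()
        srv.close()
    threading.Thread(target=serve, daemon=True).start()

def fetch(name):
    host, port = index[name]            # ask the central index who has it
    with socket.create_connection((host, port), timeout=5.0) as s:
        s.sendall(name.encode())
        return s.recv(4096)             # then download directly from that peer

start_peer({"song.mp3": b"...audio bytes..."})
data = fetch("song.mp3")
print(data)   # b'...audio bytes...'
```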

Figure 2.8 Peer-to-Peer Network & Client/server Model


The emergence of large scale, decentralized, autonomous, peer-to-peer (P2P) systems is a
spectacular phenomenon that has generated a new level of network programming abstraction
and presents significant challenges for parallel and distributed computing, distributed data
management, and software engineering. This is a fundamental shift from the current client-
server based systems.
When most people hear the term “P2P”, they think not of traditional peer networks,
but rather peer-to-peer file sharing over the Internet. A good definition of P2P software was
proposed by Dave Winer of UserLand Software many years ago when P2P was first
being used for mainstream computing. Winer suggests that P2P software applications include
these seven key characteristics:
 The user interface runs outside of a Web browser.
 Computers in the system can act as both clients and servers.
 The software is easy to use and well-integrated.
 The application includes tools to support users wanting to create content or add
functionality.
 The application makes connections with other users.

 The application does something new or exciting.
 The software supports cross-network protocols like SOAP or XML-RPC.
The P2P acronym has also acquired a non-technical meaning: “P2P” is sometimes read
as “people-to-people.” From this perspective, P2P is a
model for developing software and growing businesses that help individuals on the Internet
meet each other and share common interests.
Advancing P2P applications from the basics of file sharing towards more general and complex
resource sharing, process management, and ultimately towards a sea of global P2P
applications, requires significant understanding and study of P2P algorithms and network
programming technologies. As network technologies continue to expand into wireless and
ad-hoc networking domains, the use of P2P applications will become increasingly important.
Research efforts are concentrated in these areas and include developments in
P2P network design and protocols, P2P data management, mobile location-aware P2P
networking, and software engineering for P2P applications.
2.4.2 Advantages of P2P Systems
Some of the important advantages are as listed below.
 Cost sharing/reduction.
Centralized systems that serve many clients typically bear the majority of the cost of
the system. When that main cost becomes too large, a P2P architecture can help
spread the cost over all the peers.
 Resource aggregation and interoperability.
A decentralized approach lends itself naturally to aggregation of resources. Each node
in the P2P system brings with it certain resources such as compute power or storage
space. Applications that benefit from huge amounts of these resources, such as compute-
intensive simulations or distributed file systems, naturally lean toward a P2P structure
to aggregate these resources to solve the larger problem.
 Reliability
The distributed nature of peer-to-peer networks also increases robustness in case of
failures by replicating data over multiple peers, and — in pure P2P systems — by
enabling peers to find the data without relying on a centralized index server. In the
latter case, there is no single point of failure in the system.
 Increased autonomy
In many cases, users of a distributed system are unwilling to rely on any centralized
service provider. Instead, they prefer that all data and work on their behalf be performed
locally. P2P systems support this level of autonomy simply because they require that
the local node do work on behalf of its user.
 Anonymity & Privacy
Related to autonomy is the notion of anonymity and privacy. A user may not want
anyone or any service provider to know about his or her involvement in the system.
With a central server, it is difficult to ensure anonymity because the server will typically
be able to identify the client, at least by Internet address. By employing a P2P structure
in which activities are performed locally, users can avoid having to provide any information
about themselves to anyone else.

 Dynamism
P2P systems assume that the computing environment is highly dynamic. That is, resources,
such as compute nodes, will be entering and leaving the system continuously. When an
application is intended to support a highly dynamic environment, the P2P approach is a natural
fit. This naturally enables ad-hoc communication and collaboration.
2.4.3 P2P Challenges
Designing Peer-to-Peer middleware is a complex task. One cannot rely on a static network of
dedicated and mostly centralized service providers, which is still a common approach. Instead,
Peer-to-Peer networks present several new challenges:
 Shared environment: Infrastructures tend to become shared platforms for several
independent applications that may have contrary requirements and may interfere with each
other.
 Scalability: The large number of nodes within a Peer-to-Peer network may affect
performance, latency, and reliability.
 Dynamic network: Nodes are unreliable because they may join or leave the network
unpredictably.
 Dynamic node characteristics: Since nodes are autonomous, their characteristics may
change.
 Network heterogeneity: Each peer is unique according to its networking and computing
power, location in the physical network, and provided services.
 Quality-of-Service: Given an unpredictable network, the quality of network-dependent
services cannot be guaranteed, only improved.
 Security: The middleware should be resistant to malicious peers.
2.4.4 Comparison of Client/Server & Peer-To-Peer
 Client/Server means that the user’s PC acts as the Client, and connects to centralized
infrastructure, the Server, for every aspect of the service.

Aspect                     | Client/Server (C/S)                     | Peer-to-Peer (P2P)
Knowledge storage          | On the Server                           | On the Client
Organization of knowledge  | Structured and centralized              | Unstructured and individualized
Ability to manipulate data | User has to upload the document         | Seamless ability to change and
                           | after modification                      | transfer
Synchronicity              | Asynchronous                            | Asynchronous and synchronous
Place dependency           | Need to be connected to the server      | Data on the client, thus always
                           | to download data                        | available
Portability                | Limited to the software supported       | Users have the flexibility to use any
                           | by the server                           | software they are comfortable with
Authentication             | Centrally managed, greater tools        | Managed by the individual, limited
                           | available                               | tools available
Search / transfer          | Standardized format and storage         | Data unstandardized and spread across
                           | results in quicker search               | the network, making searching harder
Technological issues       | Centralization results in considerable  | New technology with a lot of issues
                           | control on these issues                 | for research

 Peer-To-Peer means that the user’s PC is connected to other peers, on an equal
footing basis.
 For service providers, Peer-To-Peer is more complex to use than the Client/Server
model
 Client-Server models offer easier maintenance, security, and administration. For
example, encapsulation makes it possible for servers to be repaired, upgraded, or
replaced without clients being affected. In Peer-To-Peer models, updates must be
applied and copied to all peers in the network, which requires a lot of labour and is also
prone to errors.
 Client-Server paradigms often suffer from network traffic congestion. This is not a
problem for P2P, since network resources are in direct proportion to the number of
peers in the network
 Client-Server paradigms lack the robustness of P2P networks. If a server fails in
Client-Server models, the request cannot be completed. In P2P, a node can fail or
abandon the request. Other nodes still have access to resources needed to complete
the download.
2.5 CONCLUSION
This chapter has introduced and discussed various aspects of Remote Procedure Call.
Peer-to-Peer networking and its advantages and challenges have been covered. The
differences between P2P and client/server technology have been dealt with in detail.
HAVE YOU UNDERSTOOD QUESTIONS?
a) What is the RPC model?
b) What are the important issues to be considered while designing and implementing
RPC protocol?
c) When and where should RPC be used?
d) What is Peer-to-Peer networking?
e) How does P2P differ from client/server model?
f) What are the advantages of P2P architecture?
g) What are the challenges faced in implementing P2P applications?
SUMMARY
 Remote Procedure Call (RPC) is a powerful technique for constructing distributed,
client-server based applications.
 RPC extends procedure calls mechanism to provide for transfer of control and data
across a communication network.
 The two processes may be on the same system, or they may be on different systems
with a network connecting them.
 By using RPC, programmers of distributed applications avoid the details of the
interface with the network.
 The transport independence of RPC isolates the application from the physical and
logical elements of the data communications mechanism and allows the application
to use a variety of transports.
 Most RPC implementations use a synchronous, request-reply protocol which involves
blocking of the client until the server fulfills its request.
 Asynchronous implementations are also available
 The Open Network Computing (ONC) Remote Procedure Call (RPC) protocol is
based on the remote procedure call model
 Important ways in which remote procedure calls differ from local procedure calls
include : Error handling, Global variables, Performance and Authentication
 Peer-to-Peer refers to a class of systems and applications that employ distributed
resources to perform a critical function in a decentralized manner.
 P2P is about sharing: giving to and obtaining from the peer community. A peer gives
some resources and obtains other resources in return
 P2P is a way to leverage vast amounts of computing power, storage, and connectivity
from personal computers distributed around the world.
 Some of the benefits of a P2P approach include: improving scalability by avoiding
dependency on centralized points; eliminating the need for costly infrastructure by
enabling direct communication among clients; and enabling resource aggregation
 With the pervasive deployment of computers, P2P is increasingly receiving attention
in research, product development, and investment circles.
EXERCISES
Part I
1. A type of network in which each workstation has equivalent capabilities and
responsibilities is known as :
a) Client/Server b) Mainframe Computing
c) Peer-to-peer d) Distributed
2. In a P2P network, as each peer joins the network:
a) Traffic congestion on the network increases
b) Bandwidth increases
c) Bandwidth decreases
d) Server becomes overloaded
3. Broadcast RPC normally uses the transport protocol
a) TCP b) IP
c) ARP d) UDP
4. State True or False
a) In synchronous RPC, after sending the request to the server, the client thread is
blocked waiting for reply from server
b) While using RPC, the called procedure call can be in the same computer system as
the client program
c) RPC can only implemented on TCP transport protocol
d) Client-Server paradigms do not suffer from network traffic congestion as the server
is capable of handling multiple users
e) Anonymity means establishing the identity of a peer by using its network address
5. Fill in the blanks with the appropriate word
a) RPC is also known as ________ call
b) In RPC, when ____________ the client sends a large number of call messages to
the server
c) Robustness is increased in case of failures in P2P networks by _______ information
in multiple peers
d) A ______ procedure at the server unmarshals the arguments from the messages
and converts them into a format that is processed by the server application

e) The process which requires the client to identify itself to the server before being
allowed access to services is known as ________
Part II
6. What is meant by Communication Transparency? What is the need for it?
7. List out some common P2P applications?
8. What are the different types of errors that can happen while processing the RPC
request at the server?
9. What is meant by call-back procedure in RPC? Explain with an example
10. What is meant by Anonymity? Explain the need for peer anonymity with an example
Part III
12. What are the various functional steps involved in RPC Operation?
13. Discuss the difference between processing local and remote procedure calls
14. Discuss the advantages and disadvantages of using synchronous and asynchronous
RPC. Give examples of applications where they can be applied?
15. Discuss the challenges in implementing P2P using a sample application
16. Compare the client/Server model with peer-to-peer model
Part I – Answers
1. c) 2. b) 3. d) 4 a) True 4 b) True 4 c) False 4 d) False 4 e) False
5. a) Function / Subroutine b) batching c) replicating d) stub
e) Authentication
REFERENCES
1. ACM Transactions on Computer Systems, Vol. 2, No. 1, February 1984, by A. D. Birrell and B. J. Nelson
2. Middleware’s role today and tomorrow by Dejan Milojicic, Hewlett Packard
Laboratories
3. Implementing Remote Procedure Calls by Andrew D. Birrell and Bruce Jay Nelson, Xerox Palo Alto Research Center
4. Remote Procedure Call Protocol Version 2, RFC 1831 (www.ietf.org/rfc/rfc1831.txt)
5. Peer-to-Peer Computing by Dejan S. Milojicic, Vana Kalogeraki, Rajan Lukose, Kiran Nagaraja, Jim Pruyne, Bruno Richard, Sami Rollins, Zhichen Xu, HP Laboratories Palo Alto
6. Websites: Wikipedia.org, http://www.cs.cf.ac.uk/Dave/C/node33.html, www.xmlrpc.com, http://www.freesoft.org/CIE/Topics/86.htm, technet2.microsoft.com/WindowsServer



DMC 1754 / 1945

CHAPTER - 3
MIDDLEWARE

3.1 INTRODUCTION
Middleware is the software layer that functions between the client and the server. In
the distributed computing system, middleware is defined as the software layer that lies
above the operating system and the networking software and below the applications.
Middleware consists of a set of enabling services that allow multiple processes running on
one or more computer systems to interact across a network. This technology evolved to
provide for interoperability in support of the move from mainframe computing to client/
server architecture. The role of middleware is to ease the task of designing, programming
and managing distributed applications by providing a simple, consistent and integrated
distributed programming environment.
This unit discusses the middleware architecture and the services provided by middleware.
The categories and the different types of middleware have been discussed.
3.2 LEARNING OBJECTIVES
At the end of this unit, the reader must be familiar with the following concepts:
 Evolution towards middleware
 Middleware architecture
 Services offered by middleware
 Types of middleware
 General and special purpose middleware
3.3 EVOLUTION TOWARDS MIDDLEWARE
The evolution towards middleware is depicted in the illustrations given in Figures 3.1 to
3.5. From programming and working with a single computer, the IT industry has progressed
towards distributed computing and middleware. While the term middleware has been around
for a long time, the middleware technology, as it is used now, evolved in the 1990s to
provide for interoperability in support of the move to client/server architecture.
Using the client/server architecture, the computing facilities of large scale enterprises
evolved into an enterprise wide network of information services, including applications and
databases, on the local area and wide area networks. Servers on the local area network
typically supported files and file based applications, such as electronic mail, bulletin boards,
document preparation, and printing. Local area servers also supported a directory service,
to help a desktop user to find other users and to find and connect to services of interest.
Servers on the wide area network generally supported database access, such as corporate
directories and electronic libraries, or transaction processing applications, such as purchasing,
billing, and inventory control. Some servers also acted as gateways to services offered
outside the enterprise, such as travel or information retrieval services, news feeds (weather,
stock prices, etc.), and electronic document interchange with business partners.


Figure 3.1 Building application directly on top of Hardware & OS

Figure 3.2 Building applications using High level Programming Languages

Figure 3.3 Problems in building applications using High level Programming Languages


Figure 3.4 Solution: Two-way Solution

Figure 3.5 Building Application using Middleware


To help solve the heterogeneity and distribution problems, middleware software evolved
as distributed system services that have standard programming interfaces and protocols.
Standard programming interfaces make it easier to port applications to a variety of server
types. Standard protocols enable programs to interoperate: a program on one computer
system can access programs and data on another system. Interoperation
is possible only if the two systems use the same protocol, that is, the same message formats
and sequences. Also, the applications running on the systems must have similar semantics,
so the messages map to operations that the applications understand. The systems supporting
the protocol may use different machine architectures and operating systems, yet they can
still interoperate.
Middleware emerged as the software “glue” that connects the client to the server. It
enabled the server to offer and provide services to the clients in a transparent manner.
Middleware made the task of designing and managing distributed applications easy by
providing an integrated distributed programming environment.
3.4 MIDDLEWARE ARCHITECTURE / SERVICES
Middleware enables dissimilar systems to interoperate. It is a layer above the operating
system and communications protocol layers but below the application layer that helps to
simplify and unify the communication between various systems. The standard model for
networking protocols and distributed applications is the International Standards Organization’s
Open Systems Interconnection (ISO/OSI) model. Middleware primarily implements the Session
and Presentation Layers of the ISO/OSI model as shown in Figure 3.6.
Its main goal is to enable communication between distributed components. By providing
transparent interaction between unique systems and databases, middleware enables unified
user interfaces, reduces infrastructure requirements, and allows disparate systems to become
easier to manage. Middleware also offers solutions to resource sharing and fault tolerance
requirements.

Figure 3.6 ISO/OSI Reference Model (layers, top to bottom: Application, Presentation, Session, Transport, Network, Data Link, Physical)


Middleware provides the ability to leverage existing systems while investing in building new
systems. For example, most middleware in use today achieves integration by implementing
a common database that each standalone or legacy system interacts with. Standardized
procedures, database “connectors” and vendor-supplied integration tools allow dissimilar
systems to map shared data into a common database. It is the interaction with this common
database that creates the perceived system-to-system transparency and easy integration.
This style of middleware allows each participating system to expose pertinent data and
system functionality to the centralized middleware platform. Client systems subscribe to the
shared database and retrieve published data for the client application’s unique uses.
Middleware is the software that makes it possible in practice to build distributed systems
by providing an execution environment for them as shown in Figure 3.7.
Middleware technologies, or rather the products implementing them, provide the services
and tools required to connect the pieces together, preferably based on industry standards.
The applications are implemented as sets of components that are invoked through the
middleware by clients. As the application executes, the components may invoke other
components within the same or different applications. The path followed by a typical
transaction is shown in the figure.
Each component is unaware of the location of the others. The middleware takes care
of finding them and ensuring that the communication takes place, if necessary using the
network to carry the request, as shown in the figure. Middleware thus hides the underlying
complexity of the environment, particularly networking, from the applications. It may also
hide the operating system from the applications, which only interface with the middleware.


To generalize, middleware services are sets of distributed software that exist between
the application and the operating system and network services as shown in Figure 3.8.

Figure 3.7 Middleware, components and distributed systems (applications A, B and C run as components on Servers 1-3; clients reach them through the middleware layer and the network, and a typical transaction passes through components of several applications)


Middleware services provide a more functional set of Application Programming
Interfaces (APIs) than the operating system and network services. They allow an application
to locate another application or service transparently across the network and interact with
it; to be independent of network services; and to be reliable, available, and able to scale up in
capacity without losing function.
A middleware service is defined by the APIs and protocols it supports. It may have
multiple implementations that conform to its interface and protocol specifications. Most
middleware services are distributed. That is, a middleware service usually includes a client
part, which supports the service’s API running in the application’s address space and a
server part, which supports the service’s main functions and may run in a different address
space (that is, on a different system). There may be multiple implementations of each part.
Most middleware services run on multiple platforms, thereby enhancing the platform coverage
of applications that depend on these services. If the service is distributed, this also enhances
interoperability, since applications on different platforms can use the service to communicate
and/or exchange data. To have good platform coverage, middleware services are usually
programmed to be portable, meaning they are "able to be ported to another platform with
modest and predictable effort." Ideally, a middleware service supports a standard protocol,
or at least a published one. That way, multiple implementations of the service can be developed
and those implementations will interoperate.
For example, middleware can be used to provide a common graphical user interface,
or human interface. Because middleware consolidates unique systems, it enables a single
user interface to access multiple underlying systems. Without middleware, each system
has dedicated front-end interfaces. By using middleware to standardize the interaction
with dissimilar systems, one user interface can be utilized. This eliminates learning
curves, makes infrastructure support easier, and leverages the use of a common
database, if needed.
Middleware performs the following functions.
 Hiding distribution, which is the fact that an application is usually made up of many
interconnected parts running in distributed locations.
 Hiding the heterogeneity of the various hardware components, operating systems
and communication protocols that are used by the different parts of an application.
 Providing uniform, standard, high-level interfaces to the application developers and
integrators, so that applications can easily interoperate and be reused, ported, and
composed.
 Supplying a set of common services to perform various general purpose functions,
in order to avoid duplicating efforts and to facilitate collaboration between
applications.
Using middleware has many benefits most of which derive from abstraction:
 hiding low-level details
 providing language and platform independence
 reusing expertise and possibly code
 ease of application development
As a consequence, one may expect a reduction in application development cost and
time, better quality (since most efforts may be devoted to application specific problems),
and better portability and interoperability. A potential drawback is the possible performance
penalty linked to the use of multiple software layers.

Figure 3.8 Middleware Services


The following components could be middleware services:
 Presentation Management: forms manager, graphics manager, hypermedia linker,
and printing manager.


 Computation: sorting, math services, internationalization services (for character
and string manipulation), data converters, and time services.
 Information management: directory server, log manager, file manager, record
manager, relational database system, object-oriented database system, repository
manager.
 Communications: peer-to-peer messaging, remote procedure call, message
queuing, electronic mail, electronic data interchange.
 Control: thread manager, transaction manager, resource broker, fine-grained request
scheduler, coarse-grained job scheduler.
 System Management: event notification, accounting, configuration manager,
software installation manager, fault detector, recovery coordinator, authentication
service, auditing service, encryption service, access controller.
A middleware system may be general purpose, or may be dedicated to a specific class
of applications. General Middleware design is particularly challenging because no assumptions
can be made about the specific application domain of the middleware. Its architecture cannot
be coupled with any particular operating system or hardware platform. Consequently,
generality (the designer’s or vendor’s interest in providing a set of commonly shared and
reusable features) constantly wrestles with specialty (the users’ desire for middleware
tailored to their specific needs). One solution to this dilemma is
through multiple specifications and large product families.
In recent years a new style of middleware, commonly referred to as Web Services,
has evolved as a means to provide peer-to-peer interaction between dissimilar systems. The
pervasive, connected nature of the Internet has fostered the creation of integration techniques
that reduce the need for a centrally managed database or middleware platform. By using
standardized integration methods, Web Services allows the data elements and capabilities of
one system to be utilized directly by another system. Developers of high-level applications
and unified user interfaces are able to “invoke” application services directly on foreign
systems.
It is important to note that the style or type of middleware selected for use is often
determined by the desired results of the integration project. Modern versions of middleware
platforms will embrace the unique benefits of centrally managed systems as well as the
benefits of highly distributed, peer-to-peer integration afforded by Web Services. For example,
a stock / share market brokering system might use a centrally managed middleware platform
to supply real-time share prices to a vast number of participating subscribers. Direct
interaction between the broker and every individual subscriber is not a requirement of this
system. The centrally managed database can create a bulletin board of current prices for all
interested parties to read and react to. The same system might use peer-to-peer integration
using Web Services to provide individual customers the ability to interact with the system.
The benefits accrued are both of a centrally managed system as well as the ability of the
customer to interact individually as per their requirements.


3.5 REQUIREMENTS OF MIDDLEWARE


Middleware is layered between network operating systems and application components.
Middleware facilitates the communication and coordination of components that are distributed
across several networked hosts. The aim of middleware is to provide application engineers
with high-level primitives that simplify distributed system construction. The idea of using
middleware to build distributed systems is comparable to that of using database management
systems when building information systems. It enables application engineers to abstract
from the implementation of low-level details, such as concurrency control, transaction
management and network communication, and allows them to focus on application
requirements. Some of the major requirements of middleware are discussed in the next few
sections.
Network Communication
The different components of a distributed system may reside on different hosts. In
order for the distributed system to appear as an integrated computing facility, the components
have to communicate with each other. This communication can only be achieved by using
network protocols which are classified by the ISO/OSI reference model. Distributed systems
are usually built on top of the transport layer, of which TCP or UDP are good examples. The
layers underneath are provided by the network operating system.
Different transport protocols have in common that they can transmit messages between
different hosts. If the communication between distributed systems is programmed at this
level of abstraction, application engineers need to implement the session and presentation
layers themselves. This is too costly, too error-prone and too time-consuming. Instead, application engineers
should be able to request parameterized services from possibly more than one remote
components and may wish to execute them as atomic and isolated transactions, leaving the
implementation of session and presentation layer to the middleware. The parameters that a
component requesting a service needs to pass to a component that provides a service are
often complex data structures. The presentation layer implementation of the middleware
should provide the ability to transform these complex data structures into a format that can
be transmitted using a transport protocol, i.e. a sequence of bytes. This transformation is
referred to as marshalling and the reverse is called unmarshalling.
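The marshalling and unmarshalling steps described above can be sketched in Python using the standard struct module. The wire format used here (a 4-byte procedure id, an 8-byte float and a length-prefixed string, all in network byte order) is purely illustrative, not the format of any real RPC protocol:

```python
import struct

HEADER = "!IdI"  # network byte order: u32 proc id, f64 amount, u32 name length

def marshal_request(proc_id, amount, account):
    # Flatten a structured request into a sequence of bytes for transport.
    name = account.encode("utf-8")
    return struct.pack(HEADER, proc_id, amount, len(name)) + name

def unmarshal_request(data):
    # Reverse step: rebuild the structured request from the byte sequence.
    proc_id, amount, n = struct.unpack_from(HEADER, data)
    offset = struct.calcsize(HEADER)
    name = data[offset:offset + n].decode("utf-8")
    return proc_id, amount, name

msg = marshal_request(7, 250.0, "savings")
assert unmarshal_request(msg) == (7, 250.0, "savings")
```

Real middleware does the same for arbitrarily nested data structures, so the application never sees the byte-level representation.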
Coordination
As application components reside on different hosts, distributed systems have multiple
points of control. Components on the same host execute concurrently, which leads to a need
for synchronization when components communicate with each other. This synchronization
needs to be implemented in the session layer implementation provided by the middleware.
Synchronization can be achieved in different ways. A component can be blocked while it
waits for another component to complete execution of a requested service. This form of
communication is often called synchronous. After issuing a request, a component can also
continue to perform its operations and synchronize with the service providing component at
a later point. This synchronization can then be initiated by either the client component (using,
for example, polling), in which case the interaction is often called deferred synchronous.
Synchronization that is initiated by the server is referred to as asynchronous communication.
Thus, application engineers need some basic mechanisms that support various forms of
synchronization between communicating components. Sometimes more than two components
are involved in a service request. These forms of communications are also referred to as
group requests. This is often the case when more than one component is interested in


events that occur in some other component. An example is a distributed stock ticker application
where an event, such as a share price update, needs to be communicated to multiple distributed
display components, to inform traders about the update. Although the basic mechanisms for
this push-style communication are available in multi-cast networking protocols, additional
support is needed to achieve reliable delivery and marshalling of complex request parameters.
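The synchronous and deferred synchronous forms described above can be illustrated with a small Python sketch; remote_service is only a local stand-in for a call that would normally travel over the network:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def remote_service(x):
    # Stand-in for a request processed by a remote component.
    time.sleep(0.05)
    return x * 2

executor = ThreadPoolExecutor(max_workers=2)

# Synchronous: the caller blocks until the result arrives.
result_sync = remote_service(21)

# Deferred synchronous: issue the request, keep working, synchronize later
# by polling (synchronization initiated by the client).
future = executor.submit(remote_service, 21)
local_work = sum(range(100))   # the caller continues its own work meanwhile
while not future.done():
    time.sleep(0.01)
result_deferred = future.result()

assert result_sync == result_deferred == 42
executor.shutdown()
```

An asynchronous variant would instead register a callback that the service side triggers when the result is ready.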
A slightly different coordination problem arises due to the sheer number of components
that a distributed system may have. The components, i.e. modules or libraries, of a centralized
application reside in virtual memory while the application is executing. This is inappropriate
for distributed components for the following reasons:
 Hosts sometimes have to be shut down and components hosted on these machines
have to be stopped and restarted when the host resumes operation
 The resources required by all components on a host may be greater than the resources
the host can provide
 Depending on the nature of the application, components may be idle for long periods,
thus wasting resources if they were kept in virtual memory all the time.
For these reasons, distributed systems need to use a concept called activation that
allows for component executing processes to be started (activated) and terminated
(deactivated) independently from the applications that they execute.
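The activation concept, together with persistence of a component's state across its deactivated periods, can be sketched as follows. The Component class, its serve operation and the JSON state file are all invented for illustration; a real middleware would manage this transparently:

```python
import json, os, tempfile

class Component:
    """Toy component whose state survives deactivation (hypothetical API)."""
    def __init__(self, store):
        self.store = store            # file standing in for persistent storage
        self.state = {"hits": 0}

    def serve(self):
        self.state["hits"] += 1

    def deactivate(self):
        # Middleware persists the state; the hosting process can then stop.
        with open(self.store, "w") as f:
            json.dump(self.state, f)

    @classmethod
    def activate(cls, store):
        # Middleware starts the component and restores any saved state.
        c = cls(store)
        if os.path.exists(store):
            with open(store) as f:
                c.state = json.load(f)
        return c

store = os.path.join(tempfile.mkdtemp(), "comp.json")
c = Component.activate(store)    # first activation: fresh state
c.serve(); c.serve()
c.deactivate()                   # host shuts down, component is deactivated
c2 = Component.activate(store)   # later reactivated, state restored
assert c2.state["hits"] == 2
```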
The middleware should manage persistent storage of components’ state prior to
deactivation and restore components’ state during activation. Middleware should also enable
application programmers to determine the activation policies that define when components
are activated and de-activated. Given that components execute concurrently on distributed
hosts, a server component may be requested from different client components at the same
time. The middleware should support different mechanisms called threading policies to
control how the server component reacts to such concurrent requests. The server component
may be single-threaded, queuing requests and processing them in the order of their arrival.
Alternatively, the component may also spawn new threads and execute each request in its
own thread. Finally the component may use a hybrid threading policy that uses a pool with
a fixed number of threads to execute requests, but starts queuing once there are no free
threads in the pool.
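The single-threaded and pool-based threading policies can be sketched as follows; handle stands in for the server component's request processing, and the function names are illustrative:

```python
import queue, threading

def handle(request):
    return request.upper()   # the server component's actual work

# Single-threaded policy: requests are served one at a time, in arrival order.
def single_threaded_server(requests):
    return [handle(r) for r in requests]

# Hybrid policy: a fixed pool of worker threads drains a shared queue;
# requests wait in the queue once every worker is busy.
def pooled_server(requests, workers=3):
    q, results, lock = queue.Queue(), [], threading.Lock()
    for r in requests:
        q.put(r)
    def worker():
        while True:
            try:
                r = q.get_nowait()
            except queue.Empty:
                return
            out = handle(r)
            with lock:               # results list is shared between threads
                results.append(out)
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sorted(results)           # completion order is nondeterministic

assert single_threaded_server(["a", "b"]) == ["A", "B"]
assert pooled_server(["a", "b", "c", "d"]) == ["A", "B", "C", "D"]
```

The thread-per-request policy would simply spawn a new thread for each incoming request instead of bounding the pool.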
Reliability
Network protocols have varying degrees of reliability. Protocols that are used in practice
do not necessarily guarantee that every packet that a sender transmits is actually received
by the receiver and that the order in which they are sent is preserved. Thus, distributed
system implementations have to put error detection and correction mechanisms in place to
cope with these unreliabilities. Unfortunately, reliable delivery of service requests and service
results does not come for free. Reliability has to be paid for with decreases in performance.
To allow engineers to trade-off reliability and performance in a flexible manner, different
degrees of service request reliability are needed in practice.
For communication about service requests between two components, the reliabilities
that have been suggested for a distributed system are best effort, at-most-once, at-least-once
and exactly-once. Best effort service requests do not give any assurance about the
execution of the request. At-most-once requests are guaranteed to execute at most once. It
may happen that they are not executed, but then the requester is notified about the failure.


At-least-once service requests are guaranteed to be executed, possibly more than once.
The highest degree of reliability is provided by exactly-once requests, which are guaranteed
to be executed once and only once. Additional reliabilities can be defined for group requests.
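One common way a middleware can provide at-most-once execution is to tag each request with an identifier and keep a duplicate-filtering table on the server, so that a client retransmission returns the cached reply instead of re-executing the operation. A toy sketch, with invented names:

```python
class Server:
    def __init__(self):
        self.executed = {}   # request-id -> cached reply (duplicate filter)
        self.calls = 0       # counts actual executions, for demonstration

    def handle(self, request_id, x):
        # At-most-once: a retransmitted request is not executed again;
        # the reply cached for its id is returned instead.
        if request_id in self.executed:
            return self.executed[request_id]
        self.calls += 1
        reply = x + 1                    # the actual service operation
        self.executed[request_id] = reply
        return reply

server = Server()
# The client retries because the first reply was lost or delayed.
first = server.handle("req-1", 41)
retry = server.handle("req-1", 41)
assert first == retry == 42
assert server.calls == 1   # executed once despite the duplicate request
```

Combining client retries with such a filter (and stable storage for the table) is also the usual building block for exactly-once behaviour.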
The above reliability discussion applies to individual requests. It can be extended to
consider more than one request. Transactions are important primitives that are used in
reliable distributed systems. Transactions have ACID properties, which means they enable
multiple requests to be executed in an atomic, consistency-preserving, isolated and durable
manner. This means that the sequence of requests is either performed completely, or not at
all. It enforces that every completed transaction is consistent. It demands that a transaction
is isolated from concurrent transactions and, finally, that once the transaction is completed its
effect cannot be undone. Every middleware that is used in critical applications needs to
support distributed transactions.
Reliability may also be increased by replicating components, that is, components are
available in multiple copies on different hosts. If one component is unavailable, for example
because its host needs to be rebooted, a replica on a different host can take over and
provide the requested service. Sometimes components have an internal state and then the
middleware should support replication in such a way that these states are kept in sync.
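A toy sketch of replication with failover: the middleware applies every update to all replicas so their states stay in sync, and reads fail over when the primary is unavailable (all class and host names are illustrative):

```python
class Replica:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.state = name, healthy, {}

    def put(self, k, v):
        self.state[k] = v

# Two copies of the same component on different hosts (hypothetical setup).
primary, backup = Replica("hostA"), Replica("hostB")

def replicated_put(k, v):
    # The middleware keeps replica states in sync on every update.
    for r in (primary, backup):
        r.put(k, v)

def get(k):
    # Requests go to the primary; if it is down, fail over to the backup.
    r = primary if primary.healthy else backup
    return r.state[k]

replicated_put("px:ACME", 10.5)
primary.healthy = False           # hostA is rebooted
assert get("px:ACME") == 10.5     # the backup transparently takes over
```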
Scalability
Scalability denotes the ability to accommodate a growing future load. In centralized or
client/server systems, scalability is limited by the load that the server host can bear. This can
be overcome by distributing the load across several hosts. The challenge of building a scalable
distributed system is to support changes in the allocation of components to hosts without
changing the architecture of the system or the design and code of any component. This can
only be achieved by respecting the different dimensions of transparency identified in the
ISO Open Distributed Processing (ODP) reference model in the architecture and design of
the system.
Access transparency, for example demands that the way a component accesses the
services of another component is independent of whether it is local or remote. Another
example is location transparency, which demands that components do not know the physical
location of the components they interact with. If components can access services without
knowing the physical location and without changing the way they request it, load balancing
mechanisms can migrate components between machines in order to reduce the load on one
host and increase it on another host. It should again be transparent to users whether or not
such a migration occurred. This is referred to as migration transparency.
Replication can also be used for load balancing. Components whose services are in high
demand may have to exist in multiple copies. Replication transparency means that it is
transparent for the requesting components, whether they obtain a service from the master
component itself or from a replicated site.
The different transparency criteria that will lead to scalable systems are very difficult
to achieve if distributed systems are built directly on network operating system primitives.
To overcome these difficulties, middleware must support access, location, migration and
replication transparency.
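Location and replication transparency can be illustrated with a toy in-process registry: the client names a logical service, never a host, so the "middleware" is free to pick any replica or to move a service between hosts. The naming scheme and endpoints are hypothetical, not a real product's API:

```python
# A toy name service: clients resolve a logical service name instead of
# hard-coding a host, so components can be migrated or replicated freely.
registry = {}

def register(name, endpoint, handler):
    registry.setdefault(name, []).append((endpoint, handler))

def invoke(name, *args):
    # The client never sees the endpoint; the middleware picks one, which
    # also provides a natural hook for load balancing across replicas.
    endpoint, handler = registry[name][0]
    return handler(*args)

quote = lambda sym: {"sym": sym, "px": 10.0}
register("quote-service", "hostA:9001", quote)
register("quote-service", "hostB:9001", quote)   # replica on another host

assert invoke("quote-service", "ACME") == {"sym": "ACME", "px": 10.0}
```

Because callers depend only on the name "quote-service", replicas can be added, removed or migrated without any change to client code.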
Heterogeneity
The components of distributed systems may be procured off-the-shelf, may include
legacy systems and new components. As a result they are often rather heterogeneous. This


heterogeneity comes in different dimensions: hardware and operating system platforms,
programming languages and indeed the middleware itself.
Hardware platforms use different encodings for atomic data types, such as numbers
and characters. Mainframes use the EBCDIC character set, Unix servers may use 7-bit
ASCII characters, while Windows-based PCs use 16-bit Unicode character encodings.
Thus the character encoding of alphanumeric data that is sent across different types of
platforms has to be adjusted. Likewise, mainframes and RISC servers, for example, use
big-endian representations for numbers, i.e. the most significant byte encoding an integer,
long or floating point number comes first. PCs, however, use a little-endian representation
where the least significant byte comes first. Thus, whenever a number is sent from a little-
endian host to a big-endian host or vice versa, the order of bytes with which this number is
encoded needs to be swapped. This heterogeneity should be resolved by the middleware
rather than the application engineer.
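The byte-order conversion can be demonstrated with Python's struct module, which lets a program choose big-endian (">") or little-endian ("<") encodings explicitly; "!" denotes the agreed network byte order (big-endian):

```python
import struct

n = 1_000_000
big    = struct.pack(">i", n)   # big-endian: most significant byte first
little = struct.pack("<i", n)   # little-endian: least significant byte first

# Same value, same bytes, opposite order on the wire.
assert big == bytes(reversed(little))

# Middleware marshals into network byte order before transmission, so a
# little-endian PC and a big-endian mainframe both recover the same value.
assert struct.unpack("!i", big)[0] == n
assert struct.unpack("<i", little)[0] == n
```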
When integrating legacy components with newly-built components, it often occurs that
different programming languages need to be used. These programming languages may follow
different paradigms. While legacy components tend to be written in imperative languages,
such as COBOL, PL/I or C, newer components are often implemented using object-oriented
programming languages. Even different object-oriented languages have considerable
differences in their object model, type system, approach to inheritance and late binding.
These differences need to be resolved by the middleware.
There can be many approaches to middleware design. The availability of different
middleware solutions may present a selection problem, but sometimes there is no optimal
single middleware, and multiple middleware systems have to be combined. This may be for
a variety of reasons. Different middleware may be required due to availability of programming
language bindings, particular forms of middleware may be more appropriate for particular
hardware platforms (e.g. COM on Windows and CORBA on Mainframes). Finally, the
different middleware systems will have different performance characteristics and depending
on the deployment a different middleware may have to be used as a backbone than the
middleware that is used for local components. Thus middleware will have to be interoperable
with other implementations of the same middleware or even different types of middleware
in order to facilitate distributed system construction.
Security
Middleware provides seamless interaction between the distributed components, which
are independently created and combined. These components are deployed on different host
computer systems giving rise to high security risk. All component calls are directed through
the middleware which operates using common networking infrastructure. A malicious user
could listen in (eavesdrop) to the network communication thus tracking down all interchanged
messages. The malicious user could decode the messages and steal identity information
about the calling user. Such information could be used in order to invoke components
masquerading as another user. Security could be compromised even more if the malicious
user manages to bypass security checks, due to bad authorization setup, and is able to
tamper with system and application logs. The threats that apply to distributed component
architecture can be summarized as:
 Disclosure of confidential information to unauthorized users. Eavesdropping on an
insecure communication line, so gaining access to confidential data. Hence
unauthorized users can get access to important information.

 Violation of data or code integrity. Tampering with the communication line by injecting
or removing data. Replacing component code with malicious code. Provide malicious
code as application code
 Misappropriation of protected resources. Security breach that allows unauthorized
user to use protected services
 Compromise of availability of the services. Denial of service attacks. Physical
communication line attacks.
A formal method has to be developed to prevent distributed component-structured
software and the connecting networks against malicious attacks. The key aspects of security
that needs to be addressed are:
 Confidentiality: preventing unauthorized access / disclosure
 Integrity: protecting data and code from unauthorized modification
 Availability: protection from disruption of service to legitimate users
In order to respond to these security challenges, all component architectures need to
offer a vast set of security related features. Although the implementation details in these
features may differ, the general principles are more or less the same. Identification of principals
has to be established using authentication mechanisms, while access controls can prevent
unauthorized access to objects. Information confidentiality and integrity can be enforced
using communication security mechanisms. Misuse of the system can be detected by keeping
audit logs which, in conjunction with non-repudiation services can respond to the need for
accountability.
To enforce these security requirements the following middleware functionality is needed:
 Identification / Authentication: Server and Client authentication to verify their
identity. Proof must be provided in the form of credentials for verification prior
to access of application components/resources.
 Access Controls and Authorization: for the purpose of providing explicit
permission for access of the application components and to limit access to
those users/programs that are given explicit permission.
 Security of communication: Secure invocation of application components. Data
items transferred are encrypted. This prevents both malicious and false operation,
as well as eavesdropping.
 Non-repudiation : to provide proof of data origin and receipt, so access cannot be
later denied
 Logging : to record all activities related to components, like who (user/program)
initiated an invocation for which component and when. Provide for security auditing.
Finally, security administration methods are of great importance and include the use of
security policy to describe the complex security rules, user management, roles and permissions
for access.
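As a concrete illustration of the access-control, authorization and logging requirements above, the following sketch checks a role-based policy before dispatching an invocation and records every attempt in an audit log. It is a minimal, hypothetical fragment (all role and operation names are invented for the example), not a real security framework:

```python
# Hypothetical role-based access control with auditing.
# POLICY maps a role to the operations it has explicit permission for.
POLICY = {
    "admin":  {"deploy", "invoke", "audit"},
    "client": {"invoke"},
}

audit_log = []   # records (role, operation, allowed) for security auditing


def is_authorized(role: str, operation: str) -> bool:
    """Return True only if the role has explicit permission for the operation."""
    return operation in POLICY.get(role, set())


def invoke(role: str, operation: str) -> str:
    """Authorize the caller, log the attempt, then (pretend to) dispatch."""
    allowed = is_authorized(role, operation)
    audit_log.append((role, operation, allowed))   # kept for accountability
    if not allowed:
        raise PermissionError(f"{role} may not {operation}")
    return f"{operation} executed"
```

Note that the attempt is logged whether or not it succeeds, which is what makes the audit trail useful for detecting misuse.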
3.6. TYPES OF MIDDLEWARE
Middleware in a client/server infrastructure increases the interoperability, portability,
and flexibility of an application by allowing the application to be distributed over multiple
heterogeneous platforms. It reduces the complexity of developing applications that span
multiple operating systems and network protocols by insulating the application developer
from the details of the various operating system and network interfaces - Application
Programming Interfaces (APIs) that extend across diverse platforms and networks.



DMC 1754 / 1945

Middleware may be broadly classified as transactional, message-oriented,
procedural, and object or component middleware. This classification is based on the
primitives that middleware products provide for the interaction between distributed
components, which are distributed transactions, message passing, remote procedure calls
and remote object requests. Database connectivity middleware – ODBC (Open Database
Connectivity) and JDBC (Java Database Connectivity) – provides standard
connectivity between applications and database servers.
3.6.1 Transactional Middleware
Transactional middleware supports transactions involving components that run on
distributed hosts. Transaction processing (TP) monitors provide the distributed client/server
environment with the capacity to efficiently and reliably develop, run, and manage transaction
applications.
TP monitor technology controls transaction applications and performs business logic/
rules computations and database updates. TP monitor technology emerged many years ago
when Atlantic Power and Light created an online support environment to share application
services and information resources concurrently with the batch and time-sharing operating
system environments.
Transactional middleware enables application engineers to define the services that
server components offer, implement those server components and then write client
components that request several of those services within a transaction. Client and server
components can reside on different hosts and therefore requests are transported via the
network in a way that is transparent to client and server components.
TP monitor can provide application services to thousands of clients in a distributed
client/server environment. TP monitor technology does this by multiplexing client transaction
requests (by type) onto a controlled number of processing routines that support particular
services. These events are depicted in Figure 3.10.

Figure 3.10 Transaction Processing Monitor Technology


Clients are bound, serviced, and released using stateless servers that minimize overhead.
The database sees only the controlled set of processing routines as clients.
TP monitor technology maps numerous client requests through application service
routines to improve system performance. The TP monitor technology (located as a server)
can also take over the application transaction logic from the client. This reduces the number of
upgrades required by these client platforms. In addition, TP monitor technology includes



numerous management features, such as restarting failed processes, dynamic load balancing,
and enforcing consistency of distributed data. TP monitor technology is easily scalable by
adding more servers to meet growing numbers of users.
TP monitor technology is independent of the database architecture. It supports flexible
and robust business modeling and encourages modular, reusable procedures. TP monitor
designs allow Application Programming Interfaces (APIs) to support components such as
heterogeneous client libraries, databases and resource managers, and peer-level application
systems. TP monitor technology supports architecture flexibility because each component
in a distributed system is comprised of products that are designed to meet specific functionality,
such as graphical user interface builders and database engines.
Within distributed client/server systems, each client that is supported adds overhead to
system resources (such as memory). Responsiveness is improved and system resource
overhead is reduced by using TP monitor technology to multiplex many clients onto a much
smaller set of application service routines. TP monitor technology provides a highly active
system that includes services for delivery order processing, terminal and forms management,
data management, network access, authorization, and security.
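The multiplexing idea described above, funnelling many client requests onto a small, controlled set of service routines, can be sketched roughly as follows. This is an illustrative fragment, not a real TP monitor: a bounded pool of worker threads services a shared request queue, so the backend sees only the pool rather than each individual client:

```python
# Illustrative sketch of TP-monitor-style request multiplexing.
import queue
import threading


def run_requests(requests, n_workers=3):
    """Service many (client_id, payload) requests with a small worker pool."""
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        # Each worker is a stand-in for one controlled service routine.
        while True:
            item = work.get()
            if item is None:          # sentinel: shut this worker down
                return
            client_id, payload = item
            with lock:
                # payload.upper() stands in for the real service logic.
                results.append((client_id, payload.upper()))
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for r in requests:                # arbitrarily many clients enqueue here
        work.put(r)
    work.join()                       # wait until every request is serviced
    for _ in threads:
        work.put(None)
    for t in threads:
        t.join()
    return results
```

However many requests arrive, only `n_workers` routines ever run concurrently, which is the resource-saving effect the text describes.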
TP monitor technology supports a number of program-to-program communication
models, such as store-and-forward, asynchronous, Remote Procedure Call (RPC) and
conversational. This improves interactions among application components. TP monitor
technology provides the ability to construct complex business applications from modular,
well-defined functional components. Because this technology is well-known and well-defined
it should reduce program risk and associated costs.
TP monitors provide coordination, as the client components can request services using
synchronous or asynchronous communication. Transactional middleware supports various
activation policies and allows services to be activated on demand and deactivated when
they have been idle for some time. Activation can also be permanent, allowing the server
component to always reside in memory.
Transaction oriented middleware uses the two-phase commit protocol to implement
distributed transactions. When a transaction involves multiple distributed resources, for
example, database servers on two different computer systems, the transaction commit process
is complex as it spans two distinct software systems. Two-phase commit protocol (2PC)
uses a coordinator to ensure that distributed transactions are performed in an orderly manner
and all nodes agree to either commit the transaction or rollback the transaction.
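The two-phase commit decision logic just described can be sketched as follows. This is a minimal illustrative fragment (the `Participant` class is an invented stand-in, not a real XA resource manager): participants vote in phase one, and the coordinator commits only if every vote is yes, otherwise all roll back:

```python
# Minimal sketch of the two-phase commit (2PC) decision logic.
class Participant:
    """Stand-in for one distributed resource, e.g. a database server."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):            # phase 1: vote on whether commit is possible
        return self.can_commit

    def commit(self):             # phase 2: make the changes permanent
        self.state = "committed"

    def rollback(self):           # phase 2: undo the changes
        self.state = "rolled back"


def two_phase_commit(participants):
    """Coordinator: commit everywhere only if every participant votes yes."""
    if all(p.prepare() for p in participants):   # phase 1: collect votes
        for p in participants:                   # phase 2: commit everywhere
            p.commit()
        return "committed"
    for p in participants:                       # any "no" vote aborts all
        p.rollback()
    return "rolled back"
```

The key property is that all participants end in the same state: a single "no" vote in phase one forces every node, including those that could have committed, to roll back.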
Reliability is provided as a client component can cluster more than one service request
into a transaction, even if the server components reside on different machines. In order to
implement these transactions, transactional middleware has to assume that the participating
servers implement the two-phase commit protocol. If server components are built using
database management systems, they can delegate implementation of the two-phase commit
to these database management systems. For this implementation to be portable, a standard
has been defined. The Distributed Transaction Processing (DTP) Protocol, which has been
adopted by the Open Group, defines a programmatic interface for two-phase commit in its
XA-protocol. DTP is widely supported by relational and object-oriented database management
systems. This means that distributed components that have been built using any of these
database management systems can easily participate in distributed transactions. This makes
them fault-tolerant, as they automatically recover to the end of all completed transactions.
Scalability is addressed as most transaction monitors support load balancing and
replication of server components. Replication of servers is often based on replication
capabilities that the database management systems provide upon which the server
components rely.




Transactional middleware supports heterogeneity because the components can reside
on different hardware and operating system platforms. Also different database management
systems can participate in transactions, due to the standardized DTP protocol. Resolution of
data heterogeneity is, however, not well-supported by transactional middleware, as the
middleware does not provide primitives to express complex data structures that could be
used as service request parameters and therefore also does not marshal them.
Transactional middleware can therefore simplify the construction of distributed systems.
TP monitor technology makes database processing cost-effective for online applications.
Spending relatively little money on TP monitor technology can result in significant savings
compared to the resources required to improve database or platform resources to provide
the same functionality.
Transactional middleware, however, has several weaknesses. Firstly, it creates an undue
overhead if there is no need to use transactions, or transactions with ACID semantics are
inappropriate. This is the case, for example, when the client performs long-lived activities.
Secondly, marshalling and unmarshalling between the data structures that a client uses and
the parameters that services require needs to be done manually in many products. Thirdly,
although the API for the two-phase commit is standardized, there is no standardized approach
for defining the services that server components offer. A limitation to TP technology is that
the implementation code is usually written in a lower-level language (such as COBOL), and
is not yet widely available in the popular visual toolsets. This reduces the portability of a
distributed system between different transaction monitors.
TP monitor technology has been used successfully in the field for many years. TP
monitor technology is used for delivery order processing, hotel and airline reservations,
electronic fund transfers, security trading, and manufacturing resource planning and control.
It improves batch and time-sharing application effectiveness by creating online support to
share application services and information resources. The products in this category include
IBM’s CICS, BEA’s Tuxedo and Transarc’s Encina. Use of TP monitor technology is a
cost-effective alternative to upgrading database management systems or platform resources
to provide this same functionality.
3.6.2 Message-Oriented Middleware
Message-oriented middleware (MOM) supports the communication between distributed
system components by facilitating message exchange. The message may contain data,
software instructions or both. MOM infrastructure is typically built around a queuing system.
Message queuing is an indirect communication model, where communication
happens via a queue. A message from one program is sent to a specific queue, identified by
name. After the message is stored in the queue, it is delivered to a receiver. MOM keeps track
of whether and when each message has been delivered. Most MOM systems also support
message passing, a direct communication model where the information is sent to the
interested parties. One example of message passing is the publish-subscribe (pub/sub) middleware
model. In pub/sub, clients can subscribe to subjects of interest. After
subscribing, the client receives any message corresponding to a subscribed topic.
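The pub/sub model can be sketched as a minimal in-process broker. This is an illustration of the idea only; real MOM products add persistence, delivery guarantees and network transport:

```python
# Minimal in-process sketch of publish-subscribe message passing.
from collections import defaultdict


class Broker:
    def __init__(self):
        # topic name -> list of subscriber inboxes
        self.subscribers = defaultdict(list)

    def subscribe(self, topic):
        """Register interest in a topic; returns the subscriber's inbox."""
        inbox = []
        self.subscribers[topic].append(inbox)
        return inbox

    def publish(self, topic, message):
        """Deliver the message to every subscriber of the topic."""
        for inbox in self.subscribers[topic]:
            inbox.append(message)
```

Note that the publisher never names its receivers: it addresses a topic, and the broker fans the message out to whoever subscribed, which is the decoupling pub/sub provides.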
MOM as shown in Figure 3.12 is software that resides in both portions of a client/
server architecture and typically supports asynchronous calls between the client and server
applications. Client components use MOM to send a message to a server component across
the network. The message can be a notification about an event, or a request for a service
execution from a server component. The content of such a message includes the service
parameters. The server responds to a client request with a reply-message containing the
result of the service execution.




Message queues provide temporary storage when the destination program is busy or
not connected. MOM reduces the involvement of application developers with the complexity
of the master-slave nature of the client/server mechanism. MOM increases the flexibility
of an architecture by enabling applications to exchange messages with other programs without
having to know what platform or processor the other application resides on within the network.

[Figure 3.12 shows MOM layered between each application and the network on both the client and server platforms, with application-specific messages exchanged through it.]

Figure 3.12 Message-Oriented Middleware


Nominally, MOM systems provide a message queue between interoperating processes,
so if the destination process is busy, the message is held in a temporary storage location until
it can be processed. Message queues also help to achieve fault tolerance. The sender
writes the message into the message queue and if the receiver is unavailable due to a
failure, the message queue retains the message until the receiver is available again.
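This store-and-forward behaviour can be sketched as follows. The fragment is purely illustrative (a real queue would persist messages to disk and run over a network): the queue retains messages while the receiver is disconnected and delivers them, in order, once it reconnects:

```python
# Sketch of a store-and-forward message queue for fault tolerance.
from collections import deque


class MessageQueue:
    def __init__(self):
        self.pending = deque()      # retained messages, oldest first
        self.receiver = None        # None models a disconnected receiver

    def send(self, message):
        """Sender writes to the queue; delivery happens when possible."""
        self.pending.append(message)
        self._drain()

    def connect(self, receiver):
        """Receiver comes back up; deliver everything retained so far."""
        self.receiver = receiver
        self._drain()

    def _drain(self):
        while self.receiver is not None and self.pending:
            self.receiver(self.pending.popleft())
```

The sender's `send` call succeeds immediately even while the receiver is down, which is exactly the decoupling that makes message queues fault tolerant.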
MOM is typically asynchronous and peer-to-peer, but most implementations support
synchronous message passing as well. Asynchronous and synchronous mechanisms each
have strengths and weaknesses that should be considered when designing any specific
application. The asynchronous mechanism of MOM, unlike Remote Procedure Call (RPC),
which uses a synchronous, blocking mechanism, does not guard against overloading a network.
As such, a negative aspect of MOM is that a client process can continue to transfer data to
a server that is not keeping pace. Message-oriented middleware’s use of message queues,
however, tends to be more flexible than RPC-based systems, because most implementations
of MOM can default to synchronous and fall back to asynchronous communication if a
server becomes unavailable.
The message-oriented middleware software (kernel) must run on every platform of a
network. The impact of this varies and depends on the characteristics of the system in
which the MOM will be used:
 Not all MOM implementations support all operating systems and protocols. The
flexibility to choose a MOM implementation may be dependent on the chosen
application platform or network protocols supported, or vice versa.
 Local resources and CPU cycles must be used to support the MOM kernels on
each platform. The performance impact of the middleware implementation must be
considered; this could possibly require the user to acquire greater local resources
and processing power.
 The administrative and maintenance burden would increase significantly for a network
manager with a large distributed system, especially in a mostly heterogeneous system.




 A MOM implementation may cost more if multiple kernels are required for a
heterogeneous system, especially when a system is maintaining kernels for old
platforms and new platforms simultaneously.
 MOM can be effectively combined with remote procedure call (RPC) technology:
RPC can be used for synchronous support by a MOM.
Products in this category include IBM’s MQSeries and Sun’s Java Message Queue.
A strength of MOM is that this paradigm supports asynchronous message delivery
very naturally. The client can continue processing as soon as the middleware has taken the
message. Eventually the server will send a message including the result and the client will
be able to collect that message at an appropriate time. This achieves de-coupling of client
and server and leads to more scalable systems. The weakness, at the same time, is that the
implementation of synchronous requests is cumbersome as the synchronization needs to be
implemented manually in the client. A further strength of MOM is that it supports group
communication by distributing the same message to multiple receivers in a transparent way.
However, asynchronous message passing can also introduce other problems. What
happens if a message cannot be delivered? The sender may never wait for delivery of the
message, and thus never hear about the error. Similarly, a mechanism is needed to notify an
asynchronous receiver that a message has arrived. The operation invoker could learn about
completion/errors by polling, getting a software interrupt, or by waiting explicitly for completion
later using a special synchronous wait call. An asynchronous operation needs to return a
call/transaction ID (identification) if the application needs to be later notified about the
operation. At notification time, this ID would be placed in some global location or passed as
an argument to a handler or wait call.
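The call/transaction ID pattern described above can be sketched as follows. This is an illustrative fragment with invented names: each asynchronous send returns an ID, the reply carries the same ID, and the client uses it to match results (or errors) back to the original request:

```python
# Sketch of asynchronous invocation with correlation (call) IDs.
import itertools


class AsyncClient:
    def __init__(self):
        self._ids = itertools.count(1)
        self.pending = {}           # call id -> completion callback

    def send(self, request, on_complete):
        """Send asynchronously; return an ID for later notification."""
        call_id = next(self._ids)
        self.pending[call_id] = on_complete
        # ... the request would be handed to the middleware here ...
        return call_id              # caller keeps the ID

    def on_reply(self, call_id, result):
        """Middleware delivers a reply; dispatch it by its call ID."""
        self.pending.pop(call_id)(result)
```

Because replies are matched by ID, they may arrive in any order, which is what allows the client to keep processing instead of blocking on each request.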
MOMs do not support access transparency very well, because client components use
message queues for communication with remote components, while it does not make sense
to use queues for local communication. This lack of access transparency disables migration
and replication transparency, which complicates scalability. Moreover, queues need to be
set up by administrators and the use of queues is hard-coded in both client and server
components, which leads to rather inflexible and poorly adaptable architectures.
MOM does not support data heterogeneity very well either, as the application engineers
have to write the code that marshals. With most products, there are different programming
language bindings available.
MOM is most appropriate for event-driven applications. When an event occurs, the
client application hands over to the messaging middleware the responsibility of
notifying a server that some action needs to be taken. However, message-oriented middleware
also has some weaknesses, as it only supports at-least-once reliability; the same message
could therefore be delivered more than once. Moreover, MOM does not support transaction properties,
such as atomic delivery of messages to all receivers or to none.
MOM is also well-suited for object-oriented systems because it furnishes a conceptual mechanism for peer-to-peer
communication between objects. MOM insulates developers from connectivity concerns:
the application developers write to APIs that handle the complexity of the specific interfaces.
Implementations of MOM first became available in the mid-to-late 1980s. Many MOM
implementations currently exist that support a variety of protocols and operating systems.
Many implementations support multiple protocols and operating systems simultaneously.
Some vendors provide tool sets to help extend existing inter-process communication across
a heterogeneous network.



MOM is typically implemented as a proprietary product, which means MOM
implementations are nominally incompatible with other MOM implementations. Using a
single implementation of a MOM in a system will most likely result in a dependence on the
MOM vendor for maintenance support and future enhancements. This could have a highly
negative impact on a system’s flexibility, maintainability, portability, and interoperability.
3.6.3 Procedural Middleware
Remote Procedure Calls (RPCs) were devised by Sun Microsystems in the early 1980s
as part of the Open Network Computing (ONC) platform. Sun provided remote procedure
calls as part of all their operating systems and submitted RPCs as a standard to the X/Open
consortium, which adopted it as part of the Distributed Computing Environment (DCE).
RPC’s are now available on most Unix implementations and also on Microsoft’s Windows
operating systems. The concept of RPC has been discussed in literature as far back as
1976, with full-scale implementations appearing in the late 1970s and early 1980s.
In order to access the remote server portion of an application, special function calls,
RPCs, are embedded within the client portion of the client/server application program.
Because they are embedded, RPCs do not stand alone as a discrete middleware layer.
When the client program is compiled, the compiler creates a local stub for the client portion
and another stub for the server portion of the application. These stubs are invoked when the
application requires a remote function and typically support synchronous calls between clients
and servers. These relationships are shown in Figure 3.11.
RPCs support the definition of server components as RPC programs. An RPC program
exports a number of parameterized procedures and associated parameter types. Clients
that reside on other hosts can invoke those procedures across the network. Procedural
middleware implements these procedure calls by marshalling the parameters into a message
that is sent to the host where the server component is located. The server component
unmarshals the message, executes the procedure and transmits the marshalled results back
to the client, if required. Marshalling and unmarshalling are implemented in client and server
stubs that are automatically created by a compiler from an RPC program definition.

[Figure 3.11 shows the RPC stub programs layered between each application and the network on both the client and server platforms, exchanging application-specific procedure invocations and returns.]

Figure 3.11 Remote Procedure Calls
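The stub mechanics just described can be sketched as follows. This is an illustrative, in-process fragment: the client stub marshals the procedure name and parameters into a message, a transport (here just a function call standing in for the network) carries it, and the server stub unmarshals, executes and marshals the result back. JSON stands in for a real wire format:

```python
# Sketch of RPC client/server stubs with explicit (un)marshalling.
import json


def add(a, b):                      # the "remote" procedure
    return a + b


SERVER_PROCEDURES = {"add": add}    # procedures the RPC program exports


def server_stub(message: bytes) -> bytes:
    """Unmarshal the request, execute the procedure, marshal the reply."""
    call = json.loads(message)
    result = SERVER_PROCEDURES[call["proc"]](*call["args"])
    return json.dumps({"result": result}).encode()


def client_stub(proc, *args):
    """Marshal the call, 'send' it, unmarshal the reply."""
    message = json.dumps({"proc": proc, "args": args}).encode()
    reply = server_stub(message)    # stands in for the network transport
    return json.loads(reply)["result"]
```

From the caller's point of view, `client_stub("add", 2, 3)` looks like an ordinary function call, which is precisely the transparency RPC aims for.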


By using RPC, the complexity involved in the development of distributed processing
is reduced by keeping the semantics of a remote call the same whether or not the client and
server are co-located on the same system. However, RPC increases the involvement of an
application developer with the complexity of the master-slave nature of the client/server
mechanism.

RPC increases the flexibility of an architecture by allowing a client component of an
application to employ a function call to access a server on a remote system. RPC allows the
remote component to be accessed without knowledge of the network address or any other
lower-level information. Most RPCs use a synchronous, request-reply (sometimes referred
to as “call/wait”) protocol which involves blocking of the client until the server fulfills its
request. Asynchronous (“call/nowait”) implementations are available but are currently the
exception.
RPC is typically implemented in one of two ways:
 Within a broader, more encompassing proprietary product
 By a programmer using a proprietary tool to create client/server RPC stubs
RPC is appropriate for client/server applications in which the client can issue a request
and wait for the server’s response before continuing its own processing. Because most
RPC implementations do not support peer-to-peer, or asynchronous, client/server interaction,
RPC is not well-suited for applications involving distributed objects or object-oriented
programming.
Asynchronous and synchronous mechanisms each have strengths and weaknesses
that should be considered when designing any specific application. In contrast to asynchronous
mechanisms employed by Message-Oriented Middleware, the use of a synchronous request-
reply mechanism in RPC requires that the client and server are always available and
functioning (i.e., the client or server is not blocked). In order to allow a client/server application
to recover from a blocked condition, an implementation of a RPC is required to provide
mechanisms such as error messages, request timers, retransmissions, or redirection to an
alternate server. The complexity of the application using a RPC is dependent on the
sophistication of the specific RPC implementation (i.e., the more sophisticated the recovery
mechanisms supported by RPC, the less complex the application utilizing the RPC is required
to be). RPCs that implement asynchronous mechanisms are very few and are difficult
(complex) to implement.
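The recovery mechanisms listed above (timers, retransmissions, error reporting) can be sketched as a simple retry loop on the client side. This is an illustrative fragment: `transport` is a hypothetical callable standing in for the actual network send, and `ConnectionError` models a failed exchange:

```python
# Sketch of RPC-style recovery: bounded retransmission with a final error.
def call_with_retries(transport, request, max_attempts=3):
    """Retry a request a bounded number of times, then give up."""
    last_error = None
    for attempt in range(max_attempts):
        try:
            return transport(request)       # normal reply path
        except ConnectionError as exc:      # models a lost or failed exchange
            last_error = exc                # retransmit on the next attempt
    raise TimeoutError(
        f"gave up after {max_attempts} attempts") from last_error
```

As the text notes, the more of this recovery logic the RPC layer supplies, the less of it the application itself has to carry; but every retransmission also adds load to the network.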
Procedural middleware is weaker than transactional middleware and MOM as it is not
as fault tolerant and scalable. When utilizing RPC over a distributed network, the performance
(or load) of the network should be considered. One of the strengths of RPC is that the
synchronous, blocking mechanism of RPC guards against overloading a network, unlike the
asynchronous mechanism of Message-Oriented Middleware (MOM). However, when
recovery mechanisms, such as retransmissions, are employed by an RPC application, the
resulting load on a network may increase, making the application inappropriate for a congested
network. Also, because RPC uses static routing tables established at compile-time, the
ability to perform load balancing across a network is difficult and should be considered
when designing an RPC-based application.
Procedural middleware improves on transactional middleware and MOM with respect to
interface definitions, from which implementations that automatically marshal and unmarshal
service parameters and results are generated. A disadvantage of procedural middleware is that this interface
definition is not reflexive. This means that procedures exported by one RPC program cannot
return another RPC program. Object and component middleware resolve this problem.
The scalability of RPCs is rather limited. Unix and Windows RPCs do not have any
replication mechanisms that could be used to scale RPC programs. Thus replication has to
be addressed by the designer of the RPC-based system, which means in practice that RPC-
based systems are only deployed on a limited scale.
Procedural middleware can be used with different programming languages. Moreover,
it can be used across different hardware and operating system platforms. Procedural
middleware standards define standardized data representations that are used as the transport
representation of requests and results. DCE, for example standardizes the Network Data



Representation (NDR) for this purpose. When marshalling RPC parameters, the stubs
translate hardware specific data representations into the standardized form and the reverse
mapping is performed during unmarshalling.
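The standardized-wire-representation idea can be illustrated with Python's `struct` module: values are packed into a fixed big-endian layout regardless of the host's native byte order, and the receiving stub unpacks the same layout. This loosely mimics what NDR-style marshalling achieves; the record layout here is invented for the example:

```python
# Sketch of marshalling to a standardized, byte-order-independent form.
import struct

# ">" forces big-endian regardless of host architecture:
# a 32-bit signed int followed by a 64-bit IEEE 754 double.
WIRE_FORMAT = ">id"


def marshal(count: int, price: float) -> bytes:
    """Translate host values into the standardized transport form."""
    return struct.pack(WIRE_FORMAT, count, price)


def unmarshal(payload: bytes):
    """Reverse mapping performed by the receiving stub."""
    return struct.unpack(WIRE_FORMAT, payload)
```

Because both stubs agree on `WIRE_FORMAT`, a little-endian client and a big-endian server see the same values, which is the whole point of a standardized data representation.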
Tools are available for a programmer to use in developing RPC applications over a
wide variety of platforms, including Windows (3.1, NT, 95), Macintosh, 26 variants of UNIX,
OS/2, NetWare, and VMS. RPC infrastructures are implemented within the Distributed
Computing Environment (DCE) , and within Open Network Computing (ONC), developed
by Sunsoft, Inc. These two RPC implementations dominate the current Middleware market.
RPC implementations are nominally incompatible with other RPC implementations,
although some are compatible. Using a single implementation of a RPC in a system will
most likely result in a dependence on the RPC vendor for maintenance support and future
enhancements. This could have a highly negative impact on a system’s flexibility,
maintainability, portability, and interoperability.
Because there is no single standard for implementing an RPC, different features may
be offered by individual RPC implementations. Features that may affect the design and cost
of a RPC-based application include the following:
 Support of synchronous and/or asynchronous processing
 Support of different networking protocols
 Support for different file systems
 Whether the RPC mechanism can be obtained individually, or only bundled with a
server operating system.
Because of the complexity of the synchronous mechanism of RPC and the proprietary
and unique nature of RPC implementations, training is essential even for the experienced
programmer.
3.6.4 Object and Component Middleware
Object middleware evolved from RPCs. The development of object middleware mirrored
similar evolutions in programming languages where object-oriented programming languages,
such as C++ evolved from procedural programming languages such as C. The idea here is
to make object-oriented principles, such as object identification through references and
inheritance, available for the development of distributed systems. Systems in this class of
middleware include the Common Object Request Broker Architecture (CORBA) of the
OMG, the latest versions of Microsoft’s Component Object
Model (COM) and the Remote Method Invocation (RMI) capabilities that have been available
since Java 1.1. More recent products in this category include middleware that supports
distributed components, such as Enterprise Java Beans (EJB).
Object middleware supports distributed object requests, which means that a client object
requests the execution of an operation from a server object that may reside on another host.
The client object has to have an object reference to the server object. Marshalling operation
parameters and results is again achieved by stubs that are generated from an interface
definition.
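The role of the object reference can be sketched as follows. This is an illustrative fragment, not CORBA or RMI: a registry maps object references to server objects, and a client-side proxy forwards operation requests through that registry, mimicking how an ORB dispatches on an object reference. All names are invented for the example:

```python
# Sketch of object-reference dispatch, ORB-style.
REGISTRY = {}                       # object reference -> servant object


def register(ref, servant):
    """Make a server object reachable under an object reference."""
    REGISTRY[ref] = servant


class Proxy:
    """Client-side stand-in: holds only the reference, not the object."""
    def __init__(self, ref):
        self._ref = ref

    def __getattr__(self, operation):
        # Locate the (possibly remote) object and forward the operation.
        servant = REGISTRY[self._ref]
        return getattr(servant, operation)


class Greeter:
    """Example server object (servant)."""
    def greet(self, name):
        return f"hello, {name}"
```

The client codes against the proxy exactly as if the object were local, which is the illusion of locality the ORB is responsible for providing.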
Object Request Broker (ORB) is a middleware technology that manages communication
and data exchange between objects. ORBs promote interoperability of distributed object
systems because they enable users to build systems by piecing together objects- from different
vendors- that communicate with each other via the ORB. The developers of the distributed
applications have to concern themselves only with the object interface details. The actual
implementation details of the ORB are generally not important to developers building



distributed systems. This form of information hiding enhances system maintainability since
the object communication details are hidden from the developers and isolated in the ORB.
ORB technology promotes the goal of object communication across machine, software,
and vendor boundaries. The relevant functions of an ORB technology are:
 Interface definition
 Location and possible activation of remote objects
 Communication between clients and objects.
An object request broker acts as a kind of telephone exchange. It provides a directory
of services and helps establish connections between clients and these services. Figure 3.13
illustrates some of the key ideas.

Figure 3.13 Object Request Broker


The ORB must support many functions in order to operate consistently and effectively,
but many of these functions are hidden from the user of the ORB. It is the responsibility of
the ORB to provide the illusion of locality, in other words, to make it appear as if the object
is local to the client, while in reality it may reside in a different process or machine. Thus the
ORB provides a framework for cross-system communication between objects. This is the
first technical step toward interoperability of object systems.
The next technical step toward object system interoperability is the communication of
objects across platforms. An ORB allows objects to hide their implementation details from
clients. This can include programming language, operating system, host hardware, and object
location. Each of these can be thought of as a “transparency,” and different ORB technologies
may choose to support different transparencies, thus extending the benefits of object
orientation across platforms and communication channels.
There are many ways of implementing the basic ORB concept; for example, ORB
functions can be compiled into clients, can be separate processes, or can be part of an
operating system kernel. These basic design decisions might be fixed in a single product; or
there might be a range of choices left to the ORB implementer.
There are three major ORB technologies:
 The Object Management Group’s (OMG) Common Object Request Broker
Architecture (CORBA) specification
 Microsoft’s Component Object Model (see Component Object Model (COM),
DCOM, and Related Capabilities)
 Remote Method Invocation (RMI); this is specified as part of the Java language/
virtual machine. RMI allows Java objects to be executed remotely. This provides
ORB-like capabilities as a native extension of Java.
A high-level comparison of ORB technologies is given in Table 3.1; details are
available in the referenced technology descriptions.

60 ANNA UNIVERSITY CHENNAI


Successful adoption of ORB technology requires a careful analysis of the current and
future software architectural needs of the target application and analysis of how a particular
ORB will satisfy those needs. Among the many things to consider are platform availability,
support for various programming languages, as well as implementation choices and product
performance parameters. After performing this analysis, developers can make informed
decisions in choosing the ORB best suited for their application’s needs.
As shown in Table 3.1, there are a number of commercial ORB products available.
ORB products that are not compliant with either CORBA or COM also exist; however,
these tend to be vendor-unique solutions that may affect system interoperability,
portability, and maintainability.
The default synchronization primitives in object middleware are synchronous requests, which
block the client object until the server object has returned the response. However, other
synchronization primitives are also supported.
Table 3.1 Comparison of ORB Technologies

COM/DCOM
  Platform availability : Originally PC platforms, but becoming available on other platforms
  Applicable to         : "PC-centric" distributed systems
  Mechanism             : APIs to proprietary system architecture
  Implementations       : One

CORBA
  Platform availability : Platform-independent and interoperability among platforms
  Applicable to         : General distributed system architecture
  Mechanism             : Specification of distributed object technology
  Implementations       : Many

Java/RMI
  Platform availability : Wherever a Java virtual machine (VM) executes
  Applicable to         : General distributed system architecture and Web-based intranets
  Mechanism             : Implementation of distributed object technology
  Implementations       : Various

CORBA 3.0, for example, supports both deferred synchronous and asynchronous object
requests. Object middleware supports different activation policies. These include whether
server objects are active all the time or started on demand. Threading policies are available
that determine whether new threads are started if more than one operation is requested by
concurrent clients, or whether they are queued and executed sequentially. CORBA also
supports group communication through its Event and Notification services. This service can
be used to implement push-style architectures.
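The invocation styles mentioned above can be illustrated in plain Java rather than in CORBA itself. The sketch below uses java.util.concurrent.CompletableFuture as a stand-in for the middleware machinery, and the slowAdd method stands in for a remote operation; both names are invented for the example.

```java
import java.util.concurrent.CompletableFuture;

public class InvocationStyles {
    // Stands in for a remote operation; in real object middleware the
    // request would travel through the ORB to another process.
    static int slowAdd(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) throws Exception {
        // Synchronous request: the caller blocks until the result arrives.
        int sync = slowAdd(2, 3);

        // Deferred-synchronous request: the call returns a handle at once;
        // the client blocks for the result only when it actually needs it.
        CompletableFuture<Integer> deferred =
                CompletableFuture.supplyAsync(() -> slowAdd(2, 3));
        int later = deferred.get();   // collect the result when required

        // Asynchronous request: a callback consumes the result, so the
        // client thread never blocks on the reply.
        CompletableFuture.supplyAsync(() -> slowAdd(2, 3))
                .thenAccept(r -> System.out.println("async result: " + r))
                .get();               // only to keep the demo deterministic

        System.out.println(sync + " " + later);
    }
}
```

The push-style architectures supported by the Event and Notification services follow the same callback pattern as the asynchronous case: the consumer registers interest and the supplier pushes results to it.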
The default reliability for object requests is at-most-once. Object middleware supports
exceptions, which clients catch in order to detect that a failure occurred during execution of



DMC 1754 / 1945
the request. CORBA messaging or the Notification service can be used to achieve exactly-once
reliability. Object middleware also supports the concept of transactions. CORBA has
an Object Transaction service that can be used to cluster requests from several distributed
objects into transactions. COM is integrated with Microsoft’s Transaction Server and the
Java Transaction Service provides the same capability for RMI.
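The idea of clustering several requests into one all-or-nothing transaction can be sketched with a toy example. This is plain Java, not the actual OTS, MTS or JTS APIs; the ToyTransaction class and the balance fields are invented purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of transactional clustering: several operations either
// all take effect (commit) or none do (rollback). Real object middleware
// delegates this to services such as the CORBA Object Transaction Service.
class ToyTransaction {
    private final List<Runnable> undoLog = new ArrayList<>();

    void perform(Runnable action, Runnable undo) {
        action.run();
        undoLog.add(0, undo);   // record undo actions in reverse order
    }

    void rollback() {          // run the undo actions, newest first
        undoLog.forEach(Runnable::run);
        undoLog.clear();
    }

    void commit() { undoLog.clear(); }
}

public class TxDemo {
    static int balanceA = 100, balanceB = 0;

    public static void main(String[] args) {
        ToyTransaction tx = new ToyTransaction();
        tx.perform(() -> balanceA -= 40, () -> balanceA += 40);
        tx.perform(() -> balanceB += 40, () -> balanceB -= 40);
        tx.rollback();           // pretend a failure occurred mid-transfer
        System.out.println(balanceA + " " + balanceB);
    }
}
```

After the rollback both balances are restored, so the two distributed operations behave as a single unit of work.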
Object middleware supports heterogeneity in many different ways. CORBA and COM
both have multiple programming language bindings so that client and server objects do not
need to be written in the same programming language. They both have a standardized data
representation that they use to resolve heterogeneity of data across platforms. Java/RMI
takes a different approach as heterogeneity is already resolved by the Java Virtual Machine
in which both client and server objects reside. The different forms of object middleware
inter-operate. CORBA defines the Internet Inter-Orb Protocol (IIOP) standard, which
governs how different CORBA implementations exchange request data. Java/RMI leverages
this protocol and uses it as a transport protocol for remote method invocations, which means
that a Java client can perform a remote method invocation of a CORBA server and vice
versa. CORBA also specifies an inter-working specification to Microsoft’s COM.
Object middleware provides very powerful component models. They integrate most of
the capabilities of transactional, message-oriented or procedural middleware. ORB products
are available for all major computing platforms and operating systems.
3.6.5 ODBC and JDBC
Open Database Connectivity (ODBC) is an open standard application programming
interface (API) for accessing a database. By using ODBC statements in a program, it is
possible to access data from a number of different databases, including Access, dBase,
DB2, Excel, and Text. In addition to the ODBC software, a separate module or driver is
needed for each database to be accessed. The main proponent and supplier of ODBC
programming support is Microsoft.
ODBC is based on and closely aligned with The Open Group standard Structured
Query Language (SQL) Call-Level Interface. It allows programs to use SQL requests that
will access databases without having to know the proprietary interfaces to the databases.
ODBC handles the SQL request and converts it into a request the individual database system
understands.
ODBC was created by the SQL Access Group and first released in September, 1992.
Although Microsoft Windows was the first to provide an ODBC product, versions now exist
for UNIX, OS/2, and Macintosh platforms as well.
The JDBC API is a Java API that can access any kind of tabular data, especially data
stored in a Relational Database. JDBC helps to write java applications that manage these
three programming activities:
1. Connect to a data source, like a database
2. Send queries and update statements to the database
3. Retrieve and process the results received from the database in answer to user’s
queries
The JDBC API supports both two-tier and three-tier processing models for database access.
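A minimal sketch of the three activities follows. The JDBC URL, credentials and table name are placeholders; a driver for the target database must be on the classpath, and on a bare JVM without one the connection attempt fails and the catch branch runs instead.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) {
        // 1. Connect to a data source. The URL below is hypothetical;
        //    substitute the URL for the driver and database you use.
        String url = "jdbc:somedb://localhost/sales";
        try (Connection con = DriverManager.getConnection(url, "user", "pw");
             Statement st = con.createStatement();
             // 2. Send a query to the database.
             ResultSet rs = st.executeQuery("SELECT name, total FROM orders")) {
            // 3. Retrieve and process the results row by row.
            while (rs.next()) {
                System.out.println(rs.getString("name") + " " + rs.getInt("total"));
            }
        } catch (SQLException e) {
            // Without a matching driver on the classpath the connection fails.
            System.out.println("No suitable JDBC driver available: " + e.getMessage());
        }
    }
}
```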
3.6.6 Web Services (Introduction)
This section only introduces web services as an emerging middleware and is not intended
to provide a detailed description of web services.
Web services are being seen as middleware based on Extensible Markup Language
(XML). Web services are considered to be the next evolutionary step in object-oriented
programming for business-to-business e-commerce. Web services describe a standardized
way of integrating Web-based applications over an Internet protocol backbone. Unlike
traditional client/server models, such as a Web server/Web page system, Web services do
not provide the user with a Graphic User Interface (GUI). Web services instead share
business logic, data and processes through a programmatic interface across a network. The
applications interface, not the users. Developers can then add the Web service to a GUI
(such as a Web page or an executable program) to offer specific functionality to users.
Web services allow different applications from different sources to communicate with each
other without time-consuming custom coding, and because all communication is in XML,
Web services are not tied to any one operating system or programming language. For example,
Java can talk with Perl, and Windows applications can talk with UNIX applications.
Web services require a framework of standards to be interoperable. Web services use
the Extensible Markup Language (XML), Simple Object Access Protocol (SOAP), Web
Service Definition Language (WSDL) and Universal Description Discovery and Integration
(UDDI) open standards. XML is used to describe the data, SOAP is used to transfer the
data, WSDL is used for describing the services available and UDDI is used for listing what
services are available.
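For illustration, a minimal SOAP request envelope might look like the following. The getPrice operation and its namespace are hypothetical; only the envelope structure and the SOAP namespace are standard.

```xml
<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- Hypothetical operation and namespace -->
    <getPrice xmlns="http://example.com/stock">
      <symbol>ACME</symbol>
    </getPrice>
  </soap:Body>
</soap:Envelope>
```

The corresponding WSDL document would describe the getPrice operation and its message formats, and a UDDI registry would list where such a service can be found.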
These standards are in various states of definition. They are being defined by a number
of groups including World Wide Web Consortium (W3C) and Organization for Advancing
Open Standards for the Information Society (OASIS). Most leading vendors are members
of such organizations.
3.7 CONCLUSION
Although the concept of middleware will persist for a long time, the specific components
that constitute middleware will change over time. A strong driver of middleware evolution
is new application areas, such as mobile computing, groupware, and multimedia, as existing
middleware services may not be able to satisfy the new requirements of these applications.
Large enterprises are relying on middleware to support their information needs.
Middleware helps organizations attain new levels of insight, responsiveness, ease of use and
security in the way their IT systems support their business operations as middleware enables
disparate applications and data sources to communicate better and helps employees at all
levels access and manage the information they need to do their jobs. The trends to simplify
middleware and expand its functionality into new application areas are likely to increase
this reliance in the future.
HAVE YOU UNDERSTOOD QUESTIONS?
a) What is middleware?
b) How and why did middleware evolve?
c) What are the services provided by middleware?
d) What are the requirements of middleware?
e) What are the different types of middleware?
SUMMARY
 Middleware connects the client and server over a network.
 Middleware is defined as the software layer that lies above the operating system
and the networking software and below the applications.
 In the OSI/ISO standard model for networking protocols and distributed applications,
middleware may be primarily seen as implementing the Presentation and Session
layers.

 Its main goal is to enable communication between distributed application components
which may be available on computer systems with different hardware, operating
systems and network architecture.
 Middleware does the function of providing uniform, standard, high-level interfaces
to the application developers and integrators, so that applications can easily
interoperate and be reused, ported, and composed.
 It enables application engineers to abstract from the implementation of low-level
details, such as concurrency control, transaction management and network
communication, and allows them to focus on application requirements
 Middleware simplifies the task of designing, programming and managing distributed
applications by providing a simple, consistent and integrated distributed programming
environment
 Middleware provides ability to leverage the existing systems while investing in building
new systems
 Middleware may be broadly classified as transactional, message-oriented,
procedural, and object or component middleware, based on the primitives used
for the interaction between distributed components: distributed transactions,
message passing, remote procedure calls and remote object requests.
EXERCISES
Part I
1. In the ISO/OSI model, middleware primarily implements the layers
a) Transport & Network b) Transport & Presentation
c) Session & Presentation d) Application & Presentation
2. Middleware is
a) Hardware b) Software
c) System programs d) Peripheral device
3. Marshalling refers to
a) Randomizing the data b) Dis-ordered data
c) Sequencing the data d) sequencing the data
4. Most RPCs use a synchronous, request-reply protocol which involves blocking of the
client until the server fulfills its request. This protocol is known as :
a) call/no wait b) wait/no wait
c) call/wait d) request/wait
5. The process of transforming the memory representation of an object to a data format
suitable for transmission over communication link is known as:
a) Randomizing the data b) Marshalling
c) Un-Marshalling d) Sequencing the data
6. For communication about service requests between two components, which type of
reliability does not give any assurance about the execution of the request?
a) at-most-once b) at least-once
c) exactly-once d) best effort
7. Middleware technology evolved to provide for interoperability in support of the move
from:
a) client/server to mainframe computing
b) client/server to distributed computing

c) mainframe computing to client/server architecture
d) client/server to mobile computing
8. A mix of hardware systems which includes personal computers, workstations,
minicomputers and mainframes is termed as:
a) Heterogeneous b) homogenous
c) client /server d) middleware
9. The feature by which a program on one computer system can access programs and
data on another system is known as:
a) Mainframe computing b) Message passing
c) Interoperability d) Middleware
10. The property that enables a computer-based system to continue operating properly in
the event of the failure of some of its components is known as
a) Reliability b) Fault tolerance
c) Security d) message queues
Part II
11. Name and explain briefly five reasons why to build distributed systems
12. Define publish/subscribe communication. What are the advantages and disadvantages
of offering this as the only communication service?
13. What is location transparency?
14. Discuss the need for marshalling and un-marshalling data objects
15. What is meant by legacy systems? How can middleware help to integrate legacy systems?

Part III

16. Discuss the role of middleware and how it helps in the design, development and
management of distributed applications.
17. Explain in detail the different requirements that must be satisfied by middleware.
18. Discuss and explain which type of middleware supports transaction processing.
19. Explain in detail the differences between message-oriented middleware and RPC.
20. Explain the evolution of object-oriented middleware. Discuss how an ORB implements
object-oriented middleware.
REFERENCES
1. Software Engineering and Middleware: A Roadmap by Wolfgang Emmerich, Dept.
of Computer Science, University College London
2. Middleware: Past and Present a Comparison by Hennadiy Pinus
3. What is middleware and where is it going? by Peter Bye, Unisys Technical Consulting
Services
4. Middleware Architecture with Patterns and Frameworks by Sacha Krakowiak
5. Middleware: An Architecture for Distributed System Services by Philip A.
Bernstein, ACM
6. Middleware by David E. Bakken, Washington State University
7. Websites: http://www.sei.cmu.edu/str/descriptions/middleware.html,
http://www.sei.cmu.edu/str/descriptions/clientserver_body.html, wikipedia.org

UNIT II
CHAPTER - 4
EJB ARCHITECTURE

4.1. INTRODUCTION
Enterprise JavaBeans (EJB) represents a new direction in the development, installation,
and management of distributed Java applications in the enterprise. EJB is a server side
component architecture that simplifies the process of building distributed component
applications in Java. EJB technology enables rapid and simplified development of distributed,
transactional, secure and portable applications based on Java technology. EJB is designed to
support application portability and reusability over any vendor’s middleware services. Hence,
the componential architecture of EJB has greatly simplified the development and management
of corporate applications.
This unit introduces the reader to the EJB architecture. The history of EJB and the
components of EJB system have been dealt with in this unit.
4.2 LEARNING OBJECTIVES
At the end of this Unit, the reader must be familiar with the following concepts:
 EJB Architecture
 EJB Technology Design goals
 Features of the EJB architecture
4.3. THE HISTORY OF EJB
Applications have evolved over the past few decades and more so in the last ten years.
In the beginning, applications were complete entities, sometimes including an operating
system, and usually managing their own data storage. Because of the repetitious task of storing
and retrieving data, and the complexity involved in transaction management and
synchronization, database technology evolved to provide an application-independent
interface to an application's data. As applications grew more complicated and required more
resources to do the processing, applications came to be distributed across multiple processes,
each responsible for a certain part of the application's business logic.
The advent of distributed programming was followed shortly by the birth of distributed
component models. A distributed component model can be as simple as defining a
mechanism for one component to locate and use the services of another component (also
referred to as an Object Request Broker) or as complicated as managing transactions,
distributed objects, concurrency, security, persistence, and resource management (also
referred to as Component Transaction Monitors, or CTMs). CTMs are by far the most
complicated of these component models because they manage not only components but
also database transactions, resources, and so on; they are also referred to as application
servers. The Enterprise JavaBeans technology is Sun Microsystems' answer to the
application server.

With the prevalence of the Internet, distributed technologies have added another tier to
enterprise applications. In this case, Web browsers are thin clients that talk to Web servers.

Web servers communicate with the “middleware” layer (CTM) that in turn communicates
with one or more databases.
A software component model is a standard that defines how components are written
so that systems can be built from components by different developers with little or no
customization. In Java, this component model is called JavaBeans. Enterprise JavaBeans
is a standard way of writing distributed components so that the components can be written
and used with other components located in the same computer system or in other computer
systems.
Sun Microsystems had earlier introduced Java Remote Method Invocation (RMI) API
as a distributed object computing technology. RMI specifies how to write objects, so that
each object can talk to other objects on the network. In the RMI model, the server defines
objects that the client can use remotely. The clients can then invoke methods of this remote
object as if it were a local object running in the same virtual machine as the client. RMI
hides the underlying mechanism of transporting method arguments and return values across
the network. However, RMI does not specify other characteristics normally required of an
enterprise-class distributed environment, such as how the distributed objects work together
to construct a single transaction. These drawbacks in RMI led to the realization of the need
for a distributed component model, which resulted in the development of the Enterprise
JavaBeans component model.
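The RMI model described above can be sketched as follows. For brevity the server and the client run in the same JVM; the Hello interface, the registry name and the port number are invented for the example.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface: every remotely callable method declares
// RemoteException, since network failures can occur on any call.
interface Hello extends Remote {
    String greet(String name) throws RemoteException;
}

// The server-side implementation of the remote object.
class HelloImpl extends UnicastRemoteObject implements Hello {
    HelloImpl() throws RemoteException { super(); }
    public String greet(String name) { return "Hello, " + name; }
}

public class RmiSketch {
    public static void main(String[] args) throws Exception {
        // Server side: export the object and register it under a name.
        Registry registry = LocateRegistry.createRegistry(2099);
        registry.rebind("hello", new HelloImpl());

        // Client side: look the object up and invoke it as if it were
        // a local object; RMI marshals arguments and results underneath.
        Hello stub = (Hello) registry.lookup("hello");
        System.out.println(stub.greet("world"));
        System.exit(0);   // shut down the exported object's threads
    }
}
```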
4.4. EJB OVERVIEW
Enterprise JavaBeans (EJB) is a comprehensive technology that provides the
infrastructure for building enterprise-level server-side distributed Java components. EJB is
a server-side component that encapsulates the business logic of an application.
The EJB technology provides a distributed component architecture that integrates several
enterprise-level requirements such as distribution, transactions, security, messaging,
persistence, and connectivity to mainframes and Enterprise Resource Planning (ERP)
systems. When compared with other distributed component technologies such as Java RMI
and CORBA, the EJB architecture hides most of the underlying system-level semantics that
are typical of distributed component applications, such as instance management, object pooling,
multiple threading, and connection pooling. EJB technology provides different types of
components for business logic, persistence, and enterprise messages. Thus, an Enterprise
Java Bean is a remote object with semantics specified for creation, invocation and deletion.
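As a rough illustration of what "instance management and object pooling" means, the toy pool below reuses idle instances instead of creating new ones. An EJB container performs this kind of bookkeeping automatically, which is exactly why the bean developer never writes it; the InstancePool class here is invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// A toy instance pool: the kind of bookkeeping an EJB container
// performs behind the scenes for bean instances.
class InstancePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private int created = 0;

    InstancePool(Supplier<T> factory) { this.factory = factory; }

    synchronized T acquire() {
        if (idle.isEmpty()) { created++; return factory.get(); }
        return idle.pop();            // reuse an idle instance
    }

    synchronized void release(T instance) { idle.push(instance); }

    synchronized int instancesCreated() { return created; }
}

public class PoolDemo {
    public static void main(String[] args) {
        InstancePool<StringBuilder> pool = new InstancePool<>(StringBuilder::new);
        StringBuilder a = pool.acquire();   // first request: instance created
        pool.release(a);                    // returned to the pool when done
        StringBuilder b = pool.acquire();   // second request: instance reused
        System.out.println(pool.instancesCreated());
    }
}
```

Two client requests were served, but only one instance was ever created; a container applies the same idea to expensive resources such as bean instances and database connections.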
The name Enterprise JavaBeans trades on the popularity of JavaBeans: portable, reusable
Java software components. Extending the concept of Java components from the client
domain to the server domain, Enterprise JavaBeans technology represents an ambitious
step forward in the growth of Java technology into a robust, scalable environment that can
support mission-critical enterprise information systems. EJBs are not only platform-independent
but also implementation-independent; that is, EJBs can run in any application server that
implements the EJB specifications.
Figure 4.1 shows a Three-tier system using EJB as the middle tier.
EJB technology can be used for the development and deployment of business logic within a
larger enterprise application. It is predominantly used in the middle tier of an N-tier
architecture. This middle tier provides communication between client components of the
client tier and Enterprise Information Systems (EISs) of the server tier.
The EJB technology allows users to isolate their business logic in the middle tier, away
from the actual presentation and data layers (the client and server tiers, respectively). The
middle tier is made up of the following components:


[Figure 4.1 depicts a Java client and other Web-based clients in the client tier, an
application server hosting EJB instances in the middle tier, and EIS systems in the
server tier.]

Figure 4.1 Example of 3-tier architecture using EJB

 An application server
 Instances of Enterprise JavaBeans, called simply enterprise beans (or EJBs)
In the context of an enterprise bean, an application server provides basic resource-
allocation services to enterprise beans, which access EISs.
The benefit of accessing an EIS through an application server is that the client component
does not need to know the details of connection management, security management, and
transaction management. The client component is a client-side module that communicates
with an application server to access various components (such as EJBs) that the application
server manages.
The EJB specification was originally developed in 1997 by IBM and later adopted by
Sun Microsystems (EJB 1.0 and 1.1) and enhanced under the Java Community Process as
JSR 19 (EJB 2.0), JSR 153 (EJB 2.1) and JSR 220 (EJB 3.0).
The EJB specification provides a standard way to implement the back-end ‘business’
code typically found in enterprise applications. Enterprise Java Beans were intended to
handle such common concerns as persistence, transactional integrity, and security in a standard
way. It details how an application server provides support for:
 Transaction Processing
 Persistence
 Concurrency control
 Security (Java Cryptography Extension (JCE) and Java Authentication and
Authorization Services (JAAS))
 Naming and Directory Service (JNDI)
 Events using Java Message Service (JMS)
 Deployment of software components in an application server
 Remote procedure calls using RMI-IIOP.
 Exposing business methods as Web Services.
4.5. EJB TECHNOLOGY DESIGN GOALS
The goals for the EJB architecture are:
 Enterprise JavaBeans architecture will be the standard component architecture for
building distributed object-oriented business applications in the Java programming

language. Enterprise JavaBeans architecture will make it possible to build distributed
applications by combining components developed using tools from different vendors.
 Enterprise JavaBeans architecture will make it easy to write applications: application
developers will not have to understand low-level transaction and state management
details, multi-threading, resource pooling, and other complex low-level APIs.
 Enterprise JavaBeans applications will follow the “Write Once, Run Anywhere”
philosophy of the Java programming language. An EJB component can be developed
once and then deployed on multiple platforms without recompilation or source code
modification.
 The Enterprise JavaBeans architecture will address the development, deployment,
and run-time aspects of an enterprise application’s life cycle.
 The Enterprise JavaBeans architecture will define the contracts that enable tools
from multiple vendors to develop and deploy components that can interoperate at
run time.
 The Enterprise JavaBeans architecture will be compatible with existing server
platforms. Vendors will be able to extend their existing products to support Enterprise
JavaBeans components.
 The Enterprise JavaBeans architecture will be compatible with other Java
programming language APIs.
 The Enterprise JavaBeans architecture will provide interoperability between EJB
components and non-Java programming language applications.
 The Enterprise JavaBeans architecture will be compatible with CORBA.
4.6 EJB ARCHITECTURE
The diagrammatic representation of EJB Architecture is as shown in Figure 4.2.
An enterprise bean is composed of many parts, not just a single class. Essentially, an enterprise
bean is constructed with a bean class, remote interface, home interface and deployment
descriptor. The bean class is the implementation of the bean; the remote interface defines the
business methods that will be visible to the clients that use the enterprise bean;

[Figure 4.2 shows an EJB client invoking create on the home interface and business
methods on the remote interface; inside the EJB server, the EJB container hosts the
home object, the EJB object and the enterprise bean, which accesses a database. The
container builds on enterprise services and APIs such as security, JNDI and JTS.]

Figure 4.2 EJB Architecture

Home Interface defines the create, delete (remove), and query methods for an enterprise
bean type and Deployment Descriptor is used to describe the enterprise bean’s runtime


behaviour to the container. As explained later, EJB 3.0 uses metadata annotations as an
alternative to Deployment Descriptors.
The remote and home interfaces are used by applications to access enterprise beans
at runtime. The home interface allows the application to create or locate the bean, while the
remote interface allows the application to invoke a bean’s business methods. The bean class
runs in the environment provided by the EJB Server and Container. More details are given
in the following sections.
4.6.1 Bean Class
Bean Class is the implementation class of the bean that defines its business, persistence,
and passivation logic. The bean class runs inside the EJB container. Instances of the bean
class service the client request indirectly; they are not visible to the
client. An entity bean must implement javax.ejb.EntityBean and a session bean must implement
javax.ejb.SessionBean. Both EntityBean and SessionBean extend javax.ejb.EnterpriseBean.
Bean class has to implement the bean’s business methods in the remote interface apart
from some other callback methods.
4.6.2 Remote Interface
Remote Interface defines the business methods that will be visible to the clients that
use the enterprise bean. The remote interface extends the javax.ejb.EJBObject interface
and is implemented by a remote (distributed object) reference. Client applications interact
with the enterprise bean through its remote interface.
4.6.3 Home Interface
This interface defines the bean’s life cycle methods such as creation of new beans,
removal of beans, and locating beans. The home interface extends the javax.ejb.EJBHome
interface which in turn extends java.rmi.Remote. The client application will use the home
interface to create beans, find existing beans, and remove specific beans.
4.6.4 Deployment Descriptors
Deployment descriptor is used to describe the enterprise bean’s runtime behaviour to
the container. Among other things the deployment descriptor allows the transaction,
persistence, and authorization security behaviour of a bean to be defined using declarative
attributes. This greatly simplifies the programming model when developing beans.
The deployment descriptor describes to the EJB server how to apply the primary services to
each bean class at runtime. Deployment Descriptors are used to specify the following
requirements of a bean:
 Bean management and lifecycle requirements
 Persistence requirements
 Transaction Requirements
 Security Requirements
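A deployment descriptor is an XML file, conventionally named ejb-jar.xml in EJB 2.x. The fragment below is an illustrative sketch (the AccountBean name and com.example classes are hypothetical) showing how declarative attributes describe a bean and its transaction behaviour to the container:

```xml
<ejb-jar>
  <enterprise-beans>
    <session>
      <ejb-name>AccountBean</ejb-name>
      <home>com.example.AccountHome</home>
      <remote>com.example.Account</remote>
      <ejb-class>com.example.AccountBean</ejb-class>
      <session-type>Stateless</session-type>
      <transaction-type>Container</transaction-type>
    </session>
  </enterprise-beans>
  <assembly-descriptor>
    <container-transaction>
      <method>
        <ejb-name>AccountBean</ejb-name>
        <method-name>*</method-name>
      </method>
      <trans-attribute>Required</trans-attribute>
    </container-transaction>
  </assembly-descriptor>
</ejb-jar>
```

Note that transaction behaviour is declared here rather than coded in the bean class; the container reads this file and applies the Required transaction attribute to every business method.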
4.6.5 EJB Server
The EJB server provides an environment that supports the execution of applications
developed using EJB components. It manages and coordinates the allocation of resources
to the applications.

The EJB server is the base set of services on top of which the container runs. Many
different EJB containers can run on a single EJB server. EJB servers are generally delivered
as part of a J2EE (Java 2 Platform Enterprise Edition) compliant application server. Examples
of such servers, as mentioned earlier, include BEA WebLogic, IBM WebSphere, Tomcat
and Adobe JRun server. Once the application server is installed and running, it will provide
the underlying services required of an EJB server and will host EJB containers.
In the first generation of EJB, most EJB server vendors also provided EJB containers.
If the vendor provides both the container and the server, the interface between the two can
remain proprietary. In future generations of the EJB specification, however, some work
was done to define the container-server interface and delimit the responsibilities of the
container. One advantage of defining a container-server interface is that it allows third-
party vendors to produce containers that can plug into any EJB server. If the responsibilities
of the container and server are clearly defined, then vendors who specialize in the technologies
that support these different responsibilities can focus on developing the container or server
as best matches their core competency. The disadvantage of a clearly defined container-
server interface is that the plug-and-play approach could impact performance. The high
level of abstraction that would be required to clearly separate the container interface from
the server, would naturally lead to looser binding between these large components, which
always results in lower performance.
Many EJB-compliant servers actually support several different kinds of middleware
technologies. It’s quite common, for example, for an EJB server to support the vendor’s
proprietary CTM model as well as standard EJB, servlets, web server functionality, and
other server technologies. Defining an EJB container concept is useful for clearly
distinguishing what part of the server supports EJB from all the other services it provides.
4.6.6 EJB Container
The environment that surrounds the beans on the EJB server is referred to as the
container. The EJB Container is as shown in Figure 4.3.

[Figure 4.3 shows a client request entering the EJB container and passing through
transaction management, persistence management and security management before reaching
the bean; the container also provides the EJB context and the JNDI environment naming
context (ENC), and invokes the bean's callback methods.]

Figure 4.3 EJB Container


The container acts as an intermediary between the bean class and the EJB server.
Enterprise beans are software components that run in a special environment called an EJB
container. The container manages the EJB, that is, the container creates, controls and destroys


the EJB. An EJB server can have more than one container and each container in turn can
accommodate more than one enterprise bean.
The container hosts and manages an enterprise bean in the same manner that the Java
Web Server hosts a servlet or an HTML browser hosts a Java applet. An enterprise bean
cannot function outside of an EJB container. The EJB container manages every aspect of
an enterprise bean at runtime, including remote access to the bean, security, persistence,
transactions, concurrency, and access to and pooling of resources.
The container isolates the enterprise bean from direct access by client applications.
When a client application invokes a remote method on an enterprise bean, the container first
intercepts the invocation to ensure persistence, transactions, and security are applied properly
to every operation a client performs on the bean. The container provides various services
for the EJB to relieve the developer from having to implement such services in the bean
code itself, namely:
 Distribution via proxies : The container will generate a client-side stub and server-
side skeleton for the EJB. The stub and skeleton may use either CORBA’s IIOP
(Internet Inter-ORB Protocol) or Java Remote Method Protocol (JRMP) to
communicate.
 Lifecycle Management : Bean initialization, state management, and destruction are
driven by the container; all the developer must do is implement the appropriate
methods.
 Naming and Registration : The EJB container and server will provide the EJB with
access to naming services. These services are used by local and remote clients to
look up the EJB and by the EJB itself to look up resources it may need.
 Transaction Management : Declarative transactions provide a means for the
developer to easily delegate the creation and control of transactions to the container.
 Security and Access Control : Again, declarative security provides a means for the
developer to easily delegate the enforcement of security to the container.
 Persistence : Using the Entity EJB’s container-managed persistence mechanism,
state can be saved and restored without having to write a single line of code.
The EJB specification defines a bean-container contract, and a strict set of rules that
describe how enterprise beans and their containers will behave at runtime, how security
access is checked, how transactions are managed, how persistence is applied, etc. The
bean-container contract is designed to make enterprise beans portable between EJB containers
so that enterprise beans can be developed once then run in any EJB container.
4.6.6.1. Callback Methods
Every bean implements a subtype of the EnterpriseBean interface which defines several
methods, called callback methods. Each callback method alerts the bean to a different event
in its lifecycle and the container will invoke these methods to notify the bean when it’s about
to activate the bean, persist its state to the database, end a transaction, remove the bean
from memory, etc. The callback methods give the bean a chance to do some housework
immediately before or after some event.
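The idea can be sketched with a toy container driving a bean through its lifecycle. Everything here (the LifecycleCallbacks interface, the method names, and the AccountBean class) is hypothetical stand-in code, not the javax.ejb API:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the EnterpriseBean callback contract; the real interface and
// method names live in javax.ejb, this is only an illustration.
interface LifecycleCallbacks {
    void ejbActivate();   // invoked by the container before the bean is used
    void ejbPassivate();  // invoked before the bean is swapped out of memory
}

class AccountBean implements LifecycleCallbacks {
    final List<String> log = new ArrayList<>();
    public void ejbActivate()  { log.add("activate: reacquire resources"); }
    public void ejbPassivate() { log.add("passivate: release resources"); }
}

public class CallbackSketch {
    // A toy "container" notifying the bean of lifecycle events.
    static void containerCycle(LifecycleCallbacks bean) {
        bean.ejbActivate();
        bean.ejbPassivate();
    }
    public static void main(String[] args) {
        AccountBean bean = new AccountBean();
        containerCycle(bean);
        System.out.println(bean.log);
    }
}
```

The point is only the direction of control: the container, not the client, decides when the callbacks fire.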
4.6.6.2. EJBContext
Every bean obtains an EJBContext object, which is a direct reference to the container.
The EJBContext interface provides methods for interacting with the container, so that the
bean can request information about its environment, such as the identity of its client or
the status of a transaction.


The EJBContext interface provides an instance with access to the container-provided
runtime context of an EJB bean instance. This interface is extended by the SessionContext,
EntityContext, and MessageDrivenContext interfaces to provide additional methods specific
to each enterprise bean type.

A bean needs the EJB context when it wants to perform the operations listed in Table
4.1

Table 4.1 EJBContext Operations

Method              Description
getEnvironment      Get the values of properties for the bean.
getUserTransaction  Get a transaction context, which allows the coder to
                    demarcate transactions programmatically when using
                    bean managed transactions (BMT). This is valid only
                    for beans that have been designated transactional.
setRollbackOnly     Set the current transaction so that it cannot be
                    committed. Applicable only to container-managed
                    transactions.
getRollbackOnly     Check whether the current transaction is marked for
                    rollback only. Applicable only to container-managed
                    transactions.
getEJBHome          Retrieve the object reference to the corresponding
                    EJBHome (home interface) of the bean.
lookup              Use JNDI to retrieve the bean by environment
                    reference name. When using this method, you do not
                    prefix the bean reference with "java:comp/env".

4.6.7 Java Naming and Directory Interface


Java Naming and Directory Interface (JNDI) is a standard extension to the Java platform
for accessing naming systems like LDAP, NetWare, file systems, etc. Every bean automatically
has access to a special naming system called the Environment Naming Context (ENC). The
ENC is managed by the container and accessed by beans using JNDI. The JNDI ENC
allows a bean to access resources like JDBC connections, other enterprise beans, and
properties specific to that bean.
4.6.8 EJB Object
A client never invokes methods directly on an actual bean instance. All invocations go
through the EJB object, which is a tool-generated class. The EJB object is the glue between
the client and the bean. Figure 4.4 depicts EJB objects.
In EJB, the container uses the EJB object to provide services such as transactions and
security. Method requests are intercepted by the EJB object and then delegated to the
actual bean instance; for this reason the EJB object is also called a request interceptor.
Some of the services obtained through this interception include:
 Implicit distributed transaction Management
 Implicit Security


[Figure 4.4: client code (servlets or applets) calls a method through the middleware API (1); the EJB object inside the EJB server/container invokes the transaction, security, and persistence services (2), calls the enterprise bean (3), takes the method return (4), and returns the result to the client (5).]

Figure 4.4 EJB Object


 Implicit resource Management and component Life cycle
 Implicit Persistence
 Implicit Remote accessibility
 Implicit support
 Implicit Component location transparency
 Implicit Monitoring
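The request-interceptor role of the EJB object can be sketched with a JDK dynamic proxy. The Teller interface, TellerBean class, and the logged "services" are hypothetical; a real container applies genuine transaction and security services at this point:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Hypothetical business interface; in a real server the EJB object would
// implement the bean's remote interface.
interface Teller {
    int deposit(int amount);
}

class TellerBean implements Teller {   // the actual bean instance
    private int balance;
    public int deposit(int amount) { return balance += amount; }
}

public class EjbObjectSketch {
    static final List<String> services = new ArrayList<>();

    // Builds a stand-in "EJB object": every call is intercepted, container
    // services are applied, and the call is then delegated to the bean.
    static Teller wrap(Teller bean) {
        InvocationHandler interceptor = (proxy, method, args) -> {
            services.add("begin transaction + security check: " + method.getName());
            Object result = method.invoke(bean, args);   // delegate to the bean
            services.add("commit transaction: " + method.getName());
            return result;
        };
        return (Teller) Proxy.newProxyInstance(
                Teller.class.getClassLoader(), new Class<?>[]{Teller.class}, interceptor);
    }

    public static void main(String[] args) {
        Teller ejbObject = wrap(new TellerBean());
        System.out.println(ejbObject.deposit(100)); // prints 100
    }
}
```

The client holds only the proxy, never the bean, which is exactly the isolation the container needs in order to apply its services transparently.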
4.6.9 Home Object
The client invokes methods on the EJB object rather than on the actual bean instance. To
get a reference to the EJB object, the client uses the home object, which it looks up
through JNDI. To hand out references to EJB objects, the home object must know how to
initialize them; the home object class itself is provided by the container. EJB developers
therefore write a home interface that supplies this information, namely the create methods.
The client asks for the EJB object from the EJB object factory. This factory is responsible
for instantiating and destroying the EJB objects. This factory is called the Home object. The
responsibilities of Home Objects are the following:
 Create EJB objects
 Find existing EJB objects
 Remove EJB objects
Home objects are proprietary and specific to each EJB container. They contain container-
specific logic, such as load-balancing logic or logic to track information on a graphical
administrative console.
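A minimal sketch of the factory role, with hypothetical CartHome/CartBean names; a real home object would be generated by the container and would hand out EJB object proxies rather than bean instances:

```java
import java.util.HashMap;
import java.util.Map;

// Toy bean class (hypothetical name).
class CartBean {
    final int id;
    CartBean(int id) { this.id = id; }
}

// A miniature "home object": the factory that creates, finds, and removes
// EJB objects on behalf of clients.
public class CartHome {
    private final Map<Integer, CartBean> live = new HashMap<>();
    private int nextId = 1;

    public CartBean create() {                  // create an EJB object
        CartBean b = new CartBean(nextId++);
        live.put(b.id, b);
        return b;
    }
    public CartBean findByPrimaryKey(int id) {  // find an existing EJB object
        return live.get(id);
    }
    public void remove(int id) {                // remove an EJB object
        live.remove(id);
    }

    public static void main(String[] args) {
        CartHome home = new CartHome();
        CartBean cart = home.create();
        System.out.println(home.findByPrimaryKey(cart.id) == cart); // true
        home.remove(cart.id);
        System.out.println(home.findByPrimaryKey(cart.id)); // null
    }
}
```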
4.6.10 JAR Files
Jar files are ZIP files that are used specifically for packaging Java classes that are
ready to be used in some type of application. A Jar file containing one or more enterprise
beans includes the bean classes, remote interfaces, home interfaces, and primary keys for
each bean. It also contains one deployment descriptor.
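A minimal EJB 1.1-style deployment descriptor might look like the following sketch; the package, class, and bean names are hypothetical:

```xml
<ejb-jar>
  <enterprise-beans>
    <session>
      <ejb-name>CartBean</ejb-name>
      <home>com.example.CartHome</home>
      <remote>com.example.Cart</remote>
      <ejb-class>com.example.CartBean</ejb-class>
      <session-type>Stateful</session-type>
      <transaction-type>Container</transaction-type>
    </session>
  </enterprise-beans>
</ejb-jar>
```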


Deployment is the process of reading the bean’s JAR file, changing or adding properties
to the deployment descriptor, mapping the bean to the database, defining access control in
the security domain, and generating vendor-specific classes needed to support the bean in
the EJB environment. Every EJB server product comes with its own deployment tools
containing a graphical user interface and a set of command-line programs.
For clients (whether another enterprise bean, a Java RMI client, or a CORBA client) to
locate enterprise beans on the network, the EJB specification requires the use of the Java
Naming and Directory Interface (JNDI). JNDI is a standard Java extension that provides a
uniform Application Programming Interface (API) for accessing a wide range of naming and
directory services. The communication protocol may be Java RMI-IIOP or CORBA's IIOP.
Special integrated application development tools for designing EJBs, such as Inprise's
JBuilder, Sun's Forte, and IBM's VisualAge, are available commercially.
4.7 THE EJB ECOSYSTEM
To have an EJB deployment up and running, one needs more than an application server
and components. EJB encourages collaboration among six different parties. Together,
these parties are called the EJB Ecosystem.
4.7.1 The Bean provider
The bean provider supplies the business components to the enterprise applications.
These business components are not complete applications but can be combined to form
complete enterprise applications. These bean providers could be an ISV (Independent Software
Vendor) selling components or an internal component provider. There are three different
types of EJB: the Session Bean, which is transaction aware and models processes, services,
and client-side sessions; the Entity Bean, which is used to model business data; and the
Message-driven EJB, used for asynchronous message interchange between a sender and a
receiver. As an application designer, you should choose the most appropriate type of EJB
based on the task to be accomplished.
4.7.2 The Application Assembler
The application assembler is the overall application architect. This party is responsible
for understanding how various components fit together and writing the applications that
combine components. The application assembler is the consumer of the beans supplied by
the Bean provider. The application assembler could perform any or all of the following
tasks:
 From knowledge of the business problem, decide which combination of existing
components and new enterprise beans is needed to provide an effective solution.
 Supply a user interface or Web Service
 Write new enterprise beans to solve some problems specific to your business problem
 Write the code that calls on components supplied by bean providers.
 Write integration code that maps data between components supplied by different
bean providers.
4.7.3 EJB Deployer
After the application developer builds the application, the application must be deployed
on the server. Some challenges are:
 Securing the deployment with a hardware or software firewall and other protective
measures.

 Integrating with enterprise security and policy repositories.
NOTES  Choosing hardware that provides the required level of quality of service.
 Providing redundant hardware and other resources for reliability and fault tolerance
 Performance-tuning the system
4.7.4 The System Administrator
The system administrator is responsible for the upkeep and monitoring of the deployed
system and may make use of monitoring and management tools to closely observe the
deployed system.
4.7.5 The Application Server
The application server supplies the runtime environment where the beans live. Enterprise
beans are software components that run in a special environment called an EJB container
which is supported by the application server. Most EJB server vendors also provide EJB
containers. The server supplies the middleware services to the beans and manages them.
Figure 4.5 shows the relationship among the EJB server, the EJB container, and the
bean: the EJB container contains and provides services to an Enterprise JavaBean.
Some of the various application servers are: BEA’s WebLogic, iPlanet’s iPlanet
Application Server, IBM’s WebSphere, Oracle’s Oracle 9i Application Server and Oracle
10g Application Server, and the JBoss open source Application Server.
4.7.6 The Tool Vendors
There are various IDEs (Integrated Development Environment) available to assist the
developer in rapidly building and debugging components, for example Eclipse, NetBeans,
and JBuilder. For the modeling of components Rational Rose can be used. There are many
other tools, some used for testing (JUnit) and others used for building (Ant, XDoclet).
The role of the EJB Server provider is similar to that of a database systems vendor.
The Server provider offers the container whatever services it needs to do a job. The
server provider will typically have a great deal of expertise in areas such as concurrent
programming, transaction processing, and network communications.

[Figure 4.5: nested layers, from the outside in: Operating System, EJB Server, EJB Container, EJB Bean.]

Figure 4.5 The relationship among the EJB Server, container and bean

4.8 EJB 3.0 SPECIFICATIONS HIGHLIGHTS
Development of EJB was never easy, and it became more complex with every release of
the EJB specification. The main goal of the EJB 3.0 specification was to simplify the EJB
programming model. EJB 3.0 decreases the number of programming artifacts for developers
to provide, eliminates or minimizes callback methods required to be implemented, and
reduces the complexity of the entity bean programming model. The benefits would be
significant to developer productivity, maintenance costs, and application quality.
Some of the main changes in the proposed EJB 3.0 specification include:
 Annotation-based EJB programming model
EJB programs require the developer to extend specific classes, provide several
interfaces, and write deployment descriptors and hence were viewed as overloaded
Java objects that are no longer plain. In EJB 3.0, EJB components are no longer
required to provide different interfaces or extend any specific EJB specific classes.
EJB 3.0 uses metadata annotations as an alternative to deployment descriptors.
Annotations were introduced in J2SE 5.0 and are a key element in the EJB 3.0
simplification. In EJB 3.0, all Enterprise JavaBeans are Plain Old Java Objects
(POJO), with proper annotations. Therefore, a developer marks up the Java code
with annotations and the annotation processor creates the deployment descriptors at
runtime. This mechanism allows the deployer to override the default configurations so
they can replace data sources, etc. Another enhancement here is that the code and
annotations are in one file; the developer does not have to maintain multiple files for
one bean.
For example, a stateless session bean can be declared by using the @Stateless
annotation on the Java class. For stateful beans, the @Remove annotation is marked on a
particular method to indicate that the bean instance should be removed after a call to the
marked method completes.
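The annotation-driven model can be illustrated with a stand-in @Stateless annotation, defined locally only so the sketch compiles without the EJB API on the classpath; the PayrollBean class is hypothetical:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Local stand-in for the EJB 3.0 @Stateless annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Stateless { }

// The bean is a plain old Java object: no interface to implement, no
// EJB-specific superclass, no separate descriptor file.
@Stateless
class PayrollBean {
    double netPay(double gross, double taxRate) { return gross * (1 - taxRate); }
}

public class AnnotationSketch {
    public static void main(String[] args) {
        // A container's annotation processor can discover the metadata by
        // reflection instead of reading a deployment descriptor.
        System.out.println(PayrollBean.class.isAnnotationPresent(Stateless.class));
        System.out.println(new PayrollBean().netPay(1000, 0.25));
    }
}
```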
 Callback Methods and Listener Classes
The EJB 2.1 specifications required the implementation of either the interface
javax.ejb.SessionBean or javax.ejb.EntityBean. Methods like ejbCreate(),
ejbPassivate(), and ejbActivate() were never used in the application and just cluttered
up the application code. Fortunately, they’re not required in EJB 3.0.

In EJB 3.0, bean developers do not have to implement unnecessary callback methods
and can instead designate any arbitrary method as a callback method to receive notifications
for lifecycle events for a SessionBean or MessageDrivenBean (MDB). Callback methods
can be indicated using callback annotations. Also, a callback listener class can be designed
instead of writing callback methods in the bean class itself. The annotations used for callback
methods are the same in both cases—only the method signatures are different. A callback
method defined in a listener class must take an Object as a parameter, which is not needed
when the callback is in the bean itself.
 Interceptors
The runtime services like transaction and security are applied to the bean objects at the
method’s invocation time. These services are often implemented as the interceptor
methods managed by the container. However, EJB 3.0 allows developers to write the
custom interceptor methods that are called before and after the bean method. This gives
the developer control over actions such as committing a transaction or performing a
security check. Developers can develop, reuse, and execute their own services, or can
re-implement the transaction and security services to override the container's default
behaviors.


Interceptors offer fine-grained control over method invocation flow. They can be used
on SessionBeans (stateful and stateless) and MessageDrivenBeans. They can be defined in
the same bean class or in an external class. The interceptor’s methods will be called before
the actual bean class methods are called.
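The before/after idea can be sketched as follows; the Interceptor interface and method shapes here are hypothetical, not the actual EJB 3.0 interceptor API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical interceptor contract: one hook before the bean method,
// one after it returns.
interface Interceptor {
    void before(String method);
    void after(String method);
}

public class InterceptorChainSketch {
    static final List<String> trace = new ArrayList<>();

    // The "container" runs every interceptor before the bean method and
    // again after it returns.
    static <T> T invoke(String name, Supplier<T> beanMethod, List<Interceptor> chain) {
        chain.forEach(i -> i.before(name));
        T result = beanMethod.get();
        chain.forEach(i -> i.after(name));
        return result;
    }

    public static void main(String[] args) {
        Interceptor tx = new Interceptor() {
            public void before(String m) { trace.add("tx-begin:" + m); }
            public void after(String m)  { trace.add("tx-commit:" + m); }
        };
        int sum = invoke("add", () -> 2 + 3, List.of(tx));
        System.out.println(sum + " " + trace);
    }
}
```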
 The new persistence model for entity beans
The new entity beans are also just POJOs with a few annotations and are not persistent
entities by birth. An entity instance becomes persistent once it is associated with an
EntityManager and becomes part of a persistence context.
 Dependency Injection
Dependency injection is a term used to describe a separation between the implementation
of an object and the construction of an object it depends upon. Instead of complicated
XML ejb-refs or resource refs, one can use the @Inject annotation to set the value of
a field or to call a setter method within your session bean with anything registered
within JNDI. EJB 3.0 facilitates this feature by providing annotations to inject the
dependencies into the bean class itself. Dependency annotation may be attached to the
bean class, instance variables, or methods. The main reason for introducing @Inject is
to avoid a JNDI lookup to obtain resources registered in the JNDI tree. Another benefit
of using @Inject is that it allows a bean to be tested outside of the container.
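A toy version of annotation-driven field injection, using a locally defined stand-in @Inject and a hypothetical MailerBean, shows the mechanism a container could use:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

// Local stand-in for the proposed @Inject annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Inject { String name(); }

class MailerBean {
    @Inject(name = "smtpHost")   // container fills this in; no JNDI lookup code
    String smtpHost;
}

public class InjectionSketch {
    // Toy "container": scans fields and injects values from its registry.
    static void inject(Object bean, Map<String, Object> registry) {
        for (Field f : bean.getClass().getDeclaredFields()) {
            Inject tag = f.getAnnotation(Inject.class);
            if (tag != null) {
                try {
                    f.setAccessible(true);
                    f.set(bean, registry.get(tag.name()));
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }

    public static void main(String[] args) {
        MailerBean bean = new MailerBean();
        inject(bean, Map.of("smtpHost", "mail.example.com"));
        System.out.println(bean.smtpHost);
    }
}
```

Because the registry is an ordinary map here, the same bean can be "injected" with test values outside any container, which is the testability benefit the text mentions.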
 EntityBeans Made Easy
To create an EntityBean, a developer only needs to code a bean class and annotate it
with appropriate metadata annotations. The bean class is a POJO.
 Security Annotations
EJB 3.0 provides annotations to specify security options. The following are the security-
related annotations defined in EJB 3.0:
 @SecurityRoles
 @MethodPermissions
 @Unchecked
 @Exclude
 @RunAs
Annotations applied to package-level elements are called package-level annotations.
These annotations are placed in the file package-info.java. The security roles are applied
to the entire EJB module. The @SecurityRoles annotation must be placed in the package-
info.java file with the package information. When the compiler parses package-info.java,
it will create a synthetic interface. It does not have any source code, because it is created by
the compiler. This interface makes package-level annotations available at runtime. The file
package-info.java is created and stored inside of every package. For example, if the bean
class is inside the package ejb3.login, then the security role details must be placed in the
package-info.java file inside the ejb3.login package.

The package-info.java file is new in J2SE 5.0. It contains the package declaration,
annotations, package tags, and Javadoc tags. It is preferred over the package.html file used
in the previous versions, because package.html can contain only package comments and
Javadocs, not annotations. A package may contain either package-info.java or
package.html, but not both. Either file must be placed in the package directory in the
source tree along with the .java files.
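Following the description above, a package-info.java file might look like this sketch; the role names are hypothetical, and @SecurityRoles is the draft annotation named earlier:

```java
// package-info.java inside the ejb3/login source directory
@SecurityRoles({"admin", "user"})
package ejb3.login;
```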

4.9 ENTERPRISE BEANS AS DISTRIBUTED OBJECTS
The remote and home interfaces are types of Java RMI Remote interfaces. The
java.rmi.Remote interface is used by distributed objects to represent the bean in a different
address space.
address space. An enterprise bean is a distributed object. The bean class is instantiated and
lives in the container but it can be accessed by applications that live in other address spaces.
Making an object instance in one address space available in another requires wrapping
the instance in a special object called a skeleton, which has a network connection to another
special object called a stub. The stub implements the remote interface so it looks like a
business object. But the stub doesn’t contain business logic; it holds a network socket
connection to the skeleton. Every time a business method is invoked on the stub’s remote
interface, the stub sends a network message to the skeleton telling it which method was
invoked. When the skeleton receives a network message from the stub, it identifies the
method invoked and the arguments, and then invokes the corresponding method on the
actual instance. The instance executes the business method and returns the result to the
skeleton, which sends it to the stub. Figure 4.6 illustrates this.

Figure 4.6 Enterprise beans as distributed objects

The stub returns the result to the application that invoked its remote interface method.
The stub is just a dumb network object that sends the requests across the network to the
skeleton, which in turn invokes the method on the actual instance. The instance does all the
work; the stub and skeleton just pass the method identity and arguments back and forth
across the network.
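The stub/skeleton round trip can be sketched in-process; the Quote interface and classes are hypothetical, and the "network" here is a direct call rather than a socket:

```java
import java.lang.reflect.Method;

// A toy stub/skeleton pair. In a real server the stub and skeleton live in
// different address spaces and talk over a socket; here the "wire" is a
// direct method call, which keeps the sketch self-contained.
interface Quote {
    int priceOf(String symbol);
}

class QuoteBean implements Quote {          // the actual instance
    public int priceOf(String symbol) { return symbol.length() * 10; }
}

class Skeleton {
    private final Object target;
    Skeleton(Object target) { this.target = target; }

    // Receives a "message" naming the method and carrying the arguments,
    // then invokes the corresponding method on the actual instance.
    Object receive(String methodName, Class<?>[] types, Object[] args) {
        try {
            Method m = target.getClass().getMethod(methodName, types);
            return m.invoke(target, args);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

class QuoteStub implements Quote {
    private final Skeleton wire;   // stands in for the network connection
    QuoteStub(Skeleton wire) { this.wire = wire; }

    // Looks like a business object, but only forwards the call.
    public int priceOf(String symbol) {
        return (Integer) wire.receive("priceOf",
                new Class<?>[]{String.class}, new Object[]{symbol});
    }
}

public class StubSkeletonSketch {
    public static void main(String[] args) {
        Quote stub = new QuoteStub(new Skeleton(new QuoteBean()));
        System.out.println(stub.priceOf("IBM")); // 3 letters * 10 = 30
    }
}
```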
In EJB, the skeletons for the remote and home interfaces are implemented by the
container, not the bean class. Every method invoked on the reference types by a client
application is first handled by the container and then delegated to the bean instance. The
container must intercept those requests intended for the bean so that it can apply persistence
(entity beans), transactions, and access control automatically.
Distributed object protocols define the format of network messages sent between
address spaces. Most EJB servers support either the Java Remote Method Protocol (JRMP)
or CORBA's Internet Inter-ORB Protocol (IIOP). The bean and application programmer
only see the bean class and its remote interface; the details of the network communication
are hidden.
With respect to the EJB API, it is not necessary for the programmer to know whether
the EJB server uses JRMP or IIOP as the API is the same. The EJB specification requires


a specialized version of the Java RMI API when working with a bean remotely. Java RMI is
an API for accessing distributed objects and is somewhat protocol agnostic in the same way
that JDBC is database agnostic. So, an EJB server will support JRMP or IIOP, but the bean
and application developer always uses the same Java RMI API. In order for the EJB server
to have the option of supporting IIOP, a specialized version of Java RMI, called Java RMI-
IIOP was developed. Java RMI-IIOP uses IIOP as the protocol and the Java RMI API.
EJB servers don’t have to use IIOP, but they do have to respect Java RMI-IIOP restrictions,
so EJB 1.1 uses the specialized Java RMI-IIOP conventions and types, but the underlying
protocol can be anything.
4.10 EJB ARCHITECTURE VIEWS
4.10.1 The client’s view of an EJB is defined strictly by interfaces
Synchronous clients can call only those methods exposed by the EJB’s interfaces. In
the EJB Specification, the interfaces are collectively referred to as the client view. Each
EJB publishes ‘factory’ interfaces and ‘business method’ interfaces. The factory interfaces
expose methods that clients can use to create, locate, and remove EJBs of that type. The
business method interfaces define all the methods that clients can call on a specific EJB
after it has been located or created through the factory interface.

[Figure 4.7: a client calls EJB1 and EJB2 on EJB Server 1 through their factory (F) and business method (B) interfaces; EJBs on EJB Server 1 likewise call EJB3 on EJB Server 2 through its interfaces, and EJB3 accesses a data source.]

Figure 4.7 Interactions between EJBs and their clients


The interactions between EJBs and their clients are defined in terms of interfaces, as
illustrated in Figure 4.7.
Each EJB has a factory interface (‘F’) and a business method interface (‘B’). When
EJBs make method calls on other EJBs, even in the same JVM, then the calling EJB is a
client of the target EJB and can call only those methods exposed by the interfaces.
It is important for the developer to understand that anything that makes method calls
on an EJB is a client of that EJB, and interacts with it via its factory and business method
interfaces. This applies even in the case where multiple EJBs interact within the same
JVM. Enforcing this model allows the EJB infrastructure to provide important services
transparently.
4.10.2 EJBs are isolated and supported by an EJB container
Although EJB clients make method calls as if they were directly on the EJB, in fact the
method calls are on proxies, which delegate to the EJB’s implementation class. These proxies
and their supporting classes form the EJB container. The client never calls EJB methods on
the implementation directly, even if the client and the EJB are actually on the same server,
or even in the same JVM. This strategy allows powerful features like distributed transaction
management to be provided transparently, and provides for pooling of implementation instances
to increase efficiency.
Message-driven EJBs are not called directly by clients at all, and do not have container
proxies in the same sense. Instead they are called directly by the container when it receives
a message for a queue or topic in which the EJB has registered an interest.
The container encapsulates the EJB and acts as its security manager. It also provides
general services to the EJB, as we shall see. The notion of the container and its proxies is
illustrated in Figure 4.8.

[Figure 4.8: the client calls the home object and the EJB object, proxies inside the container, which delegate to the EJB implementation.]

Figure 4.8 EJB Container and its Proxies
The client calls methods on the home object and EJB object, which delegate to the
implementation itself. The process is transparent to the client. There are different home
objects and EJB objects for local and remote access, but the purpose of these objects is
essentially the same. Because the methods on the EJB proxies will delegate to methods on
the EJB itself, the proxies must be generated to match the EJB—that is, the proxies will be
specific to the EJB they serve. The vendor of the EJB server will provide tools to support
this generation, which will typically take place when the EJB is deployed to the server.
4.11 ADVANTAGES OF USING EJB
The EJB architecture provides the following benefits to the application developer:
simplicity, application portability, component reusability, ability to build complex applications,
separation of business logic from presentation logic, deployment in many operating
environments, distributed deployment, application interoperability, integration with non-Java
systems, and educational resources and development tools.
Simplicity
Because the EJB architecture helps the application developer access and utilize
enterprise services with minimal effort and time, writing an enterprise bean is almost as
simple as writing a Java class. The application developer does not have to be concerned
with system-level issues, such as security, transactions, multi-threading, security protocols,
distributed programming, connection resource pooling, and so forth. As a result, the application
developer can concentrate on the business logic for the domain-specific application.
Application portability
An EJB application can be deployed on any J2EE compliant server.
Component reusability
An EJB application consists of enterprise bean components. Each enterprise bean is a
reusable building block.

Ability to build complex applications
NOTES The EJB architecture simplifies building complex enterprise applications. The component-
based EJB architecture is well-suited to the development and maintenance of complex
enterprise applications. With its clear definition of roles and well-defined interfaces, the EJB
architecture promotes and supports team-based development and lessens the demands on
individual developers.
Separation of business logic from presentation logic
An enterprise bean typically encapsulates a business process or a business entity (an
object representing enterprise business data), making it independent of the presentation
logic.
Deployment in many operating environments
The EJB architecture also facilitates the deployment of an application by establishing
deployment standards, such as those for data source lookup, other application dependencies,
and security configuration.
Distributed deployment
The EJB architecture makes it possible for applications to be deployed in a distributed
manner across multiple servers on a network.
Application interoperability
The EJB architecture makes it easier to integrate applications from different vendors.
The enterprise bean’s client-view interface serves as a well-defined integration point between
applications.
Integration with non-Java systems
The related J2EE APIs, such as the Connector specification and the Java Message
Service (JMS) specification, make it possible to integrate enterprise bean applications with
various non- Java applications, such as ERP systems or mainframe applications, in a standard
way.
Educational resources and development tools
Since the EJB architecture is an industry-wide standard, the EJB application developer
benefits from a growing body of educational resources on how to build EJB applications.
More importantly, the powerful application development tools available from the leading tool
vendors simplify the development and maintenance of EJB applications.
Benefits to Customers
A customer has a different perspective on the EJB architecture from the application
developer. The EJB architecture provides the following benefits to the customer: choice of
application server, facilitation of application management, integration with the customer's
existing applications and data, and application security.
 Choice of the server : Since the EJB architecture is an industry-wide standard and is
part of the J2EE platform, customer organizations have a wide choice of J2EE-compliant
servers.
 Facilitation of application management : Because the EJB architecture provides a
standardized environment, server vendors have had the motivation to develop application
management tools to enhance their products.
 Integration with a customer’s existing applications and data : The EJB architecture
and the other related J2EE APIs simplify and standardize the integration of EJB
applications with any non-Java applications and systems at the customer operational
environment. For example, a customer does not have to change an existing database

schema to fit an application. Instead, an EJB application can be made to fit the existing
database schema when it is deployed. NOTES
 Application security : The EJB architecture shifts most of the responsibility for an
application’s security from the application developer to the server vendor, System
Administrator, and the Deployer. The people performing these roles are more qualified
than the application developer to secure the application. This leads to better security of
the operational applications.
To summarize the benefits:
 EJB components make it simpler to write applications.
 Server-side business logic can be portable.
 EJB architecture has built-in support for typical enterprise-level system services,
including distributed objects, transactions, database, security, and global naming.
 EJB architecture is being adopted by multiple IT vendors.
4.12 DISADVANTAGES OF USING EJB
Some of the main disadvantages in using EJB are:
 EJB has a large and complicated specification.
 EJBs take longer to develop and are more difficult to debug. Occasionally the bug may
not be in the code but in the application server itself.
4.13 CONCLUSION
Enterprise JavaBeans (EJB) is a comprehensive technology that provides the
infrastructure for building enterprise-level server-side distributed Java components. The
EJB technology provides a distributed component architecture that integrates several
enterprise-level requirements such as distribution, transactions, security, messaging and
persistence. The server-side components, called Enterprise Beans, are hosted in the EJB
containers and provide remote services for clients distributed over a network. Enterprise
applications can be built using a set of reusable components; each component performs a
specific task for the system.
This chapter has described the basic architecture of an EJB system. Beans are business
object components. The home interface defines life-cycle methods for creating, finding, and
destroying beans and the remote interface defines the public business methods of the bean.
The bean class implements the state and behavior of the bean. There are two basic kinds of
beans: session and entity. Entity beans are persistent and represent a person, place, or thing.
Session beans are extensions of the client and embody a process or a workflow that defines
how other beans interact. Session beans are not persistent, receiving their state from the
client, and they live only as long as the client needs them.
HAVE YOU UNDERSTOOD QUESTIONS
1. What is the role of EJB in J2EE technologies?
2. What is the history of development of EJB technology?
3. What are the benefits of using EJB?
4. What are the key features of the EJB technology?
5. What are the different components in the EJB architecture?
6. What is the role played by the different components of EJB?
7. What is the EJB Ecosystem?
8. What are the advantages of using EJB technology?
9. What are the disadvantages of using EJB technology?

83 ANNA UNIVERSITY CHENNAI


DMC 1754 / 1945
SUMMARY
 Enterprise JavaBeans (EJB) is a server-side component architecture that simplifies
the process of building distributed component applications in Java.
 The EJB Architecture consists of EJB Container, Remote Interface, Home Interface,
Services like JNDI and Security.
 The home interface defines life-cycle methods for creating, finding, and destroying
beans and the remote interface defines the public business methods of the bean.
 The bean class is where the state and behavior of the bean are implemented.
 The EJB object and EJB home are conceptual constructs that delegate method
invocations to the bean class from the client and help the container to manage the
bean class. The client does not interact with the bean directly. Instead, the client
software interacts with EJBObject and EJBHome stubs, which are connected to
the EJB object and EJB homes respectively.
 The EJB object implements the remote interface and expands the bean class’
functionality. The EJB home implements the home interface and works closely
with the container to create, locate, and remove beans.
 Beans interact with their container through the well-defined bean-container contract.
This contract provides callback methods and the EJBContext. The callback methods
notify the bean class that it is involved in state management event.
EXERCISES
PART I
1. What does the EJB specification architecture define?
a. Transactional components
b. Distributed object components
c. Server-side components
d. All of the above
2. What executes EJB components?
a. Web server
b. Application server
c. EJB container
d. Database server
3. EJB has two interfaces:
a. Home and Remote
b. Remote and Local
c. Local and Home
d. Component and Web
4. What do enterprise beans use to communicate with the EJB container to get runtime
context information?
a. The javax.ejb.EJBContext provided by the container
b. A JNDI ENC context
c. A javax.ejb.EJBHome object provided by the container
d. A javax.ejb.EJBMetaData object provided by the container
5. Through what interface does an application create, find, and remove enterprise beans?
a. java.rmi.Remote
b. javax.ejb.EJBHome
c. javax.ejb.EJBObject

d. javax.ejb.EntityBean
6. What interface must the enterprise bean implement so that an application can invoke its
operations?
a. javax.ejb.EntityBean
b. javax.ejb.EJBHome
c. javax.ejb.EJBObject
d. javax.rmi.Remote
7. What type of enterprise bean is used to embody business objects?
a. javax.ejb.EnterpriseBean
b. java.rmi.Remote
c. javax.ejb.SessionBean
d. javax.ejb.EntityBean
8. What type of enterprise bean is used to embody application processing state information?
a. javax.ejb.EnterpriseBean
b. javax.rmi.Remote
c. javax.ejb.SessionBean
d. javax.ejb.EntityBean
9. What is a deployment descriptor?
a. An XML file format used by the container to learn about the attributes of a bean,
such as transactional characteristics and access control
b. A method for transporting enterprise beans back and forth between systems
c. An XML file used by enterprise bean clients to learn about the attributes of a bean,
such as access control and transactional characteristics.
d. A format for bundling enterprise beans for delivery
10. All EJB remote interfaces extend
a. javax.ejb.EJBHome
b. javax.ejb.SessionBean
c. javax.ejb.EntityBean
d. javax.ejb.EJBObject
Part II
11. What is the difference between a normal Java object and an EJB?
12. What is the role of the two interfaces in EJB?
13. What does the EJB specifications define?
14. What are the different types of EJB beans?
15. What are the uses of callback methods?
Part III
16. What are the various components in EJB Architecture?
17. How does an Enterprise Bean interact with the container?
18. Explain the responsibilities of EJB container?
19. Explain in detail the EJB Eco System?
20. Summarize the advantages and disadvantages of using EJB?


Part I
Answers:
1) d 2) c 3) a 4) a 5) b 6) c 7) d 8)c 9)a 10) d

CHAPTER - 5
BUILDING AND DEPLOYING EJB

5.1 INTRODUCTION
Enterprise beans are meant to perform server-side operations, such as executing
complex algorithms or performing high volume business transactions. The server side has
different kinds of needs than the client-side applications. Server side components need to
run in a highly available, fault tolerant, transactional, and multi-user secure environment.
The application server provides this high end server-side environment for the enterprise
beans and it provides the run time containment necessary to manage enterprise beans. This
unit explains about the different type of beans in detail and also gives the procedure for
building and deploying Enterprise Java Beans.
5.2 LEARNING OBJECTIVES
At the end of this unit, the reader must be familiar with the following concepts:
 EJB roles
 Types of Beans
 How to run EJB
5.3 EJB’S ROLES
The EJB specification handles the encapsulation and isolation of system interfaces by clearly
defining roles for the parties involved. These roles were introduced earlier as part of the EJB ecosystem.
The relationship between the different parties and their roles is given in Figure 5.1.
a) Enterprise Bean Provider
The Enterprise Bean Provider or Component provider is the producer of enterprise
beans. The output is an ejb-jar file that contains one or more enterprise beans. The Bean
Provider is responsible for the Java classes that implement the enterprise beans’ business
methods; the definition of the beans’ client view interfaces; and declarative specification of
the beans’ metadata. The beans’ metadata may take the form of metadata annotations
applied to the bean classes and/or an external XML deployment descriptor. The beans’
metadata - whether expressed in metadata annotations or in the deployment descriptor -
includes the structural information of the enterprise beans and declares all the enterprise
beans’ external dependencies (e.g. the names and types of resources that the enterprise
beans use).
The Enterprise Bean Provider is typically an application domain expert. The Bean
Provider develops reusable enterprise beans that typically implement business tasks or business
entities. The Bean Provider is not required to be an expert at system - level programming.
Therefore, the Bean Provider usually does not program transactions, concurrency, security,
distribution, or other services into the enterprise beans. The Bean Provider relies on the
EJB container for these services.
A Bean Provider of multiple enterprise beans often performs the EJB Role of the
Application Assembler.
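As a concrete illustration of the Bean Provider's deliverables, the sketch below shows the three Java artifacts typically authored for an EJB 2.1-style session bean: a remote interface, a home interface, and the bean class. All names here are hypothetical, and the javax.ejb marker interfaces are stubbed locally so the fragment is self-contained; in a real ejb-jar the interfaces would extend javax.ejb.EJBObject and javax.ejb.EJBHome.

```java
// Sketch of a Bean Provider's deliverables (hypothetical names; javax.ejb
// types stubbed locally so this file compiles on its own).
import java.io.Serializable;

interface EJBObjectStub { }  // stands in for javax.ejb.EJBObject
interface EJBHomeStub { }    // stands in for javax.ejb.EJBHome

// Remote interface: the bean's client-visible business methods.
interface Converter extends EJBObjectStub {
    double dollarsToEuros(double dollars);
}

// Home interface: life-cycle methods such as create().
interface ConverterHome extends EJBHomeStub {
    Converter create();
}

// Bean class: implements the business logic. The container, not the
// developer, wires it to the remote and home interfaces at deployment.
class ConverterBean implements Serializable {
    private static final double RATE = 0.5;  // assumed fixed rate, illustration only

    public double dollarsToEuros(double dollars) {
        return dollars * RATE;
    }
}
```

The Bean Provider packages these classes, together with the bean's metadata, into the ejb-jar file described above.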


Figure 5.1 EJB Roles
(Diagram: the Tool Provider supplies tools; the Bean Provider develops enterprise beans; the Application Assembler builds the application; the Application Server Provider supplies the application server; the Deployer deploys the application into the operational environment; and the Systems Administrator maintains the system.)


b) Application Assembler
The Application Assembler combines enterprise beans into larger deployable application
units. The input to the Application Assembler is one or more ejb-jar files produced by the
Bean Provider(s). The Application Assembler outputs one or more ejb-jar files that contain
the enterprise beans along with their application assembly instructions. The Application
Assembler can also combine enterprise beans with other types of application components
when composing an application. The Application Assembler can also use third-party tools of
different vendors to build the application.
The Application Assembler is a domain expert who composes applications that use
enterprise beans. The Application Assembler works with the enterprise bean’s metadata
annotations and/or deployment descriptor and the enterprise bean’s client-view contract.
Although the Assembler must be familiar with the functionality provided by the enterprise
bean’s client-view interfaces, he or she does not need to have any knowledge of the enterprise
bean’s implementation.
c) Deployer
The Deployer takes one or more ejb-jar files produced by a Bean Provider or Application
Assembler and deploys the enterprise beans contained in the ejb-jar files in a specific
operational environment. The operational environment includes a specific EJB server and
container.
The Deployer must resolve all the external dependencies declared by the Bean Provider
(e.g. the Deployer must ensure that all resource manager connection factories used by the
enterprise beans are present in the operational environment, and he or she must bind them to
the resource manager connection factory references declared in the metadata annotations
or deployment descriptor), and must follow the application assembly instructions defined by
the Application Assembler. To perform his or her role, the Deployer uses tools provided by
the EJB Container Provider.
The Deployer’s output is a set of enterprise beans (or an assembled application that
includes enterprise beans) that have been customized for the target operational environment,
and that are deployed in a specific EJB container.

The Deployer is an expert at a specific operational environment and is responsible for
the deployment of enterprise beans. For example, the Deployer is responsible for mapping
the security roles defined by the Bean Provider or Application Assembler to the user groups
and accounts that exist in the operational environment in which the enterprise beans are
deployed.
The Deployer uses tools supplied by the EJB Container Provider to perform the
deployment tasks. The deployment process is typically two-stage:
 The Deployer first generates the additional classes and interfaces that enable the
container to manage the enterprise beans at runtime. These classes are container-
specific.
 The Deployer performs the actual installation of the enterprise beans and the
additional classes and interfaces into the EJB container.
In some cases, a qualified Deployer may customize the business logic of the enterprise
beans at their deployment. Such a Deployer would typically use the Container Provider’s
tools to write relatively simple application code that wraps the enterprise bean’s business
methods.
d) EJB Server Provider
The EJB Server Provider is a specialist in the area of distributed transaction management,
distributed objects, and other lower-level system-level services. A typical EJB Server Provider
is an OS vendor, middleware vendor, or database vendor.
e) EJB Container Provider
The EJB Container Provider provides:
 The deployment tools necessary for the deployment of enterprise beans.
 The runtime support for the deployed enterprise bean instances.
From the perspective of the enterprise beans, the container is a part of the target
operational environment. The container runtime provides the deployed enterprise beans
with transaction and security management, network distribution of remote clients, scalable
management of resources, and other services that are generally required as part of a
manageable server platform.
The expertise of the Container Provider is system-level programming, possibly combined
with some application-domain expertise. The focus of a Container Provider is on the
development of a scalable, secure, transaction-enabled container that is integrated with an
EJB server. The Container Provider insulates the enterprise bean from the specifics of an
underlying EJB server by providing a simple, standard API between the enterprise bean and
the container. This API is the Enterprise JavaBeans component contract.
The Container Provider typically provides support for versioning the installed enterprise
bean components. For example, the Container Provider may allow enterprise bean classes
to be upgraded without invalidating existing clients or losing existing enterprise bean objects.
The Container Provider typically provides tools that allow the System Administrator to
monitor and manage the container and the beans running in the container at runtime.
f) System Administrator
The System Administrator is responsible for the configuration and administration of
the enterprise’s computing and networking infrastructure that includes the EJB server and
container. The System Administrator is also responsible for overseeing the well-being of
the deployed enterprise beans applications at runtime.

5.4 TYPES OF BEANS
Enterprise JavaBeans server-side components come in two fundamentally different
types: entity beans and session beans.
Entity beans are basically used to model business concepts. For example, an entity
bean might represent a customer, a piece of equipment, an item in inventory. Thus entity
beans model real-world objects.
Session beans are for managing processes or tasks. A session bean is mainly for
coordinating particular kinds of activities. That is, session beans are plain remote objects
meant for abstracting business logic. The activity that a session bean represents is
fundamentally transient. A session bean does not represent anything in a database, but it can
access the database. Thus an entity bean has persistent state whereas a session bean
models interactions but does not have persistent state.
Session beans are transaction-aware. In a distributed component environment, managing
transactions across several components mandates distributed transaction processing. The
EJB architecture allows the container to manage transactions declaratively. This mechanism
lets a bean developer specify transactions across bean methods. Session beans are client-
specific. That is, session bean instances on the server side are specific to the client that
created them on the client side. This eliminates the need for the developer to deal with
multiple threading and concurrency.
Unlike session beans, entity beans have a client-independent identity. This is because
an entity bean encapsulates persistent data. The EJB architecture lets a developer register
a primary key class to encapsulate the minimal set of attributes required to represent the
identity of an entity bean. Clients can use these primary key objects to accomplish
database operations such as creating, locating, or deleting entity beans. Since entity beans represent
persistent state, entity beans can be shared across different clients. Similar to session beans,
entity beans are also transactional, except for the fact that bean instances are not allowed to
programmatically control transactions. These two types of beans are meant for synchronous
invocation. That is, when a client invokes a method on one of the above types, the client
thread will be blocked till the EJB container completes executing the method on the bean
instance.
Message-driven beans were introduced in the EJB 2.0 specification, which is supported
by Java 2 Platform, Enterprise Edition 1.3 or higher. A message-driven bean (MDB) is an
EJB 3.0 or EJB 2.1 EJB component that functions as an asynchronous message consumer.
It can service messages which come asynchronously over a messaging service such as
JMS (Java Message Service), which is a messaging standard that allows application
components based on the Java 2 Platform, Enterprise Edition (J2EE) to create, send, receive,
and read messages. Unlike other types of beans, a message-driven bean is a local object
without home and remote interfaces.
The different types of beans are given in Figure 5.2

Figure 5.2 Types of Beans
(Diagram: EJB beans divide into Entity beans, further split into CMP and BMP; Message-Driven beans; and Session beans, further split into Stateful and Stateless.)

5.4.1 Session Bean
Session Beans are actions that can be executed by client code, such as making a
reservation or charging a credit card. When a client wants to perform any of these actions,
a session bean should be used. The session bean decides what data to modify. Typically, the
session bean uses an entity bean to access or modify data. Session beans represent business
processes to be performed. They implement business logic, business rules, algorithms, and
work flows. Session beans are relatively short-lived components. Their lifetime is equivalent
to a client session.
For example, if the client code invokes a session bean to perform credit card validation,
the EJB container creates an instance of that session bean. After performing the business
logic, if the client disconnects or terminates the session, the application server may destroy
the session bean. The length of the client’s session generally determines how long a session
bean is in use. The EJB container may destroy a session bean if its client times out. Session
beans neither survive application server crashes nor machine crashes. Session beans are
not persistent, i.e. they are not saved to permanent storage such as a database. A session
bean can perform database operations, but session beans themselves are not persistent.
Based on the possible types of conversations between a client and a bean, there are
two types of session beans:
Stateful Session Beans
These are beans that serve business processes that span multiple method requests or
transactions. A stateful session bean retains its state across multiple method invocations
made by the same client. If the stateful session bean’s state is changed during a method
invocation, then that state will be available to the same client on the following invocation.
For example, consider a customer using a debit card at an ATM machine. The ATM
could perform operations like checking an account balance, transferring funds, or making a
withdrawal. These operations could be performed, one by one, by the same customer. So
the bean needs to keep track of the state for each of these operations.
Another example: a stateful session bean could implement the server side of a shopping
cart on-line application, which would have methods to return a list of objects that are available
for purchase, put items in the customer’s cart, place an order, change a customer’s profile,
and so on.
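The shopping-cart idea above can be sketched as the business-logic core of a hypothetical stateful bean. Only the conversational state is shown; the container plumbing (home and remote interfaces, lifecycle callbacks, deployment metadata) is omitted.

```java
// Sketch of the conversational state a stateful session bean keeps for
// one client (hypothetical shopping-cart bean; container plumbing omitted).
import java.util.ArrayList;
import java.util.List;

public class CartBean {
    // Conversational state, retained across calls from the SAME client.
    private final List<String> items = new ArrayList<>();

    public void addItem(String item) { items.add(item); }
    public void removeItem(String item) { items.remove(item); }
    public List<String> getItems() { return items; }
}
```

Because the items list survives between method calls, a client can add items in one invocation and place the order in a later one; this is exactly the state a stateless bean would not retain.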
Stateless Session Beans
These are beans that serve business requests that span only a single method invocation.
They are stateless because after each method call the container may choose to destroy a
stateless session bean or recreate it, clearing all information pertaining to past transactions.
Alternatively, the bean instance may not be destroyed; instead it may be reused to serve
other clients.
For example, a stateless session bean could be a credit card verification component.
This bean takes the credit card number, expiration date, holder’s name, and dollar amount as
input and returns whether the credit card holder’s credit is valid or not. Once the bean’s task
is over it is available to serve a different client and it retains no past knowledge of previous
clients.
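The credit-card example can be sketched as follows: everything the business method needs arrives as parameters, and no per-client state survives the call. The validation rule below is a trivial stand-in for illustration, not a real verification algorithm.

```java
// Sketch of a stateless bean's business method. No instance fields hold
// client state, so the container may freely pool and reuse instances.
public class CardVerifierBean {
    public boolean isValid(String cardNumber, int expiryYear, double amount) {
        // Stand-in check; a real verifier would consult a payment gateway.
        return cardNumber.length() == 16 && expiryYear >= 2024 && amount > 0;
    }
}
```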
5.4.2 Entity Bean
Entity beans deal with data. They typically represent nouns, such as a frequent flier
account, customer, or payment. Plain old Java objects (POJOs) come into existence when
they are created in a program. When the program terminates, the object is lost. But an entity
bean stays around until it is deleted. A program can create an entity bean and then the
program can be stopped and restarted, but the entity bean will continue to exist. After being
restarted, the program can again find the entity bean it was working with and continue using
it.


Entity beans can be uniquely identified by a primary key. A primary key is an object that
uniquely identifies the entity bean. According to the specification, the primary key must be unique
for each entity bean within a container. Hence the bean’s primary key usually maps to the PK in the
database (provided it’s persisted to a database). However, it is not necessary that a primary key has
to be present in the database. As long as the bean’s primary key (which maps to a column or set of
columns) can uniquely identify the bean, it should work. It may, however, be created for the sake of
referential integrity. For example, an “employee” entity bean may use the employee’s social security
number or ID as its primary key.
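A primary key class along these lines might be sketched as below. The class and field names are hypothetical; what matters is that a key class must be serializable and define equals() and hashCode() so the container can compare and index bean identities.

```java
// Hypothetical primary key class for an "employee" entity bean.
import java.io.Serializable;

public class EmployeePK implements Serializable {
    public String employeeId;  // maps to the identifying column(s)

    public EmployeePK() { }  // a public no-arg constructor is required

    public EmployeePK(String employeeId) { this.employeeId = employeeId; }

    @Override
    public boolean equals(Object other) {
        // Two keys are equal exactly when they identify the same bean.
        return (other instanceof EmployeePK)
                && ((EmployeePK) other).employeeId.equals(employeeId);
    }

    @Override
    public int hashCode() { return employeeId.hashCode(); }
}
```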
Unlike Java objects that are used only by one program, an entity bean can be used by any
program on the network. Client programs just need to find the entity bean via JNDI in order to use it.
Methods of an entity bean run on a “server” machine. When a client program calls an entity bean’s
method, the client program’s thread stops executing and control passes over to the server. When the
method returns from the server, the local thread resumes execution.
The characteristics of Entity beans, that is - are persistent, allow shared access, have primary
keys, and may participate in relationships with other entity beans - are given below in greater detail.
Persistence
Because the state of an entity bean is saved in a storage mechanism, it is persistent. Persistence
means that the entity bean’s state exists beyond the lifetime of the application or the J2EE server
process. For example, the data in a database is persistent because it still exists even the database
server or the applications it services are shut down.
There are two types of persistence for entity beans: bean-managed and container-managed.
With bean-managed persistence (BMP), the entity bean code contains the calls that access the
database. In container-managed persistence (CMP), the EJB container automatically generates the
necessary database access calls. The code that you write for the entity bean does not include these
calls.
Shared Access
Entity beans may be shared by multiple clients. Because the clients might want to change the
same data, it’s important that entity beans work within transactions. Typically, the EJB container
provides transaction management.
Primary Key
Each entity bean has a unique object identifier. A customer entity bean, for example, might be
identified by a customer number. The unique identifier, or primary key, enables the client to locate a
particular entity bean.
Relationships
Like a table in a relational database, an entity bean may be related to other entity beans. For
example, in a college enrollment application, StudentEJB and CourseEJB would be related because
students enroll in classes. Relationships are implemented differently for entity beans with bean-
managed persistence and those with container-managed persistence.
With bean-managed persistence, the code is written to implement the relationships. But with
container-managed persistence, the EJB container takes care of the relationships.
Container-Managed Persistence
The term container-managed persistence means that the EJB container handles all database
access required by the entity bean. The bean’s code contains no database access (SQL) calls. As a
result, the bean’s code is not tied to a specific persistent storage mechanism (database). Because of
this flexibility, even if you redeploy the same entity bean on different J2EE servers that use different
databases, it is not necessary to modify or recompile the bean’s code. In short, entity beans are more

portable. In order to generate the data access calls, the container needs information that the
developer provides in the entity bean’s abstract schema.
Abstract Schema
Part of an entity bean’s deployment descriptor, the abstract schema defines the bean’s
persistent fields and relationships. The term abstract distinguishes this schema from the
physical schema of the underlying data store. In a relational database, for example, the
physical schema is made up of structures such as tables and columns.
The name of an abstract schema is specified in the deployment descriptor. This name
is referenced by queries written in the Enterprise JavaBeans Query Language (“EJB QL”).
For an entity bean with container-managed persistence, it is required to define an EJB QL
query for every finder method (except findByPrimaryKey). The EJB QL query determines
the query that is executed by the EJB container when the finder method is invoked.
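As an illustration of the ideas above, a hypothetical ejb-jar.xml fragment is sketched below, showing how an abstract schema name and an EJB QL finder query might be declared for a CMP entity bean. The element names follow the EJB 2.1 deployment descriptor; the bean, field, and method names are invented for the example.

```xml
<!-- Hypothetical ejb-jar.xml fragment for a CMP entity bean -->
<entity>
  <ejb-name>CustomerEJB</ejb-name>
  <abstract-schema-name>Customer</abstract-schema-name>
  <query>
    <query-method>
      <method-name>findByLastName</method-name>
      <method-params>
        <method-param>java.lang.String</method-param>
      </method-params>
    </query-method>
    <ejb-ql>SELECT OBJECT(c) FROM Customer c WHERE c.lastName = ?1</ejb-ql>
  </query>
</entity>
```

When a client calls findByLastName on the home interface, the container executes the declared EJB QL query against the abstract schema and maps it to the underlying physical schema.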
5.4.3 Message-Driven Bean
Message-Driven Beans are enterprise beans that allow applications to process messages
asynchronously. They act as a JMS message listener, which is similar to an event listener,
except that it receives messages instead of events. Messages can be sent by any J2EE
component (an application client, another enterprise bean, or a Web component) or by a JMS
application. The most visible difference between message-driven beans and session or entity
beans is that clients do not access message-driven beans through interfaces. Unlike a session
or entity bean, a message-driven bean has only a bean class.
In a J2EE platform, message-driven beans are registered against JMS destinations.
When a JMS message arrives at a destination, the EJB container invokes the associated
message-driven bean. Thus message-driven beans do not require home and remote interfaces,
as instances of these beans are created based on the receipt of JMS messages. This is an
asynchronous activity and does not involve clients directly. The main purpose of message-
driven beans is to implement business logic in response to JMS messages. For instance, take
a B2B e-commerce application receiving a purchase order via a JMS message as an XML
document. On receipt of such a message, in order to persist this data and perform any
business logic, one can implement a message-driven bean and associate it with the
corresponding JMS destination. Also, these beans are completely decoupled from the clients
that send messages.
In several respects, a message-driven bean resembles a stateless session bean:
 A message-driven bean’s instances retain no data or conversational state for a
specific client.
 All instances of a message-driven bean are equivalent, allowing the EJB container
to assign a message to any message-driven bean instance. The container can pool
these instances to allow streams of messages to be processed concurrently.
 A single message-driven bean can process messages from multiple clients.
The instance variables of a message-driven bean can contain some state across the
handling of client messages, for example a JMS API connection, an open database connection,
or an object reference to an enterprise bean. Session beans and entity beans can send
JMS messages and receive them synchronously, but not asynchronously.
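The purchase-order scenario can be sketched as the core of a hypothetical message-driven bean. A real MDB would implement javax.ejb.MessageDrivenBean and javax.jms.MessageListener and receive a javax.jms.Message; here the JMS type is stubbed locally so the sketch stands alone.

```java
// Sketch of a message-driven bean's core: a callback the container
// invokes for each delivered message (JMS type stubbed locally).
interface TextMessageStub {       // stands in for javax.jms.TextMessage
    String getText();
}

public class OrderMDB {
    // No home or remote interface: the container invokes onMessage()
    // directly when a message arrives at the bean's JMS destination.
    public String onMessage(TextMessageStub message) {
        // e.g. parse the purchase-order XML and persist it
        return "processed:" + message.getText();
    }
}
```

Note that the sender never obtains a reference to the bean; it only knows the destination, which is what decouples messaging clients from the beans that consume their messages.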
5.5 THE LIFE CYCLES OF ENTERPRISE BEANS
An enterprise bean goes through various stages during its lifetime, or life cycle. Each
type of enterprise bean—session, entity, or message-driven—has a different life cycle. The
lifecycle of an EJB involves important events such as creation, passivation, activation, and
removal. Each such event is associated with a callback defined on the EJB class; the container
invokes the callback prior to or immediately after the lifecycle event (depending on the
event type).


5.5.1 Stateful Session Bean


Figure 5.3 illustrates the stages that a session bean passes through during its lifetime.
The client initiates the life cycle by invoking the create method. The EJB container instantiates
the bean and then invokes the setSessionContext and ejbCreate methods in the session
bean. The bean is now ready to have its business methods invoked.
While in the ready stage, the EJB container may decide to deactivate, or passivate, the
bean by moving it from memory to secondary storage. (Typically, the EJB container uses a
least-recently-used algorithm to select a bean for passivation.) The EJB container invokes
the bean’s ejbPassivate method immediately before passivating it. If a client invokes a
business method on the bean while it is in the passive stage, the EJB container activates the
bean, moving it back to the ready stage, and then calls the bean’s ejbActivate method.

Figure 5.3 Life Cycle of a Stateful Session Bean

At the end of the life cycle, the client invokes the remove method and the EJB
container calls the bean’s ejbRemove method. The bean’s instance is ready for garbage
collection. Code is written to control the invocation of only two life-cycle methods—the
create and remove methods in the client. All other methods are invoked by the EJB container.
The ejbCreate method, for example, is inside the bean class, allowing one to perform certain
operations right after the bean is instantiated. For instance, it could be used to connect to a
database in the ejbCreate method.
The lifecycles for EJB 3.0 and EJB 2.1 stateful session beans are identical. The difference
is in how you register lifecycle callback methods.
Table 5.1 lists the EJB 2.1 lifecycle methods, as specified in the javax.ejb.SessionBean
interface, that a stateful session bean must implement. For EJB 2.1 stateful session beans,
the developer must at the least provide an empty implementation for all callback methods.


Table 5.1 Lifecycle Methods for an EJB 2.1 Stateful Session Bean

ejbCreate          The container invokes this method right before it creates
                   the bean. Stateless session beans must do nothing in this
                   method. Stateful session beans can initialize state in this
                   method.

ejbActivate        The container invokes this method right after it reactivates
                   the bean.

ejbPassivate       The container invokes this method right before it passivates
                   the bean.

ejbRemove          The container invokes this method before it ends the life of
                   the session object. This method performs any required
                   clean-up, for example closing external resources such as
                   file handles.

setSessionContext  The container invokes this method after it first instantiates
                   the bean. Use this method to obtain a reference to the
                   context of the bean.

Table 5.2 lists the optional EJB 3.0 stateful session bean lifecycle callback methods you can
define using annotations. For EJB 3.0 stateful session beans, you do not need to implement
these methods.
Table 5.2 Lifecycle Methods for an EJB 3.0 Stateful Session Bean

@PostConstruct: This optional method is invoked for a stateful session bean before the first business method invocation on the bean. This is at a point after which any dependency injection has been performed by the container.
@PreDestroy: This optional method is invoked for a stateful session bean when the instance is in the process of being removed by the container. The instance typically releases any resources that it has been holding.
@PrePassivate: The container invokes this method right before it passivates a stateful session bean.
@PostActivate: The container invokes this method right after it reactivates a formerly passivated stateful session bean.
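As a sketch of how a container drives these annotation-based callbacks, the plain-Java simulation below defines local stand-in annotations (the real @PostConstruct, @PreDestroy, @PrePassivate and @PostActivate live in javax.annotation and javax.ejb and need an EJB container) and discovers callback methods by reflection, the way a container does. All class and method names here are illustrative assumptions, not container code.

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class CallbackDemo {
    // Local stand-ins for the EJB 3.0 callback annotations, so the sketch
    // compiles without an EJB container on the classpath.
    @Retention(RetentionPolicy.RUNTIME) public @interface PostConstruct {}
    @Retention(RetentionPolicy.RUNTIME) public @interface PrePassivate {}
    @Retention(RetentionPolicy.RUNTIME) public @interface PostActivate {}
    @Retention(RetentionPolicy.RUNTIME) public @interface PreDestroy {}

    // A "stateful bean": callback methods are marked by annotation, and no
    // callback interface has to be implemented.
    public static class CartBean {
        public final StringBuilder log = new StringBuilder();
        @PostConstruct public void init()       { log.append("init;"); }
        @PrePassivate  public void beforeSwap() { log.append("passivate;"); }
        @PostActivate  public void afterSwap()  { log.append("activate;"); }
        @PreDestroy    public void cleanup()    { log.append("destroy;"); }
    }

    // What the container does at each lifecycle event: find the annotated
    // method by reflection and invoke it.
    public static void fire(Object bean, Class<? extends Annotation> event) {
        try {
            for (Method m : bean.getClass().getMethods())
                if (m.isAnnotationPresent(event)) m.invoke(bean);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        CartBean bean = new CartBean();
        fire(bean, PostConstruct.class);  // after dependency injection
        fire(bean, PrePassivate.class);   // before the container swaps it out
        fire(bean, PostActivate.class);   // after it is swapped back in
        fire(bean, PreDestroy.class);     // before removal
        System.out.println(bean.log);     // prints "init;passivate;activate;destroy;"
    }
}
```

Because the methods are located by annotation rather than by name, the bean is free to call them anything, which is exactly the flexibility EJB 3.0 adds over the fixed javax.ejb.SessionBean method names of EJB 2.1.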

Table 5.3 Lifecycle Methods for an EJB 2.1 Stateless Session Bean

ejbCreate: The container invokes this method right before it creates the bean. Use this method to initialize non-client-specific information, such as retrieving a data source.
ejbActivate: This method is never called for a stateless session bean. Provide an empty implementation only.
ejbPassivate: This method is never called for a stateless session bean. Provide an empty implementation only.
ejbRemove: The container invokes this method before it ends the life of the stateless session bean. Use this method to perform any required clean-up, for example, closing external resources such as a data source.
setSessionContext: The container invokes this method after it first instantiates the bean. Use this method to obtain a reference to the context of the bean.

5.5.2 Stateless Session Bean


Because a stateless session bean is never passivated, its life cycle has just two stages:
nonexistent and ready for the invocation of business methods. Figure 5.4 illustrates the
stages of a stateless session bean.

Figure 5.4 Life Cycle of a Stateless Session Bean


The lifecycles for EJB 3.0 and EJB 2.1 stateless session beans are identical. The
difference is in how you register lifecycle callback methods.
Table 5.3 lists the EJB 2.1 lifecycle methods, as specified in the javax.ejb.SessionBean
interface, that a stateless session bean must implement. For EJB 2.1 stateless session beans,
you must at least provide an empty implementation for all callback methods.
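The two-stage lifecycle is what lets a container keep stateless beans in a shared pool: since an instance holds no per-client state, any instance can serve any request. The following plain-Java sketch mimics that pooling; all names are illustrative assumptions, not real container code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StatelessPoolDemo {
    // A "stateless bean": it keeps no per-client state, so every instance
    // is interchangeable with every other.
    public static class CalculatorBean {
        public int add(int a, int b) { return a + b; }
    }

    // A toy container pool: borrow any instance, use it, return it.
    private static final Deque<CalculatorBean> pool = new ArrayDeque<>();
    static {
        pool.push(new CalculatorBean());
        pool.push(new CalculatorBean());
    }

    // Serve one client request with whichever instance is free.
    public static int serve(int a, int b) {
        CalculatorBean bean = pool.isEmpty() ? new CalculatorBean() : pool.pop();
        try {
            return bean.add(a, b);
        } finally {
            pool.push(bean);  // back to the pool for the next client
        }
    }

    public static void main(String[] args) {
        // Two different "clients" may be served by the same instance;
        // because the bean holds no conversational state, neither can tell.
        System.out.println(serve(2, 3));  // prints 5
        System.out.println(serve(4, 6));  // prints 10
    }
}
```

A stateful bean could not be shared this way: each instance would carry one client's conversation, which is why only stateless beans are pooled and swapped freely.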

Table 5.4 lists the optional EJB 3.0 stateless session bean lifecycle callback methods
you can define using annotations. For EJB 3.0 stateless session beans, you do not need to
implement these methods.

Table 5.4 Lifecycle Methods for an EJB 3.0 Stateless Session Bean

@PostConstruct: This optional method is invoked for a stateless session bean before the first business method invocation on the bean. This is at a point after which any dependency injection has been performed by the container.
@PreDestroy: This optional method is invoked for a stateless session bean when the instance is in the process of being removed by the container. The instance typically releases any resources that it has been holding.

5.5.3 Entity Bean


Figure 5.5 shows the stages that an entity bean passes through during its lifetime.
After the EJB container creates the instance, it calls the setEntityContext method of the
entity bean class. The setEntityContext method passes the entity context to the bean.
After instantiation, the entity bean moves to a pool of available instances. While in the
pooled stage, the instance is not associated with any particular EJB object identity. All
instances in the pool are identical. The EJB container assigns an identity to an instance
when moving it to the ready stage.

Figure 5.5 Life Cycle of an Entity Bean


There are two paths from the pooled stage to the ready stage. On the first path, the
client invokes the create method, causing the EJB container to call the ejbCreate and
ejbPostCreate methods. On the second path, the EJB container invokes the ejbActivate
method. While in the ready stage, an entity bean’s business methods may be invoked.
There are also two paths from the ready stage to the pooled stage. First, a client may
invoke the remove method, which causes the EJB container to call the ejbRemove method.
Second, the EJB container may invoke the ejbPassivate method. At the end of the life
cycle, the EJB container removes the instance from the pool and invokes the
unsetEntityContext method.
In the pooled state, an instance is not associated with any particular EJB object
identity. With bean-managed persistence, when the EJB container moves an instance from
the pooled state to the ready state, it does not automatically set the primary key. Therefore,
the ejbCreate and ejbActivate methods must assign a value to the primary key. If the
primary key is incorrect, the ejbLoad and ejbStore methods cannot synchronize the instance
variables with the database. In the pooled state, the values of the instance variables are
not needed. You can make these instance variables eligible for garbage collection by setting
them to null in the ejbPassivate method.
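The pooled/ready transitions and the BMP obligation to assign the primary key can be sketched in plain Java. The Map below stands in for the database table, and all class and field names are illustrative assumptions, not part of the EJB API.

```java
import java.util.HashMap;
import java.util.Map;

public class EntityLifecycleDemo {
    // In-memory stand-in for the database table backing the entity.
    public static final Map<Integer, String> table = new HashMap<>();

    public static class AccountBean {
        public Integer primaryKey;  // null while pooled: no identity yet
        public String owner;

        // Path 1 to the ready stage: the client calls create().
        public void ejbCreate(int id, String owner) {
            this.primaryKey = id;   // BMP code must assign the key itself
            this.owner = owner;
            table.put(id, owner);   // insert the "row"
        }
        // Path 2 to the ready stage: the container activates a pooled instance.
        public void ejbActivate(int id) {
            this.primaryKey = id;   // identity is assigned here as well
        }
        // Synchronize instance variables from the "database".
        public void ejbLoad() {
            this.owner = table.get(primaryKey);
        }
        // Back to the pool: drop identity and free state for garbage collection.
        public void ejbPassivate() {
            this.primaryKey = null;
            this.owner = null;
        }
    }

    public static void main(String[] args) {
        AccountBean bean = new AccountBean();
        bean.ejbCreate(42, "alice");     // pooled -> ready via create
        bean.ejbPassivate();             // ready -> pooled
        bean.ejbActivate(42);            // pooled -> ready via activate
        bean.ejbLoad();                  // would fail without a correct primary key
        System.out.println(bean.owner);  // prints "alice"
    }
}
```

Note how ejbLoad relies on the key set in ejbActivate: if the key were wrong or missing, the instance variables could not be synchronized with the stored data, which is exactly the failure mode the text warns about.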

5.5.4 Message-Driven Bean

Figure 5.6 illustrates the stages in the life cycle of a message-driven bean. The EJB
container usually creates a pool of message-driven bean instances. For each instance, the
EJB container instantiates the bean and performs these tasks:
 It calls the setMessageDrivenContext() method to pass the context object to the
instance.
 It calls the instance’s ejbCreate() method.
Like a stateless session bean, a message-driven bean is never passivated, and it has
only two states: nonexistent and ready to receive messages. At the end of the life cycle, the
container calls the ejbRemove() method. The bean’s instance is then ready for garbage
collection.

Figure 5.6 Life Cycle of a Message-Driven Bean
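JMS is not part of the Java standard library, so as an illustrative analogy only, the sketch below uses a BlockingQueue as the destination and a plain class in place of the MDB; the "container" delivery loop calls onMessage once per message, and clients never call the bean directly. All names are assumptions made for the sketch.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MdbSketch {
    // Stand-in for a JMS destination; a real MDB is bound to a Queue or Topic.
    public static final BlockingQueue<String> destination = new LinkedBlockingQueue<>();

    // Stand-in for the message-driven bean: the container calls onMessage
    // once per delivered message; clients never invoke the bean directly.
    public static class LoggerBean {
        public final StringBuilder seen = new StringBuilder();
        public void onMessage(String text) { seen.append(text).append('|'); }
    }

    public static void main(String[] args) throws InterruptedException {
        // A "producer" sends messages without waiting for the consumer.
        destination.put("hello");
        destination.put("world");

        // The "container" delivery loop: take each message, dispatch to the bean.
        LoggerBean bean = new LoggerBean();
        while (!destination.isEmpty()) bean.onMessage(destination.take());
        System.out.println(bean.seen);  // prints "hello|world|"
    }
}
```

As in the real lifecycle, the consumer has only two states here: it either does not exist or is ready to receive; there is nothing to passivate because no conversational state is kept.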


5.6 EJB USAGE
The different EJB Types are summarized in Table 5.5

Table 5.5 EJB Types


Session: An EJB 3.0 or EJB 2.1 EJB component created by a client for the duration of a single client-server session, used to perform operations for the client.
Stateless: A session bean that does not maintain conversational state. Used for reusable business services that are not connected to any specific client.
Stateful: A session bean that does maintain conversational state. Used for conversational sessions with a single client (for the duration of its lifetime) that maintain state, such as instance variable values or transactional state.
Entity: An EJB 3.0 compliant light-weight entity object that represents persistent data stored in a relational database using container-managed persistence. Because it is not a remotely accessible component, an entity can represent a fine-grained persistent object.
Entity Bean: An EJB 2.1 EJB component that represents persistent data stored in a relational database.
CMP: A Container-Managed Persistence (CMP) entity bean is an entity bean that delegates persistence management to the container that hosts it.
BMP: A Bean-Managed Persistence (BMP) entity bean is an entity bean that manages its own persistence.
MDB: A Message-Driven Bean (MDB) is an EJB 3.0 or EJB 2.1 EJB component that functions as an asynchronous consumer of Java Message Service (JMS) messages.

Session Beans

Stateless session beans are useful mainly in middle-tier application servers that provide
a pool of beans to process frequent and brief requests. Table 5.6 provides a definition for
both BMP and CMP, and a summary of the programmatic and declarative differences
between them.


Table 5.6 Comparison of Bean-Managed and Container-Managed Persistence


Persistence management:
Bean-Managed Persistence: The user has to implement the persistence management within the ejbStore, ejbLoad, ejbCreate, and ejbRemove entity bean methods. These methods must contain logic for saving and restoring the persistent data. For example, the ejbStore method must have logic in it to store the entity bean's data to the appropriate database. If it does not, the data can be lost.
Container-Managed Persistence: The management of the persistent data is done for the user; that is, the container invokes a persistence manager on behalf of the bean. ejbStore and ejbLoad can be used for preparing the data before the commit or for manipulating the data after it is refreshed from the database. The container always invokes the ejbStore method right before the commit. In addition, it always invokes the ejbLoad method right after reinstating CMP data from the database.

Finder methods allowed:
Bean-Managed Persistence: The findByPrimaryKey method and other finder methods are allowed.
Container-Managed Persistence: The findByPrimaryKey method and other finder methods are allowed.

Defining CMP fields:
Bean-Managed Persistence: Not applicable.
Container-Managed Persistence: Required within the EJB deployment descriptor. The primary key must also be declared as a CMP field.

Mapping CMP fields to resource destination:
Bean-Managed Persistence: Not applicable.
Container-Managed Persistence: Required. Dependent on the persistence manager.

Definition of persistence manager:
Bean-Managed Persistence: Not applicable.
Container-Managed Persistence: Required within the Oracle-specific deployment descriptor. By default, OC4J uses the TopLink persistence manager.

With CMP, it is possible to build components to the EJB 2.0 specification that can
save the state of an EJB to any J2EE-supporting application server and database without
having to create the user's own low-level JDBC-based persistence system.

With BMP, the persistence layer of the application can be further tailored, at the expense
of additional coding and support effort.
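The BMP column of the comparison can be sketched in plain Java: the bean itself carries the save and load logic in ejbStore and ejbLoad. An in-memory Map stands in for the JDBC data source, and all names are illustrative assumptions rather than real EJB API code.

```java
import java.util.HashMap;
import java.util.Map;

public class BmpStoreDemo {
    // Stand-in for the database; a real BMP bean would use a JDBC data source.
    public static final Map<Integer, Integer> db = new HashMap<>();

    // BMP bean: it carries its own save and restore logic.
    public static class CounterBean {
        public int primaryKey;
        public int count;

        // Invoked by the container right before the commit; without this
        // write, the data would be lost.
        public void ejbStore() { db.put(primaryKey, count); }

        // Invoked right after the data is refreshed from the database.
        public void ejbLoad() { count = db.get(primaryKey); }
    }

    public static void main(String[] args) {
        CounterBean bean = new CounterBean();
        bean.primaryKey = 1;
        bean.count = 5;
        bean.ejbStore();  // persist the state

        CounterBean other = new CounterBean();
        other.primaryKey = 1;
        other.ejbLoad();  // another instance sees the committed state
        System.out.println(other.count);  // prints 5
    }
}
```

Under CMP, both method bodies would be essentially empty: the container's persistence manager would generate the equivalent read and write calls itself.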
Difference Between Session and Entity Beans

The major differences between session and entity beans are that entity beans involve
a framework for persistent data management, a persistent identity, and complex business
logic. Table 5.7 illustrates the different interfaces for session and entity beans. Notice that
the difference between the two types of EJBs exists within the bean class and the primary
key. All of the persistent data management is done within the bean class methods.

[Figure 5.7 (class diagram): the Hello World local interface, remote interface, home interface, and local home interface extend javax.ejb.EJBLocalObject, javax.ejb.EJBObject, javax.ejb.EJBHome, and javax.ejb.EJBLocalHome respectively; these derive from javax.ejb.EnterpriseBean, which extends java.rmi.Remote and java.io.Serializable. The Hello World bean implementation class implements javax.ejb.SessionBean. Each interface is realized at runtime by a container-generated object: the EJB local object, EJB object, home object, and local home object.]

Figure 5.7 Hello World Object Model


Table 5.7 Session and Entity Bean Differences

For each subject, the entity bean is listed first and the session bean second:
 Local interface: extends javax.ejb.EJBLocalObject; extends javax.ejb.EJBLocalObject.
 Remote interface: extends javax.ejb.EJBObject; extends javax.ejb.EJBObject.
 Local Home interface: extends javax.ejb.EJBLocalHome; extends javax.ejb.EJBLocalHome.
 Remote Home interface: extends javax.ejb.EJBHome; extends javax.ejb.EJBHome.
 Bean class: extends javax.ejb.EntityBean; extends javax.ejb.SessionBean.
 Primary key: used to identify and retrieve specific bean instances; not used for session beans. Stateful session beans do have an identity, but it is not externalized.

5.7 DEVELOPING AN EJB COMPONENT

When building an EJB component, the following is a typical order of operations.
Step 1: Write the .java files that compose the bean: the component interface, home
interfaces, enterprise bean class file and any helper classes that are needed.
Step 2: Write the deployment descriptor, or have it generated by your IDE or tools like
XDoclet.
Step 3: Compile the .java files from step 1 into .class files.
Step 4: Using the jar utility, create an Ejb-jar file containing the deployment descriptor
and .class files.
Step 5: Deploy the Ejb-jar file into your container in a vendor-specific manner, perhaps by
running a vendor-specific tool or perhaps by copying your Ejb-jar file into a folder where
your container looks to load Ejb-jar files.
Step 6: Configure your EJB server so that it is properly configured to host your Ejb-jar
file.
Step 7: Start your EJB container and confirm that it has loaded your Ejb-jar file.
Step 8: Optionally write a standalone test client .java file. Compile the test client into a
.class file. Run the test client.
Figure 5.7 shows the class diagram for the Hello World example and its base classes.

5.7.1 The Remote Interface

The Remote Interface supports every business method of the bean. The class diagram
of the remote interface is as shown in Figure 5.8.

[Figure 5.8 (class diagram): the Hello interface extends javax.ejb.EJBObject, which extends java.rmi.Remote.]

Figure 5.8 Class diagram of Remote Interface

Create a file Hello.java to store the Java code. The source code of the remote interface
for Hello World is given below:
package examples;
import java.rmi.RemoteException;
import java.rmi.Remote;
import javax.ejb.*;
/* This is HelloBean remote interface. This interface is what clients operate on when
they interact with Ejb objects. The container vendor will implement this interface, the
implemented object is the EJB object, which delegates invocations to the actual bean. */
public interface Hello extends javax.ejb.EJBObject
{
/** The one method Hello returns a greeting to the client.**/
public String hello() throws java.rmi.RemoteException;
}
The Remote Interface includes the following:

a) javax.ejb.EJBObject: The container-generated EJB object, which implements the
remote interface, will contain every method that the javax.ejb.EJBObject interface
defines.
b) One business method, hello(), which returns the string "Hello, World!" to the client.
This method is to be implemented in the enterprise bean class.
The interface javax.ejb.EJBObject is as given below:
public interface javax.ejb.EJBObject extends java.rmi.Remote
{
public javax.ejb.EJBHome getEJBHome() throws
java.rmi.RemoteException;
public java.lang.Object getPrimaryKey() throws
java.rmi.RemoteException;
public void remove() throws java.rmi.RemoteException,
javax.ejb.RemoveException;
public javax.ejb.Handle getHandle() throws
java.rmi.RemoteException;
public boolean isIdentical(javax.ejb.EJBObject obj) throws
java.rmi.RemoteException;
}
5.7.2 The Local Interface
Local clients use the local interface. The source code for the local interface is given
below:
package examples;
/* This is the HelloBean local interface. This interface is what local clients operate on when
they interact with EJB local objects. */
public interface HelloLocal extends javax.ejb.EJBLocalObject
{
/* The one method, hello, returns a greeting to the client. */
public String hello();
}
The javax.ejb.EJBLocalObject interface is given below:
public interface javax.ejb.EJBLocalObject
{
public javax.ejb.EJBLocalHome getEJBLocalHome() throws
javax.ejb.EJBException;
public Object getPrimaryKey() throws javax.ejb.EJBException;
public boolean isIdentical(javax.ejb.EJBLocalObject obj) throws
javax.ejb.EJBException;
public void remove() throws javax.ejb.RemoveException,
javax.ejb.EJBException;
}

5.7.3 The Home Interface

The home interface has methods to create and destroy EJB objects. The
implementation of the home interface, the home object, will be generated by the container
tools. The class diagram for the home interface is as shown in Figure 5.9.

[Figure 5.9 (class diagram): the HelloHome interface extends javax.ejb.EJBHome, which extends java.rmi.Remote.]

Figure 5.9 Class diagram of Home Interface


The code is given below:
package examples;
/* This is the home interface for HelloBean. */
public interface HelloHome extends javax.ejb.EJBHome
{
/* This method creates the EJB object. */
Hello create() throws java.rmi.RemoteException,
javax.ejb.CreateException;
}
The home interface consists of the following:
a) The single create() method is a factory method that clients use to get a reference to
an EJB object. The create method is also used to initialize a bean.
b) The create method throws two exceptions: java.rmi.RemoteException and
javax.ejb.CreateException.
c) The home interface extends javax.ejb.EJBHome.
The javax.ejb.EJBHome interface is given below:
public interface javax.ejb.EJBHome extends java.rmi.Remote
{
public EJBMetaData getEJBMetaData() throws java.rmi.RemoteException;
public javax.ejb.HomeHandle getHomeHandle() throws
java.rmi.RemoteException;
public void remove(javax.ejb.Handle handle) throws
java.rmi.RemoteException, javax.ejb.RemoveException;
public void remove(Object primarykey) throws java.rmi.RemoteException,
javax.ejb.RemoveException;
}
5.7.4 The Local Home Interface
The local home interface, the higher-performing home interface used by local clients,
is given below:
package examples;
/* This is the local home interface for HelloBean. */
public interface HelloLocalHome extends javax.ejb.EJBLocalHome
{
/* This method creates the EJB local object. */
HelloLocal create() throws javax.ejb.CreateException;
}
The differences between the home interface and the local home interface are as follows:
 The local home interface extends EJBLocalHome rather than EJBHome. The
EJBLocalHome interface does not extend java.rmi.Remote.
 The local home interface does not throw RemoteExceptions.
5.7.5 The Bean Class
The bean class diagram is as shown in Figure 5.10.
The bean class consists of the following:
a) The bean class implements the javax.ejb.SessionBean interface, which makes it a
session bean.
b) The bean class has an ejbCreate() method that matches the home object's create()
method, and takes no parameters.
c) One business method, hello(), which returns "Hello, World!" to the client.
d) The ejbActivate() and ejbPassivate() methods are not applied to stateless session
beans.
e) The ejbRemove() method is used to destroy the bean.

[Figure 5.10 (class diagram): the HelloBean class implements javax.ejb.SessionBean, which extends javax.ejb.EnterpriseBean.]

Figure 5.10 Bean Class Diagram


The code for HelloBean is given below:
package examples;
import javax.ejb.SessionContext;
/* Demonstration stateless session bean. */
public class HelloBean implements javax.ejb.SessionBean
{
private SessionContext ctx;
// EJB-required methods
public void ejbCreate()
{
System.out.println("ejbCreate()");
}
public void ejbRemove()
{
System.out.println("ejbRemove()");
}
public void ejbActivate()
{
System.out.println("ejbActivate()");
}
public void ejbPassivate()
{
System.out.println("ejbPassivate()");
}
public void setSessionContext(SessionContext ctx)
{
this.ctx = ctx;
}
// Business methods
public String hello()
{
System.out.println("hello()");
return "Hello, World!";
}
}

5.7.6 The Deployment Descriptor
The deployment descriptors are found in the javax.ejb.deployment package, which has
five deployment descriptor classes:
5.7.6.1 DeploymentDescriptor
The DeploymentDescriptor is the abstract superclass for both the EntityDescriptor
and SessionDescriptor. It provides the accessor methods for reading properties that
describe the bean's version number, and the names of the classes for the bean's remote
interface, home interface, and bean class. In addition, the deployment descriptor provides
access to the ControlDescriptors and AccessControlEntry objects.

5.7.6.2 ControlDescriptor

The ControlDescriptor provides accessor methods for defining the security and
transactional attributes of a bean at runtime. ControlDescriptors can be applied to the
bean as a whole, or to specific methods of the bean. Any method that doesn't have a
ControlDescriptor uses the default properties defined by the ControlDescriptor for the
bean itself. Security properties in the ControlDescriptor indicate how AccessControlEntry
objects are applied at runtime. Transactional properties indicate how the bean or specific
method will be involved in transactions at runtime.



5.7.6.3 AccessControlEntry
Each AccessControlEntry identifies a person, group, or role that can access the bean
or one of its methods. Like the ControlDescriptor, the AccessControlEntry can be applied
to the bean as a whole or to a specific method. An AccessControlEntry that is specific to a
method overrides the default AccessControlEntry objects set for the bean. The
AccessControlEntry objects are used in combination with the security properties in the
ControlDescriptor to provide more control over runtime access to the bean and its methods.
5.7.6.4 EntityDescriptor
The EntityDescriptor extends the DeploymentDescriptor to provide properties specific to
an EntityBean class. Entity bean properties include the name of the primary key class and
what instance variables are managed automatically by the container.
5.7.6.5 SessionDescriptor
The SessionDescriptor extends the DeploymentDescriptor to provide properties specific
to a SessionBean class. Session bean properties include a timeout setting and a stateless
session property. The stateless session property indicates whether the session is a stateless
session bean or a stateful session bean.
<ejb-name>: The name for this bean
<home>: The fully qualified name of the home interface
<remote>: The fully qualified name of the remote interface
<local-home>: The fully qualified name of the local home interface
<local>: The fully qualified name of the local interface
<ejb-class>: The fully qualified name of the enterprise bean class
<session-type>: Whether the session bean is stateless or stateful
The ejb-jar.xml file is given below:
<!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans
2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
<ejb-jar>
<enterprise-beans>
<session>
<ejb-name>Hello</ejb-name>
<home>examples.HelloHome</home>
<remote>examples.Hello</remote>
<local-home>examples.HelloLocalHome</local-home>
<local>examples.HelloLocal</local>
<ejb-class>examples.HelloBean</ejb-class>
<session-type>Stateless</session-type>
<transaction-type>Container</transaction-type>
</session>
</enterprise-beans>
</ejb-jar>
5.7.7 The Ejb-jar File
The next step is to package all the files together in an Ejb-jar file. If a development
environment supporting EJB is used, it usually provides an automated way to generate the
Ejb-jar file. Otherwise, the file can be generated manually as follows:
jar cf HelloWorld.jar *
The asterisk indicates the files to include in the jar: the bean class, home interface,
remote interface, local interface, and deployment descriptor.
The folder structure within the Ejb-jar file looks as follows:
META-INF/
META-INF/MANIFEST.MF
examples/
examples/HelloBean.class
examples/HelloHome.class
examples/Hello.class
examples/HelloLocal.class
examples/HelloLocalHome.class
META-INF/ejb-jar.xml
5.7.8 Deploying the Bean
When deploying an Ejb-jar file into a container, the following steps are performed:
 The Ejb-jar file is verified. The container class checks that the enterprise bean
class, the remote interface, and other items are valid.
 The container tool generates an EJB object and home object.
 The container tool generates any necessary RMI-IIOP stubs and skeletons.
 Start up the EJB container
5.7.8.1 Client source code
The client source code is given below:
package examples;
import javax.naming.Context;
import javax.naming.InitialContext;
import java.util.Properties;
public class HelloClient
{
public static void main(String[] args) throws Exception
{
/* Set up properties for JNDI initialization */
Properties props = System.getProperties();
/* Obtain the JNDI initial context. The initial context is a starting point for connecting to a
JNDI tree. We choose the JNDI driver, the network location of the server, etc., by passing
in the environment properties. */
Context ctx = new InitialContext(props);
/* Get a reference to the home object, the factory for Hello EJB objects */
Object obj = ctx.lookup("HelloHome");
HelloHome home = (HelloHome) javax.rmi.PortableRemoteObject.narrow(obj,
HelloHome.class);
/* Use the factory to create the Hello EJB object */
Hello hello = home.create();
/* Call the hello() method on the EJB object. The EJB object will delegate the call to the
bean, receive the result, and return it to us. */
System.out.println(hello.hello());
hello.remove();
}
}
5.7.8.2 Running the system
a) Start the application server
b) Run the client application
When running the client application, the client has to be supplied with JNDI environment
information. JNDI requires a minimum of two properties to retrieve an initial context:
i. The name of the initial context factory. Examples are
com.sun.jndi.ldap.LdapCtxFactory for an LDAP JNDI context, and
com.sun.jndi.cosnaming.CNCtxFactory for a CORBA Naming Service context.
ii. The provider URL, indicating the location of the JNDI tree to use. An example is
ldap://louvre:389/o=Airius.com
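These two properties can be assembled in plain Java using the constants on javax.naming.Context, which is part of the standard library. The factory class and URL below are illustrative values, not settings for any particular server.

```java
import java.util.Properties;
import javax.naming.Context;

public class JndiEnvDemo {
    // Build the minimal JNDI environment a client needs. The factory class
    // and URL are illustrative values, not any particular vendor's settings.
    public static Properties jndiEnvironment() {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.cosnaming.CNCtxFactory");
        props.put(Context.PROVIDER_URL, "iiop://localhost:1050");
        return props;
    }

    public static void main(String[] args) {
        Properties env = jndiEnvironment();
        // Context.INITIAL_CONTEXT_FACTORY is the key "java.naming.factory.initial";
        // Context.PROVIDER_URL is the key "java.naming.provider.url".
        System.out.println(env.getProperty(Context.INITIAL_CONTEXT_FACTORY));
        // A client would then connect with: Context ctx = new InitialContext(props);
    }
}
```

Using the Context constants rather than the raw "java.naming.*" strings avoids typos and keeps the code readable; the same Properties object is what HelloClient passes to new InitialContext(props).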

5.7.8.3 The server-side output
When the client is run, the container shows the following debug log:
setSessionContext()
ejbCreate()
hello()
ejbRemove()
5.7.8.4 The client-side output
After running the client, the output is:
Hello, World!
5.8 CONCLUSION
The types of beans and their life cycles have been explained, and a sample program
has been developed and deployed.
HAVE YOU UNDERSTOOD QUESTIONS
1. What are the different types of EJB Roles ?
2. What are the different types of beans ?
3. When should stateless and stateful session beans be used ?
4. What are the two types of persistence used for Entity beans ?
5. Know and understand the lifecycle of different types of beans ?
6. What are the steps in developing an EJB component ?
7. What are the five Deployment Descriptors classes found in the Deployment
Descriptor?
8. What is an Ejb-jar file?
SUMMARY
 The Roles of the EJB components are defined in the EJB Specifications
 The different EJB roles help to encapsulate and isolate the system interfaces.
 The different types of beans are Session Bean, Entity Bean and Message-driven
Bean
 Entity beans are persistent and represent a person, place, or thing.
 Session beans are extensions of the client and embody a process or a workflow that
defines how other beans interact. Session beans are not persistent, receiving their
state from the client, and they live only as long as the client needs them
 A message-driven bean is an EJB component that functions as an asynchronous
message consumer.
 There are two types of session beans - stateful and stateless session beans
 A stateful session bean retains its state across multiple method requests made by the
same client. Stateless beans serve business requests that span only a single method
invocation.
 Entity beans deal with data. Entity beans can be uniquely identified by a primary
key.

 There are two types of persistence for entity beans: bean-managed and container-managed.
 With bean-managed persistence (BMP), the entity bean code contains the calls that
access the database. In container-managed persistence (CMP), the EJB container
automatically generates the necessary database access calls.
 An enterprise bean goes through various stages during its lifetime, or life cycle.
 Each type of enterprise bean—session, entity, or message-driven—has a different
life cycle.
 The lifecycle of an EJB involves important events such as creation, passivation,
activation, and removal.
 Deployment of EJB requires a set of steps to be followed as outlined in this unit
 A sample program “Hello World” has been explained along with the interface design
and code
EXERCISES
PART I
1. What type of enterprise bean is used to embody business objects?
a) javax.ejb.EnterpriseBean
b) java.rmi.Remote
c) javax.ejb.SessionBean
d) javax.ejb.EntityBean
2. What type of enterprise bean is used to embody application processing state information?
a) javax.ejb.EnterpriseBean
b) javax.rmi.Remote
c) javax.ejb.SessionBean
d) javax.ejb.EntityBean
3. ejbActivate() Method is applied to
a) stateful session bean
b) stateless session bean
4. ejbRemove () Method is applied to
a) Session bean
b) Entity bean
c) Message bean
d) All of the above
5. In which state can a bean instance accept client requests?
a) Ready
b) Pooled
c) Does not exist
6. What extends the DeploymentDescriptor to provide properties specific to a SessionBean
class ?
a) The SessionDescriptor
b) EntityDescriptor
c) DeploymentDescriptor


7. Which is the abstract superclass for both the EntityDescriptor and SessionDescriptor?
a) The SessionDescriptor
b) EntityDescriptor
c) DeploymentDescriptor
8. What kind of bean requires a primary key?
a) Message-driven bean
b) Entity bean
c) Session bean
9. A stateful session bean
a) Can be shared among multiple clients
b) Cannot be shared among multiple clients
10. At what point, precisely, in the life-cycle is a container-managed entity bean considered
created?
a) Immediately prior to the execution of its ejbCreate() method
b) Immediately after the execution of its ejbCreate() method
c) After the CMP bean’s data has been committed to the underlying persistent datastore
d) During the execution of its ejbPostCreate() method
PART II
11. Write the Differences between Session and Entity Beans
12. Explain with examples, the usage of stateful and stateless Session beans?
13. Compare Bean-Managed and Container-Managed Persistence of Entity beans
14. Explain the difference between Remote and Local Interface. Explain their usage
with an example
15. What are the two paths from the pooled stage to the ready stage in an Life cycle
of an Entity bean?
16. Explain and contrast uses for Entity Beans, Entity Classes, Stateful and Stateless
Session Beans, and Message Driven Beans and understand the advantages and
disadvantages of each type
17. Explain the life cycle of Entity and Session Beans.
18. What are the various steps in building an EJB component?
19. What are the different callback methods in Entity Bean? What is the purpose and
usage of each of these methods?
Part III
20. Write a simple program to display “No Man is an Island” on the client side.
Represent the object model as a block diagram
21. Write the following for a Bubble sort:
a) Bean class
b) Home Interface
c) Remote Interface
d) Local interface

22. Write the Bean class and interface codes to develop and deploy a stateless EJB
session bean which provides two functions:
a) Add two numbers
b) Multiply two numbers
23. Write the Bean class for a stateful MusicList EJB session bean that allows clients to
create a shopping cart. The client must be able to add and remove Music CDs to and
from the cart, drawn from a Music CD Collection database.
24. Write the code to implement a Message-driven bean named “MessageDriven”
that receives multiple messages from a client and displays the message text.
PART I Answers
1) d 2) c 3) a 4) d 5) a 6) a 7) c 8) b 9) b 10) b

REFERENCES
1. java.sun.com/developer/onlineTraining/EJBIntro/EJBIntro.html
2. java.sun.com/products/ejb
3. www.developer.com/ejb
4. www.roseindia.net/javabeans/javabeans.shtml
5. www.wikipedia.org
6. www.jguru.com
7. Mastering Enterprise JavaBeans Third Edition by Rima Patel Sriganesh and Gerald
Brose
8. Enterprise JavaBeans by Tom Valesky

UNIT III

CHAPTER - 6
EJB APPLICATIONS

6.1 INTRODUCTION

In the previous unit, the basic concepts of the Enterprise JavaBeans programming
model have been covered. The different types of beans and their life cycle have been
discussed in detail.
This unit demonstrates how to build server-side Java components using the Enterprise
JavaBeans component model, using a sample program that adds, subtracts, multiplies and
divides two numbers. It illustrates the implementation of the Enterprise JavaBeans model
through concrete examples and step-by-step guidelines for building and using Enterprise
JavaBeans applications. This unit also shows how to program Enterprise JavaBeans, and
how to install, or deploy, them in an Enterprise JavaBeans container.
6.2 LEARNING OBJECTIVES
At the end of this Unit, the reader must be familiar with the following concepts:
 How to program Session Beans
 How to program Entity Beans
 How to write a Deployment Descriptor
 How to program and deploy EJB’s
6.3 SESSION BEANS
A session bean instance is a relatively short-lived object. Its lifetime is roughly that of
a session, or of the client code that is calling it. A session bean can be one of two types:
 Stateful session bean—maintains state information, which can be accessed across
methods and transactions
 Stateless session bean—does not maintain a state that can be accessed across
methods and transactions; however, it can maintain an internal state.
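The difference can be sketched in plain Java (the classes below are invented for illustration — real session beans are created and managed by the EJB container):

```java
// Hypothetical sketch of the state-handling difference, outside any container.

// "Stateful" style: the running total survives across method calls, so the
// instance must stay bound to one client for its state to make sense.
class StatefulCounter {
    private int total = 0;                      // conversational state
    int add(int amount) { total += amount; return total; }
}

// "Stateless" style: every call is self-contained (no per-client fields),
// so one pooled instance can be swapped among many clients.
class StatelessAdder {
    int add(int a, int b) { return a + b; }
}
```

A container can hand any StatelessAdder-like instance to any caller, which is exactly why stateless beans pool so well; a StatefulCounter-like instance handed to the wrong client would expose another client's running total.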
A session bean must provide the following information to an application server for the
bean’s deployment within an EJB container:
 Definitions of the session bean’s home and remote interfaces
 A Java class that implements the SessionBean interface
 A deployment descriptor called ejb-jar.xml

The home and remote interfaces provide the EJB methods that are externalized to
client components, as follows:
 The home interface provides methods that the client component uses to manage
the EJB instance, including:
o One or more create() methods that create a new EJB instance
o A remove() method that removes an existing EJB instance
The EJBHome interface defines the home interface.
 The remote interface includes methods that the client component uses to interact
with the EJB instance; these methods are called the business methods of the EJB.
The EJBObject interface defines the remote interface.
Figure 6.1 shows the EJB architecture, which uses an EJB container within an application
server to manage communication between a client component and an EJB instance.

[Figure: within the application server, an EJB container holds the EJB instance; the client component reaches it through the remote and home interfaces.]
Figure 6.1 Accessing an Enterprise Application through an EJB


The EJB provider must provide classes that define the home and remote interfaces of
the EJB. When the enterprise bean is deployed, the deployment tools of the application
server use these definitions to generate the implementations of the home and remote interfaces
for the EJB container. The client component uses these implementations when it originates
a request for the enterprise bean. In this way, all interactions with the enterprise bean go
through the EJB container, which routes them to the enterprise bean.
In Java, an interface provides a specification of the behavior of an object but not the
actual behavior. Defining an interface provides only the names of the interface and its
methods. Implementing an interface provides the actual code to implement the interface
methods. The EJB provider provides the definitions of the home and remote interfaces
while the EJB container provides the implementations of these interfaces using information
in the deployment descriptor.
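In plain Java terms (the names below are illustrative, not part of the EJB API):

```java
// Defining an interface: only names and signatures, no behavior.
interface Greeting {
    String greet(String name);
}

// Implementing the interface: the actual code behind each method.
// (In EJB, the container plays this role for the home and remote
// interfaces, generating their implementations at deployment time.)
class SimpleGreeting implements Greeting {
    public String greet(String name) { return "Hello, " + name; }
}
```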
Figure 6.2 shows how information is provided for the home and remote interfaces of
a sample enterprise bean called sessionBean. This enterprise bean provides two
create()methods in its home interface; the client component can create an instance of this
session bean with either of these methods. The bean also provides two business methods

in its remote interface; the client component can interact with this session-bean instance
using these two business methods.

Figure 6.2 Providing the Home and Remote Interfaces of a Session Bean

For an EJB container to communicate with the session bean, an EJB provider must
provide a Java class that implements the SessionBean interface. This class contains
implementations of the following methods:
 An ejbCreate() method for each create() method in the home interface
 One or more business methods for the session bean

 Other standard methods of the SessionBean interface
 Possible additional methods required to support the implementation of the
SessionBean class
When the EJB container implements the methods of the home and remote interface, it
includes in these methods calls to the corresponding methods of the SessionBean class, as
shown in Figure 6.3.

Figure 6.3 Calling methods of the SessionBean class

The SessionBean methods use the Enterprise Information System (EIS) specific API
to communicate directly with the EIS. By isolating the EIS-specific API calls to the session
bean, neither client components nor the application server need to know this API. Instead,
the client component uses the methods of the home and remote interfaces to request EIS
services through the session bean.
6.3.1 Stateless Session Beans
Stateless session beans hold no conversational state; all instances of the same stateless
session bean are equivalent and indistinguishable to a client. Stateless session beans can be
pooled, reused and swapped from one client to another on each method call. Stateless
session bean pooling is illustrated in Figure 6.4.
6.3.1.1 Implementation Details
Implementing a stateless session bean is explained using a Java program, which can
add, subtract, multiply and divide two numbers.
The first step is to write the .java files that compose the bean – the remote interface,
home interface and the client code.
The source code for the remote interface is as shown below. It exposes the four
business methods add, subtract, multiply and divide for access by the client.


[Figure: a pool of identical stateless bean instances; the client's invoke() call on the EJB Object is routed to any available instance in the pool.]
Figure 6.4 Stateless Session Bean Pooling

//Remote interface
//mathOperationRemote.java
import javax.ejb.EJBObject;
import java.rmi.RemoteException;
public interface mathOperationRemote extends EJBObject
{
public int add(int a, int b) throws RemoteException;
public int sub(int a, int b) throws RemoteException;
public int mul(int a, int b) throws RemoteException;
public int div(int a, int b) throws RemoteException;
}
The source code for the home interface is as shown below. The home interface has
methods to create the EJB Objects.
//Home Interface
//mathOperationHome.java
import java.io.Serializable;
import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
public interface mathOperationHome extends EJBHome


{
mathOperationRemote create() throws RemoteException, CreateException;
}
The source code for the mathOperation beans which implements the four methods of
add, subtract, multiply and divide is given below.
//mathOperationBeans.java
import java.rmi.RemoteException;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
public class mathOperationBeans implements SessionBean
{
public int add(int a, int b)
{
return (a+b);
}
public int sub(int a, int b)
{
return (a-b);
}
public int mul(int a, int b)
{
return (a*b);
}
public int div(int a, int b)
{
return (a/b);
}
public mathOperationBeans() { }
public void ejbCreate() { }
public void ejbRemove() { }
public void ejbActivate() { }
public void ejbPassivate() { }
public void setSessionContext(SessionContext sc) { }
}
The source code for the client, which uses the business functions add, subtract, multiply
and divide, is given below.
//mathOperationClient.java
import javax.naming.Context;
import javax.naming.InitialContext;


import javax.rmi.PortableRemoteObject;
import mathOperationRemote;
import mathOperationHome;
public class mathOperationClient
{
public static void main(String args[]) throws Exception
{
try
{
Context initial = new InitialContext();
Object objref = initial.lookup("mathOperJndi");
System.out.println("after jndi calling");
mathOperationHome home =
(mathOperationHome)PortableRemoteObject.narrow(objref,
mathOperationHome.class);
mathOperationRemote mathremote = home.create();
int no1 = 10;
int no2 = 20;
int result = 0;
result = mathremote.add(no1, no2);
System.out.println("Sum of given numbers = " + result);
result = mathremote.sub(no1, no2);
System.out.println("Difference of given numbers = " + result);
result = mathremote.mul(no1, no2);
System.out.println("Multiplication of given numbers = " + result);
result = mathremote.div(no1, no2);
System.out.println("Division of given numbers = " + result);
}
catch (Exception e)
{
System.out.println("Exception occurred = " + e);
}
}
}
Let us assume the Java 2 SDK Enterprise Edition server is installed in c:\j2sdkee1.2.1
folder and Java 2 Standard Edition is installed in c:\jdk1.3.1_11 folder. The following
configuration is required after installation of J2EE and JDK1.3.1_11. Assume that the java
programs are present in the folder c:\java\ejb directory.


 Step 1 : Check if the following files are present in the c:\j2sdkee1.2.1\bin folder:
setenv.bat
j2ee.bat
deploytool.bat
 Step 2 : Set Java_HOME and J2EE_HOME as given below:
set JAVA_HOME=c:\jdk1.3.1_11
set J2EE_HOME=c:\j2sdkee1.2.1
 Step 3 : Compile the following java programs present in the c:\java\ejb directory
mathOperationRemote.java
mathOperationHome.java
mathOperationBeans.java
 Step 4 : Open the Command Prompt and set the following path (you can also set these
paths in the environment variables):
C:\>set path=%path%;c:\jdk1.3.1_11\bin;c:\j2sdkee1.2.1\bin;
C:\>set classpath =
%classpath%;c:\jdk1.3.1_11\lib;c:\j2sdkee1.2.1\lib\j2ee.jar
C:\java\ejb> javac *.java
 Step 5 : Start the J2EE server
C:\j2sdkee1.2.1\bin>j2ee –verbose
 Step 6 : Open another command prompt and set the above path and classpath then
type
C:\j2sdkee1.2.1\bin>deploytool
After starting the deployment tool, the GUI-based tool opens as follows.
Choose File → New Application
Type mathOperationApp in the Application Display Name text field and click OK
Choose File → New Enterprise Beans
The New Enterprise Beans wizard will open. Read the instructions and click the Next
button on the first page of the wizard. On the next page, type the JAR Display Name as
mathOperationJar. Then click the Add button that appears under the Contents area. Click the
Browse button and choose:
mathOperationRemote.class
mathOperationHome.class
mathOperationBeans.class
Click the Add button. The page will then be displayed as shown in Figure 6.5.


Figure 6.5 Enterprise Bean Wizard – EJB JAR


Click the Next button. The next page will be displayed. Do the following:
Choose mathOperationBeans in the Enterprise Beans Class combo box.
Choose mathOperationHome in the Home Interface combo box.
Choose mathOperationRemote in the Remote Interface combo box.
Type mathOperation in the Enterprise Beans Display Name text field.
Choose the Session radio button in Beans type and select Stateless.

Figure 6.6 New Enterprise Bean Wizard


The page will be displayed as shown in Figure 6.6.
Click the Next button and then click the Finish button.
Choose Tools → Deploy Application
The deploy wizard page will open. Do the following:
Choose localhost in the Target Server combo box.


Figure 6.7 Deployment of EJB


Tick the "return client jar" check box.
Click the Next button. The page shown in Figure 6.7 is displayed.
Type mathOperJndi in the JNDI name field. Click the Next button and then click Finish.
The deployment process starts; when it completes, click OK. The application has now been
deployed, and the client can access the bean's business methods.
 Step 7 : Open a new command prompt and set the following path and classpath
C:\>cd ejb
C:\EJB>set path=c:\jdk1.3\bin;c:\j2sdkee1.2.1\bin
C:\EJB>set classpath=c:\jdk1.3\lib;c:\j2sdkee1.2.1\lib\j2ee.jar
 Step 8 : Do the following.
C:\>cd ejb
C:\EJB>set path=c:\jdk1.3\bin;c:\j2sdkee1.2.1\bin
C:\EJB>set classpath=c:\jdk1.3\lib;c:\j2sdkee1.2.1\lib\j2ee.jar;c:\j2sdkee1.2.1\
bin\moaClient.jar;
C:\EJB>javac *.java
C:\EJB>java mathOperationClient
 Step 9 : The output will be displayed as given below.
Sum of given numbers = 30
Difference of given numbers = -10
Multiplication of given numbers = 200
Division of given numbers = 0
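The last line of the output is 0 because div() uses Java integer division: 10 / 20 truncates toward zero (and sub() gives -10 since the first operand is the smaller one). A quick check, with illustrative helper names:

```java
// Integer division discards the fractional part, so 10 / 20 yields 0.
// Promote one operand to double if the real quotient is wanted.
class DivisionCheck {
    static int intDiv(int a, int b)     { return a / b; }          // 10, 20 -> 0
    static double realDiv(int a, int b) { return (double) a / b; } // 10, 20 -> 0.5
}
```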


EJB 3.0 Stateless Session Bean


EJB 3.0 greatly simplifies the development of stateless session beans, removing many
complex development tasks.
 The bean class can be a plain old Java object (POJO); it does not need to implement
javax.ejb.SessionBean.
 The business interface is optional.
Home (javax.ejb.EJBHome and javax.ejb.EJBLocalHome) and component
(javax.ejb.EJBObject and javax.ejb.EJBLocalObject) business interfaces are not required.
 Annotations are used for many features.
 A SessionContext is not required: you can simply use this to resolve a session bean
to itself.
//DateEJBBean Class
package example.model;
import javax.ejb.Stateless;
@Stateless(name="DateEJB")
public class DateEJBBean implements DateEJB, DateEJBLocal {
public DateEJBBean() {
}
public String displayDate() {
return "" + new java.util.Date();
}
}
// DateEJBLocal Interface
package example.model;
import javax.ejb.Local;
@Local
public interface DateEJBLocal {
public String displayDate();
}
//DateEJB Remote Interface
package example.model;
import javax.ejb.Remote;
@Remote
public interface DateEJB {
public String displayDate();
}


6.3.2 Stateful Session Beans


A stateful session bean is a bean that is designed to service business processes that
span multiple method requests or transactions. To accomplish this, stateful session beans
retain state on behalf of an individual client.
Activation and Passivation
When a client invokes a method on a bean, the client is starting a conversation with the
bean, and the conversational state stored in the bean must be available for that same client’s
method request. Therefore, the container cannot easily pool beans and dynamically assign
them to handle arbitrary client method requests, since each bean is storing state on behalf of
a particular client. Still, there is a need to achieve the effect of pooling for stateful
session beans, so that resources can be conserved and the overall scalability of the
system enhanced. There is always only a finite amount of available resources, such as memory,
database connections and socket connections. If the conversational state that the
beans are holding is large, the EJB server could easily run out of resources. This was not a
problem with stateless session beans, because the container could service thousands of
clients with only a few pooled beans.

To limit the number of stateful session beans instances in memory, the container can
swap out a stateful bean saving its conversational state to a hard disk or other storage. This
is called passivation. After passivating a stateful bean, the conversational state is safely
stored away, allowing resources like memory to be reclaimed. When the original client
invokes a method, the passivated conversational state is swapped back into a bean. This is called
activation. The bean then resumes the conversation with the original client. Thus, EJB does
indeed support the effect of pooling stateful session beans. Only a few instances can be in
memory when there are actually many clients. The container decides which beans to activate
and which beans to passivate. Most containers employ a Least Recently Used (LRU)
passivation strategy, which means to passivate the beans that have been called the least
recently. If a bean hasn’t been invoked in a while, the container writes it to disk. Passivation
can occur at any time, as long as a bean is not involved in a method call. To activate beans
most containers commonly use a just-in-time algorithm, which activates the bean on demand
as client requests come in. If a client request comes in, but that client’s conversation has
been passivated, the container activates the beans on demand, reading the passivated state
back into memory.
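The container's LRU bookkeeping can be modeled with a java.util.LinkedHashMap in access order — a toy sketch under stated assumptions, not container code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU model of a stateful-bean cache: at most 'capacity' conversations
// stay in memory; when a new one needs the slot, the least recently used
// entry is evicted ("passivated" — a real container would serialize it).
class LruBeanCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;
    K lastPassivated;                    // key of the most recently evicted bean

    LruBeanCache(int capacity) {
        super(16, 0.75f, true);          // true = access order, i.e. LRU
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        if (size() > capacity) {
            lastPassivated = eldest.getKey();
            return true;                 // drop the LRU conversation
        }
        return false;
    }
}
```

Touching a conversation (a get or put) moves it to the most recently used position, so only the conversation that has been idle longest is ever evicted — the same policy most containers apply when passivating.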
Activation and Passivation Callbacks

The passivation process is shown in Figure 6.8.


[Figure: the client invokes a business method on the EJB Object; the container selects the least recently used bean, calls its ejbPassivate() method, serializes its conversational state and stores it.]
Figure 6.8 Passivation of a Stateful Bean

When an EJB container passivates a bean, the container writes the bean’s
conversational state to secondary storage, such as a file or a database. The container informs
the beans that it’s about to perform passivation by calling the bean’s required ejbPassivate()
callback method. ejbPassivate() is a warning to the bean that its held conversational state is
about to be swapped out. It’s important that the container informs the bean using ejbPassivate()
so that the bean can relinquish held resources. These held resources include database
connections, open sockets, open files, or other resources that it does not make sense to save
to disk or that cannot be transparently saved using object serialization.
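What the container actually swaps is serialized state; the round trip can be sketched with standard Java serialization (the class and field names below are hypothetical):

```java
import java.io.*;

// Sketch of what passivation and activation amount to: the conversational
// state is flattened to bytes, then later reconstructed field for field.
class Conversation implements Serializable {
    String clientName;
    int itemCount;

    Conversation(String clientName, int itemCount) {
        this.clientName = clientName;
        this.itemCount = itemCount;
    }

    // Roughly: ejbPassivate() releases resources, then the container serializes.
    byte[] passivate() {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(this);
            out.close();
            return bytes.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    // Roughly: the container deserializes, then calls ejbActivate().
    static Conversation activate(byte[] stored) {
        try {
            ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(stored));
            return (Conversation) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

This also shows why non-serializable resources (sockets, database connections) must be released in ejbPassivate(): they cannot survive the byte-level round trip.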

The activation process is shown in Figure 6.9.


[Figure: the client invokes a business method on the EJB Object; the container retrieves the passivated state, reconstructs the bean, calls its ejbActivate() method, and then invokes the business method.]
Figure 6.9 Activation of a stateful bean

The client has invoked a method on an EJB Object that does not have a bean tied to
it in memory. The container needs to activate the required bean. The serialized conversational
state is read back into memory, and the container reconstructs the in-memory state using
object serialization or the equivalent. The container then calls the bean's required ejbActivate()
method. ejbActivate() gives the bean a chance to restore the open resources it released
during ejbPassivate(). Figure 6.9 illustrates how the client invokes a method on an
EJB object whose stateful bean has been passivated.
EJB 2.1 Stateful Session Bean Example
It is necessary to create the home interfaces – remote home interface and local home
interface for the bean.
Implementing the Remote Home Interface
A remote client invokes the EJB through its remote interface. The client invokes the
create method that is declared within the remote home interface. The container passes the
client call to the ejbCreate method–with the appropriate parameter signature–within the
bean implementation. The requirements for developing the remote home interface include:
 The remote home interface must extend the javax.ejb.EJBHome interface.
 All create methods may throw the following exceptions:
 javax.ejb.CreateException
 javax.ejb.RemoteException
 optional application exceptions


All create methods should not throw the following exceptions:
javax.ejb.EJBException
java.lang.RuntimeException
The code below shows a remote home interface called HelloHome for a stateful session
bean. You use the arguments passed into the various create methods to initialize the session
bean’s state.
//Remote Home Interface for a Stateful Session Bean
package hello;
import javax.ejb.*;
import java.rmi.*;
import java.util.Collection;
public interface HelloHome extends EJBHome {
public Hello create() throws CreateException, RemoteException;
public Hello create(String message) throws CreateException, RemoteException;
public Hello create(Collection messages) throws CreateException, RemoteException;
}
Implementing the Local Home Interface
An EJB can be called locally from a client that exists in the same container. Thus, a
collocated bean, JSP, or servlet invokes the create method that is declared within the local
home interface. The container passes the client call to the ejbCreate method–with the
appropriate parameter signature–within the bean implementation. The requirements for
developing the local home interface include the following:
 The local home interface must extend the javax.ejb.EJBLocalHome interface.
 All create methods may throw the following exceptions:
javax.ejb.CreateException
optional application exceptions
 All create methods should not throw the following exceptions:
javax.ejb.EJBException
java.lang.RuntimeException

The code below shows a local home interface called HelloLocalHome for a stateful
session bean. You use the arguments passed into the various create methods to initialize the
session bean’s state.
// Local Home Interface for a Stateful Session Bean
package hello;
import javax.ejb.*;
import java.util.Collection;
public interface HelloLocalHome extends EJBLocalHome {
public HelloLocal create() throws CreateException;
public HelloLocal create(String message) throws CreateException;


public HelloLocal create(Collection messages) throws CreateException;


} NOTES
Implement the stateful session bean
 Implement the ejb<METHOD> methods that match the home interface create
methods. For a stateful session bean, provide ejbCreate methods with corresponding
argument lists for each create method in the home interface.
 Implement the business methods that you declared in the home and component
interfaces.
 Implement the javax.ejb.SessionBean interface to implement the container callback
methods it defines.
 Implement a setSessionContext method that takes an instance of SessionContext.
For a stateful session bean, this method usually adds the SessionContext to the
session bean's state.
// Session Bean Implementation
package hello;
import javax.ejb.*;
import java.util.Collection;
public class HelloBean implements SessionBean {
/* ————————————————————
* State
* ——————————————————— */
private SessionContext ctx;
private Collection messages;
private String defaultMessage = "Hello, World!";
/* ————————————————————
* Begin business methods. The following methods
* are called by the client code.
* ——————————————————— */
public String sayHello(String myName) throws EJBException {
return ("Hello " + myName);
}
public String sayHello() throws EJBException {
return defaultMessage;
}
/* ————————————————————
* Begin private methods. The following methods
* are used internally.
* ——————————————————— */
/* ———————————————————————————
* Begin EJB-required methods. The following methods are called
* by the container, and never called by client code.

* ——————————————————————————— */
public void ejbCreate() throws CreateException {
// when bean is created
}
public void ejbCreate(String message) throws CreateException {
this.defaultMessage = message;
}
public void ejbCreate(Collection messages) throws CreateException {
this.messages = messages;
}
public void setSessionContext(SessionContext ctx) {
this.ctx = ctx;
}
// Life Cycle Methods
public void ejbActivate() {
}
public void ejbPassivate() {
}
public void ejbRemove() {
}
}
Configure the ejb-jar.xml file
//ejb-jar.xml For a Stateful Session Bean
...
<enterprise-beans>
<session>
<ejb-name>Hello</ejb-name>
<home>hello.HelloHome</home>
<remote>hello.Hello</remote>
<ejb-class>hello.HelloBean</ejb-class>
<session-type>Stateful</session-type>
<transaction-type>Container</transaction-type>
</session>
</enterprise-beans>
EJB 3.0 Stateful Session Bean Example
This section gives an example of a simple stateful session bean. It consists of three POJOs:
the remote interface Cart.java, the bean class CartBean.java and the EJB client CartClient.java.
Annotations are used to indicate the remote interface, local interface and session bean


implementation. The following software components need to be installed and configured
correctly:
 Application Server such as J2EE
 Sun JDK version 1.5 or above
Remote Interface: Cart.java
package ejb_stateful;
import java.util.Collection;
import javax.ejb.Remote;
@Remote
public interface Cart {
public void addItem(String item);
public void removeItem(String item);
public Collection getItems();
}
Stateful Session Bean: CartBean.java
package ejb_stateful;
import java.util.ArrayList;
import java.util.Collection;
import javax.annotation.PostConstruct;
import javax.ejb.Stateful;
@Stateful
public class CartBean implements Cart {
private ArrayList items;
@PostConstruct
public void initialize() {
items = new ArrayList();
}
public void addItem(String item) {
items.add(item);
}
public void removeItem(String item) {
items.remove(item);
}
public Collection getItems() {
return items;
}
}

EJB Client: CartClient.java
package ejb_stateful;
import java.util.Collection;
import java.util.Iterator;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
public class CartClient {
public static void main(String [] args) throws NamingException {
try {
final Context context = getInitialContext();
Cart cart = (Cart) context.lookup("CartBean");
System.out.println("Adding items to cart");
cart.addItem("Pizza");
cart.addItem("Pasta");
cart.addItem("Noodles");
cart.addItem("Bread");
cart.addItem("Butter");
System.out.println("Listing cart contents");
Collection items = cart.getItems();
for (Iterator i = items.iterator(); i.hasNext();) {
String item = (String) i.next();
System.out.println(" " + item);
}
} catch (Exception ex) {
ex.printStackTrace();
}
}
private static Context getInitialContext() throws NamingException {
return new InitialContext();
}
}
Compile all the programs, start the server and execute CartClient. You will see the following
output:
Adding items to cart
Listing cart contents
Pizza
Pasta
Noodles
Bread
Butter
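Because an EJB 3.0 bean is a POJO, the same business logic can be exercised with no container at all. Below is a container-free replica of the CartBean logic (a sketch: annotations removed, generics added for safety):

```java
import java.util.ArrayList;
import java.util.Collection;

// Plain-Java replica of CartBean: without @Stateful and @PostConstruct it
// is an ordinary class, so the business methods can be unit-tested directly.
class PlainCart {
    private final ArrayList<String> items = new ArrayList<>();

    void addItem(String item)     { items.add(item); }
    void removeItem(String item)  { items.remove(item); }
    Collection<String> getItems() { return items; }
}
```

This testability is one of the main practical gains of the EJB 3.0 POJO model over the EJB 2.1 style, where the bean class could only be driven through container-generated interfaces.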


6.4 ENTITY BEANS


Entity beans have an identity and client-visible state, and their lifetime may be completely
independent of the client application's lifetime. For entity beans, having an identity means
that different entity beans can be distinguished by comparing their identities. It also means
that clients can refer to individual entity bean instances by using that identity, pass handles
to other applications, and actually share common entities with other clients.
6.4.1 Features Of Entity Beans
The features of Entity Beans are detailed below.
6.4.1.1 Entity Beans Survive Failures
Entity beans are long lasting. They survive critical failures, such as application servers
crashing, or even databases crashing. This is because entity beans are just representations
of data in a permanent, fault-tolerant, underlying storage. If a machine crashes, the entity
beans can be reconstructed in memory. All we need to do is read the data back in from the
permanent database and instantiate an entity bean Java object instance whose fields
contain the data read from the database. Entity beans have a much longer life cycle than a
client's session, depending on how long the data sits in the database. The database records
representing an object could have existed before the company even decided to go with a
Java-based solution, because a data structure can be language independent.

6.4.1.2 Entity Bean Instances Are Views Into A Database

When the entity bean data is loaded into an in-memory entity bean instance, the data
stored in the database is read and can be manipulated within a Java Virtual Machine.
The in-memory entity bean is simply a view or lens into the database. There are multiple
physical copies of the same data: the in-memory entity bean instance and the entity data
itself stored in the database. Therefore there must be a mechanism to transfer information
back and forth between the Java object and the database. This data transfer is accomplished
with two special methods that the entity bean class must implement, called ejbLoad() and
ejbStore().
 ejbLoad() reads the data in from the persistence storage into the entity bean’s in-
memory fields.
 ejbStore() saves the bean instance's current fields to the underlying data storage. It is
the complement of ejbLoad().
The ejbLoad() and ejbStore() are callback methods that the container invokes. They
are management methods required by EJB.
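The load/store pair can be modeled with a Map standing in for the database — a sketch only (a real container invokes these callbacks around transactions, and the names below are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of an entity bean as a view into storage: the Map plays the
// database, and load()/store() mirror ejbLoad()/ejbStore().
class AccountEntity {
    static final Map<String, Integer> DATABASE = new HashMap<>();

    String accountId;   // primary key of the record this instance views
    int balance;        // in-memory copy of the persistent field

    void load(String id) {            // ~ ejbLoad(): storage -> fields
        this.accountId = id;
        this.balance = DATABASE.getOrDefault(id, 0);
    }

    void store() {                    // ~ ejbStore(): fields -> storage
        DATABASE.put(accountId, balance);
    }
}
```

A change made to the in-memory fields is invisible to other views until store() runs, which is exactly why the container must call ejbStore() at transaction boundaries to keep the database authoritative.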
6.4.1.3 Several Entity Bean Instances May Represent The Same Underlying Data
Let’s consider the scenario in which many threads of execution want to access the
same database simultaneously. In banking, interest might be applied to a bank account,
while at the same time a company directly deposits a check into that account. In E-commerce,
many different client browsers may be simultaneously interacting with a catalog of products.


To facilitate many clients accessing the same data, there is a need to design a high-
performance access system to the entity beans. One possibility is to allow many clients to
share the same entity bean instance; that way, an entity bean could service many client
requests simultaneously. While this is an interesting idea, it is not appropriate for EJB, for
two reasons. First, writing thread-safe code is difficult and error prone; mandating that
component vendors produce thread-safe code does not encourage stable code. Second, having
multiple threads of execution makes transactions almost impossible to control by the underlying
transaction system. For these reasons, EJB dictates that only a single thread can ever be
running within a bean instance. With session beans and message-driven beans, as well as
entity beans, all bean instances are single threaded. Mandating that each bean can service
only one client at a time could result in performance bottlenecks: because each instance is
single threaded, clients would need to run in lockstep, each waiting their turn to use a bean.
This would easily grind performance to a halt in any large enterprise deployment. To boost
performance, containers are allowed to instantiate multiple instances of the same entity bean
class. This allows many clients to interact concurrently with separate instances, each
representing the same underlying entity data. Indeed, this is exactly what EJB allows
containers to do. Thus client requests do not need to be processed sequentially; they can be
processed concurrently.
Having multiple instances represent the same data gives rise to a data corruption
problem: if many bean instances represent the same underlying data through caching,
multiple in-memory cached replicas are created. To achieve entity bean instance cache
consistency, each entity bean instance needs to be routinely synchronized with the underlying
storage; the container does this by calling the bean's ejbLoad() and ejbStore() methods.
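The ejbLoad()/ejbStore() synchronization cycle can be sketched in plain Java, outside any container. Everything below — the AccountStore map standing in for the database, the class names and the deposit() business method — is an illustrative assumption; only the ejbLoad()/ejbStore() callback names come from the EJB spec:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the underlying database table: primary key -> balance.
class AccountStore {
    private final Map<String, Integer> rows = new HashMap<>();
    void write(String id, int balance) { rows.put(id, balance); }
    int read(String id) { return rows.get(id); }
}

// Sketch of one cached entity instance; it refreshes its in-memory state
// before use (ejbLoad) and flushes it back afterwards (ejbStore).
class AccountInstance {
    private final AccountStore store;
    private String id;       // primary key this instance currently represents
    private int balance;     // in-memory cached replica of the row

    AccountInstance(AccountStore store) { this.store = store; }

    void ejbLoad(String id) { this.id = id; this.balance = store.read(id); }
    void ejbStore()         { store.write(id, balance); }

    void deposit(int amount) { balance += amount; }
}

public class CacheSyncDemo {
    public static void main(String[] args) {
        AccountStore db = new AccountStore();
        db.write("william", 100);

        // Two instances cache the same underlying row.
        AccountInstance a = new AccountInstance(db);
        AccountInstance b = new AccountInstance(db);

        a.ejbLoad("william"); a.deposit(50); a.ejbStore();  // row is now 150
        b.ejbLoad("william"); b.deposit(25); b.ejbStore();  // b saw 150; row is now 175

        System.out.println(db.read("william"));             // prints 175
    }
}
```

Because each instance reloads before use, the second instance sees the first instance's update even though each keeps its own cached replica.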

6.4.1.4 Entity Beans Instances Can Be Pooled

To save the cost of instantiating objects, entity bean instances are recyclable objects
and may be pooled, depending on the container's policy. This process is shown in
Figure 6.10.

[Figure: clients william, Hellen and Allen each invoke their own client-specific EJB object
(William's, Hellen's and Allen's bank accounts), all of which draw on a shared pool of
entity bean instances]
Figure 6.10 EJB container pooling of entity beans

The container may pool and reuse entity bean instances to represent different instances
of the same type of data in an underlying storage. For example, a container could use a single
bank account entity bean instance to represent different bank account records. When one
client is done using an entity bean instance, that instance may be assigned to handle a different
client's request and may then represent different data. The container performs this by
dynamically assigning the entity bean instances to different client-specific EJB objects. Not
only does this save the container from unnecessarily instantiating bean instances, but this
scheme also reduces the total amount of resources held by the system.
Instance pooling is an interesting optimization that containers may provide, and it is not
at all unique to entity beans. However, complications arise when reassigning entity bean
instances to different EJB objects. When an entity bean is assigned to a particular EJB object,
it may be holding resources such as socket connections; while it is in the pool, it may not
need that socket. Thus, to allow the bean to release and acquire resources, the entity bean
class implements two callback methods. ejbActivate() is the callback that the container
invokes on a bean instance when transitioning it out of the generic instance pool. This process
is called activation, and it indicates that the container is associating the bean with a specific
EJB object and a specific primary key. The ejbActivate() method should acquire the resources,
such as sockets, that the bean needs when assigned to a particular EJB object.

ejbPassivate() is the callback that the container invokes when transitioning the
bean back into the generic instance pool. This process is called passivation, and it indicates
that the container is disassociating the bean from a specific EJB object and a specific
primary key.
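The pooling life cycle can likewise be sketched as a toy container in plain Java. The PooledBean and ToyContainer names and the resourceOpen flag (standing in for a socket) are illustrative assumptions; only the ejbActivate()/ejbPassivate() callback names come from the spec:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of a poolable entity bean instance.
class PooledBean {
    String primaryKey;     // set while assigned to a specific EJB object
    boolean resourceOpen;  // stands in for a per-assignment resource (socket)

    // Container takes the instance out of the pool and gives it an identity.
    void ejbActivate(String primaryKey) {
        this.primaryKey = primaryKey;
        this.resourceOpen = true;      // acquire sockets etc.
    }

    // Container returns the instance to the generic pool.
    void ejbPassivate() {
        this.resourceOpen = false;     // release sockets etc.
        this.primaryKey = null;
    }
}

// A toy container: hands out pooled instances, recycling them across clients.
class ToyContainer {
    private final Deque<PooledBean> pool = new ArrayDeque<>();

    PooledBean activate(String primaryKey) {
        PooledBean bean = pool.isEmpty() ? new PooledBean() : pool.pop();
        bean.ejbActivate(primaryKey);
        return bean;
    }

    void passivate(PooledBean bean) {
        bean.ejbPassivate();
        pool.push(bean);
    }

    int poolSize() { return pool.size(); }
}

public class PoolDemo {
    public static void main(String[] args) {
        ToyContainer c = new ToyContainer();
        PooledBean b1 = c.activate("william");  // new instance created
        c.passivate(b1);                        // back to the pool
        PooledBean b2 = c.activate("hellen");   // same instance, new identity
        System.out.println(b1 == b2);           // prints true: instance reused
    }
}
```

The same instance object serves both clients in turn; only its identity (primary key) and its resources change across the activate/passivate boundary.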


6.4.1.5 Creation And Removal Of Entity Beans


In EJB, clients do not directly invoke beans; they invoke an EJB proxy object. The
EJB object is generated through the home object. Therefore for each ejbCreate() method
signature, a corresponding create() method is to be declared in the home interface. The
client calls the home object’s create() method.
To destroy an entity bean's data in the database, the client must call remove() on the EJB
object or home object. This method causes the container to issue an ejbRemove() call on the
bean.
6.4.2 Entity Bean Code Example
EJB 2.1 Code Example
An EJB 2.1 entity bean class implements the EntityBean interface and its callback
methods. An EJB 2.1 entity bean also includes the ejbCreate and ejbPostCreate
methods, which are not required in the EJB 3.0 specification. An entity bean includes the
component and home interfaces, which extend the EJBObject/EJBLocalObject and EJBHome/
EJBLocalHome interfaces.
//CatalogBean.java
import javax.ejb.EntityBean;
import javax.ejb.EntityContext;
public abstract class CatalogBean implements EntityBean{
private EntityContext ctx;
public abstract void setCatalogId(String catalogId);
public abstract String getCatalogId();
public abstract void setJournal(String journal);
public abstract String getJournal();
public abstract void setPublisher(String publisher);
public abstract String getPublisher();
public abstract void setEditions(java.util.Collection editions);
public abstract java.util.Collection getEditions();
public String ejbCreate(String catalogId){
setCatalogId(catalogId);
return null;
}
public void ejbRemove() {}
public void ejbActivate() {}
public void ejbPassivate() {}
public void ejbLoad() {
}


public void ejbStore() {
}
public void setEntityContext(EntityContext ctx) {
this.ctx=ctx;
}
public void unsetEntityContext() {
ctx = null;
}
}
//Local Component Interface
//CatalogLocal.java
import javax.ejb.EJBLocalObject;
public interface CatalogLocal extends EJBLocalObject{
public void setCatalogId(String catalogId);
public String getCatalogId();
public void setJournal(String journal);
public String getJournal();
public void setPublisher(String publisher);
public String getPublisher();
public void setEditions(java.util.Collection editions);
public java.util.Collection getEditions();
}
// local home interface
//CatalogLocalHome.java
import javax.ejb.CreateException;
import javax.ejb.FinderException;
import javax.ejb.EJBLocalHome;
public interface CatalogLocalHome extends EJBLocalHome{
CatalogLocal create(String catalogId) throws CreateException;
CatalogLocal findByPrimaryKey(String catalogId) throws FinderException;
java.util.Collection findByJournal(String journal) throws FinderException;
}
//Catalog client session bean
import javax.ejb.SessionBean;
import javax.naming.InitialContext;
public class CatalogClient implements SessionBean{
public void createCatalog(String catalogId){
try{


InitialContext ctx=new InitialContext();


Object objref=ctx.lookup("CatalogLocalHome");
CatalogLocalHome catalogLocalHome=(CatalogLocalHome)objref;
//Create an instance of Entity bean
CatalogLocal catalogLocal=(CatalogLocal)catalogLocalHome.create(catalogId);
}
catch(Exception e){}
}
public void findCatalog(String catalogId){
try{
InitialContext ctx=new InitialContext();
Object objref=ctx.lookup("CatalogLocalHome");
CatalogLocalHome catalogLocalHome=(CatalogLocalHome)objref;
//find an instance of Entity bean
CatalogLocal catalogLocal=(CatalogLocal)catalogLocalHome.findByPrimaryKey(catalogId);
}
catch(Exception e){}
}
public void removeCatalog(String catalogId){
try{
InitialContext ctx=new InitialContext();
Object objref=ctx.lookup("CatalogLocalHome");
CatalogLocalHome catalogLocalHome=(CatalogLocalHome)objref;
//find an instance of Entity bean
CatalogLocal catalogLocal=(CatalogLocal)catalogLocalHome.findByPrimaryKey(catalogId);
catalogLocal.remove();
}
catch(Exception e){}
}
}
//ejb-jar.xml deployment descriptor
<?xml version="1.0"?>
<ejb-jar>
<enterprise-beans>
<entity>
<ejb-name>Catalog</ejb-name>

<local-home>CatalogLocalHome</local-home>
<local>CatalogLocal</local>
<ejb-class>CatalogBean</ejb-class>
<persistence-type>Container</persistence-type>
<prim-key-class>java.lang.String</prim-key-class>
<reentrant>False</reentrant>
<cmp-version>2.x</cmp-version>
<abstract-schema-name>Catalog</abstract-schema-name>
<cmp-field>
<field-name>catalogId</field-name>
</cmp-field>
<cmp-field>
<field-name>journal</field-name>
</cmp-field>
<cmp-field>
<field-name>publisher</field-name>
</cmp-field>
<query>
<query-method>
<method-name>findByJournal</method-name>
<method-params>
<method-param>java.lang.String</method-param>
</method-params>
</query-method>
<ejb-ql>
<![CDATA[
SELECT DISTINCT OBJECT(obj) FROM Catalog obj WHERE obj.journal = ?1
]]>
</ejb-ql>
</query>
</entity>
</enterprise-beans>
<relationships>
<ejb-relation>
<ejb-relation-name>Catalog-Editions</ejb-relation-name>
<ejb-relationship-role>
<ejb-relationship-role-name>Catalog-Has-Editions</ejb-relationship-role-name>


<multiplicity>One</multiplicity>
<relationship-role-source>
<ejb-name>Catalog</ejb-name>
</relationship-role-source>
<cmr-field>
<cmr-field-name>editions</cmr-field-name>
<cmr-field-type>java.util.Collection</cmr-field-type>
</cmr-field>
</ejb-relationship-role>
<ejb-relationship-role>
<ejb-relationship-role-name>Editions-Belong-To-Catalog</ejb-relationship-role-name>
<multiplicity>Many</multiplicity>
<cascade-delete />
<relationship-role-source>
<ejb-name>Edition</ejb-name>
</relationship-role-source>
</ejb-relationship-role>
</ejb-relation>
</relationships>
</ejb-jar>
EJB 3.0 Code Example
In comparison, an EJB 3.0 entity bean class is a POJO that does not implement the
EntityBean interface. The callback methods and the ejbCreate and ejbPostCreate methods
are not required in the EJB 3.0 entity bean class. Also, the component and home interfaces
and deployment descriptors are not required in EJB 3.0. The values specified in the EJB 2.1
deployment descriptor are included in the EJB 3.0 bean class with JDK 5.0 annotations.
Thus, the number of classes, interfaces and deployment descriptors is reduced in the
EJB 3.0 specification.
//CatalogBean.java
import javax.persistence.Entity;
import javax.persistence.NamedQuery;
import javax.persistence.Id;
import javax.persistence.Column;
import javax.persistence.OneToMany;
@Entity
@NamedQuery(name="findByJournal", query="SELECT DISTINCT OBJECT(obj)
FROM Catalog obj WHERE obj.journal = ?1")
public class CatalogBean{
public CatalogBean(){}


public CatalogBean(String catalogId){
this.catalogId=catalogId;
}
private String catalogId;
private String journal;
private String publisher;
@Id
@Column(name="CatalogId")
public String getCatalogId(){return catalogId;}
public void setCatalogId(String catalogId){this.catalogId=catalogId;}
public void setJournal(String journal){this.journal=journal;}
public String getJournal(){return journal;}

public void setPublisher(String publisher){this.publisher=publisher;}


public String getPublisher(){return publisher;}
private java.util.Collection<Edition> editions;
public void setEditions(java.util.Collection editions){this.editions=editions;}
@OneToMany
public java.util.Collection getEditions(){return editions;}
}
//CatalogClient.java
import javax.ejb.Stateless;
import javax.ejb.Local;
import javax.persistence.PersistenceContext;
import javax.persistence.EntityManager;
import javax.persistence.Query;
@Stateless
@Local
public class CatalogClient implements CatalogLocal{
@PersistenceContext
private EntityManager em;
public void create(String catalogId){
CatalogBean catalogBean=new CatalogBean(catalogId);
em.persist(catalogBean);
}
public CatalogBean findByPrimaryKey(String catalogId){
return em.find(CatalogBean.class, catalogId);


}
public java.util.Collection findByJournal(String journal){
Query query=em.createNamedQuery("findByJournal");
query.setParameter(1, journal);
return query.getResultList();
}
public void remove(CatalogBean catalogBean){
em.remove(catalogBean);
}
}
6.5 DEPLOYMENT
Before you can successfully run your enterprise beans on either a test or production
server, you need to generate deployment code for the enterprise beans. You can do this
using the EJB deployment tool or use the command-line interface.
Using the command line, you can run a build process overnight and have the deployment
tool automatically invoked to generate your deployment code in batch mode.
The EJB deployment tool accepts an input EJB JAR or EAR file that contains one or
more enterprise beans. It then generates an output deployed JAR or EAR file (depending
on the type of the input file) that contains deployment code in the form of .class files.
JAR files are ZIP files that are used specifically for packaging Java classes that are
ready to be used in some type of application. A JAR file containing one or more enterprise
beans includes the bean classes, remote interfaces, home interfaces, and primary key
classes for each bean. It also contains one deployment descriptor.
Deployment is the process of reading the bean’s JAR file, changing or adding properties
to the deployment descriptor, mapping the bean to the database, defining access control in
the security domain, and generating vendor-specific classes needed to support the bean in
the EJB environment. Every EJB server product comes with its own deployment tools
containing a graphical user interface and a set of command-line programs.
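Since an EJB JAR is an ordinary ZIP archive, the packaging step can be sketched with the standard java.util.jar API. The entry names below are illustrative (borrowed from the Catalog example); the entries are left empty in this sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;
import java.util.jar.JarOutputStream;

// Sketch: an EJB JAR holds the bean class, its interfaces, and one
// deployment descriptor under META-INF/.
public class EjbJarSketch {

    // Build an in-memory JAR with the expected entry layout.
    public static byte[] build() {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (JarOutputStream jar = new JarOutputStream(bytes)) {
            for (String name : new String[] {
                    "CatalogBean.class", "CatalogLocal.class",
                    "CatalogLocalHome.class", "META-INF/ejb-jar.xml" }) {
                jar.putNextEntry(new JarEntry(name));
                jar.closeEntry();           // contents omitted in this sketch
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return bytes.toByteArray();
    }

    // What a deployment tool does first: walk the archive's entries.
    public static int countEntries(byte[] jarBytes) {
        int n = 0;
        try (JarInputStream in = new JarInputStream(new ByteArrayInputStream(jarBytes))) {
            while (in.getNextJarEntry() != null) n++;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countEntries(build()));   // prints 4
    }
}
```

A real deployment tool would, of course, write actual class bytes and descriptor XML into each entry before generating its vendor-specific support classes.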
The javax.ejb.deployment package defines classes used by EJB containers to encapsulate
information about EJB objects. An EJB container should provide a tool that creates an
instance of the EntityDescriptor or SessionDescriptor class for a bean, initializes its fields,
and then serializes that initialized instance. Then, when the bean is deployed into the EJB
container, the container reads the serialized deployment descriptor class and its properties to
obtain configuration information for the bean. Figure 6.11 shows the class hierarchy of this
package.
6.5.1 Deployment Descriptor Class
An enterprise bean is deployed within an EJB container. At deployment of the enterprise
bean, the container generates implementations for both the home and remote interfaces of
the enterprise bean. The container reads a deployment descriptor to obtain the EJB-
specific information it needs. This deployment descriptor, called ejb-jar.xml, is an XML


file that the EJB provider initializes with information about its enterprise bean. This information
includes the names of the Java interfaces that define the home and remote interfaces. In
this way, the EJB container can build the custom interfaces that the client component needs
to access the enterprise bean.

Figure 6.11 The javax.ejb.deployment package class hierarchy

The DeploymentDescriptor class is the base class used by both the SessionDescriptor
and the EntityDescriptor classes. It provides functionality that is common to all types of
deployment descriptor. This class is the main way in which information is communicated
from the EJB developer to the deployer and to the container in which the bean will be
deployed. Typically, the bean developer uses the "setter" methods of this class to initialize
its various properties, and the deployment environment uses the "getter" methods to read
these values at deployment time. All currently available EJB tools provide graphical
user interface (GUI) tools that allow the developer and deployer to generate Deployment-
Descriptors and their associated classes by pointing and clicking. The source for the
DeploymentDescriptor class is given below.
public class javax.ejb.deployment.DeploymentDescriptor extends java.lang.Object
implements java.io.Serializable
{
protected int versionNumber;
public DeploymentDescriptor();
public AccessControlEntry[] getAccessControlEntries();
public AccessControlEntry getAccessControlEntries(int index);
public Name getBeanHomeName();
public ControlDescriptor[] getControlDescriptors();
public String getHomeInterfaceClassName();
public boolean getReentrant();
public String getRemoteInterfaceClassName();
public boolean isReentrant();
public void setAccessControlEntries(AccessControlEntry values[]);
public void setAccessControlEntries(int index, AccessControlEntry value);


public void setBeanHomeName(Name value);
public void setControlDescriptors(int index, ControlDescriptor value);
public void setEnterpriseBeanClassName(String value);
public void setEnvironmentProperties(Properties value);
public void setHomeInterfaceClassName(String value);
public void setReentrant(boolean value);
public void setRemoteInterfaceClassName(String value);
}
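The "setters at development time, serialize, getters at deployment time" flow can be mimicked with any serializable JavaBean. The MiniDescriptor class below is a plain-Java analog of a deployment descriptor, not the real javax.ejb.deployment API (which is long gone from modern class libraries):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Plain-Java analog of a deployment descriptor: a serializable bag of
// properties the developer fills in with setters and the deployment tool
// later reads back with getters.
class MiniDescriptor implements Serializable {
    private static final long serialVersionUID = 1L;
    private String enterpriseBeanClassName;
    private String homeInterfaceClassName;

    public void setEnterpriseBeanClassName(String v) { enterpriseBeanClassName = v; }
    public String getEnterpriseBeanClassName()       { return enterpriseBeanClassName; }
    public void setHomeInterfaceClassName(String v)  { homeInterfaceClassName = v; }
    public String getHomeInterfaceClassName()        { return homeInterfaceClassName; }
}

public class DescriptorRoundTrip {
    // Serialize the initialized descriptor, then deserialize it, as a
    // deployment tool would after shipping it inside the JAR.
    static MiniDescriptor roundTrip(MiniDescriptor d) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(d);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (MiniDescriptor) in.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        MiniDescriptor d = new MiniDescriptor();
        d.setEnterpriseBeanClassName("CatalogBean");       // development time
        d.setHomeInterfaceClassName("CatalogLocalHome");
        MiniDescriptor read = roundTrip(d);                // deployment time
        System.out.println(read.getEnterpriseBeanClassName()); // prints CatalogBean
    }
}
```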
6.5.2 The AccessControlEntry Class
An AccessControlEntry is another class in the javax.ejb.deployment package. The
purpose of this class is to pair a given method in the bean with a list of identities. The class
consists mainly of getter and setter methods (also known as accessor and mutator methods)
that associate a set of identities with a particular method. The class is part of the
javax.ejb.deployment package and is serializable. The source code for the
AccessControlEntry class is shown below.
public class javax.ejb.deployment.AccessControlEntry extends java.lang.Object
implements java.io.Serializable
{
public AccessControlEntry();
public AccessControlEntry(Method method);
public AccessControlEntry(Method method, Identity identities[]);
public Identity[] getAllowedIdentities();
public Identity getAllowedIdentities(int index);
public Method getMethod();
public void setAllowedIdentities(Identity values[]);
public void setAllowedIdentities(int index, Identity value);
public void setMethod(Method value);
}
6.5.3 The ControlDescriptor Class
The ControlDescriptor class serves a function roughly similar to that of the
AccessControlEntry class; it associates information with a particular method. The
ControlDescriptor class can be used both to specify properties of a particular method and to
specify default properties for the bean as a whole. The ControlDescriptor addresses three
concerns:
 The method's transaction attribute
 The method's isolation level
 The method's "run-as" mode

The source code for the ControlDescriptor class is given below.


public class javax.ejb.deployment.ControlDescriptor extends java.lang.Object
implements java.io.Serializable
{
public final static int CLIENT_IDENTITY;
public final static int SPECIFIED_IDENTITY;
public final static int SYSTEM_IDENTITY;
public final static int TRANSACTION_READ_COMMITTED;
public final static int TRANSACTION_READ_UNCOMMITTED;
public final static int TRANSACTION_REPEATABLE_READ;
public final static int TRANSACTION_SERIALIZABLE;
public final static int TX_BEANS_MANAGED;
public final static int TX_MANDATORY;
public final static int TX_NOT_SUPPORTED;
public final static int TX_REQUIRED;
public final static int TX_REQUIRES_NEW;
public final static int TX_SUPPORTS;
public ControlDescriptor();
public ControlDescriptor(Method method);
public int getIsolationLevel();
public Method getMethod();
public Identity getRunAsIdentity();
public int getRunAsMode();
public int getTransactionAttribute();
public void setIsolationLevel(int value);
public void setMethod(Method value);
public void setRunAsIdentity(Identity value);
public void setRunAsMode(int value);
public void setTransactionAttribute(int value);
}
6.5.4 The SessionDescriptor Class
The SessionDescriptor class is used to contain information about session beans.
SessionDescriptor inherits from DeploymentDescriptor. The source code for the
SessionDescriptor class is shown below.
public class javax.ejb.deployment.SessionDescriptor extends
javax.ejb.deployment.DeploymentDescriptor
{
public final static int STATEFUL_SESSION;
public final static int STATELESS_SESSION;
public SessionDescriptor();


public int getSessionTimeout();
public int getStateManagementType();
public void setSessionTimeout(int value);
public void setStateManagementType(int value);
}
6.5.5 The EntityDescriptor Class
EntityDescriptor inherits from DeploymentDescriptor. The source code for the
EntityDescriptor class is shown below.
public class javax.ejb.deployment.EntityDescriptor extends
javax.ejb.deployment.DeploymentDescriptor
{
public EntityDescriptor();
public Field[] getContainerManagedFields();
public Field getContainerManagedFields(int index);
public String getPrimaryKeyClassName();
public void setContainerManagedFields(Field values[]);
public void setContainerManagedFields(int index, Field value);
public void setPrimaryKeyClassName(String value);
}
6.5.6 EJB 3.0 Deployment
In EJB 3.0, annotations have replaced deployment descriptors. Each attribute in the
deployment descriptor has a default value, so you do not have to specify these attributes
unless you want a value other than the default. These values can be specified using
annotations in the bean class itself.
The EJB 3.0 specification also defines a set of metadata annotations such as bean
type, type of interfaces, resource references, transaction attributes, security, and so on. For
example, if you want to define security settings for a particular EJB method, you can define
the following in the bean class:
@MethodPermissions("user")
public void sharedTask() {
System.out.println("Shared admin/user method called");
}

EJB 3.0 continues to support the use of deployment descriptors. You may use Java
language metadata annotations or deployment descriptors. You may also combine the use of
deployment descriptors with Java language metadata annotations to override the values of
annotations or to supplement the use of annotations.
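How a deployment tool can read such annotation metadata may be sketched with plain reflection. The @MethodPermissions annotation below is a hand-rolled stand-in defined inside the sketch itself, not the real EJB 3.0 annotation, and the bean and method names are illustrative:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Illustrative stand-in for a security metadata annotation.
@Retention(RetentionPolicy.RUNTIME)
@interface MethodPermissions {
    String[] value();
}

class AdminBean {
    @MethodPermissions({"user", "admin"})
    public void sharedTask() { }

    public void openTask() { }   // no annotation: container defaults apply
}

public class AnnotationScan {
    // What a deployment tool might do: reflectively collect the roles
    // allowed on a given public method, falling back to an empty list.
    static String[] allowedRoles(Class<?> beanClass, String method) {
        try {
            Method m = beanClass.getMethod(method);
            MethodPermissions p = m.getAnnotation(MethodPermissions.class);
            return p == null ? new String[0] : p.value();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(allowedRoles(AdminBean.class, "sharedTask").length); // prints 2
        System.out.println(allowedRoles(AdminBean.class, "openTask").length);   // prints 0
    }
}
```

This is exactly where descriptor values can still override annotations: a container reads both sources and lets the XML win where the two conflict.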


6.6 CONCLUSION
This unit has explained the theoretical concepts behind entity beans and session beans.
The concepts have been illustrated with code examples.
HAVE YOU UNDERSTOOD QUESTIONS
1. What are the differences between entity and session beans?
2. What are the two types of session beans?
3. What is the ejbActivate() method?
4. What is the SessionDescriptor class?
5. What is the EntityDescriptor class?
6. How to write code for entity and session beans?
7. How to deploy EJB?
SUMMARY
 A session bean instance is a relatively short-lived object. It has roughly the lifetime
of a session, or of the client code that is calling the session bean.
 The two subtypes of session beans are
stateless session beans
stateful session beans
 ejbActivate() is the callback that the container invokes on a bean instance when
transitioning the bean out of the generic instance pool. This process is called activation.
 ejbPassivate() is the callback that the container invokes when transitioning the
bean into the generic instance pool. This process is called passivation.
 The DeploymentDescriptor class is the base class used by both the SessionDescriptor
and the EntityDescriptor classes. It provides functionality that is common to all
types of deployment descriptor.
 An AccessControlEntry is another class in the javax.ejb.deployment package; the
purpose of this class is to pair a given method in the bean with a list of identities.
EXERCISES
Part I
1. Pooling is simplest in:
a. stateful session beans
b. stateless session beans
c. entity beans
d. All of the above
2. Containers employ _______________ strategy?
a. First in First out
b. Least Recently Used
c. Last in First Out


d. Round Robin
3. Having multiple bean instances represent the same data raises a problem called
a. Data corruption
b. Data consistency
c. Overflow of data
d. All the above
4. What kind of bean holds no conversational state?
a. stateless session beans
b. stateful session beans
c. entity beans
d. All of the above
5. Passivation and Activation are not useful for what kind of beans?
a. stateless session beans
b. stateful session beans
c. entity beans
d. All of the above
Part II
6. Explain the difference between entity beans and session beans?
7. Explain the use of the ejbPassivate() method?
8. What is the function of a Deployment Descriptor?
9. What is AccessControlEntry class?
10. Explain where and when ejbLoad() method is used?
11. What are Activation and Passivation Callbacks?
12. How can entity bean instances be pooled?
13. Explain the difference between stateless and stateful session beans? Where would
you use which bean? Explain with a code example.
14. Explain the features of Entity beans? Where would you use them? Explain with a
code example.
15. Explain the creation and removal of Entity beans?
Part III
16. Write code to develop and deploy the following EJB applications.
a. The Fibonacci series 0, 1, 1, 2, 3, 5, 8, 13, …, n.
b. An EJB bean that takes an integer value and returns the number with its digits
reversed (for example, given the number 78981, the output should be 18987).
c. Print the students mark list for ‘n’ number of students. Include student Register
number, name, marks for 5 subjects and total marks for each student.


d. Maintain and update a sorted List of Names of Countries and their Capitals. Query
on the Country name should return the name of the Capital.
e. Print the account balance of a customer in a bank. Use the customer code and
account number to display the account balance

Part I - Answers

1) b 2) b 3) a 4) a 5) a

REFERENCES
1. Mastering Enterprise JavaBeans, Third Edition by Ed Roman, Rima Patel Sriganesh
and Gerald Brose
2. Enterprise JavaBeans by Tom Valesky
3. EJB Overview http://publib.boulder.ibm.com/infocenter/wbihelp/v6rxmx/
index.jsp?topic=/com.ibm.wics_developer.doc/doc/access_dev_ejb/
access_dev_ejb16.htm
4. Migrating EJB 2.1 Entity and Session Beans to EJB 3.0 by Deepak Vohra
5. http://www.regdeveloper.co.uk/2006/04/25/ejb3_migration/


UNIT IV

CHAPTER - 7
CORBA
7.1 INTRODUCTION

The previous units have discussed the evolution of business applications from the
monolithic mainframe architecture to the highly decentralized distributed architecture. This
unit discusses CORBA or Common Object Request Broker Architecture. CORBA is a
standard architecture for distributed object systems. Distributed object systems are distributed
systems in which all entities are modeled as objects, and the CORBA architecture allows a
distributed, heterogeneous collection of objects to interoperate. CORBA is just a specification,
not a programming language.
CORBA architecture is both platform independent and language independent. Hence
CORBA is an open architecture that provides for interoperability of distributed objects on
different platforms, under different operating systems and implemented in different
programming languages. Furthermore, CORBA objects need not know which language was
used to implement other CORBA objects that they talk to.
Distributed systems rely on the definition of interfaces between components and on
the existence of various services, such as directory registration and lookup that are available
to an application. CORBA provides a standard mechanism for defining the interfaces between
components as well as some tools to facilitate the implementation of those interfaces using
the developer’s choice of languages. In addition a wealth of standard services, such as
directory and naming services, persistent object services, and transaction services have
been defined.
7.2 LEARNING OBJECTIVES
At the end of this unit, the reader must be familiar with the following concepts:
 History of CORBA
 OMG’s Object Management Architecture
 ORB Architecture and its Principal components
 Static and Dynamic invocation
 Advantages of CORBA architecture
 Developing and deploying a CORBA application
7.3 HISTORY OF CORBA
7.3.1 Object Management Group
The Object Management Group (OMG) is responsible for defining CORBA. The
OMG is an international independent not-for-profit corporation. It was founded in April
1989 by eleven companies, including 3Com Corporation, American Airlines, Canon Inc.,
Data General, Hewlett-Packard, Philips Telecommunications N.V., Sun Microsystems and
Unisys Corporation. That same year, the OMG converted to a consortium with open
membership. Presently, there are over 800 companies with membership in the OMG and
members include almost all the major vendors and developers of distributed object technology,
including platform, database, and application vendors as well as software tool and corporate
developers. The mission of the OMG is to provide a common framework for object-oriented
application development by establishing industry guidelines and detailed object management
specifications. Many well-known specifications, including UML, CORBA and the OMA (Object
Management Architecture), are managed by this group. The OMG continually makes
improvements to these specifications.
The OMG has developed a conceptual model, known as the core object model, and a
reference architecture, the Object Management Architecture (OMA). The OMG OMA attempts
to define, at a high level of abstraction, the various facilities necessary for distributed object-
oriented computing. These components define the composition of objects and their interfaces.
The core of the OMA is the Object Request Broker (ORB) which is a common communication
bus for objects. The technology adopted for ORBs is known as the Common Object Request
Broker Architecture (CORBA).
7.3.2 CORBA
The Common Object Request Broker: Architecture and Specification has evolved
over the past several years. Many versions of CORBA have been released since 1991.
The specifications are aimed at software designers and developers who want to produce
applications that comply with OMG standards for the Object Request Broker (ORB). The
benefit of compliance is, in general, to be able to produce interoperable applications that are
based on distributed, interoperating objects.
CORBA 1.0 was introduced and adopted in October 1991. It was followed in 1992 by
CORBA 1.1 and then in 1993 by CORBA 1.2. The specifications defined the Interface
Definition Language (IDL) as well as the API for applications to communicate with an
Object Request Broker (ORB). The CORBA 1.x versions made an important first step
toward object interoperability, allowing objects on different machines, on different
architectures, and written in different languages to communicate with each other.
CORBA 1.x was an important first step in providing distributed object interoperability,
but it wasn’t a complete specification. Its chief limitation was that it did not specify a standard
protocol through which ORBs could communicate with each other. As a result, a CORBA
ORB from one vendor could not communicate with an ORB from another vendor, a restriction
that severely limited interoperability among distributed objects.
Released in 1996, CORBA 2.0 defined standard protocols by which ORBs from
various CORBA vendors could communicate. The General Inter-ORB Protocol and Internet
Inter-ORB Protocol (GIOP/IIOP) were added to solve the interoperation problem between
CORBA platforms from different vendors. The introduction of these protocols made CORBA
applications more vendor-independent. The CORBA 2.x revisions introduced evolutionary
advancements in the CORBA architecture.
CORBA 2.1, released in August 1997, added additional security features (secure IIOP
and IIOP over SSL), added two language mappings (COBOL and Ada) and included
interoperability revisions and IDL type extensions. CORBA 2.2, released in February 1998,
included the server portability enhancements (POA), DCOM Interworking, and the IDL/
Java language mapping specification. The POA gave an explicit standard for writing server
code that is portable across CORBA platforms from different vendors. In 1999 and 2000,
CORBA 2.3 and 2.4 versions were released and versions 2.5 and 2.6 were released in
2001. These versions included specifications relating to ORB security and Quality of Service
(QoS). They contained the Asynchronous Messaging, Minimum CORBA, and Real-
Time CORBA specifications as well as revisions for the Interoperable Name Service,
Components, Notification Service, and Firewall specifications.
CORBA 3.0, released in July 2002, is an important version in the history of CORBA.
The CORBA Core specification, v3.0 includes updates based on output from the Core RTF
(Revision Task Force), the Interop RTF and the Object Reference Template. The CORBA
Component Model, v3.0 released simultaneously as a stand-alone specification, enables
tighter integration with Java and other component technologies, making it easier for
programmers to use CORBA. Also with this release, Minimum CORBA and Real-time
CORBA (both added to CORBA Core in Release 2.4) become separate documents. CORBA
3.0.1 (November 2002) and CORBA 3.0.2 (December 2002) contained editorial updates to
the 3.0 version.
7.4 OMA REFERENCE MODEL
The Object Management Architecture (OMA) is OMG’s vision for the component
software environment. The architecture provides guidance on how standardization of
component interfaces extends up to, but does not include, applications, in order to create a
plug-and-play component software environment based on object technology. Figure 7.1
illustrates the primary components in the OMA.

Figure 7.1 Object Management Architecture
The OMA Reference Model identifies and characterizes the components, interfaces,
and protocols that compose the OMA. Central to the model is the Object Request Broker
(ORB) component that enables clients and objects to communicate in a distributed
environment. The ORB provides an infrastructure allowing objects to communicate
independent of the specific platforms and techniques used to implement the addressed objects.
The ORB guarantees portability and interoperability of objects over a network of
heterogeneous systems.
The CORBA Services component standardizes the life cycle management of objects.
Functions are provided to create objects, to control access to objects, to keep track of
relocated objects and to consistently maintain the relationship between groups of objects.
The CORBA Services components provide the generic environment in which single objects
can perform their tasks. Standardization of CORBA Services leads to consistency over
different applications and improved productivity for the developer. Specifications for the
CORBA Services that have been adopted as standards by the OMG are contained in
“CORBAservices: Common Object Services Specification”. There are also service
specifications for lifecycle management, security, transactions, and event notification, as
well as many others. The list of some of the CORBA services is given in Table 7.1.

Table 7.1 List of CORBA Services


Naming Service: Provides the ability to bind a name to an object. Similar to other forms
of directory service.
Event Service: Supports asynchronous message-based communication among objects.
Supports chaining of event channels, and a variety of producer/consumer roles.
Lifecycle Service: Defines conventions for creating, deleting, copying and moving objects.
Persistence Service: Provides a means for retaining and managing the persistent state
of objects.
Transaction Service: Supports multiple transaction models, including mandatory "flat"
and optional "nested" transactions.
Concurrency Service: Supports concurrent, coordinated access to objects from multiple
clients.
Relationship Service: Supports the specification, creation and maintenance of
relationships among objects.
Externalization Service: Defines protocols and conventions for externalizing and
internalizing objects across processes and across ORBs.
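For instance, the Naming Service in Table 7.1 amounts to a directory that binds names to
object references. A minimal Python sketch of the idea (class and method names here are
illustrative, not the real CosNaming API; real naming contexts also form a hierarchy):

```python
# Toy naming registry: bind a name to an object reference, resolve it later.
class NamingContext:
    def __init__(self):
        self._bindings = {}              # name -> object reference

    def bind(self, name, obj_ref):
        if name in self._bindings:
            raise KeyError(f"already bound: {name}")
        self._bindings[name] = obj_ref

    def rebind(self, name, obj_ref):
        self._bindings[name] = obj_ref   # replace any existing binding

    def resolve(self, name):
        return self._bindings[name]      # KeyError if unbound

ctx = NamingContext()
ctx.bind("AccountService", "IOR:example-reference")
print(ctx.resolve("AccountService"))     # IOR:example-reference
```

A client that resolves "AccountService" gets back whatever reference was bound, without
needing to know where the server that registered it lives.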

Vertical CORBA Facilities represent components providing computing solutions for
business problems in a specific vertical market, for example, healthcare, manufacturing or
finance. Horizontal CORBA Facilities represent those components providing support across
an enterprise and across businesses.
The Application Objects component represents application objects performing specific
tasks for users. An application is typically built from a large number of basic object classes.
New classes of application objects can be built by modification of existing classes through
generalization or specialization of existing classes (inheritance) as provided by CORBA
Services. The multi-object class approach to application development leads to improved
productivity for the developer and to options for end users to combine and configure their
applications.
7.5 OVERVIEW OF CORBA
CORBA (Common Object Request Broker Architecture) is an object-oriented
architecture model which provides an efficient and sophisticated architecture for distributed
object computing. It makes message communication among remote objects on different
machines transparent. Objects can be coded in different programming languages and can
run on different operating systems. This capability rests on a well-defined abstract language,
the Interface Definition Language (IDL), used in the CORBA specification to describe object
interfaces, combined with a family of related models and detailed definitions. CORBA objects
exhibit many features and traits of other object-oriented systems, including interface
inheritance and polymorphism. CORBA provides this capability even when used with
non-object-oriented languages such as C and COBOL, although it maps particularly well to
object-oriented languages like C++ and Java.
Therefore, a CORBA-based system is a collection of objects that isolates the requestors
of services (clients) from the providers of services (servers) through a well-defined
encapsulating interface. It is important to note that CORBA objects differ from typical
programming objects in three ways:

 CORBA objects can run on any platform
 CORBA objects can be located anywhere on the network
 CORBA objects can be written in any language that has an IDL mapping

The CORBA specification requires software to implement it. The software that implements
the CORBA specification is called the ORB. The ORB, which is the heart of CORBA, is
responsible for all the mechanisms required to perform these tasks: find the object
implementation for the request, prepare the object implementation to receive the request
and communicate the data making up the request.
Figure 7.2 illustrates the primary components in the CORBA ORB architecture. The
Client is the entity that wishes to perform an operation on the object and the Object
Implementation is the code and data that actually implements the object.

Figure 7.2 CORBA ORB Architecture


The Client requests services through the IDL Stub or dynamically through the Dynamic
Invocation Interface (DII). A General Inter-ORB Protocol (GIOP) is defined to support
interoperability of different ORB products. A specification of GIOP over TCP/IP connections
is defined as the Internet Inter-ORB Protocol (IIOP). With the help of the Object Adapter,
which connects the object to the ORB, the ORB Core passes the request through an IDL
Skeleton or dynamically through the Dynamic Skeleton Interface (DSI) to the Object. The
service is encapsulated by the object. The Interface Repository (IFR) contains all the
registered component interfaces, the methods they support, and the parameters they require.
The IFR stores, updates, and manages object interface definitions. Programs may use the
IFR APIs to access and update this information. The Implementation Repository stores the
information that ORBs use to locate and activate implementations of objects.
In CORBA, an object is an instance of an Interface Definition Language (IDL)
interface. The object is identified by an object reference, which uniquely names that instance
across servers. An ObjectId associates an object with its servant implementation, and is
unique within the scope of an Object Adapter. An object has one or more servants associated
with it that implement the interface. The servant is the component that implements the
operations defined by an OMG Interface Definition Language (IDL) interface. In languages like C++
and Java that support object-oriented (OO) programming, servants are implemented using
one or more objects. In non-OO languages like C, servants are typically implemented using
functions and structs. A client never interacts with a servant directly, but always through an
object.
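The object/servant distinction described above can be sketched in Python as a conceptual
analogy: an object adapter maps ObjectIds to servants, and a client's request reaches a
servant only through that indirection. The names ObjectAdapter, activate and dispatch are
illustrative, not the real POA API.

```python
# A servant: the concrete code and data that implement an interface.
class AccountServant:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

# A toy object adapter: associates ObjectIds with servants and
# dispatches incoming requests to them as up-calls.
class ObjectAdapter:
    def __init__(self):
        self._active = {}                          # ObjectId -> servant

    def activate(self, object_id, servant):
        self._active[object_id] = servant

    def dispatch(self, object_id, operation, *args):
        servant = self._active[object_id]          # locate the servant
        return getattr(servant, operation)(*args)  # up-call into it

oa = ObjectAdapter()
oa.activate("acct-42", AccountServant(balance=100))
# The client side holds only the ObjectId, never the servant itself.
print(oa.dispatch("acct-42", "deposit", 25))       # 125
```

The indirection is the point: the servant can be replaced or relocated without the client's
object reference changing.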


GIOP, the General Inter-ORB Protocol, is a protocol that was specified to standardize
the interfaces for request interchange between ORBs, and thus allows different ORB
implementations to communicate. More specifically, it standardizes the transfer syntax for
requests and the messages that can be used. IIOP is the Internet-compliant
implementation of GIOP, and is the one that is of relevance in practice.
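The framing GIOP defines can be illustrated with a short Python sketch that builds the fixed
message header. It assumes the 12-byte GIOP 1.0 header layout (magic, protocol version,
byte-order flag, message type, body size); it is a sketch of the idea, not a usable protocol
implementation.

```python
import struct

GIOP_REQUEST = 0  # message type 0 is Request in GIOP

def giop_header(body, little_endian=True):
    """Build the fixed 12-byte header that precedes a GIOP message body."""
    size_fmt = "<I" if little_endian else ">I"
    return (b"GIOP"                                  # 4-byte magic
            + bytes([1, 0])                          # protocol version 1.0
            + bytes([1 if little_endian else 0])     # byte-order flag
            + bytes([GIOP_REQUEST])                  # message type
            + struct.pack(size_fmt, len(body)))      # size of body only

hdr = giop_header(b"\x00" * 20)
print(len(hdr), hdr[:4])   # 12 b'GIOP'
```

The byte-order flag is what lets two ORBs on machines of different endianness agree on
how the CDR-encoded body that follows is to be read.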
7.6 OBJECT REQUEST BROKER ARCHITECTURE
Figure 7.3 shows a request being sent by a client to an object implementation.
The Client is the entity that wishes to perform an operation on the object and the
Object Implementation is the code and data that actually implements the object. The ORB
is responsible for all of the mechanisms required to find the object implementation for the
request, to prepare the object implementation to receive the request and to communicate the
data making up the request. The interface the client sees is completely independent of
where the object is located, what programming language it is implemented in, or any other
aspect that is not reflected in the object’s interface.
The key feature of the ORB is the transparency of how it facilitates client/object
communication. The ORB hides the following:
 Object location: The client does not know where the target object resides. It
could reside in a different process on another machine across the network, on the
same machine but in a different process, or within the same process.

Figure 7.3 A Request Being Sent Through the Object Request Broker
 Object implementation: The client does not know how the target object is
implemented, what programming or scripting language it was written in, or the
operating system and hardware it executes on.
 Object execution state: When it makes a request on a target object, the client
does not need to know whether that object is currently activated (in an executing
process) and ready to accept requests. The ORB transparently starts the object if
necessary before delivering the request to it.
 Object communication mechanisms: The client does not know what
communication mechanisms (e.g., TCP/IP, shared memory, local method call, etc.)
the ORB uses to deliver the request to the object and return the response to the
client.
These ORB features allow application developers to worry more about their own
application domain issues and less about low-level distributed system programming issues.


Figure 7.4 shows the structure of an individual Object Request Broker (ORB). To
make a request, the Client can use the Dynamic Invocation Interface (the same interface
independent of the target object’s interface) or an OMG IDL stub (the specific stub depending
on the interface of the target object).
Dynamic Invocation is used when at compile time a client does not have knowledge
about an object it wants to invoke. Once an object is discovered, the client program can
obtain a definition of it, issue a parameterized call to it, and receive a reply from it, all
without having a type-specific client stub for the remote object. The Client can also directly
interact with the ORB for some functions.
Interfaces to objects can be defined in two ways. Interfaces can be
defined statically in IDL. This language defines the types of objects according to the
operations that may be performed on them and the parameters to those operations.
Alternatively, or in addition, interfaces can be added to an Interface Repository service; this
service represents the components of an interface as objects, permitting run-time access to
these components. In any ORB implementation, IDL and the Interface Repository have
equivalent expressive power.
Figure 7.5 shows how a client can initiate a request to the ORB in both ways.

Figure 7.4 The Structure of Object Request Interfaces


The client performs a request by having access to an Object Reference for an object
and knowing the type of the object and the desired operation to be performed. The client
initiates the request by calling stub routines that are specific to the object or by constructing
the request dynamically. The dynamic and stub interface for invoking a request satisfy the
same request semantics, and the receiver of the message cannot tell how the request was
invoked.
The Object Implementation receives a request as an up-call either through the OMG
IDL generated skeleton or through a dynamic skeleton. The Object Implementation may
also call the Object Adapter and the ORB while processing a request or at other times.


As shown in Figure 7.6, the ORB locates the appropriate implementation code,
transmits parameters, and transfers control to the Object Implementation through an IDL
skeleton or a dynamic skeleton.
Skeletons are specific to the interface and the object adapter. In performing the request,
the object implementation may obtain some services from the ORB through the Object
Adapter. When the request is complete, control and output values are returned to the
client. The Object Implementation may choose which Object Adapter to use. This decision
is based on what kind of services the Object Implementation requires.

Figure 7.5 A Client Using the Stub or Dynamic Invocation Interface


Figure 7.7 shows how interface and implementation information is made available to clients
and object implementations. The interface is defined in OMG IDL and the definition is
used to generate the client Stubs and the object implementation Skeletons.

Figure 7.6 An Object Implementation Receiving a Request


The object implementation information is provided at installation time and is stored in
the Implementation Repository for use during request delivery.

Figure 7.7 Interface and Implementation Repositories


7.6.1 Object Request Broker
In the architecture, the ORB is not required to be implemented as a single component, but
rather it is defined by its interfaces. Any ORB implementation that provides the appropriate
interface is acceptable. The interface is organized into three categories:
 Operations that are the same for all ORB implementations.
 Operations that are specific to particular types of objects.
 Operations that are specific to particular styles of object implementations.
Different ORBs may make quite different implementation choices, and, together with
the IDL compilers, repositories, and various Object Adapters, provide a set of services to
clients and implementations of objects that have different properties and qualities.
There may be multiple ORB implementations (also described as multiple ORBs), which
have different representations for object references and different means of performing
invocations. It may be possible for a client to simultaneously have access to two object
references managed by different ORB implementations. When two ORBs are intended to
work together, those ORBs must be able to distinguish their object references. It is not the
responsibility of the client to do so.
The ORB Core is that part of the ORB that provides the basic representation of objects
and communication of requests. CORBA is designed to support different object mechanisms,
and it does so by structuring the ORB with components above the ORB Core, which provide
interfaces that can mask the differences between ORB Cores.
7.6.2 Clients
A client of an object has access to an object reference for the object, and invokes
operations on the object. A client knows only the logical structure of the object according
to its interface and experiences the behaviour of the object through invocations. The client


is generally considered to be a program or process initiating requests on an object. However,
this concept of client is relative. For example, the implementation of one object may be a
client of other objects.
Clients generally see objects and ORB interfaces through the perspective of a language
mapping, bringing the ORB right up to the programmer’s level. Clients are maximally portable
and should be able to work without source changes on any ORB that supports the desired
language mapping with any object instance that implements the desired interface. Clients
have no knowledge of the implementation of the object, which object adapter is used by the
implementation, or which ORB is used to access it.
7.6.3 Object Implementations
An object implementation provides the semantics of the object, usually by defining data
for the object instance and code for the object’s methods. Often the implementation will use
other objects or additional software to implement the behaviour of the object. In some
cases, the primary function of the object is to have side-effects on other things that are not
objects.
A variety of object implementations can be supported, including separate servers,
libraries, a program per method, an encapsulated application, an object-oriented database,
etc. Through the use of additional object adapters, it is possible to support virtually any style
of object implementation.
Generally, object implementations do not depend on the ORB or how the client invokes
the object. Object implementations may select interfaces to ORB-dependent services by
the choice of Object Adapter.
7.6.4 Object References
An Object Reference is the information needed to specify an object within an ORB.
Both clients and object implementations have an opaque notion of object references according
to the language mapping, and thus are insulated from the actual representation of them. Two
ORB implementations may differ in their choice of Object Reference representations. The
representation of an object reference handed to a client is only valid for the lifetime of that
client.
All ORBs must provide the same language mapping to an object reference (usually
referred to as an Object) for a particular programming language. This permits a program
written in a particular language to access object references independent of the particular
ORB. The language mapping may also provide additional ways to access object references
in a typed way for the convenience of the programmer. CORBA also has a distinguished
object reference, a Null reference, guaranteed to be different from all object references,
that denotes no object.
7.6.5 OMG Interface Definition Language
The OMG Interface Definition Language (OMG IDL) defines the types of objects by
specifying their interfaces. An interface consists of a set of named operations and the
parameters to those operations. Although IDL provides the conceptual framework for
describing the objects manipulated by the ORB, it is not necessary for there to be IDL
source code available for the ORB to work. As long as the equivalent information is available


in the form of stub routines or a run-time interface repository, a particular ORB may be able
to function correctly.
IDL is the means by which a particular object implementation tells its potential clients
what operations are available and how they should be invoked. From the IDL definitions, it
is possible to map CORBA objects into particular programming languages or object systems.
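For illustration, a small interface might be written in OMG IDL as follows. The Bank/Account
interface below is hypothetical, invented for this sketch rather than taken from any OMG
specification:

```idl
// Hypothetical module, for illustration only.
module Bank {
    interface Account {
        readonly attribute double balance;

        exception InsufficientFunds {};

        void deposit(in double amount);
        void withdraw(in double amount) raises (InsufficientFunds);
    };
};
```

From such a definition an IDL compiler (for example, Java's idlj or omniORB's omniidl)
emits the language-specific client stub and implementation skeleton discussed in the
following sections.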
7.6.6 Mapping of IDL to Programming Languages
Different object-oriented or non-object-oriented programming languages may prefer
to access CORBA objects in different ways. For object-oriented languages, it may be desirable
to see CORBA objects as programming language objects. Even for non object-oriented
languages, it is a good idea to hide the exact ORB representation of the object reference,
method names, etc. A particular mapping of OMG IDL to a programming language should
be the same for all ORB implementations. Language mapping includes definition of the
language-specific data types and procedure interfaces to access objects through the ORB.
It includes the structure of the client stub interface (not required for object-oriented languages),
the dynamic invocation interface, the implementation skeleton, the object adapters, and the
direct ORB interface.
A language mapping also defines the interaction between object invocations and the
threads of control in the client or implementation. The most common mappings provide
synchronous calls, in that the routine returns when the object operation completes. Additional
mappings may be provided to allow a call to be initiated and control returned to the program.
In such cases, additional language-specific routines must be provided to synchronize the
program’s threads of control with the object invocation.
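The two invocation styles described above can be sketched in Python. The names below are
illustrative; real language mappings define their own equivalents, and the "remote" operation
here is simply a local function standing in for an object invocation.

```python
import threading

def remote_square(x):
    return x * x                        # stand-in for a remote operation

def invoke_sync(x):
    """Synchronous mapping: the routine returns only when the operation completes."""
    return remote_square(x)

def invoke_deferred(x):
    """Deferred mapping: initiate the call, return control to the program at once."""
    result = {}

    def worker():
        result["value"] = remote_square(x)

    t = threading.Thread(target=worker)
    t.start()

    def get_response():                 # language-specific synchronization routine
        t.join()                        # block until the invocation completes
        return result["value"]

    return get_response

print(invoke_sync(6))                   # 36
pending = invoke_deferred(7)            # control returns immediately
print(pending())                        # 49, once the invocation has completed
```

The get_response closure plays the role of the "additional language-specific routines" the
text mentions for synchronizing the program's threads with the invocation.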
7.6.7 Client Stubs
A client stub is a small piece of code that allows a client component to access a server
component. The remote object reference that is held by the client points to the client stub.
This stub is specific to the IDL interface from which it was generated, and it contains the
information needed for the client to invoke a method on the CORBA object that was defined
in the IDL interface.
Generally, the client stubs will present access to the OMG IDL-defined operations on
an object in a way that is easy for programmers to predict once they are familiar with OMG
IDL and the language mapping for the particular programming language. The stubs make
calls on the rest of the ORB using interfaces that are private to, and presumably optimized
for, the particular ORB Core. If more than one ORB is available, there may be different
stubs corresponding to the different ORBs. In this case, it is necessary for the ORB and
language mapping to cooperate to associate the correct stubs with the particular object
reference.
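What a generated stub does can be sketched in Python. CalculatorStub and orb_core_invoke
are hypothetical names, and the "ORB core" here simply forwards the request locally rather
than over IIOP; the point is only the shape of the interaction.

```python
def orb_core_invoke(object_ref, operation, args):
    """Toy ORB core: locate the implementation and deliver the request.
    A real ORB would do this across process and machine boundaries."""
    servants = {"calc-1": {"add": lambda a, b: a + b}}
    return servants[object_ref][operation](*args)

class CalculatorStub:
    """Stub as it might be generated from a hypothetical Calculator IDL interface."""
    def __init__(self, object_ref):
        self._ref = object_ref          # opaque object reference

    def add(self, a, b):
        # Marshal the call into a request and hand it to the ORB core.
        return orb_core_invoke(self._ref, "add", (a, b))

calc = CalculatorStub("calc-1")
print(calc.add(2, 3))                   # 5, and it reads like a normal local call
```

This is the predictability the text describes: once the programmer knows the IDL and the
language mapping, the stub's surface looks like any other local object.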
7.6.8 Dynamic Invocation Interface
An interface is also available that allows the dynamic construction of object invocations,
that is, rather than calling a stub routine that is specific to a particular operation on a
particular object, a client may specify the object to be invoked, the operation to be performed,
and the set of parameters for the operation through a call or sequence of calls. The client
code must supply information about the operation to be performed and the types of the
parameters being passed (perhaps obtaining it from an Interface Repository or other run-
time source). The nature of the dynamic invocation interface may vary substantially from
one programming language mapping to another.
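The contrast between the stub path and the dynamic path can be sketched in Python; as the
text notes, the receiver cannot tell which style was used. All names here are illustrative,
not the real DII API.

```python
class Target:
    def greet(self, name):
        return f"hello, {name}"

registry = {"obj-1": Target()}          # stand-in for ORB object location

def static_stub_greet(object_ref, name):
    """Static path: a stub routine specific to the 'greet' operation."""
    return registry[object_ref].greet(name)

def dii_invoke(object_ref, operation, args):
    """Dynamic path: object, operation and parameters supplied at run time."""
    request = {"op": operation, "args": args}    # request built piecewise
    obj = registry[object_ref]
    return getattr(obj, request["op"])(*request["args"])

# Both styles satisfy the same request semantics.
assert static_stub_greet("obj-1", "world") == dii_invoke("obj-1", "greet", ("world",))
print(dii_invoke("obj-1", "greet", ("world",)))  # hello, world
```

In a real ORB the operation name and parameter types for the dynamic path would typically
come from the Interface Repository rather than being hard-coded.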
7.6.9 Implementation Skeleton
A server skeleton is the server-side analog to a client stub, and these two classes are
used by ORBs in static invocation. For a particular language mapping, and possibly depending
on the object adapter, there will be an interface to the methods that implement each type of
object. The interface will generally be an up-call interface, in that the object implementation
writes routines that conform to the interface and the ORB calls them through the skeleton.
The existence of a skeleton does not imply the existence of a corresponding client stub
(clients can also make requests via the dynamic invocation interface). It is possible to write
an object adapter that does not use skeletons to invoke implementation methods. For example,
it may be possible to create implementations dynamically for languages such as Smalltalk.
7.6.10 Dynamic Skeleton Interface
An interface is available, which allows dynamic handling of object invocations. That is,
rather than being accessed through a skeleton that is specific to a particular operation, an
object’s implementation is reached through an interface that provides access to the operation
name and parameters in a manner analogous to the client side’s Dynamic Invocation Interface.
Purely static knowledge of those parameters may be used, or dynamic knowledge (perhaps
determined through an Interface Repository) may also be used, to determine the parameters.
The implementation code must provide descriptions of all the operation parameters to
the ORB, and the ORB provides the values of any input parameters for use in performing
the operation. The implementation code provides the values of any output parameters, or an
exception, to the ORB after performing the operation. The nature of the dynamic skeleton
interface may vary substantially from one programming language mapping or object adapter
to another, but will typically be an up-call interface.
Dynamic skeletons may be invoked both through client stubs and through the dynamic
invocation interface; either style of client request construction interface provides identical
results.
7.6.11 Object Adapters
An object adapter is the primary way that an object implementation accesses services
provided by the ORB. There are expected to be a few object adapters that will be widely
available, with interfaces that are appropriate for specific kinds of objects. Services provided
by the ORB through an Object Adapter often include: generation and interpretation of object
references, method invocation, security of interactions, object and implementation activation
and deactivation, mapping object references to implementations, and registration of
implementations.
The wide range of object granularities, lifetimes, policies, implementation styles, and
other properties make it difficult for the ORB Core to provide a single interface that is
convenient and efficient for all objects. Thus, through Object Adapters, it is possible for the
ORB to target particular groups of object implementations that have similar requirements
with interfaces tailored to them.
7.6.12 ORB Interface
An ORB is an abstraction that can be implemented various ways, e.g., one or more
processes or a set of libraries. To decouple applications from implementation details,


the CORBA specification defines an interface to an ORB. The ORB Interface is the interface
that goes directly to the ORB, which is the same for all ORBs and does not depend on the
object’s interface or object adapter. Because most of the functionality of the ORB is provided
through the object adapter, stubs, skeleton, or dynamic invocation, there are only a few
operations that are common across all objects.
This ORB interface provides standard operations that (1) initialize and shut down the
ORB, (2) convert object references to strings and back, and (3) create argument lists for
requests made through the dynamic invocation interface (DII). For example, the
interface could provide access to services such as Naming Service, Trader Service and
others. These operations are useful to both clients and implementations of objects.
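The string-conversion operations just mentioned can be sketched in Python. MiniORB is a
toy stand-in: a real ORB emits "IOR:..." strings that encode profile data (host, port, object
key), whereas the string here only wraps an identifier.

```python
class MiniORB:
    """Toy ORB exposing object_to_string / string_to_object analogs."""
    def __init__(self):
        self._objects = {}

    def register(self, obj_id, obj):
        self._objects[obj_id] = obj
        return obj_id                       # acts as the object reference

    def object_to_string(self, obj_id):
        return "IOR:" + obj_id              # stand-in for a real IOR string

    def string_to_object(self, text):
        assert text.startswith("IOR:")
        return self._objects[text[4:]]      # turn the string back into the object

orb = MiniORB()
clock = object()
ref = orb.register("clock-1", clock)
s = orb.object_to_string(ref)
assert orb.string_to_object(s) is clock
print(s)                                    # IOR:clock-1
```

Because the string form can be written to a file or sent by any means, it gives clients a way
to obtain an initial object reference outside of normal invocations.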
7.6.13 Interface Repository
The Interface Repository is a service that provides persistent objects that represent
the IDL information in a form available at run-time. The Interface Repository information
may be used by the ORB to perform requests. Moreover, using the information in the
Interface Repository, it is possible for a program to encounter an object whose interface
was not known when the program was compiled, yet, be able to determine what operations
are valid on the object and make an invocation on it at run-time. In addition to its role in the
functioning of the ORB, the Interface Repository is a common place to store additional
information associated with interfaces to ORB objects. For example, debugging information,
libraries of stubs or skeletons, routines that can format or browse particular kinds of objects
might be associated with the Interface Repository.
7.6.14 Implementation Repository
The Implementation Repository contains information that allows the ORB to locate
and activate implementations of objects. Although most of the information in the
Implementation Repository is specific to an ORB or operating environment, the
Implementation Repository is the conventional place for recording such information. Ordinarily,
installation of implementations and control of policies related to the activation and execution
of object implementations is done through operations on the Implementation Repository.
In addition to its role in the functioning of the ORB, the Implementation Repository is a
common place to store additional information associated with implementations of ORB objects.
For example, debugging information, administrative control, resource allocation, security,
etc., might be associated with the Implementation Repository.
7.7 EXAMPLE ORBS

There are a wide variety of ORB implementations possible within the Common ORB
Architecture. Some of the different options are explained below. Note that a particular
ORB might support multiple options and protocols for communication.
Client- and Implementation-resident ORB
If there is a suitable communication mechanism present, an ORB can be implemented
in routines resident in the clients and implementations. The stubs in the client either use a
location-transparent IPC mechanism or directly access a location service to establish
communication with the implementations. Code linked with the implementation is responsible
for setting up appropriate databases for use by clients.


Server-based ORB
To centralize the management of the ORB, all clients and implementations can
communicate with one or more servers whose job it is to route requests from clients to
implementations. The ORB could be a normal program as far as the underlying operating
system is concerned, and normal IPC could be used to communicate with the ORB.

System-based ORB

To enhance security, robustness, and performance, the ORB could be provided as a
basic service of the underlying operating system. Object references could be made
unforgeable, reducing the expense of authentication on each request. Because the operating
system could know the location and structure of clients and implementations, it would be
possible for a variety of optimizations to be implemented, for example, avoiding marshalling
when both are on the same machine.

Library-based ORB

For objects that are light-weight and whose implementations can be shared, the
implementation might actually be in a library. In this case, the stubs could be the actual
methods. This assumes that it is possible for a client program to get access to the data for
the objects and that the implementation trusts the client not to damage the data.

7.8 STRUCTURE OF A CLIENT

Figure 7.8 The Structure of a Typical Client

A client of an object has an object reference that refers to that object. An object reference
is a token that may be invoked or passed as a parameter to an invocation on a different
object. Invocation of an object involves specifying the object to be invoked, the operation to
be performed, and parameters to be given to the operation or returned from it.
The ORB manages the control transfer and data transfer to the object implementation
and back to the client. In the event that the ORB cannot complete the invocation, an exception
response is provided. Ordinarily, a client calls a routine in its program that performs the
invocation and returns when the operation is complete.
Clients access object-type-specific stubs as library routines in their program as illustrated
in Figure 7.8.
The client program thus sees routines callable in the normal way in its programming
language. All implementations will provide a language specific data type to use to refer to
objects, often an opaque pointer. The client then passes that object reference to the stub
routines to initiate an invocation. The stubs have access to the object reference representation
and interact with the ORB to perform the invocation.
An alternative set of library code is available to perform invocations on objects, for
example, when the object was not defined at compile time. In that case, the client program
provides additional information to name the type of the object and the method being invoked,
and performs a sequence of calls to specify the parameters and initiate the invocation.
Clients most commonly obtain object references by receiving them as output parameters
from invocations on other objects for which they have references. When a client is also an
implementation, it receives object references as input parameters on invocations to objects
it implements. An object reference can also be converted to a string that can be stored in
files or preserved or communicated by different means and subsequently turned back into
an object reference by the ORB that produced the string.
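The stringify/destringify round trip described above can be sketched as follows. This is a hypothetical plain-Java simulation of the idea, not the real IOR encoding or the CORBA object_to_string API; the host/port/key fields and their encoding are illustrative only.

```java
// Hypothetical sketch of the stringified-reference round trip. A real ORB's
// object_to_string produces an opaque IOR string; here we simulate the idea
// with a simple host:port/key encoding (illustrative only).
public class StringifiedRef {
    final String host;      // server endpoint host
    final int port;         // server endpoint port
    final String objectKey; // identifies the POA and Object Id inside the server

    StringifiedRef(String host, int port, String objectKey) {
        this.host = host;
        this.port = port;
        this.objectKey = objectKey;
    }

    // Convert the reference to a string that can be stored in a file.
    String stringify() {
        return host + ":" + port + "/" + objectKey;
    }

    // Turn a previously produced string back into a reference.
    static StringifiedRef destringify(String s) {
        int colon = s.indexOf(':');
        int slash = s.indexOf('/');
        return new StringifiedRef(s.substring(0, colon),
                Integer.parseInt(s.substring(colon + 1, slash)),
                s.substring(slash + 1));
    }

    public static void main(String[] args) {
        StringifiedRef ref = new StringifiedRef("svr.example.com", 2809, "HelloPOA/obj42");
        StringifiedRef back = StringifiedRef.destringify(ref.stringify());
        System.out.println(back.stringify().equals(ref.stringify())); // prints true
    }
}
```

The point of the round trip is that the string can be written to a file or e-mailed; any process that later recovers the string can reconstruct a usable reference, provided the producing ORB understands the format.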
7.9 STRUCTURE OF AN OBJECT IMPLEMENTATION
An object implementation provides the actual state and behaviour of an object. The
object implementation can be structured in a variety of ways. Besides defining the methods
for the operations themselves, an implementation will usually define procedures for activating
and deactivating objects and will use other objects or non-object facilities to make the object
state persistent, to control access to the object, as well as to implement the methods.
The object implementation, as illustrated in Figure 7.9, interacts with the ORB in a
variety of ways to establish its identity, to create new objects, and to obtain ORB-dependent
services. It primarily does this via access to an Object Adapter, which provides an interface
to ORB services that is convenient for a particular style of object implementation. Because
of the range of possible object implementations, it is difficult to be definitive about how an
object implementation is structured.
When an invocation occurs, the ORB Core, object adapter, and skeleton arrange that
a call is made to the appropriate method of the implementation. A parameter to that method
specifies the object being invoked, which the method can use to locate the data for the
object. Additional parameters are supplied according to the skeleton definition. When the
method is complete, it returns, causing output parameters or exception results to be
transmitted back to the client.


Figure 7.9 The Structure of a Typical Object Implementation

When a new object is created, the ORB may be notified so that it knows where to find
the implementation for that object. Usually, the implementation also registers itself as
implementing objects of a particular interface, and specifies how to start up the implementation
if it is not already running.
Most object implementations provide their behaviour using facilities in addition to the
ORB and object adapter. For example, although the Portable Object Adapter provides some
persistent data associated with an object (its OID or Object ID), that relatively small amount
of data is typically used as an identifier for the actual object data stored in a storage service
of the object implementation’s choosing. With this structure, it is not only possible for different
object implementations to use the same storage service, it is also possible for objects to
choose the service that is most appropriate for them.
7.10 STRUCTURE OF AN OBJECT ADAPTER
An object adapter, as illustrated in Figure 7.10, is the primary means for an object
implementation to access ORB services such as object reference generation.
An object adapter exports a public interface to the object implementation, and a private
interface to the skeleton. It is built on a private ORB-dependent interface.
Object adapters are responsible for the following functions:
 Generation and interpretation of object references
 Method invocation
 Security of interactions

 Object and implementation activation and deactivation
 Mapping object references to the corresponding object implementations
 Registration of implementations

Figure 7.10 The Structure of a Typical Object Adapter


These functions are performed using the ORB Core and any additional components necessary.
Often, an object adapter will maintain its own state to accomplish its tasks. It may be
possible for a particular object adapter to delegate one or more of its responsibilities to the
Core upon which it is constructed.
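Two of the responsibilities above, generation of object references and mapping references to implementations for method invocation, can be sketched in plain Java. This is a hypothetical simplification (string ids standing in for object references), not a real object adapter API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of two object-adapter responsibilities: generating
// object references (here, plain string ids) and mapping an incoming
// reference back to a registered implementation for dispatch.
public class MiniAdapter {
    final Map<String, Function<String, String>> implementations = new HashMap<>();
    private int nextId = 0;

    // Generation of object references: register an implementation and
    // hand back a fresh reference for it.
    String register(Function<String, String> impl) {
        String ref = "obj-" + (nextId++);
        implementations.put(ref, impl);
        return ref;
    }

    // Method invocation: interpret the reference, find the implementation, call it.
    String invoke(String ref, String arg) {
        Function<String, String> impl = implementations.get(ref);
        if (impl == null) throw new IllegalArgumentException("OBJECT_NOT_EXIST: " + ref);
        return impl.apply(arg);
    }

    public static void main(String[] args) {
        MiniAdapter adapter = new MiniAdapter();
        String ref = adapter.register(name -> "Hello, " + name);
        System.out.println(adapter.invoke(ref, "world")); // prints Hello, world
    }
}
```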
The Object Adapter is implicitly involved in invocation of the methods, although the
direct interface is through the skeletons. For example, the Object Adapter may be involved
in activating the implementation or authenticating the request. The Object Adapter defines
most of the services from the ORB that the Object Implementation can depend on. Different
ORBs will provide different levels of service and different operating environments may
provide some properties implicitly and require others to be added by the Object Adapter. For
example, it is common for Object Implementations to want to store certain values in the
object reference for easy identification of the object on an invocation. If the Object Adapter
allows the implementation to specify such values when a new object is created, it may be
able to store them in the object reference for those ORBs that permit it. If the ORB Core
does not provide this feature, the Object Adapter would record the value in its own storage
and provide it to the implementation on an invocation. With Object Adapters, it is possible for
an Object Implementation to have access to a service whether or not it is implemented in
the ORB Core—if the ORB Core provides it, the adapter simply provides an interface to it;
if not, the adapter must implement it on top of the ORB Core. Every instance of a particular
adapter must provide the same interface and service for all the ORBs it is implemented on.
It is also not necessary for all Object Adapters to provide the same interface or
functionality. Some Object Implementations have special requirements. For example, an
object-oriented database system may wish to implicitly register its many thousands of objects
without doing individual calls to the Object Adapter. In such a case, it would be impractical


and unnecessary for the object adapter to maintain any per-object state. By using an object
adapter interface that is tuned towards such object implementations, it is possible to take
advantage of particular ORB Core details to provide the most effective access to the ORB.

7.11 PORTABLE OBJECT ADAPTER


There are a variety of possible object adapters; however, since the object adapter
interface is something that object implementations depend on, it is desirable that there be as
few as practical. Most object adapters are designed to cover a range of object
implementations, so only when an implementation requires radically different services or
interfaces should a new object adapter be considered.
The Portable Object Adapter (POA) is a standard component in the CORBA
specifications. The POA’s predecessor was the Basic Object Adapter (BOA). BOA was
widely recognized to be incomplete and underspecified. The solution adopted by OMG was
to abandon BOA and develop a new portable adapter. This specification defines a Portable
Object Adapter that can be used for most ORB objects with conventional implementations.
The intent of the POA, as its name suggests, is to provide an Object Adapter that can be
used with multiple ORBs with a minimum of rewriting needed to deal with different vendors’
implementations. The POA is specified in IDL, so its mapping to languages is largely automatic,
following the language mapping rules.
The OMG’s design goals for the Portable Object Adapter (POA) specification include
the following:
 Portability: The POA allows programmers to construct servants that are portable
between different ORB implementations. Hence, the programmer can switch ORBs
without having to modify existing servant code. The lack of this feature was a
major shortcoming of the Basic Object Adapter (BOA).
 Persistent identities: The POA supports objects with persistent identities. More
precisely, the POA is designed to support servants that can provide consistent service
for objects whose lifetimes span multiple server process lifetimes.
 Automation: The POA supports transparent activation of objects and implicit
activation of servants. This automation makes the POA easier and simpler to use.
 Conserving resources: There are many situations where a server must support
many CORBA objects. For example, a database server that models each database
record as a CORBA object can potentially service hundreds of objects. The POA
allows a single servant to support multiple Object Ids simultaneously. This allows
one servant to service many CORBA objects, thereby conserving memory
resources on the server.
 Flexibility: The POA allows servants to assume complete responsibility for an
object’s behaviour. For instance, a servant can control an object’s behaviour by
defining the object’s identity, determining the relationship between the object’s
identity and the object’s state, managing the storage and retrieval of the object’s
state, providing code that will be executed in response to requests, and determining
whether or not the object exists at any point in time.
 Behaviour governed by policies: The POA provides an extensible mechanism
for associating policies with servants in a POA. Currently, the POA supports seven
policies, such as threading, retention, and lifespan policies, that can be selected at
POA creation time.
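The "conserving resources" goal above, one servant servicing many Object Ids, can be sketched like this. This is a hypothetical plain-Java simulation of a default-servant style of dispatch; the record names and method are illustrative, not real POA API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one servant instance services many CORBA objects.
// The Object Id arrives with each request and is used to find the per-object
// state, so a database server needs one servant rather than one per record.
public class SharedServant {
    private final Map<String, String> records = new HashMap<>(); // Object Id -> record data

    SharedServant() {
        records.put("emp-1", "Alice");
        records.put("emp-2", "Bob");
    }

    // Every request carries the target Object Id; the single servant
    // uses it to locate the state for that particular object.
    String getName(String objectId) {
        String data = records.get(objectId);
        if (data == null) throw new IllegalArgumentException("OBJECT_NOT_EXIST: " + objectId);
        return data;
    }

    public static void main(String[] args) {
        SharedServant servant = new SharedServant(); // one servant, two objects
        System.out.println(servant.getName("emp-1")); // prints Alice
        System.out.println(servant.getName("emp-2")); // prints Bob
    }
}
```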


 Nested POAs: The POA allows multiple distinct, nested instances of the POA to
exist in a server. Each POA in the server provides a namespace for all the objects
registered with that POA and all the child POAs that are created by this POA. The
POA supports recursive deletes, i.e., destroying a POA destroys all its child POAs.
 SSI and DSI support: The POA allows programmers to construct servants that
inherit from (1) static skeleton classes (SSI) generated by OMG IDL compilers or
(2) a Dynamic Skeleton Interface (DSI). Clients need not be aware that a CORBA
object is serviced by a DSI servant or an IDL servant. Two CORBA objects
supporting the same interface can be serviced one by a DSI servant and the other
with an IDL servant. Furthermore, a CORBA object may be serviced by a DSI
servant during some period of time and by an IDL servant the rest of the time.
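The nested-POA namespace and the recursive-delete property can be sketched with a simple tree. This is a hypothetical plain-Java model, not the real POA interface; the class and field names are illustrative.

```java
// Hypothetical sketch of nested POAs: each POA holds a set of named child
// POAs, and destroying a POA recursively destroys all of its children, as
// the "recursive deletes" property above describes. Not the real POA API.
import java.util.HashMap;
import java.util.Map;

public class PoaTree {
    final String name;
    final Map<String, PoaTree> children = new HashMap<>();
    boolean destroyed = false;

    PoaTree(String name) { this.name = name; }

    // Each POA provides a namespace for the child POAs it creates.
    PoaTree createChild(String childName) {
        PoaTree child = new PoaTree(childName);
        children.put(childName, child);
        return child;
    }

    // Destroying a POA destroys all its child POAs first.
    void destroy() {
        for (PoaTree child : children.values()) child.destroy();
        destroyed = true;
    }

    public static void main(String[] args) {
        PoaTree root = new PoaTree("RootPOA");
        PoaTree a = root.createChild("A");
        PoaTree b = root.createChild("B");
        PoaTree c = b.createChild("C");
        b.destroy();                         // destroys B and, recursively, C
        System.out.println(b.destroyed);     // prints true
        System.out.println(c.destroyed);     // prints true
        System.out.println(a.destroyed);     // prints false: A is untouched
    }
}
```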
7.11.1 POA Architecture
The ORB is an abstraction visible to both the client and server. In contrast, the POA is
an ORB component visible only to the server, i.e., clients are not directly aware of the
POA’s existence or structure. The architecture of the request dispatching model defined by
the POA and the interactions between its standard components and the ORB Core are
described in this section.
User-supplied servants are registered with the POA. Clients hold object references
upon which they make requests, which the POA ultimately dispatches as operations on a
servant. The ORB, POA, servant, and skeleton all collaborate to determine (1) which servant
the operation should be invoked on and (2) to dispatch the invocation. Figure 7.11 shows the
POA architecture.
A distinguished POA, called the Root POA, is created and managed by the ORB. The
Root POA is always available to an application through the ORB initialization interface,
resolve_initial_references. The application developer can register servants with the Root
POA if the policies of the Root POA specified in the POA specification are suitable for the
application.
A server application may want to create multiple POAs to support different kinds of
CORBA objects and/or different kinds of servant styles. For example, a server application
might have two POAs: one supporting transient CORBA objects and the other supporting
persistent CORBA objects. A nested POA can be created by invoking the create_POA
factory operation on a parent POA.


Figure 7.11 Portable Object Adapter Architecture


The server application in Figure 7.11 contains three other nested POAs: A, B, and C.
POA A and B are children of the Root POA; POA C is B’s child. Each POA has an Active
Object Table that maps Object Ids to servants. Other key components in a POA are
described below:
POA Manager: A POA manager encapsulates the processing state of one or more
POAs. By invoking operations on a POA manager, server applications can cause requests
for the associated POAs to be queued or discarded. In addition, applications can use the
POA manager to deactivate POAs. Figure 7.12 shows the processing states of a POA
Manager and the operations required to transition from one state to another.
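The processing states and transitions can be sketched as a small state machine. The state names follow the POA specification (HOLDING, ACTIVE, DISCARDING, INACTIVE), but the transition rules shown here, in particular treating INACTIVE as terminal, are a simplified model, not the full API.

```java
// Hypothetical sketch of the POA manager state machine described above.
// A POA manager starts out holding requests; deactivation is terminal.
public class PoaManager {
    enum State { HOLDING, ACTIVE, DISCARDING, INACTIVE }

    private State state = State.HOLDING;

    private void transition(State target) {
        if (state == State.INACTIVE)     // once deactivated, no further transitions
            throw new IllegalStateException("AdapterInactive");
        state = target;
    }

    void activate()        { transition(State.ACTIVE); }     // start dispatching requests
    void holdRequests()    { transition(State.HOLDING); }    // queue incoming requests
    void discardRequests() { transition(State.DISCARDING); } // reject incoming requests
    void deactivate()      { transition(State.INACTIVE); }   // terminal state

    State state() { return state; }

    public static void main(String[] args) {
        PoaManager mgr = new PoaManager();
        mgr.activate();
        mgr.holdRequests();
        mgr.activate();
        mgr.deactivate();
        System.out.println(mgr.state()); // prints INACTIVE
    }
}
```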


Figure 7.12 POA Manager Processing States

Adapter Activator: An adapter activator can be associated with a POA by an
application. The ORB will invoke an operation on an adapter activator when a request is
received for a child POA that does not yet exist. The adapter activator can then decide
whether or not to create the required POA on demand. For example, if the target object
reference was created by a POA whose full name is /A/B/C and only POA /A and
POA /A/B currently exist, the unknown_adapter operation will be invoked on the adapter
activator associated with POA /A/B. In this case, POA /A/B will be passed as the parent
parameter and C as the name of the missing POA to the unknown_adapter operation.
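The /A/B/C example above can be sketched as a find-or-create walk down the POA tree, calling an activator for each missing child. This is a hypothetical plain-Java simulation; the names and the always-create policy are illustrative, not the real unknown_adapter signature.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an adapter activator: when a request names a POA
// path such as /A/B/C and only /A and /A/B exist, the activator is invoked
// on the parent to create the missing child on demand.
public class AdapterActivatorDemo {
    static class Poa {
        final String fullName;
        final Map<String, Poa> children = new HashMap<>();
        Poa(String fullName) { this.fullName = fullName; }
    }

    // The activator decides whether to create a missing child POA.
    static boolean unknownAdapter(Poa parent, String childName) {
        parent.children.put(childName, new Poa(parent.fullName + "/" + childName));
        return true; // created on demand
    }

    // Walk a path like "A/B/C" from the root, activating missing POAs.
    static Poa find(Poa root, String path) {
        Poa current = root;
        for (String part : path.split("/")) {
            if (!current.children.containsKey(part))
                unknownAdapter(current, part);   // e.g. parent=/A/B, name=C
            current = current.children.get(part);
        }
        return current;
    }

    public static void main(String[] args) {
        Poa root = new Poa("");
        find(root, "A/B");                       // /A and /A/B now exist
        Poa c = find(root, "A/B/C");             // C is created via the activator
        System.out.println(c.fullName);          // prints /A/B/C
    }
}
```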


Servant Manager: A servant manager is a locality-constrained servant that server
applications can associate with a POA. The ORB uses a servant manager to activate
servants on demand, as well as to deactivate servants. Servant managers are responsible
for (1) managing the association of an object (as characterized by its Object Id value) with
a particular servant and (2) determining whether an object exists or not. There are two
types of servant managers: ServantActivator and ServantLocator. The type used in a
particular situation depends on the policies in a POA, which are described next.
7.11.2 POA Policies
The characteristics of each POA other than the Root POA can be customized at POA
creation time using different policies. The policies of the Root POA are specified in the
POA specification. The POA specification defines the following policies:
 Threading policy: This policy is used to specify the threading model used with the
POA. A POA can either be single-threaded or have the ORB control its threads. If
it is single-threaded, all requests are processed sequentially. In a multi-threaded
environment, all upcalls made by this POA to implementation code, i.e., servants
and servant managers, are invoked in a manner that is safe for code that is unaware
of multi-threading. In contrast, if the ORB-controlled threading policy is specified,
the ORB determines the thread (or threads) that the POA dispatches its requests
in. In a multi-threaded environment, concurrent requests may be delivered using
multiple threads.
 Lifespan policy: This policy is used to specify whether the CORBA objects created
within a POA are persistent or transient. Persistent objects can outlive the process
in which they are created initially. In contrast, transient objects cannot outlive the
process in which they are created initially. Once the POA is deactivated, use of any
object references generated for a transient object will result in a CORBA::OBJECT_NOT_EXIST
exception.
 Object Id uniqueness policy: This policy is used to specify whether the servants
activated in the POA must have unique Object Ids. With the unique Id policy, servants
activated with that POA support exactly one Object Id. However, with the multiple
Id policy, a servant activated with that POA may support one or more Object Ids.
 Object Id assignment policy: This policy is used to specify whether Object Ids in
the POA are generated by the application or by the ORB. If the POA also has the
persistent lifespan policy, ORB assigned Object Ids must be unique across all
instantiations of the same POA.
 Implicit activation policy: This policy is used to specify whether implicit activation
of servants is supported in the POA. A C++ server can create a servant, and then
by setting its POA and invoking its _this() method, it can register the servant implicitly
and create an object reference in a single operation.
 Servant retention policy: This policy is used to specify whether the POA retains
active servants in an Active Object Map. A POA either retains the associations
between servants and CORBA objects or it establishes a new CORBA object/
servant association for each incoming request.
 Request processing policy: This policy is used to specify how requests should be
processed by the POA. When a request arrives for a given CORBA object, the
POA can do one of the following:


 Consult its Active Object Map only – If the Object Id is not found in the Active
Object Map, the POA returns a CORBA::OBJECT_NOT_EXIST exception to
the client.
 Use a default servant – If the Object Id is not found in the Active Object Map,
the request is dispatched to the default servant (if available).
 Invoke a servant manager – If the Object Id is not found in the Active Object
Map, the servant manager (if available) is given the opportunity to locate a servant
or raise an exception. The servant manager is an application supplied object that
can incarnate or activate a servant and return it to the POA for continued request
processing. Two forms of servant manager are supported: ServantActivator, which
is used for a POA with the RETAIN policy, and ServantLocator, which is used
with the NON_RETAIN policy. Combining these policies with the retention policies
described above provides the POA with a great deal of flexibility.
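The three choices above can be sketched as a servant-lookup chain. This is a hypothetical plain-Java simplification: a real POA selects exactly one of these behaviours according to its policies, whereas this sketch chains them as fallbacks for illustration, and all names are invented.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the request-processing choices: consult the Active
// Object Map, fall back to a default servant, or ask a servant manager to
// incarnate one. Illustrative only; not the real POA API.
public class RequestDispatch {
    interface Servant { String handle(String objectId); }

    final Map<String, Servant> activeObjectMap = new HashMap<>();
    Servant defaultServant;                       // may be null
    Function<String, Servant> servantManager;     // may be null

    String dispatch(String objectId) {
        Servant s = activeObjectMap.get(objectId);        // 1. Active Object Map
        if (s == null) s = defaultServant;                // 2. default servant
        if (s == null && servantManager != null)
            s = servantManager.apply(objectId);           // 3. servant manager incarnates one
        if (s == null) throw new IllegalStateException("OBJECT_NOT_EXIST: " + objectId);
        return s.handle(objectId);
    }

    public static void main(String[] args) {
        RequestDispatch poa = new RequestDispatch();
        poa.activeObjectMap.put("obj-1", id -> "active servant for " + id);
        poa.servantManager = id -> req -> "incarnated servant for " + req;
        System.out.println(poa.dispatch("obj-1")); // found in the Active Object Map
        System.out.println(poa.dispatch("obj-2")); // located by the servant manager
    }
}
```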
7.11.3 The POA Semantics
The POA is used primarily in two modes: (1) request processing and (2) the activation
and deactivation of servants and objects. This section describes these two modes and outlines
the semantics and behaviour of the interactions that occur between the components in the
POA architecture.
Request Processing
Each client request contains an Object Key. The Object Key conveys the Object Id of
the target object and the identity of the POA that created the target object reference. The
end-to-end processing of a client request occurs in the following steps:
 Locate the server process: When a client issues a request, the ORB first locates
an appropriate server process, using the Implementation Repository to create a
new process if necessary. In an ORB that uses IIOP, the host name and port
number in the Interoperable Object Reference (IOR) identify the communication
endpoint of the server process.
 Locate the POA: Once the server process has been located, the ORB locates the
appropriate POA within that server. If the designated POA does not exist in the
server process, the server has the opportunity to re-create the required POA by
using an adapter activator. The name of the target POA is specified by the IOR in
a manner that is opaque to the client.
 Locate the servant: Once the ORB has located the appropriate POA, it delivers
the request to that POA. The POA finds the appropriate servant by following its
servant retention and request processing policies, which have been described earlier.
 Locate the skeleton: The final step the POA performs is to locate the IDL skeleton
that will transform the parameters in the request into arguments. The skeleton then
passes the de-marshalled arguments as parameters to the correct servant operation.
 Handling replies, exceptions and location forwarding: The skeleton marshals
any exceptions, return values, in-out, and out parameters returned by the servant so
that they can be sent to the client. The only exception that is given special treatment
is the ForwardRequest exception. It causes the ORB to deliver the current request
and subsequent requests to the object denoted in the forward reference member of
the exception.


7.11.4 Object Reference Creation
Object references are created in servers. Object references encapsulate the Object Id
and other information required by the ORB to locate the server and POA with which the
object is associated, e.g., in which POA scope the reference was created.
Object references can be created in the following ways:
 Explicit creation of object references: A server application can directly create
a reference with the create_reference and create_reference_with_id operations on a
POA object. These operations only create a reference; they do not associate the
designated object with an active servant.
 Explicit activation of servants: A server application can activate a servant explicitly
by associating it with an Object Id using the activate_object or activate_object_with_id
operations. Once activated, the server application can map the servant to its
corresponding reference using the servant_to_reference or id_to_reference operations.
 Implicit activation of servants: If the server application attempts to obtain an
object reference corresponding to an inactive servant and the POA supports the
implicit activation policy, the POA can automatically assign a generated unique
Object Id to the servant and activate the resulting object.
Once a reference is created in the server, it can be exported to clients in a variety of
ways. For instance, it can be advertised via the OMG Naming and Trading Services. Likewise,
it can be converted to a string via object_to_string and published in some way that
allows the client to discover the string and convert it back to a reference using
string_to_object. Moreover, it can be returned as the result of an operation invocation. Regardless of
how an object reference is obtained, however, once a client has an object reference it can
invoke operations on the object.
7.12 THE INTEGRATION OF FOREIGN OBJECT SYSTEMS
The Common ORB Architecture is designed to allow interoperation with a wide range
of object systems as illustrated in Figure 7.13. Because there are many existing object
systems, a common desire will be to allow the objects in those systems to be accessible via
the ORB. For those object systems that are ORBs themselves, they may be connected to
other ORBs through the mechanisms described throughout this manual.
For object systems that simply want to map their objects into ORB objects and receive
invocations through the ORB, one approach is to have those object systems appear to be
implementations of the corresponding ORB objects. The object system would register its
objects with the ORB and handle incoming requests, and could act like a client and perform
outgoing requests.
In some cases, it will be impractical for another object system to act like a POA object
implementation. An object adapter could be designed for objects that are created in conjunction
with the ORB and that are primarily invoked through the ORB. Another object system may
wish to create objects without consulting the ORB, and might expect most invocations to
occur within itself rather than through the ORB. In such a case, a more appropriate object
adapter might allow objects to be implicitly registered when they are passed through the
ORB.


Figure 7.13 Different Ways to Integrate Foreign Object Systems


7.13 THE CORBA NETWORKING /COMMUNICATION MODEL
CORBA offers different methods to implement communication and data transfer
between objects. The basic communication models provided by CORBA are
synchronous two-way, one-way and deferred synchronous. To alleviate some drawbacks
of these models Asynchronous Method Invocation (AMI) has been introduced. The
Event Service and the Notification Service provide additional communication solutions.
This section briefly describes all these models and discusses their benefits and drawbacks.
Synchronous two-way
In this model, a client sends a two-way request to a target object and waits for
the object to return the response. The fundamental requirement is that the server
must be available to process the client’s request. While it is waiting, the client thread
that invoked the request is blocked and cannot perform any other processing. Thus,
a single-threaded client can be completely blocked while waiting for a response, which
may be unsatisfactory for certain types of performance-constrained applications.
The advantage of this model is that most programmers feel comfortable with it
because it conforms to the well-known model of method calls on local objects.

One-way

A one-way invocation is composed of only a request, with no response. One-way
is used to achieve “fire and forget” semantics while taking advantage of
CORBA’s type checking, marshalling/un-marshalling, and operation de-multiplexing
features. They can be problematic, however, since application developers are responsible
for ensuring end-to-end reliability.

The creators of the first version of CORBA intended ORBs (Object Request Broker)
to deliver one-way over unreliable transports and protocols such as the UDP. However,

most ORBs implement one-way over TCP, as required by the standard Internet Inter-ORB
Protocol (IIOP). This provides reliable delivery and end-to-end flow control. At the TCP
level, these features collaborate to suspend a client thread as long as the TCP buffers on
its associated server are full.
Thus, one-ways over IIOP are not guaranteed to be non-blocking. Consequently,
using one-way may or may not have the desired effect. Furthermore, CORBA states
that one-way operations have “best-effort” semantics, which means that an ORB
need not guarantee their delivery.
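The “fire and forget” idea can be sketched in plain Java by submitting the request to a transport thread and returning immediately. This is an analogy, not CORBA code; the class and names are invented, and the final wait exists only so the demo process can observe delivery before it exits.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of one-way "fire and forget" semantics: the client
// hands the request to the transport and continues without any reply.
public class OneWayDemo {
    static final AtomicInteger delivered = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService transport = Executors.newSingleThreadExecutor();

        // The client "invokes" the one-way operation and does not block on a reply.
        transport.submit(() -> delivered.incrementAndGet());
        System.out.println("client continues without a reply");

        // Best-effort: the client never learns from the call itself whether
        // delivery happened; we wait here only so the demo can report it.
        transport.shutdown();
        transport.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("delivered = " + delivered.get());
    }
}
```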
Deferred synchronous
In this model, a client sends a request to a target object and then continues its own
processing. Unlike the way synchronous two-way requests are handled, the client ORB
does not explicitly block the calling thread until the response arrives. Instead, the client can
later either poll to see if the target object has returned a response, or it can perform a
separate blocking call to wait for the response. The deferred synchronous request model
can only be used if the requests are invoked using the Dynamic Invocation Interface (DII).
The DII requires programmers to write much more code than the usual method (the
Static Invocation Interface, or SII). In particular, the DII-based application must build
the request incrementally and then explicitly ask the ORB to send it to the target object.
In contrast, all of the code needed to build and invoke requests with the SII is hidden from
the application in the generated stubs. The increased amount of code required to invoke
an operation via the DII yields larger programs that are hard to write and hard to
maintain. Moreover, the SII is type-safe because the C++ compiler ensures the correct
arguments are passed to the static stubs. Conversely, the DII is not type-safe: the
programmer must make sure to insert the right types into each operation invocation,
otherwise the invocation will not succeed. Of course, if one cannot afford to block waiting for
responses on two-way calls, the application developer needs to decouple the send and
receive operations. Historically, this meant the programmer was stuck using the DII. A
key benefit of the CORBA Messaging specification is that it effectively allows
deferred synchronous calls using static stubs (automatically generated communication
methods hiding CORBA complexities), which alleviates much of the tedium associated
with using the DII.
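The deferred synchronous pattern, send the request, keep working, then poll or block for the response, can be sketched with Java's CompletableFuture standing in for the DII's deferred request. This is an analogy only; the method names and the simulated delay are invented.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the deferred synchronous model: the client issues
// the request, continues its own processing, and later either polls for the
// response or performs a separate blocking call to wait for it.
public class DeferredDemo {
    // Simulate a remote two-way operation that takes some time to answer.
    static CompletableFuture<String> sendDeferred() {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            return "Hello from the target object";
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> pending = sendDeferred();

        // The calling thread is not blocked: it can do other work and poll.
        int workUnits = 0;
        while (!pending.isDone()) {      // poll to see if the response has arrived
            workUnits++;                 // ... meanwhile, useful client-side work
        }
        System.out.println("did " + workUnits + " units of work while waiting");
        System.out.println(pending.join()); // or block explicitly for the result
    }
}
```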
7.14 CORBA “HELLO WORLD” EXAMPLE
This is a high-level overview of how to create a complete CORBA (Common Object
Request Broker Architecture) application using IDL (Interface Definition Language) to define
interfaces and the Java IDL compiler to generate stubs and skeletons. This example presents
the POA inheritance model for server-side implementation.
This example details
 The IDL for a simple “Hello World” program
 A server that creates an object and publishes it with the naming service using the
default server-side implementation (POA)
 An application client that knows the object’s name, retrieves a reference for it from
the naming service, and invokes the object
 Instructions for compiling and running the example

Defining the Interface (Hello.idl)
The first step to creating a CORBA application is to specify all the objects and their
interfaces using the OMG’s Interface Definition Language (IDL). IDL has a syntax similar
to C++ and can be used to define modules, interfaces, data structures, and more. The IDL
can be mapped to a variety of programming languages. The IDL mapping for Java is
summarized in “IDL to Java Language Mapping Summary”
The following code is written in the OMG IDL, and describes a CORBA object whose
sayHello() operation returns a string and whose shutdown() method shuts down the ORB.
Hello.idl
module HelloApp
{
  interface Hello
  {
    string sayHello();
    oneway void shutdown();
  };
};
When writing code in OMG IDL, do not use an interface name as the name of a
module. Doing so runs the risk of getting inconsistent results when compiling with tools from
different vendors, thereby jeopardizing the code’s portability. For example, code containing
the same names could be compiled with the IDL to Java compiler from Sun Microsystems
and get one result. The same code compiled with another vendor’s IDL to Java compiler
could produce a different result.
Generated Files
The idlj compiler uses the IDL-to-Java mapping to convert IDL interface definitions to
corresponding Java interfaces, classes, and methods, which can then be used to implement
the client and server code. The following files are generated when Hello.idl is compiled with
the IDL-to-Java compiler, using the following command:
idlj -fall Hello.idl
Hello.java, the signature interface
The signature interface file, Hello.java, extends org.omg.CORBA.portable.IDLEntity,
org.omg.CORBA.Object, and the operations interface, HelloOperations. The signature
interface is used as the signature type in method declarations when interfaces of the specified
type are used in other interfaces. From the client’s point of view, an object reference for a
CORBA Hello object implements this interface.
The stub implements the Hello interface: for each method, it contains generated code
to marshal the arguments, invoke the operation, and then unmarshal the results.
HelloApp/Hello.java
package HelloApp;

/**
* HelloApp/Hello.java
* Generated by the IDL-to-Java compiler
* from Hello.idl
*/
public interface Hello extends HelloOperations, org.omg.CORBA.Object,
org.omg.CORBA.portable.IDLEntity
{
} // interface Hello
HelloOperations.java, the operations interface
The Java operations interface, HelloOperations.java, is used in the server-side mapping
and as a mechanism for providing optimized calls for co-located clients and server. The
server developer provides implementation for the methods indicated by the operations
interface.
This interface contains the methods sayHello() and shutdown(). The IDL-to-Java
mapping puts all of the operations defined on the IDL interface into this file, which is shared
by both the stubs and skeletons. The server writer usually extends HelloPOA and provides
implementation for the methods provided by the operations interface.
HelloApp/HelloOperations.java
package HelloApp;
/**
* HelloApp/HelloOperations.java
* Generated by the IDL-to-Java compiler
* from Hello.idl
*/
public interface HelloOperations
{
String sayHello ();
void shutdown ();
} // interface HelloOperations
HelloHelper.java, the Helper class
The Java class HelloHelper provides auxiliary functionality, notably the narrow() method
required to cast CORBA object references to their proper types. The Helper class is
responsible for reading and writing the data type to CORBA streams, and inserting and
extracting the data type from Anys. The Holder class delegates to the methods in the Helper
class for reading and writing.

HelloApp/HelloHelper.java
package HelloApp;
/**
* HelloApp/HelloHelper.java
* Generated by the IDL-to-Java compiler
* from Hello.idl
*/
abstract public class HelloHelper
{
private static String _id = "IDL:HelloApp/Hello:1.0";
public static void insert (org.omg.CORBA.Any a, HelloApp.Hello that)
{
org.omg.CORBA.portable.OutputStream out = a.create_output_stream ();
a.type (type ());
write (out, that);
a.read_value (out.create_input_stream (), type ());
}
public static HelloApp.Hello extract (org.omg.CORBA.Any a)
{
return read (a.create_input_stream ());
}
private static org.omg.CORBA.TypeCode __typeCode = null;
synchronized public static org.omg.CORBA.TypeCode type ()
{
if (__typeCode == null)
{
__typeCode = org.omg.CORBA.ORB.init ().create_interface_tc (HelloApp.HelloHelper.id (), "Hello");
}
return __typeCode;
}
public static String id ()
{
return _id;
}
public static HelloApp.Hello read (org.omg.CORBA.portable.InputStream istream)
{
return narrow (istream.read_Object (_HelloStub.class));
}
public static void write (org.omg.CORBA.portable.OutputStream ostream,
HelloApp.Hello value)
{
ostream.write_Object ((org.omg.CORBA.Object) value);
}
public static HelloApp.Hello narrow (org.omg.CORBA.Object obj)
{
if (obj == null)
return null;
else if (obj instanceof HelloApp.Hello)
return (HelloApp.Hello)obj;

else if (!obj._is_a (id ()))
throw new org.omg.CORBA.BAD_PARAM ();
else
{
org.omg.CORBA.portable.Delegate delegate =
((org.omg.CORBA.portable.ObjectImpl)obj)._get_delegate ();
HelloApp._HelloStub stub = new HelloApp._HelloStub ();
stub._set_delegate(delegate);
return stub;
}
}
}

HelloHolder.java, the Holder class

The Java class called HelloHolder holds a public instance member of type Hello.
Whenever the IDL type is an out or an inout parameter, the Holder class is used. It provides
operations for org.omg.CORBA.po rtable.OutputStream and
org.omg.CORBA.portable.InputStream arguments, which CORBA allows, but which do
not map easily to Java’s semantics. The Holder class delegates to the methods in the
Helper class for reading and writing. It implements org.omg.CORBA.portable.Streamable.

HelloApp/HelloHolder.java
package HelloApp;
/**
* HelloApp/HelloHolder.java
* Generated by the IDL-to-Java compiler (portable), version "3.0"
* from Hello.idl
* Thursday, March 22, 2001 2:17:15 PM PST
*/
public final class HelloHolder implements org.omg.CORBA.portable.Streamable
{
public HelloApp.Hello value = null;
public HelloHolder ()

{
}
public HelloHolder (HelloApp.Hello initialValue)
{
value = initialValue;
}
public void _read (org.omg.CORBA.portable.InputStream i)
{
value = HelloApp.HelloHelper.read (i);
}
public void _write (org.omg.CORBA.portable.OutputStream o)
{
HelloApp.HelloHelper.write (o, value);
}
public org.omg.CORBA.TypeCode _type ()
{
return HelloApp.HelloHelper.type ();
}
}

_HelloStub.java, the client stub
The Java class _HelloStub is the stub file for the client-side mapping. It extends
org.omg.CORBA.portable.ObjectImpl and implements the Hello.java interface.
HelloApp/_HelloStub.java
package HelloApp;
/**
* HelloApp/_HelloStub.java
* Generated by the IDL-to-Java compiler
* from Hello.idl
*/
public class _HelloStub extends org.omg.CORBA.portable.ObjectImpl implements
HelloApp.Hello
{

public String sayHello ()


{
org.omg.CORBA.portable.InputStream _in = null;
try {
org.omg.CORBA.portable.OutputStream _out = _request ("sayHello", true);
_in = _invoke (_out);
String __result = _in.read_string ();
return __result;
} catch (org.omg.CORBA.portable.ApplicationException _ex) {
_in = _ex.getInputStream ();
String _id = _ex.getId ();
throw new org.omg.CORBA.MARSHAL (_id);
} catch (org.omg.CORBA.portable.RemarshalException _rm) {
return sayHello ();
} finally {
_releaseReply (_in);
}
} // sayHello

public void shutdown ()
{
org.omg.CORBA.portable.InputStream _in = null;
try {
org.omg.CORBA.portable.OutputStream _out = _request ("shutdown", false);
_in = _invoke (_out);
} catch (org.omg.CORBA.portable.ApplicationException _ex) {
_in = _ex.getInputStream ();
String _id = _ex.getId ();
throw new org.omg.CORBA.MARSHAL (_id);
} catch (org.omg.CORBA.portable.RemarshalException _rm) {
shutdown ();
} finally {
_releaseReply (_in);
}
} // shutdown
// Type-specific CORBA::Object operations
private static String[] __ids = {
"IDL:HelloApp/Hello:1.0"};
public String[] _ids ()
{
return (String[])__ids.clone ();
}
private void readObject (java.io.ObjectInputStream s) throws java.io.IOException
{
String str = s.readUTF ();
String[] args = null;
java.util.Properties props = null;
org.omg.CORBA.Object obj = org.omg.CORBA.ORB.init (args,
props).string_to_object
(str);

org.omg.CORBA.portable.Delegate delegate =
((org.omg.CORBA.portable.ObjectImpl) obj)._get_delegate ();
_set_delegate (delegate);
}
private void writeObject (java.io.ObjectOutputStream s) throws java.io.IOException
{
String[] args = null;
java.util.Properties props = null;
String str = org.omg.CORBA.ORB.init (args, props).object_to_string (this);
s.writeUTF (str);
}
} // class _HelloStub

HelloPOA.java, the server skeleton

The Java class HelloPOA is the skeleton file for the server-side mapping, providing
basic CORBA functionality for the server. It extends org.omg.PortableServer.Servant,
and implements the InvokeHandler interface and the HelloOperations interface. The servant
class, HelloImpl, extends HelloPOA.
HelloApp/HelloPOA.java
package HelloApp;
/**
* HelloApp/HelloPOA.java
* Generated by the IDL-to-Java compiler
* from Hello.idl
*/
public abstract class HelloPOA extends org.omg.PortableServer.Servant
implements HelloApp.HelloOperations, org.omg.CORBA.portable.InvokeHandler
{
// Constructors
private static java.util.Hashtable _methods = new java.util.Hashtable ();
static
{
_methods.put ("sayHello", new java.lang.Integer (0));

_methods.put ("shutdown", new java.lang.Integer (1));
}
public org.omg.CORBA.portable.OutputStream _invoke (String method,
org.omg.CORBA.portable.InputStream in,
org.omg.CORBA.portable.ResponseHandler rh)
{
org.omg.CORBA.portable.OutputStream out = null;
java.lang.Integer __method = (java.lang.Integer)_methods.get (method);
if (__method == null)
throw new org.omg.CORBA.BAD_OPERATION (0,
org.omg.CORBA.CompletionStatus.COMPLETED_MAYBE);
switch (__method.intValue ())
{
case 0: // HelloApp/Hello/sayHello
{
String __result = null;
__result = this.sayHello ();
out = rh.createReply();
out.write_string (__result);
break;
}
case 1: // HelloApp/Hello/shutdown
{
this.shutdown ();
out = rh.createReply();
break;
}
default:
throw new org.omg.CORBA.BAD_OPERATION (0,
org.omg.CORBA.CompletionStatus.COMPLETED_MAYBE);
}
return out;
} // _invoke
// Type-specific CORBA::Object operations
private static String[] __ids = {

"IDL:HelloApp/Hello:1.0"};
public String[] _all_interfaces (org.omg.PortableServer.POA poa, byte[] objectId)
{
return (String[])__ids.clone ();
}
public Hello _this()
{
return HelloHelper.narrow(
super._this_object());
}
public Hello _this(org.omg.CORBA.ORB orb)
{
return HelloHelper.narrow(
super._this_object(orb));
}
} // class HelloPOA

Completing the application

To complete the application, the developer must write the client and server code.

HelloServer.java, a transient server

The example server consists of two classes, the servant and the server. The servant,
HelloImpl, is the implementation of the Hello IDL interface; each Hello instance is
implemented by a HelloImpl instance. The servant is a subclass of HelloPOA, which is
generated by the idlj compiler from the example IDL.

The servant contains one method for each IDL operation, in this example, the sayHello()
and shutdown() methods. Servant methods are just like ordinary Java methods; the extra
code to deal with the ORB, with marshaling arguments and results, and so on, is provided
by the skeleton.

This example shows the code for a transient server. The "Hello World" application can
also be written for a persistent server.

The following code is written by the developer.


HelloServer.java
// HelloServer.java
//
import HelloApp.*;

import org.omg.CosNaming.*;
import org.omg.CosNaming.NamingContextPackage.*;
import org.omg.CORBA.*;
import org.omg.PortableServer.*;
import org.omg.PortableServer.POA;
import java.util.Properties;
class HelloImpl extends HelloPOA {
private ORB orb;
public void setORB(ORB orb_val) {
orb = orb_val;
}
// implement sayHello() method
public String sayHello() {
return "\nHello world !!\n";
}
// implement shutdown() method
public void shutdown() {
orb.shutdown(false);
}
}
public class HelloServer {
public static void main(String args[]) {
try{
// create and initialize the ORB
ORB orb = ORB.init(args, null);
// get reference to rootpoa & activate the POAManager
POA rootpoa = POAHelper.narrow(orb.resolve_initial_references("RootPOA"));
rootpoa.the_POAManager().activate();
// create servant and register it with the ORB
HelloImpl helloImpl = new HelloImpl();
helloImpl.setORB(orb);
// get object reference from the servant
org.omg.CORBA.Object ref = rootpoa.servant_to_reference(helloImpl);
Hello href = HelloHelper.narrow(ref);
// get the root naming context

// NameService invokes the name service
org.omg.CORBA.Object objRef =
orb.resolve_initial_references("NameService");
// Use NamingContextExt which is part of the Interoperable
// Naming Service (INS) specification.
NamingContextExt ncRef = NamingContextExtHelper.narrow(objRef);
// bind the Object Reference in Naming
String name = "Hello";
NameComponent path[] = ncRef.to_name( name );
ncRef.rebind(path, href);
System.out.println("HelloServer ready and waiting ...");
// wait for invocations from clients
orb.run();
}
catch (Exception e) {
System.err.println("ERROR: " + e);
e.printStackTrace(System.out);
}
System.out.println("HelloServer Exiting ...");
}
}
HelloClient.java, the client application
This example shows a Java client application. A CORBA client can also be written as a
servlet, a JSP, an applet, etc.
HelloClient.java
//
import HelloApp.*;
import org.omg.CosNaming.*;
import org.omg.CosNaming.NamingContextPackage.*;
import org.omg.CORBA.*;
public class HelloClient
{
static Hello helloImpl;
public static void main(String args[])
{
try{
// create and initialize the ORB
ORB orb = ORB.init(args, null);

// get the root naming context
org.omg.CORBA.Object objRef =
orb.resolve_initial_references("NameService");
// Use NamingContextExt instead of NamingContext. This is
// part of the Interoperable naming Service.
NamingContextExt ncRef = NamingContextExtHelper.narrow(objRef);
// resolve the Object Reference in Naming
String name = "Hello";
helloImpl = HelloHelper.narrow(ncRef.resolve_str(name));
System.out.println("Obtained a handle on server object: " + helloImpl);
System.out.println(helloImpl.sayHello());
helloImpl.shutdown();
} catch (Exception e) {
System.out.println("ERROR : " + e);
e.printStackTrace(System.out);
}
}
}
Compiling and Running a Java CORBA application
This Hello World program lets the developer learn and experiment with all the tasks
required to develop almost any CORBA program that uses static invocation. Static invocation,
as explained earlier, uses a client stub for the invocation and a server skeleton for the
service being invoked, and is used when the interface of the object is known at compile time. If
the interface is not known at compile time, then dynamic invocation must be used.
This example requires a naming service, which is a CORBA service that allows CORBA
objects to be named by means of binding a name to an object reference. The ‘name binding’
may be stored in the naming service, and a client may supply the name to obtain the desired
object reference. The two options for Naming Services shipped with J2SE v.1.4 are
‘tnameserv’, a transient naming service, and orbd, which is a daemon process containing a
Bootstrap Service, a Transient Naming Service, a Persistent Naming Service, and a Server
Manager. This example uses orbd.
When running this example, when using Solaris software, it is necessary that the
developer logs in as ‘root’ to start a process on a port under 1024. For this reason, it is
recommend that a port number greater than or equal to 1024 is used. The -ORBInitialPort
option is used to override the default port number in this example. The following instructions
assume that port 1050 is used for the Java IDL Object Request Broker Daemon, orbd. A
different port can be substituted if necessary. When running these examples on a Windows
machine, substitute a backslash (\) in path names.
To run this client-server application on the development machine:
1. Change to the directory that contains the file Hello.idl.

2. Run the IDL-to-Java compiler, idlj, on the IDL file to create stubs and skeletons.
This step assumes that you have included the path to the java/bin directory in your
path.
idlj -fall Hello.idl
The -fall option must be used with the idlj compiler to generate both client and server-
side bindings. This command line will generate the default server-side bindings, which assumes
the POA Inheritance server-side model.
The idlj compiler generates a number of files. The actual number of files generated
depends on the options selected when the IDL file is compiled. The generated files provide
standard functionality, so you can ignore them until it is time to deploy and run your program.
The files generated by the idlj compiler for Hello.idl, with the -fall command line option, are
(as explained earlier):
 HelloPOA.java
This abstract class is the stream-based server skeleton, providing basic CORBA
functionality for the server. It extends org.omg.PortableServer.Servant and implements the
InvokeHandler interface and the HelloOperations interface. The server class HelloImpl
extends HelloPOA.
 _HelloStub.java
This class is the client stub, providing CORBA functionality for the client. It extends
org.omg.CORBA.portable.ObjectImpl and implements the Hello.java interface.
 Hello.java
This interface contains the Java version of our IDL interface. The Hello.java interface
extends org.omg.CORBA.Object, providing standard CORBA object functionality. It also
extends the HelloOperations interface and org.omg.CORBA.portable.IDLEntity.
 HelloHelper.java
This class provides auxiliary functionality, notably the narrow() method required to
cast CORBA object references to their proper types. The Helper class is responsible for
reading and writing the data type to CORBA streams, and inserting and extracting the data
type. The Holder class delegates to the methods in the Helper class for reading and writing.
 HelloHolder.java
This final class holds a public instance member of type Hello. Whenever the IDL type
is an out or an in-out parameter, the Holder class is used. It provides operations for
org.omg.CORBA.portable.OutputStream and org.omg.CORBA.portable.InputStream
arguments, which CORBA allows, but which do not map easily to Java’s semantics. The
Holder class delegates to the methods in the Helper class for reading and writing. It
implements org.omg.CORBA.portable.Streamable.
 HelloOperations.java
This interface contains the methods sayHello() and shutdown(). The IDL-to-Java
mapping puts all of the operations defined on the IDL interface into this file, which is shared
by both the stubs and skeletons.

3. Compile the .java files, including the stubs and skeletons (which are in the directory
HelloApp). This step assumes the java/bin directory is included in your path.
javac *.java HelloApp/*.java
4. Start orbd.
To start orbd from a UNIX command shell, enter:
orbd -ORBInitialPort 1050&
From an MS-DOS system prompt (Windows), enter:
start orbd -ORBInitialPort 1050
Port 1050 is the port on which the name server is run. The -ORBInitialPort argument
is a required command-line argument.
5. Start the Hello server:
To start the Hello server from a UNIX command shell, enter:
java HelloServer -ORBInitialPort 1050 -ORBInitialHost localhost&
From an MS-DOS system prompt (Windows), enter:
start java HelloServer -ORBInitialPort 1050 -ORBInitialHost localhost
For this example, you can omit -ORBInitialHost localhost since the name server is
running on the same host as the Hello server. If the name server is running on a different
host, use -ORBInitialHost nameserverhost to specify the host on which the IDL name
server is running. Specify the name server (orbd) port as done in the previous step, for
example, -ORBInitialPort 1050.
6. Run the client application:
java HelloClient -ORBInitialPort 1050 -ORBInitialHost localhost
For this example, you can omit -ORBInitialHost localhost since the name server is
running on the same host as the Hello client. If the name server is running on a different
host, use -ORBInitialHost nameserverhost to specify the host on which the IDL name
server is running. Specify the name server (orbd) port as done in the previous step, for
example, -ORBInitialPort 1050.
When you have finished, be sure to shut down or kill the name server (orbd). To do this
from a DOS prompt, select the window that is running the server and enter Ctrl+C to shut
it down. To do this from a Unix shell, find the process, and kill it. The server will continue to
wait for invocations until it is explicitly stopped.
The “Hello World” application can also be distributed to run on two machines - a client
and a server.
7.15 THE CORBA OBJECT MODEL
In CORBA, all communication between objects is done through object references.
Visibility to objects is provided only through passing references to those objects; objects
cannot be passed by value. Remote objects in CORBA remain remote. An object cannot
move or copy itself to another location.
Being an object-oriented architecture, CORBA has an object model. Because CORBA
is a distributed architecture, its object model differs somewhat from the traditional object
models. Three of the major differences between the CORBA object model and traditional
models lie in CORBA’s “semi-transparent” support for object distribution, its treatment of

object references, and its use of what are called object adapters, particularly the Basic
Object Adapter (BOA).
7.15.1 Object Distribution
To a CORBA client, a remote method call looks exactly like a local method call. Thus,
the distributed nature of CORBA objects is transparent to the users of those objects; the
clients are unaware that they are actually dealing with objects which are distributed on a
network. Object distribution, however, has more potential for failure, so CORBA offers a set of
system exceptions which can be raised by any remote method.
7.15.2 Object References
In a distributed application, there are two possible methods for one application component
to obtain access to an object. One method is known as passing by reference and the other
is passing by value. Figure 7.14 illustrates the passing by reference method.

Figure 7.14 Passing an Object by Reference.

In the passing by reference method, the first process, Process A, passes an object reference
to the second process, Process B. When Process B invokes a method on that object, the
method is executed by Process A because that process owns the object. Process B only has
visibility to the object (through the object reference), and thus can only request that Process
A execute methods on Process B’s behalf. Passing an object by reference means that a
process grants visibility of one of its objects to another process while retaining ownership of
that object. When an object is passed by reference, the object itself remains “in place”
while an object reference for that object is passed. Operations on the object through the
object reference are actually processed by the object itself.
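The same semantics can be illustrated with plain Java references. This is a local analogy only, not CORBA code; the Counter class is a hypothetical stand-in for a remote object owned by Process A. Two variables that refer to one object behave like two holders of one CORBA object reference: an invocation through either one acts on the single underlying object.

```java
// Local analogy for pass-by-reference: "Counter" is a hypothetical class,
// standing in for a remote CORBA object owned by Process A.
class Counter {
    private int value = 0;
    void increment() { value++; }
    int value() { return value; }
}

public class ByReferenceDemo {
    // Simulates Process B invoking a method through a reference it was handed:
    // the invocation changes the one shared object, not a copy.
    static int invokeThroughReference() {
        Counter original = new Counter();  // object owned by "Process A"
        Counter reference = original;      // reference handed to "Process B"
        reference.increment();             // method runs against the same object
        return original.value();           // visible to the owner: now 1
    }

    public static void main(String[] args) {
        System.out.println(invokeThroughReference()); // prints 1
    }
}
```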
Figure 7.15 illustrates the method of passing an object between application components
using passing by value.

In this method, the actual state of the object (such as the values of its member variables)
is passed to the requesting component through serialization. When methods of the object are
invoked by Process B, they are executed by Process B instead of Process A, where the
original object resides. Furthermore, because the object is passed by value, the state of the
original object is not changed; only the copy (now owned by Process B) is modified. Generally,
it is the responsibility of the developer to write the code that serializes and deserializes
objects. When an object is passed by value, the object’s state is copied and passed to its
destination, where a new copy of the object is instantiated. Operations on that object’s copy
are processed by the copy, not by the original object.

Figure 7.15 Passing an Object by Value.

Serialization refers to the encoding of an object’s state into a stream, such as a disk
file or network connection. When an object is serialized, it can be written to such a stream

and subsequently read and deserialized, a process that converts the serialized data containing
the object’s state back into an instance of the object.
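A serialization round trip of this kind can be sketched in standard Java. This is a minimal local illustration of pass-by-value, not CORBA code; the Account class is hypothetical. The copy produced by deserialization is an independent object, so changes to it do not affect the original.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ByValueDemo {
    // Hypothetical serializable object whose state can be copied across a stream.
    static class Account implements Serializable {
        int balance = 100;
    }

    // Serialize to a byte stream and deserialize, producing an independent copy:
    // the essence of passing an object by value between processes.
    static Account copyByValue(Account original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);   // encode the object's state
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Account) in.readObject(); // decode into a new instance
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Account original = new Account();
        Account copy = copyByValue(original);
        copy.balance = 0;                      // modifying the copy...
        System.out.println(original.balance);  // ...leaves the original unchanged: prints 100
    }
}
```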
One important aspect of the CORBA object model is that all objects are passed by
reference. In order to facilitate passing objects by value in a distributed application, in addition
to passing the state of the object across the network, it is also necessary to ensure that the
component receiving the object has implementations for the methods supported by that
object.

There are a few issues associated with passing objects by reference only. Remember
that when passing by reference is the only option, methods invoked on an object are always
executed by the component that has created that object. An object cannot migrate from one
application component to another. Hence all method calls are remote method calls. But if a
component invokes a lengthy series of method calls on a remote object, a great deal of
overhead can be consumed by the communication between the two components. For this
reason, it might be more efficient to pass an object by value so the component using that
object can manipulate it locally.
7.15.3 Basic Object Adapters (BOAs)
The BOA provides CORBA objects with a common set of methods for accessing
ORB functions. These functions range from user authentication to object activation to object
persistence. The BOA is, in effect, the CORBA object’s interface to the ORB. According
to the CORBA specification, the BOA should be available in every ORB implementation,
and this seems to be the case with most (if not all) CORBA products available.
One particularly important feature of the BOA is its object activation and deactivation
capability. The BOA supports four types of activation policies, which indicate how application
components are to be initialized. These activation policies include the following:
 The shared server policy, in which a single server (which in this context usually
means a process running on a machine) is shared between multiple objects
 The unshared server policy, in which a server contains only one object
 The server-per-method policy, which automatically starts a server when an object
method is invoked and exits the server when the method returns
 The persistent server policy, in which the server is started manually (by a user,
batch job, system daemon, or some other external agent)
7.16 THE CORBA COMPONENT MODEL (CCM)
Increasingly, over the last decade, CORBA has formed the basis of several of the
leading Enterprise Application Integration (EAI) solutions. In fact, it is one of most widely
deployed and well-proven mechanism for software objects to communicate with one another.
As the complexity of solutions increased, there was a growing need to extend to a universal
component model, enabling inter-working with EJBs and support for SOAP/WSDL web
services. The CORBA Component Model (CCM) specification was crafted carefully over three
years to assure full integration of J2EE, .NET, ActiveX, and the CORBA 2 object model. CCM
was adopted as an OMG standard by the OMG Board of Directors at the Technical Meeting
in Yokohama, Japan, in April 2002. CCM is a standard for integrating components across both
multiple programming languages and multiple operating systems.

The Object Management Architecture (OMA) in the CORBA 2.x specification defines
an advanced Distributed Object Computing (DOC) middleware standard for building portable
distributed applications. The CORBA 2.x specification focuses on interfaces, which are
essentially contracts between clients and servers that define how clients view and access
object services provided by a server. Despite its advanced capabilities, however, the CORBA
2.x standard has the following limitations:
 Lack of functional boundaries. The CORBA 2.x object model treats all interfaces
as client/server contracts. Inter-dependencies between collaborating object
implementations were left to the application developers to handle.
 Lack of generic application server standards. CORBA 2.x does not specify a
generic application server framework to perform common server configuration work,
including initializing a server and its QoS policies, providing common services (such
as notification or naming services), and managing the runtime environment of each
component. Although CORBA 2.x standardized the interactions between object
implementations and object request brokers (ORBs), server developers must still
determine how: (1) object implementations are installed in an ORB; and (2) the
ORB and object implementations interact.
 Lack of software configuration and deployment standards. There is no standard
way to distribute and start up object implementations remotely in CORBA 2.x
specifications. Application administrators must therefore resort to in-house scripts
and procedures to deliver software implementations to target machines, configure
the target machine and software implementations for execution, and then instantiate
software implementations to make them ready for clients.
What is the CORBA Component Model (CCM)?
The CORBA Component Model (CCM) is a component middleware that addresses
limitations with earlier generations of DOC middleware. CCM is a multi-language, multi-
platform component standard from OMG, which represents a major extension for enterprise
computing. The CCM specification extends the CORBA object model to support the concept
of components and establishes standards for implementing, packaging, assembling, and
deploying component implementations. From a client perspective, a CCM component is an
extended CORBA object that encapsulates various interaction models via different interfaces
and connection operations. From a server perspective, components are units of implementation
that can be installed and instantiated independently in standard application server runtime
environments stipulated by the CCM specification. Components are larger building blocks
than objects, with more of their interactions managed to simplify and automate key aspects
of construction, composition, and configuration into applications.
It has been argued that CCM is a technical improvement on EJB. In fact, CCM joins
together the best of the .NET and J2EE component models: .NET is multi-language but
single-platform, while J2EE is single-language but multi-platform.
Key terms
A component is an implementation entity that exposes a set of ports, which are named
interfaces and connection points that components use to collaborate with each other. The
CCM specification introduces the concept of components and the definition of a
comprehensive set of interfaces and techniques for specifying implementation, packaging,
and deployment of components. The interfaces and connection points are illustrated in Figure
7.14.

 Facets - The component facets are the interfaces that the component exposes.
 Receptacles - These allow components to “hook” themselves together. Component
systems contain many components that work together to provide the client
functionality. The receptacle allows a component to declare its dependency on an
object reference that it must use. Receptacles provide the mechanics to specify
interfaces required for a component to function correctly.
 Event sources/Sinks - Allow components to work with each other without being
tightly linked. This is loose coupling as provided by the Observer design pattern.
When a component declares its interest to publish or emit an event, it is an event
source. A publisher is an exclusive provider of an event while an emitter shares an
event channel with other event sources. Other components become subscribers or
consumers of those events by declaring an event sink.
 Attributes - An extension of the traditional notion of CORBA interface attributes
that allow component values to be configured, the CCM version of attribute allows
operations that access and modify values to raise exceptions. This is a useful feature
for raising a configuration exception after the configuration has completed and an
attribute has been accessed or changed.
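In CCM's extended IDL (often called IDL3), these port kinds appear as declarations on a component. The sketch below is illustrative only: the Account component and the Balance, Notifier, AlarmEvent, and TickEvent types are hypothetical names, not part of the Hello World example developed in this chapter.

```idl
component Account {
    provides  Balance    bal;   // facet: an interface the component exposes
    uses      Notifier   note;  // receptacle: an interface it depends on
    publishes AlarmEvent alarm; // event source (exclusive publisher)
    consumes  TickEvent  tick;  // event sink (consumer of events)
    attribute string     name;  // configurable attribute
};
```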

Figure 7.14 Component Model

A container provides the server runtime environment for component implementations
called executors. It contains various pre-defined hooks and operations that give components
access to strategies and services, such as persistence, event notification, transaction,
replication, load balancing, and security. Each container defines a collection of runtime
strategies and policies, such as an event delivery strategy and component usage categories,
and is responsible for initializing and providing runtime contexts for the managed components.

Component implementations have associated metadata written in XML that specify the
required container strategies and policies.
In addition to the building blocks outlined above, the CCM specification also standardizes
various aspects of stages in the application development lifecycle, notably component
implementation, packaging, assembly, and deployment, where each stage of the lifecycle
adds information pertaining to these aspects.

The CCM Component Implementation Framework (CIF) automatically generates
component implementation skeletons and persistent state management mechanisms using
the Component Implementation Definition Language (CIDL). CCM packaging tools bundle
implementations of a component with related XML-based component metadata. CCM
assembly tools use XML-based metadata to describe component compositions, including
component locations and interconnections among components, needed to form an assembled
application. Finally, CCM deployment tools use the component assemblies and composition
metadata to deploy and initialize applications.
The tools and mechanisms defined by CCM collaborate to address the limitations with
DOC middleware described earlier. The CCM programming paradigm separates the concerns
of composing and provisioning reusable software components into the following development
roles within the application lifecycle:
 Component designers, who define the component features by specifying what each
component does and how components collaborate with each other and with their
clients. Component designers determine the various types of ports that components
offer and/or require.
 Component implementers, who develop component implementations and specify
the runtime support a component requires via metadata called component descriptors.
 Component packagers, who bundle component implementations with metadata giving
their default properties and their component descriptors into component packages.
 Component assemblers, who configure applications by selecting component
implementations, specifying component instantiation constraints, and connecting ports
of component instances via metadata called assembly descriptors.
 System deployers, who analyze the runtime resource requirements of assembly
descriptors and prepare and deploy required resources where component assemblies
can be realized.
The current CCM specification defines the following four models:
The abstract model
The CCM abstract model allows developers to define the interfaces and properties of
components. The OMG IDL (Interface Definition Language) has been extended to express
component interconnections. A component can offer multiple interfaces, each one defining
a particular point of view from which to interact with the component. The four kinds of
interfaces are named ports.
Two interaction modes are provided: facets for synchronous invocations, and event
sinks for asynchronous notifications. Moreover, a component can define its required interfaces,
which define how the component interacts with others: receptacles for synchronous
invocations, and event sources for asynchronous notifications.

The abstract model also defines instance managers, called component homes, which
are based on two design patterns: factory and finder. A home is defined to create and
retrieve a specific component type, and can only manage instances of that type. Nevertheless,
it is possible to define several home types for a single component type.
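In IDL3 a home is declared with the `home` keyword and a `manages` clause. An illustrative sketch with invented names:

```idl
component StockMonitor { /* ports omitted */ };

home StockMonitorHome manages StockMonitor {
    // Beyond the implicit create() operation, explicit factory
    // (and finder) operations may be declared.
    factory create_with_interval(in long poll_interval);
};
```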
The programming model
The CCM programming model defines the Component Implementation Definition
Language (CIDL) which is used to describe the implementation structure of a component
and its system requirements: the set of implementation classes, the abstract persistence
state, etc. This language is associated with the Component Implementation Framework
(CIF). This framework allows developers to merge the functional part of the component they
have produced with the non-functional part generated from OMG IDL3 and CIDL descriptions.
The functional part includes the implementation of the provided interfaces (facets and event
sinks).
Using the OMG IDL3 definition as well as the CIDL description of the component, a
compiler produces the skeleton of the component implementation. This skeleton includes
the non-functional part of the component implementation, i.e. un-marshaling GIOP requests,
port management, activation, and persistence requirements. These skeletons are implemented
on top of APIs provided by containers. Thus, a developer only has to write the functional
code in order to complete the component implementation. The compiler also produces the
OMG IDL2 mapping as well as an XML descriptor for the component implementation. The
OMG IDL2 mapping will be used by component clients, and the XML descriptor will be used
during deployment of the component implementation, as discussed in the following subsection.
The generated part of the implementation also provides a dynamic introspection API. It includes
operations to discover component ports either in a generic manner (the same operations for any
component type) or in a type-specific one (operations generated according to the component
type). These operations can be used by a dynamic platform to introspect and interconnect
component instances at runtime.
The deployment model
The CCM deployment model is based on the use of software packages, i.e. “ZIP”
archives containing component descriptors and implementations. Descriptors are written
using the Open Software Description (OSD) language, which is an XML vocabulary. This
language allows architects to describe four kinds of descriptors:
 The Software Package Descriptor provides global information about a package
such as the author of the package, the license, as well as interfaces, properties,
dependencies, and implementations of a component.
 The CORBA Component Descriptor contains technical information about the
component implementation. This information is generated from the CIDL description
and the administrator only has to set the policies (like security).
 The Component Assembly Descriptor describes the initial configuration of the
application. It defines which components to use and how to interconnect them.
 The Property File Descriptor contains the value of the component properties. These
are used while configuring the various component instances.
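The descriptors are plain XML files bundled inside the package archive. As a rough illustration of the first kind, a software package descriptor might look like the sketch below; the element names follow the OSD vocabulary, but the component name, file names, and IDs are invented for this example:

```xml
<!-- Illustrative Software Package Descriptor sketch (names and IDs invented) -->
<softpkg name="StockMonitor" version="1,0,0,0">
  <pkgtype>CORBA Component</pkgtype>
  <idl id="IDL:StockMonitor:1.0"/>
  <descriptor type="CORBA Component">
    <fileinarchive name="StockMonitor.ccd"/>   <!-- CORBA Component Descriptor -->
  </descriptor>
  <implementation id="DCE:00000000-0000-0000-0000-000000000000">
    <os name="Linux"/>
    <code type="DLL">
      <fileinarchive name="StockMonitor.so"/>
      <entrypoint>create_StockMonitorHome</entrypoint>
    </code>
  </implementation>
</softpkg>
```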

The deployment process allows architects to easily install an application on various
sites. A ComponentInstallation object and an AssemblyFactory object have to be running on
any site where an application could be installed. These two objects are used by the
deployment tool to install

the packages of the application, and to create an instance of it. AssemblyFactory objects
manage Assembly objects that represent deployed applications. The deployment tool provides
an OSD assembly descriptor to the Assembly object, which actually performs the deployment
of the application. Thus, the deployment process is fixed in this object and cannot be controlled
by an architect or by the application.

The execution model


The CCM execution model defines containers as its central technological artifact. A container
is a runtime environment for component instances and their homes. Several containers
can be hosted by the same component server.
A container is more than a simple execution environment. Containers hide the complexity
of most of the system services like the POA, transaction, security, persistence, and notification
services. Thus, containers take part in the management of non-functional aspects of a
component. Clients interact with a component instance through the generated OMG IDL2
interfaces.
Additionally, the component server has the ability to download archives from repositories.
Thus, packages can be downloaded at any time, in particular only when a component is
actually required.
7.17 CORBA ALTERNATIVES
When designing and implementing distributed applications, CORBA is not the developer’s
only choice. Other mechanisms exist by which such applications can be built. Depending on
the nature of the application—ranging from its complexity to the platform(s) it runs on to the
language(s) used to implement it—there are a number of alternatives for a developer to
consider. Some of these concepts have also been discussed in the earlier units.
7.17.1 Socket Programming
In most modern systems, communication between machines, and sometimes between
processes in the same machine, is done through the use of sockets. Simply put, a socket is
a channel through which applications can connect with each other and communicate. The
most straightforward way to communicate between application components, then, is to use
sockets directly (this is known as socket programming), meaning that the developer writes
data to and/or reads data from a socket.
The Application Programming Interface (API) for socket programming is rather low-
level. As a result, the overhead associated with an application that communicates in this
fashion is very low. However, because the API is low-level, socket programming is not
well-suited to handling complex data types, especially when application components reside
on different types of machines or are implemented in different programming languages.
Whereas direct socket programming can result in very efficient applications, the approach
is usually unsuitable for developing complex applications.
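The pattern just described, where one side writes raw bytes to a socket and the other reads them, can be shown with a minimal echo exchange. The sketch below uses Python's standard socket API (the book's exercises use Java or C++, but the calls are directly analogous):

```python
import socket
import threading

def run_echo_server(server_sock):
    # Accept a single connection and echo whatever is received back to the client.
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Server side: bind to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
t = threading.Thread(target=run_echo_server, args=(server,))
t.start()

# Client side: connect, send raw bytes, and read the echoed reply.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"good")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())  # -> good
```

Note that everything beyond moving raw bytes, such as marshaling complex data types, is left entirely to the application; this is precisely the gap that the higher-level approaches discussed next are designed to fill.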
7.17.2 Remote Procedure Call (RPC)
One rung on the ladder above socket programming is Remote Procedure Call (RPC).
RPC provides a function-oriented interface to socket-level communications. Using RPC,
rather than directly manipulating the data that flows to and from a socket, the developer
defines a function, much like those in a procedural language such as C, and generates
code that makes that function look like a normal function to the caller. Under the hood, the

function actually uses sockets to communicate with a remote server, which executes the
function and returns the result, again using sockets.
Because RPC provides a function-oriented interface, it is often much easier to use
than raw socket programming. RPC is also powerful enough to be the basis for many client/
server applications. Although there are varying incompatible implementations of RPC protocol,
a standard RPC protocol exists that is readily available for most platforms.
7.17.3 OSF Distributed Computing Environment (DCE)
The Distributed Computing Environment (DCE), a set of standards pioneered by the
Open Software Foundation (OSF), includes a standard for RPC. Although the DCE standard
has been around for some time it has not gained wide acceptance.
7.17.4 Microsoft Distributed Component Object Model (DCOM)
The Distributed Component Object Model (DCOM), Microsoft’s entry into the distributed
computing foray, offers capabilities similar to CORBA. DCOM is a relatively robust object
model that enjoys particularly good support on Microsoft operating systems. More about
this model is discussed in the next unit.
One interesting development concerning CORBA and DCOM is the availability of
CORBA-DCOM bridges, which enable CORBA objects to communicate with DCOM
objects and vice versa. Because of the “impedance mismatch” between CORBA and DCOM
objects (meaning that there are inherent incompatibilities between the two that are difficult
to reconcile), the CORBA-DCOM bridge is not a perfect solution, but it can prove useful in
situations where both DCOM and CORBA objects might be used.
7.17.5 Java Remote Method Invocation (RMI)
Another alternative is Java Remote Method Invocation (RMI), a very CORBA-like
architecture with a few twists. One advantage of RMI is that it supports the passing of
objects by value. A disadvantage, however, is that RMI is a Java-only solution; that is, RMI
clients and servers must be written in Java. For all-Java applications—particularly those
that benefit from the capability to pass objects by value—RMI might be a good choice, but
if there is a chance that the application will later need to interoperate with applications
written in other languages, CORBA is a better choice. Fortunately, full CORBA
implementations already exist for Java, ensuring that Java applications interoperate with the
rest of the CORBA world.
7.18 INTERFACE DEFINITION LANGUAGE
IDL is the language used to define interfaces between application components. IDL is
not a procedural language; it can define only interfaces, not implementations. IDL ensures
that data is properly exchanged between dissimilar languages; it is the responsibility of the
IDL specification, and of the IDL compilers, to define data types in a language-independent
way. IDL is independent of any programming language. It achieves this language
independence through the concept of a language mapping: a specification that maps IDL
language constructs to the constructs of a particular programming language. The complete
details of IDL grammar and syntax are given in the IDL specification.
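For example, a small IDL interface such as the following (the names are invented for illustration) would, roughly speaking, be translated by the Java mapping into a Java interface and by the C++ mapping into C++ proxy classes:

```idl
module Bank {
    exception InsufficientFunds { double available; };

    interface Account {
        readonly attribute double balance;
        void deposit(in double amount);
        void withdraw(in double amount) raises (InsufficientFunds);
    };
};
```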
7.18.1 Lexical Conventions
This section describes the tokens in an OMG IDL specification: comments, identifiers,
keywords, and literals (integer, character, fixed-point and floating-point constants, and
string literals).


An OMG IDL specification logically consists of one or more files. A file is conceptually
translated in several phases. The first phase is preprocessing, which performs file inclusion
and macro substitution. Preprocessing is controlled by directives introduced by lines having
# as the first character other than white space. The result of preprocessing is a sequence of
tokens. Such a sequence of tokens, that is, a file after preprocessing, is called a translation
unit.
Tokens
There are five kinds of tokens: identifiers, keywords, literals, operators, and other
separators. Blanks, horizontal and vertical tabs, newlines, form feeds, and comments
(collectively, white space) are ignored except as they serve to separate tokens. If
the input stream has been parsed into tokens up to a given character, the next token is taken
to be the longest string of characters that could possibly constitute a token.
Comments
The characters /* start a comment, which terminates with the characters */. These
comments do not nest. The characters // start a comment, which terminates at the end of
the line on which they occur. The comment characters //, /*, and */ have no special meaning
within a // comment and are treated just like other characters. Similarly, the comment
characters // and /* have no special meaning within a /* comment. Comments may contain
alphabetic, digit, graphic, space, horizontal tab, vertical tab, form feed, and newline characters.
Identifiers
Identifiers are an arbitrarily long sequence of ASCII alphabetic, digit, and underscore
(“_”) characters. The first character must be an ASCII alphabetic character. All characters
are significant. When comparing two identifiers to see if they collide:
 Upper- and lower-case letters are treated as the same letter.
 All characters are significant.
Identifiers that differ only in case collide, and will yield a compilation error under certain
circumstances. An identifier for a given definition must be spelled identically (e.g., with
respect to case) throughout a specification. There is only one namespace for OMG IDL
identifiers in each scope. Using the same identifier for a constant and an interface, for
example, produces a compilation error.
Escaped Identifiers
As IDL evolved, new keywords added to the language could inadvertently collide with
identifiers used in existing IDL and in programs that use that IDL. Fixing such collisions
would require not only the IDL to be modified, but also the programming language code
that depends upon that IDL, since the language mapping rules for the renamed IDL identifiers
would change the mapped identifier names (e.g., method names). To avoid this, an identifier
may be prefixed with an underscore (“_”); the leading underscore escapes any clash with a
keyword and is not considered part of the identifier itself. The following is a non-exhaustive
list of implications of these rules:
 The underscore does not appear in the Interface Repository.
 The underscore is not used in the DII and DSI.
 The underscore is not transmitted over “the wire.”
 Case sensitivity rules are applied to the identifier after stripping off the leading
underscore.
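For example, if a legacy specification used the identifier `native` before it became an IDL keyword, the declaration can be kept valid by escaping it:

```idl
// "native" is a keyword in later IDL revisions; the leading underscore
// escapes the collision. The identifier proper is still "native": the
// underscore is not stored in the Interface Repository and is not
// transmitted over the wire.
typedef long _native;
```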

7.18.2 Literals

The different types of literals include integer, character, floating-point, string, and
fixed-point literals.
Integer Literals
An integer literal consisting of a sequence of digits is taken to be decimal (base ten),
unless it begins with 0 (digit zero). A sequence of digits starting with 0 is taken to be an octal
integer (base eight). The digits 8 and 9 are not octal digits. A sequence of digits preceded by
0x or 0X is taken to be a hexadecimal integer (base sixteen). The hexadecimal digits include
a or A through f or F with decimal values ten through fifteen, respectively. For example, the
number twelve can be written as 12, 014, or 0XC.
Character Literals
A character literal is one or more characters enclosed in single quotes, as in 'x'.
Character literals have type char. A character is an 8-bit quantity with a numerical value
between 0 and 255 (decimal).
Floating-point Literals
A floating-point literal consists of an integer part, a decimal point, a fraction part, an e
or E, and an optionally signed integer exponent. The integer and fraction parts both consist
of a sequence of decimal (base ten) digits. Either the integer part or the fraction part (but
not both) may be missing; either the decimal point or the letter e (or E) and the exponent (but
not both) may be missing.
String Literals
A string literal is a sequence of characters, excluding the character with numeric
value 0, surrounded by double quotes, as in “...”. Adjacent string literals are
concatenated, and the characters in concatenated strings are kept distinct.
For example, “\xA” “B” contains the two characters ‘\xA’ and ‘B’ after concatenation
(and not the single hexadecimal character ‘\xAB’).
The size of a string literal is the number of character literals enclosed by the quotes
after concatenation. Within a string, the double quote character “ must be preceded by a \.
A string literal may not contain the character ‘\0’. Wide string literals have an L prefix, for
example:
const wstring S1 = L”Hello”;
Fixed-Point Literals
A fixed-point decimal literal consists of an integer part, a decimal point, a fraction part
and a d or D. The integer and fraction parts both consist of a sequence of decimal (base 10)
digits. Either the integer part or the fraction part (but not both) may be missing; the decimal
point (but not the letter d (or D)) may be missing.
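The literal forms described in this section can be summarized with a few constant declarations (the constant names are invented):

```idl
const long   DOZEN_DEC = 12;         // decimal
const long   DOZEN_OCT = 014;        // octal, also twelve
const long   DOZEN_HEX = 0XC;        // hexadecimal, also twelve
const char   LETTER    = 'x';        // character literal
const double RATE      = 1.5e-3;     // floating-point literal
const string GREETING  = "Hel" "lo"; // adjacent string literals concatenate
const fixed  PRICE     = 19.99D;     // fixed-point literal
```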
7.18.3 Pre-Processing
OMG IDL is preprocessed according to the specification of the preprocessor in
“International Organization for Standardization. 1998. ISO/IEC 14882 Standard for the C++
Programming Language. Geneva: International Organization for Standardization.” The

preprocessor may be implemented as a separate process or built into the IDL compiler.
Lines beginning with # (also called “directives”) communicate with this preprocessor.
White space may appear before the #. These lines have syntax independent of the rest
of OMG IDL; they may appear anywhere and have effects that last (independent of the
OMG IDL scoping rules) until the end of the translation unit. The textual location of OMG
IDL-specific pragmas may be semantically constrained.
A preprocessing directive (or any line) may be continued on the next line in a source
file by placing a backslash character (“\”) immediately before the newline at the end of the
line to be continued. The preprocessor effects the continuation by deleting the backslash
and the newline before the input sequence is divided into tokens. A backslash character may
not be the last character in a source file.
A preprocessing token is an OMG IDL token, a file name as in a #include directive, or
any single character other than white space that does not match another preprocessing
token. The primary use of the preprocessing facilities is to include definitions from other
OMG IDL specifications. Text in files included with a #include directive is treated as if it
appeared in the including file, except that RepositoryId related pragmas are handled in a
special way.
7.19 CONCLUSION
Today’s enterprises need flexible, open information systems. Most enterprises have a
wide range of technologies, operating systems, hardware platforms, and programming
languages that need to work together to make the enterprise function. CORBA is a standard
middleware architecture that can be used to develop and integrate a wide variety of distributed
systems that use a variety of hardware, operating systems, and programming languages.
CORBA is an open, standard solution for distributed object systems. CORBA can be
used to describe an enterprise system in object-oriented terms, regardless of the platforms
and technologies used to implement its different parts. CORBA objects communicate directly
across a network, using standard protocols, regardless of the programming languages used
to create objects or the operating systems and platforms on which the objects run.
CORBA solutions are available for every common environment and are used to integrate
applications written in C, C++, Java, Ada, Smalltalk, COBOL, PL/I, COM, LISP, Python,
and XML, running on embedded systems, PCs, UNIX hosts, and mainframes. CORBA
objects running in these environments can cooperate seamlessly. CORBA offers an extensive
infrastructure that supports all the features required by distributed business objects. This
infrastructure includes important distributed services, such as transactions, messaging, and
security.
HAVE YOU UNDERSTOOD QUESTIONS

 What is CORBA?
 Can you trace the history and development of CORBA?
 Explain OMG’s Object Management Architecture
 What is the architecture of the ORB and the role of its principal components?


 What is the difference between static and dynamic invocation of objects? How
does CORBA implement both types of invocations?
 What are the advantages of the CORBA architecture?
 How do you develop and deploy a CORBA application?

SUMMARY

 CORBA (Common Object Request Broker Architecture) is a standard middleware
architecture that can be used to develop and integrate a wide variety of distributed
systems
 CORBA has been developed by the Object Management Group (OMG), an
international, independent, not-for-profit consortium whose members include
almost all the major vendors and developers of distributed object technology, including
platform, database, and application vendors as well as software tool and corporate
developers
 CORBA Architecture and Specifications have evolved over the past several years.
Many versions of CORBA were released starting from 1991. The current version
is CORBA 3.0 released in 2002
 The software that implements the CORBA specification is called the ORB. The
ORB, the heart of CORBA, is responsible for all the mechanisms required to perform
the tasks of finding and managing the communications between objects
 IDL specifies interfaces between CORBA objects, and ensures CORBA’s language
independence. Interfaces described in IDL can be mapped to any programming
language and hence CORBA applications and components are independent of the
languages used to implement them.
 The Client is the entity that wishes to perform an operation on the object and the
Object Implementation is the code and data that actually implements the object.
The Client requests services through the IDL Stub or dynamically through the
Dynamic Invocation Interface (DII).
 A General Inter-ORB Protocol (GIOP) is defined to support interoperability of
different ORB products. The specification of GIOP over TCP/IP connections is defined
as the Internet Inter-ORB Protocol (IIOP).
 With the help of the Object Adapter, which connects the object to the ORB, the
ORB Core passes the request through an IDL Skeleton or dynamically through the
Dynamic Skeleton Interface (DSI) to the Object
 The Interface Repository (IFR) contains all the registered component interfaces,
the methods they support, and the parameters they require.
 The Implementation Repository stores the information that ORBs use to locate and
activate implementations of objects.
 The Portable Object Adapter (POA) is a standard component in the CORBA
specifications
 CORBA offers different methods to implement communication and data
transfer between objects. The basic communication models provided by CORBA
are synchronous two-way, one-way and deferred synchronous


 The CORBA Component Model (CCM) is a component middleware that addresses
limitations with earlier generations of DOC middleware. CCM is a multi-language,
multi-platform component standard from OMG, which represents a major extension
for enterprise computing. The CCM specification extends the CORBA object model
to support the concept of components and establishes standards for implementing,
packaging, assembling, and deploying component implementations.
EXERCISES
Part I
1. What is the expansion for CORBA?
a) common object request for business application
b) command object request broker application
c) common object request broker architecture
d) command object request broker architecture
2. What is Stub?
a) Client side proxy
b) Server side proxy
c) Both
d) None
3. What is IDL?
a) Independent definition language
b) Interface definition language
c) Interactive design language
d) None
4. Is CORBA a?
a) Standard
b) Product
c) Language
d) None of the above
5. Is CORBA language-independent?
a) Yes
b) No
c) Maybe
d) None of the above
6. The ORB is responsible for:
a) Returning output values to objects
b) Maintaining information about registered interfaces
c) Delivering requests to objects
d) Object implementation


7. Location Transparency of CORBA means that it does not matter to the client if the
requested object is:
a) On the same processor but different process
b) In a different process in another processor
c) Both a and b are true
d) Both are false
8. Which of the following contains functionality that is required by both client and server?
a) Dynamic Invocation Interface
b) Dynamic Skeleton Interface
c) Object Adapter
d) ORB Interface
9. An Object Reference :
a) Provides the interface through which a method receives a request
b) Needed to invoke an object by a client
c) Interface an object implementation with the ORB
d) Makes a call on the ORB using the interface
10. A binary protocol used for communication between ORBs
a) IDL
b) DII
c) GIOP
d) Stubs and skeletons
Part – II
11. What is the role of OMA?
12. What is meant by language mapping?
13. Explain the difference between static and dynamic invocation. Explain how CORBA
handles the two types of invocation.
14. What are the functions of Interface and Implementation Repositories?
15. What are the advantages of CORBA CCM model over the Object Model?
16. Briefly explain about CORBA Alternatives.
17. Describe the Overview of CORBA in detail
18. Explain in detail about Object implementation.
19. Discuss the working of Portable Object Adapter.
20. Briefly explain the steps to build an application using CORBA.
Part III
21. Develop Client and Server Programs for the following. You can use Java or C++
to write the programs.


a) The Server simply executes and listens for client requests. The client connects to
the server and sends it a string. The Server then echoes the string back to the
client, which displays it at the prompt.
b) Write a server program to reverse the string of words sent by the client program.
For example, if the client sends the string “good”, the server should send the
response “doog”, which is displayed by the client.
c) The server provides two functions that can be called by a client. (a) add two numbers
(b) subtract two numbers. Write a client program to call these two functions with
suitable parameters and display the results.
d) Write a server program that converts temperature expressed in Fahrenheit to
Centigrade and vice-versa. The client program will send as parameters the
temperature and type as Centigrade or Fahrenheit.
e) Develop a client that can receive messages from a server. The client has to register
itself with the server and then listen in to receive messages from the server.

Part I – Answers

1. c) 2. a) 3. b) 4. a) 5. a) 6. c) 7.c) 8.d) 9. b) 10.c)

REFERENCES

1. The website of the Object Management Group: http://www.omg.org
2. History of CORBA: http://www.omg.org/gettingstarted/history_of_corba.htm
3. Randy Otte, Paul Patrick, and Mark Roy. Understanding CORBA. Prentice-Hall, 1995.
4. CORBA Overview: www.cs.wustl.edu/~schmidt/corba-overview.html
5. Steve Vinoski. CORBA: Integrating Diverse Applications Within Distributed
Heterogeneous Environments.
6. Java Hello World example: http://java.sun.com/j2se/1.4.2/docs/guide/idl/
jidlSampleCode.html
7. Wang, Schmidt, and O’Ryan. Overview of the CORBA Component Model.
UNIT V

CHAPTER 8
COMPONENT OBJECT MODEL (COM)
8.1 INTRODUCTION
This unit introduces and explains COM (Component Object Model), which is a platform-
independent, distributed, object-oriented system for creating binary software components
that can interact. COM is the foundation technology for Microsoft’s OLE (Object Linking
and Embedding) and ActiveX (Internet-enabled components) technologies, DCOM
(Distributed Component Object Model) as well as others.
A client that needs to communicate with a component in another process cannot call
the component directly, but has to use some form of inter-process communication provided
by the operating system. COM provides this communication in a completely transparent
fashion: it intercepts calls from the client and forwards them to the component in another
process. These components can be within a single process, in other processes, or even on
remote machines.
COM objects can be created with a variety of programming languages, and COM is
implemented in a language-neutral way. Implementations of objects can be used in
environments different from the one they were created in, even across machine boundaries.
COM allows reuse of objects with no knowledge of their internal implementation because it
forces component implementers to provide well-defined interfaces that are separate from
the implementation.
8.2 LEARNING OBJECTIVES
At the end of this Unit, the reader would be familiar with the following concepts
 COM Architecture
 COM Interfaces
 Building up COM client and Server
 Marshaling and Remoting
8.3 COM OVERVIEW
The Component Object Model (COM) is a software architecture that allows the
components made by different software vendors to be combined into a variety of applications.
COM defines a standard for component interoperability; is not dependent on any particular
programming language and is available on multiple platforms and is extensible.


COM allows applications and systems to be built from components supplied by different
software vendors. COM is the underlying architecture that forms the foundation for higher-
level software services, like those provided by OLE, ActiveX, DCOM and others.
COM is one of the dominant component architectures in use today. The primary focus
in desktop software development is to create front-end applications with which users interact.
An optimized development environment for creating such applications requires pre-built
high-level components.
COM supports a language-neutral interface definition language (IDL) that is used to
describe the interface of a COM component. Using IDL, the designer of a COM component
can describe the interfaces, methods and properties that are supported by the component.
Client applications rely on the IDL definition of the COM component rather than on
implementation-specific details such as programming language and implementation platform.
8.3.1 COM Evolution
In the early 1990s, Microsoft made a strong commitment to Object Linking and
Embedding (OLE). Microsoft quickly recognized that to effectively evolve OLE, it needed
a standard mechanism for packaging components. OLE services span various aspects of
component software, including compound documents, custom controls, inter-application
scripting, data transfer, and other software interactions. Cross-language interoperability was
also crucial so that those components could be implemented in a variety of languages and
then combined in an arbitrary fashion. Microsoft created the Component Object Model
(COM) to provide the infrastructure that was needed to realize its vision for OLE. COM
became the foundation for a wide range of technologies that included but were not exclusive
to OLE. One of the most important new technologies that relied on COM was the OLE
Control Extension (OCX).
In 1996, Microsoft announced that ActiveX would be the new name for those technologies
based primarily on COM. ActiveX controls enable the developer to build sophisticated
controls based on the Component Object Model (COM). These controls can be developed for
many uses, such as database access, data monitoring, or graphing. By the end of 1996,
Microsoft introduced DCOM, a set of RPC-based extensions to COM that allow COM
objects to be distributed. COM has been very slow to appear on non-Windows platforms.
Because of limited platform support, COM is still identified as being more of a component
architecture than a remoting architecture. The COM architecture has since been superseded
by Microsoft’s .NET.
8.3.2 Component Benefits
Component architectures are used for building applications out of components. They also
make it convenient and flexible to upgrade existing applications. The
benefits include:
 Application Customization
Users often want to customize their applications. End users like to make an application
work the way they work. Component architecture helps customization because each
component can be replaced with a different component that better meets the needs of the
user.

 Component Libraries
One of the great promises of component architectures is rapid application development.
The applications can be constructed using the standard components available with Component
libraries. Custom components can be built using standard components.
 Distributed Components
With increasing bandwidth and importance of networks, the need for applications
composed of parts spread all over a network has also increased. Component architecture
helps to simplify the process of developing such distributed applications.
Making a distributed application out of an existing application is easier if the existing
application is built of components. First, the application has already been divided into functional
parts that can be located remotely. Second, since components are replaceable, a component
can be replaced with a remotely located component.

Figure 8.1 Components located across a network on a remote system

Figure 8.1 shows an example using remote components. Component C and Component
D have been located on different remote machines on the network. On the local machine,
they have been replaced by two new components, Remoting C and Remoting D. These
new components forward requests from the other components, across the network to
Component C and Component D.
8.3.3 What is not COM ?
 COM is not a computer language. COM tells us how to write components. Any
language can be chosen to write components.
 COM does not compete with or replace DLLs. COM uses DLLs to provide
components with the ability to dynamically link.
 COM is not primarily an API or a set of functions like the Win32 API. COM does
not provide services. Instead, COM is primarily a way to write components that
can provide services in the form of object-oriented APIs.
 COM is also not a C++ class library like the Microsoft Foundation Classes (MFC).
COM lets you provide a way to develop language-independent component libraries,
but COM does not provide any implementation.
8.3.4 Weaknesses of COM
 Platform Limitations:

COM support is extremely limited on UNIX and mainframe platforms. Advanced
services like Microsoft Transaction Server (MTS) and Microsoft Message Queuing
(MSMQ), which use COM, cannot be implemented on non-Windows platforms.
 Usage of COM within Java:
COM usage within Java requires usage of the Microsoft Java Virtual Machine. The
Microsoft JVM that supports COM is not supported on non-Windows platforms.
 Vendor Lock-in: Companies must depend solely on one vendor.
8.4 COM COMPONENTS
COM components consist of executable code distributed either as Win32 dynamic link
libraries (DLLs) or as executables (EXEs). Components written to the COM standard
meet all requirements for component architecture. COM uses DLLs to link components
dynamically. COM components can be encapsulated easily because they satisfy the
constraints:
 COM components are fully language independent. They can be created and used
from virtually any language, including Smalltalk and Visual Basic.
 COM components can be shipped in binary form.
 COM components can be upgraded without breaking old clients.
 COM components can be transparently relocated on a network. A component on a
remote system is treated the same way as the component on the local system.
The most fundamental question COM addresses is: How can a system be designed
such that binary executables from different vendors, written in different parts of the world,
and at different times are able to interoperate? To solve this problem, it is necessary to find
solutions to four specific problems:
 Basic Interoperability: How can developers create their own unique components,
yet be assured that these components will interoperate with other components built
by different developers?
 Versioning: How can one system component be upgraded without requiring all the
system components to be upgraded?
 Language Independence: How can components written in different languages
communicate?
 Transparent Cross-Process Interoperability: How can developers have the
flexibility to write components to run in-process or cross-process (and eventually
cross-network), using one simple programming model?
The Component Object Model (COM) is a component software architecture that allows
applications and systems to be built from components supplied by different software vendors.
COM is the underlying architecture that forms the foundation for higher-level software
services as shown in Figure 8.2.
These services provide distinctly different functionality to the user; however, they share
a fundamental requirement for a mechanism that allows binary software components, supplied
by different software vendors, to connect to and communicate with each other in a well-
defined manner.

Figure 8.2 Component Object Model: reusable programmable controls, compound documents, automation and data transfer, and storage and naming services (together with supporting tools) are all layered on top of the Component Object Model.

COM supports the following three types of servers, as illustrated in Figure 8.3, for
implementing components:
 In-Process server : An in-process server is implemented as a dynamic link
library (DLL) that executes within the same process space as the application. The
performance overhead of invoking an in-process server is minimal because the server
is located in the same process as the client. An in-process server is commonly referred to as an ActiveX control.
 Local Server : A local server executes in a separate process space on the same
computer. Communication between an application and a local server is accomplished
by the COM runtime system using a high-speed inter-process communication protocol.
The performance overhead of using a local server is typically an order of magnitude
greater than that of using an in-process server.
 Remote Server : A remote server executes on a remote computer. DCOM extends
COM by providing an RPC-based infrastructure that is used to manage
communication between the application and the remote server. The performance
overhead of using a remote server is typically an order of magnitude greater than
that of using a local server.

An application that uses a COM component is not required to know what type of
server it is using. After a client has obtained a handle to a COM object instance, the client’s
interaction with the COM object instance is the same regardless of server location.
8.4.1 COM Fundamentals
The Component Object Model defines several fundamental concepts that provide the
model’s structural underpinnings. These include:


 A binary standard for function calling between components.
 A provision for strongly-typed groupings of functions into interfaces.
 A base interface providing:
 A way for components to dynamically discover the interfaces implemented by other
components.
 Reference counting to allow components to track their own lifetime and delete
themselves when appropriate.
 A mechanism to uniquely identify components and their interfaces.
 A “component loader” to set up component interactions and additionally in the cross-
process and cross-network cases to help manage component interactions.

Figure 8.3 In-Process, Local and Remote COM Servers: a client application on Computer A uses an in-process COM server (a DLL loaded into its own process), a local server running in a separate process on the same machine, and, through DCOM, a remote server on Computer B.


8.4.2 Binary Standard
For any given platform (hardware and operating system combination), COM defines a
standard way to lay out virtual function tables (vtables) in memory, and a standard way to
call functions through the vtables as illustrated in Figure 8.4. Thus, any language that can
call functions via pointers (C, C++, Smalltalk, Ada, and even Basic) can be used to write
components that can interoperate with other components written to the same binary standard.
The double indirection (the client holds a pointer to a pointer to a vtable) allows for vtable
sharing among multiple instances of the same object class. On a system with hundreds of
object instances, vtable sharing can reduce memory requirements considerably.

8.4.3 Interfaces
In COM, applications interact with each other and with the system through collections
of functions called interfaces. A COM interface is a strongly-typed contract between
software components to provide a small but useful set of semantically related operations
(methods). An interface is the definition of an expected behaviour and expected
responsibilities. A good example of this is OLE’s drag-and-drop support. All of the functionality
that a component must implement to be a drop target is collected into the IDropTarget
interface; all the drag-source functionality is in the IDropSource interface. Interface names
begin with “I” by convention.

Figure 8.4 Diagram of a vtable
8.4.4 COM Data Types
All COM interfaces must be defined in IDL (Interface Definition Language). IDL
allows complex data types to be described in a language- and platform-neutral manner. Table 8.1
shows the base types supported by IDL and their mappings onto the C, Java and
Visual Basic languages.

COM allows for a rich set of data types. This includes support for constants, enumerated
types, structures and arrays in addition to common base types like long and short. The
integral and floating point types are the same as in any programming language and hence
self-explanatory. All characters in COM are represented using the OLECHAR data type.
Win32 platforms use the wchar_t data type to represent 16-bit Unicode characters.
Because pointer types in IDL are assumed to point to single instances, not arrays, IDL
introduces the [string] attribute to indicate that a pointer points to a null-terminated array of
characters.
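As an illustration, a hypothetical IDL fragment that uses the [string] attribute might look like this (the interface name, method and uuid are invented for the example):

[object, uuid(12345678-1234-1234-1234-123456789ABC)]
interface INamed : IUnknown {
    // Without [string], wszName would be marshaled as a pointer to a
    // single OLECHAR; with it, the marshaler copies the entire
    // null-terminated character array.
    HRESULT SetName([in, string] const OLECHAR *wszName);
}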

A somewhat more complex case is conversion between OLECHAR and the Win32
TCHAR data type, as TCHAR is conditionally compiled to either char or wchar_t. The
header file ustring.h contains a family of string library routines that parallel the standard C
library routines found in string.h. For example, the strncpy function has four corresponding
routines based on either parameter being of either of the two possible character types
(wchar_t or char):
inline bool ustrncpy(char *p1, const wchar_t *p2, size_t c)
{
    size_t cb = wcstombs(p1, p2, c);
    return cb != c && cb != (size_t)-1;
}
inline bool ustrncpy(wchar_t *p1, const wchar_t *p2, size_t c)
{
    wcsncpy(p1, p2, c);
    return p1[c-1] == 0;
}

Table 8.1 List of Basic Data Types supported by IDL

                 IDL                C                  Visual Basic          Java
Base Types       boolean            unsigned char      Unsupported           char
                 byte               unsigned char      Unsupported           char
                 small              char               Unsupported           char
                 short              short              Integer               short
                 long               long               Long                  int
                 hyper              __int64            Unsupported           long
                 float              float              Single                float
                 double             double             Double                double
                 char               unsigned char      Unsupported           char
                 wchar_t            wchar_t            Integer               short
                 enum               enum               Enum                  int
Extended Types   Interface Pointer  Interface Pointer  Interface Ref.        Interface Ref.
                 VARIANT            VARIANT            Variant               ms.com.Variant
                 BSTR               BSTR               String                java.lang.String
                 VARIANT_BOOL       short [-1/0]       Boolean [True/False]  boolean [true/false]

inline bool ustrncpy(char *p1, const char *p2, size_t c)
{
    strncpy(p1, p2, c);
    return p1[c-1] == 0;
}
inline bool ustrncpy(wchar_t *p1, const char *p2, size_t c)
{
    size_t cch = mbstowcs(p1, p2, c);
    return cch != c && cch != (size_t)-1;
}
Corresponding overloads of strlen, strcpy and strrcat are also included in the ustring.h
header file.
8.4.5 Automation Data Types
COM identifies a special subset of data types known as automation types that must be
used when defining automation-compatible interfaces. This list of automation types is given
as follows.
COM IDL Type    Description
short           16-bit signed integer
long            32-bit signed integer
float           32-bit signed floating point number
double          64-bit floating point number
BSTR            length-prefixed wide character array
VARIANT_BOOL    short integer used to indicate true or false
DATE            64-bit floating point number representing the fractional number of days
IUnknown *      Generic interface pointer
IDispatch *     Automation interface pointer
BSTR
One additional text-related data type that must be discussed is the BSTR. The BSTR
string type must be used in all interfaces that will be used from Visual Basic or Java.
BSTRs are length-prefixed, null-terminated strings of OLECHARs. The length prefix
indicates the number of bytes the string consumes and is stored as a four-byte integer that
immediately precedes the first character of the string.

Figure 8.5 “Hi” as BSTR

  04 00 00 00 | ‘H’ 00 ‘i’ 00 | 00 00
  length prefix (in bytes) | character data | terminal null


Figure 8.5 shows the string “Hi” as a BSTR. COM provides several API functions for
managing BSTRs:
//allocate and initialize a BSTR
BSTR SysAllocString(const OLECHAR *psz);
BSTR SysAllocStringLen(const OLECHAR *psz, UINT cch);
//Reallocate and initialize a BSTR
INT SysReAllocString(BSTR *pbstr, const OLECHAR *psz);
INT SysReAllocStringLen(BSTR *pbstr, const OLECHAR *psz, UINT cch);
//free a BSTR
void SysFreeString(BSTR bstr);
//peek at length-prefix as characters or bytes
UINT SysStringLen(BSTR bstr);
UINT SysStringByteLen(BSTR bstr);
Example:
//convert raw OLECHAR string to a BSTR
BSTR bstr = SysAllocString(OLESTR(“Hello”));
//invoke method
HRESULT hr= p->SetString(bstr);
//free BSTR
SysFreeString(bstr);
Structures
The primitive types can be composed using C-style structures. IDL follows the C
rules for the tag namespace, which means most IDL interface definitions either use typedef
statements:
typedef struct tagCOLOR {
    double red;
    double green;
    double blue;
} COLOR;
HRESULT SetColor([in] const COLOR *pColor);
where HRESULT is the data type of the result. Alternatively, the struct keyword can be used to qualify the tag name:
struct COLOR {
    double red;
    double green;
    double blue;
};
HRESULT SetColor([in] const struct COLOR *pColor);
Simple structures like the one shown can be used from both Visual Basic and Java.
However, some versions of Visual Basic can only access interfaces that use structures
as method parameters; they cannot be used to implement such interfaces.


Unions
IDL and COM also support unions. To ensure that the actual interpretation of the union
is unambiguous, IDL expects that a discriminator will be provided along with the union that
indicates which union member is actually in use. The discriminator must be of an integral
type and must appear at the same logical level as the union.
The [case] attribute is used to match the actual union member in use to its discriminator.
To associate a discriminator with the usage of a non-encapsulated union, the [switch_is]
attribute must be used.
union NUMBER {
    [case(1)] long i;
    [case(2)] float f;
};
HRESULT Add([in, switch_is(t)] union NUMBER *pn, [in] short t);
When the union is bundled with its discriminator in a surrounding structure, the aggregate
type is called an encapsulated or discriminated union:
struct UNUMBER {
    short t;
    [switch_is(t)] union VALUE {
        [case(1)] long i;
        [case(2)] float f;
    };
};
8.5 A DISTRIBUTED OBJECT EXAMPLE
As an example, consider a distributed object implementing a checking account, illustrated
in Figure 8.6. The example implements one COM object and two COM
client applications.
Figure 8.6 A Distributed Object Example: a Checking Account class implemented as a COM (ATL C++) object exposing the IAccount, IAccountInit and ICheckingAccount : IAccount interfaces, accessed by a COM Visual Basic client and a COM Visual C++ client.

The COM object will support three interfaces: IAccount, IAccountInit and
ICheckingAccount; the details of the interfaces are summarized in Table 8.2.


Table 8.2 List of Interfaces used in the Example

Interface         Properties      Methods
Account           Name, Balance   Deposit(amount), Withdraw(amount)
AccountInit                       init(name)
CheckingAccount                   withdrawUsingCheck(checkNumber, amount)

The ICheckingAccount interface will inherit from the IAccount interface. The choice of
client platforms is not arbitrary. The Visual Basic environment provides excellent support for
COM object integration, while the Visual C++ environment will allow us to see how COM
works at the C/C++ API level. The Visual Basic client application is shown in Figure 8.7.

Figure 8.7 A sample COM Client Application in VB

The COM client application is divided into two sections: initialization and account
management. The initialization section is used to specify the name of the person for whom
the account will be created and the server name of the host on which the COM checking
account object is implemented. The COM client will attempt to run against a local server
if no server name is specified. The account management section allows the user to perform
withdrawals and deposits.
8.6 INTERFACES
An interface provides a connection between two different objects. A set of functions
defines the interface between different parts of a computer program. The interface to a DLL
is the set of functions exported by the DLL. The interface to a C++ class is the set of
members of the class. For COM, an interface is a specific memory structure containing an
array of function pointers. Each array element contains the address of a function implemented
by the component. The implementation can be made in C++.
Interfaces are everything in COM. To the client, a component is a set of interfaces.
The client can communicate with the COM component only through an interface. The
client has very little knowledge of a component as a whole. Often the client does not even
know of all of the interfaces that a component supports.
8.6.1 Attributes of interfaces
Given that an interface is a contractual way for a component object to expose its
services, there are several very important points to understand:
 An interface is not a class. While a class can be instantiated to form a component
object, an interface cannot be instantiated by itself because it carries no
implementation. A component object must implement that interface and that
component object must be instantiated for there to be an interface. Furthermore,
different component object classes may implement an interface differently, so long
as the behavior conforms to the interface definition. For example, two objects
might both implement IStack, one using an array and the other a linked list. Thus the
basic principle of polymorphism fully applies to component objects.
 An interface is not a component object. An interface is just a related group of
functions and is the binary standard through which clients and component objects
communicate. The component object can be implemented in any language with any
internal state representation, so long as it can provide pointers to interface member
functions.
 Clients only interact with pointers to interfaces. When a client has access to a
component object, it has nothing more than a pointer through which it can access
the functions in the interface, called simply an interface pointer. The pointer is
opaque; it hides all aspects of internal implementation. You cannot see the
component object’s data, as opposed to C++ object pointers through which a client
may directly access the object’s data. In COM, the client can call only methods of
the interface to which it has a pointer. This encapsulation is what allows COM to
provide the efficient binary standard that enables local/remote transparency.
 Component objects can implement multiple interfaces. A component object
can and typically does implement more than one interface. That is, the class has
more than one set of services to provide. For example, a class might support the
ability to exchange data with clients as well as the ability to save its persistent state
information (the data it would need to reload to return to its current state) into a file
at the client’s request. Each of these abilities is expressed through a different interface
(IDataObject and IPersistFile), so the component object must implement two
interfaces.

221 ANNA UNIVERSITY CHENNAI


DMC 1754 / 1945

 Interfaces are strongly typed. Every interface has its own interface identifier, a
NOTES globally unique ID (GUID) described below, thereby eliminating any chance of
collision that would occur with human-readable names. Strong typing has
two important implications. If a developer creates a
new interface, a new identifier must also be created for that interface. When a
developer uses an interface, the identifier for the interface must be used to request
a pointer to the interface. This explicit identification improves robustness by
eliminating naming conflicts that would result in run-time failure.
 Interfaces are immutable. COM interfaces are never versioned, which means
that version conflicts between new and old components are avoided. A new version
of an interface, created by adding more functions or changing semantics, is an
entirely new interface and is assigned a new unique identifier. Therefore, a new
interface does not conflict with an old interface even if all that changed is one
operation or semantics (but not even the syntax) of an existing method. For example,
if a new interface adds only one method to an existing interface, and the component
author wishes to support both old-style and new-style clients, both collections of
capabilities can be expressed through two interfaces, but internally implement the
old interfaces as a proper subset of the implementation of the new.
It is convenient to adopt a standard pictorial representation for objects and their
interfaces. The adopted convention is to draw each interface on an object as a “plug-in
jack”. Examples of interfaces are illustrated in Figures 8.8, 8.9 and 8.10.

Figure 8.8 A component object that supports three interfaces A, B, and C.


The unique use of interfaces in COM provides five major benefits:
 The ability for functionality in applications (clients or servers of objects) to
evolve over time. This is accomplished through a request called QueryInterface
that absolutely all COM objects must support. It allows an object to make more
interfaces (that is, support new groups of functions) available to new clients while
at the same time retaining complete binary compatibility with existing client code. In
other words, revising an object by adding new functionality will not require any
recompilation on the part of any existing clients. This is a key solution to the problem
of versioning and is a fundamental requirement for achieving a component software
market. COM additionally provides for robust versioning because COM interfaces
are immutable, and components continue to support old interfaces even while adding
new functionality through additional interfaces. This guarantees backward
compatibility as components are upgraded. Other proposed system object models,
on the other hand, generally allow developers to change existing interfaces, leading
ultimately to versioning problems as components are upgraded.

Figure 8.9 Interfaces extend toward the clients connected to them.

Figure 8.10 Interface Representation of two Applications that connect to each other’s Objects
 Fast and simple object interaction. Once a client establishes a connection to an
object, calls to that object’s services (interface functions) are simply indirect functions
calls through two memory pointers. As a result, the performance overhead of
interacting with an in-process COM object (an object that is in the same address
space) as the calling code is negligible. Calls between COM components in the
same process are only a handful of processor instructions slower than a standard
direct function call and no slower than a compile-time bound C++ object invocation.
In addition, using multiple interfaces per object is efficient because the cost of
negotiating interfaces (via QueryInterface) is done in groups of functions instead
of one function at a time.
 Interface reuse. Design experience suggests that there are many sets of operations
that are useful across a broad range of components. For example, it is commonly
useful to provide or use a set of functions for reading or writing streams of bytes.
In COM, components can reuse an existing interface (such as IStream) in a variety
of areas. This not only allows for code reuse, but by reusing interfaces, the
programmer learns the interface once and can apply it throughout many different
applications.
 “Local/Remote Transparency.” The binary standard allows COM to intercept
an interface call to an object and make instead a remote procedure call to an


object that is running in another process or on another machine. A key point is that
the caller makes this call exactly as it would for an object in the same process. The
binary standard enables COM to perform inter-process and cross-network function
calls transparently. While there is, of course, more overhead in making a remote
procedure call, no special code is necessary in the client to differentiate an in-
process object from out-of-process objects. This means that as long as the client is
written from the start to handle remote procedure call (RPC) exceptions, all objects
(in-process, cross-process, and remote) are available to clients in a uniform,
transparent fashion. DCOM, Microsoft’s distributed version of COM, requires no
modification to existing components in order to gain distributed capabilities. In other
words, programmers are completely isolated from networking issues.
 Programming language independence. Any programming language that can
create structures of pointers and explicitly or implicitly call functions through pointers
can create and use component objects. Component objects can be implemented in
a number of different programming languages and used from clients that are written
using completely different programming languages. Again, this is because COM,
unlike an object-oriented programming language, represents a binary object standard,
not a source code standard.
8.6.2 Globally Unique Identifiers (GUIDs)
COM uses globally unique identifiers—128-bit integers that are guaranteed to be unique
in the world across space and time—to identify every interface and every component object
class. These globally unique identifiers are UUIDs (universally unique IDs) as defined by
the Open Software Foundation’s Distributed Computing Environment. Human-readable
names are assigned only for convenience and are locally scoped. This helps ensure that
COM components do not accidentally connect to “the wrong” component, interface, or
method, even in networks with millions of component objects. CLSIDs are GUIDs that
refer to component object classes, and IIDs are GUIDs that refer to interfaces. Microsoft
supplies a tool (uuidgen) that automatically generates GUIDs. Additionally, the
CoCreateGuid function is part of the COM API. Thus, developers create their own GUIDs
when they develop component objects and custom interfaces. Through the use of defines,
developers don’t need to be exposed to the actual 128-bit GUID.
8.6.3 An Example for Interface
The code that implements some simple interfaces is given below. In the code given,
component CA implements two interfaces, IX and IY.
class IX //First Interface
{
public:
    virtual void Fx1() = 0;
    virtual void Fx2() = 0;
};
class IY //Second Interface
{
public:
    virtual void Fy1() = 0;
    virtual void Fy2() = 0;
};
class CA : public IX, public IY //Component
{
public:
    //Implementation of abstract base class IX
    virtual void Fx1() { cout << “Fx1” << endl; }
    virtual void Fx2() { cout << “Fx2” << endl; }
    //Implementation of abstract base class IY
    virtual void Fy1() { cout << “Fy1” << endl; }
    virtual void Fy2() { cout << “Fy2” << endl; }
};
IX and IY are pure abstract base classes that are used to implement the interfaces. A
pure abstract base class is a base class that contains only pure virtual functions. A pure
virtual function is a virtual function marked with =0, which is known as the pure specifier.
Pure virtual functions are not implemented in the classes in which they are declared as pure.
Functions IX::Fx1, IX::Fx2, IY::Fy1 and IY::Fy2 have no function bodies. Pure virtual functions
are implemented in a derived class. The component CA inherits the two pure abstract base
classes, IX and IY, and implements their pure virtual functions.
In order to implement the member functions in both IX and IY, CA uses multiple
Inheritance. Multiple Inheritance occurs when a class inherits directly from more than one
base class.
An abstract base class resembles a form in which derived classes fill in the blanks. The
abstract base class specifies the functions that a derived class will provide, and the derived
classes implement these functions. Inheriting publicly from a pure abstract base class is
called Interface Inheritance because the derived class inherits only the descriptions of
functions. The abstract base class provides no implementation to inherit.
IX and IY are not COM interfaces. To be COM interfaces IX and IY must inherit
from an interface named IUnknown.
8.6.4 IUnknown
COM defines one special interface, IUnknown, to implement some essential
functionality. All component objects are required to implement the IUnknown interface,
and conveniently, all other COM and OLE interfaces derive from IUnknown. IUnknown
has three methods: QueryInterface, AddRef, and Release. In C++ syntax, IUnknown
can be represented as:
interface IUnknown {
    virtual HRESULT QueryInterface(const IID& iid, void** ppvObj) = 0;
    virtual ULONG AddRef() = 0;
    virtual ULONG Release() = 0;
};
AddRef and Release are simple reference counting methods. A component object’s
AddRef method is called when another component object is using the interface; the
component object’s Release method is called when the other component no longer requires
use of that interface. While the component object’s reference count is nonzero, it must
remain in memory; when the reference count becomes zero, the component object can
safely unload itself because no other components hold references to it.
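This lifetime contract can be sketched in portable C++ as a simplified stand-in for a real COM object (all names here are illustrative; a production component would update the count with InterlockedIncrement and InterlockedDecrement so it is thread-safe):

```cpp
#include <cassert>

// Simplified reference-counted object (illustrative; not a real COM component).
class RefCounted {
public:
    RefCounted() : m_cRef(1) {}              // the creator holds the first reference
    unsigned long AddRef() { return ++m_cRef; }
    unsigned long Release() {
        unsigned long count = --m_cRef;
        if (count == 0)
            delete this;                     // no references left: destroy self
        return count;
    }
private:
    ~RefCounted() {}                         // private: clients must go through Release()
    unsigned long m_cRef;
};
```

The rule this encodes: whoever receives an interface pointer owes one matching Release, and the object destroys itself only when the count reaches zero.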
8.6.5 QueryInterface
QueryInterface is the mechanism that allows clients to dynamically discover (at run
time) whether or not an interface is supported by a component object; at the same time, it is
the mechanism that a client uses to get an interface pointer from a component object. When
an application wants to use some function of a component object, it calls that object’s
QueryInterface, requesting a pointer to the interface that implements the desired function.
If the component object supports that interface, it will return the appropriate interface pointer
and a success code. If the component object doesn’t support the requested interface, then
it will return an error value. The application will then examine the return code; if successful,
it will use the interface pointer to access the desired method. If the QueryInterface failed,
the application will take some other action, letting the user know that the desired method is
not available.
The example below shows a call to QueryInterface on the component PhoneBook.
We are asking this component, “Do you support the ILookup interface?” If the call returns
successfully, then we know that the component object supports the ILookup interface, and
we’ve got a pointer to use to call methods contained in the ILookup interface (either
LookupByName or LookupByNumber). If not, then we know that the component object
PhoneBook does not implement the ILookup interface.
LPLOOKUP pLookup;
TCHAR szNumber[64];
HRESULT hRes;

// Call QueryInterface on the component object PhoneBook, asking for a pointer
// to the ILookup interface identified by a unique interface ID.
hRes = pPhoneBook->QueryInterface(IID_ILOOKUP, (void**)&pLookup);
if (SUCCEEDED(hRes))
{
    pLookup->LookupByName("Daffy Duck", &szNumber); // use the ILookup interface pointer
    pLookup->Release();   // finished using the ILookup interface pointer
}
else
{
    // Failed to acquire the ILookup interface pointer.
}
Note that AddRef is not explicitly called in this case because the QueryInterface
implementation increments the reference count before it returns an interface pointer.
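The same contract can be sketched in portable C++ (toy interface ids and a bool return stand in for REFIID and HRESULT; the shape mirrors the CA::QueryInterface implementation shown later in this chapter):

```cpp
#include <cassert>

enum IID_T { IID_IUnknown, IID_IX, IID_IY };   // toy interface ids, illustrative only

struct IX { virtual void Fx() {} virtual ~IX() {} };

struct Component : IX {
    int refs;
    Component() : refs(1) {}
    int AddRef() { return ++refs; }
    int Release() { return --refs; }           // simplified: no delete this

    // Hand out a pointer only for interfaces the component supports,
    // and AddRef before returning so the caller does not have to.
    bool QueryInterface(IID_T iid, void** ppv) {
        if (iid == IID_IUnknown || iid == IID_IX) {
            *ppv = static_cast<IX*>(this);
            AddRef();
            return true;                       // stands in for S_OK
        }
        *ppv = 0;
        return false;                          // stands in for E_NOINTERFACE
    }
};
```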
A Complete Example
The following code shows the complete implementation of interfaces IX and IY.
IFACE.CPP

// Iface.cpp
// To compile, use: cl Iface.cpp
//
#include <iostream.h>
#include <objbase.h>   // Defines the "interface" keyword

void trace(const char* pMsg) { cout << pMsg << endl; }

// Abstract interfaces
interface IX
{
    virtual void __stdcall Fx1() = 0;
    virtual void __stdcall Fx2() = 0;
};

interface IY
{
    virtual void __stdcall Fy1() = 0;
    virtual void __stdcall Fy2() = 0;
};

// Interface implementation
class CA : public IX, public IY
{
public:
    // Implement interface IX
    virtual void __stdcall Fx1() { cout << "CA::Fx1" << endl; }
    virtual void __stdcall Fx2() { cout << "CA::Fx2" << endl; }
    // Implement interface IY
    virtual void __stdcall Fy1() { cout << "CA::Fy1" << endl; }
    virtual void __stdcall Fy2() { cout << "CA::Fy2" << endl; }
};

// Client


int main()
{
    trace("Client: Create an instance of the component");
    CA* pA = new CA;

    // Get an IX pointer
    IX* pIX = pA;
    trace("Client: Use the IX interface");
    pIX->Fx1();
    pIX->Fx2();

    // Get an IY pointer
    IY* pIY = pA;
    trace("Client: Use the IY interface");
    pIY->Fy1();
    pIY->Fy2();

    trace("Client: Delete the component");
    delete pA;
    return 0;
}
The output from this program is
Client: Create an instance of the component
Client: Use the IX interface
CA::Fx1
CA::Fx2
Client: Use the IY interface
CA::Fy1
CA::Fy2
Client: Delete the component
The client and the component communicate through two interfaces. The interfaces
are implemented using the two pure abstract base classes IX and IY. The component is
implemented by the class CA, which inherits from both IX and IY. Class CA implements
the members of both interfaces.
8.6.6 Behind the Interface

A pure abstract base class defines the specific memory structure that COM requires for
an interface.
Virtual Function Tables
When we define a pure abstract class, we are actually defining the layout for a block
of memory. All implementations of pure abstract classes are blocks of memory that have


the same basic layout. Figure 8.11 shows the memory layout for the abstract base class
defined in the following code:
interface IX
{
virtual void __stdcall Fx1() = 0;
virtual void __stdcall Fx2() = 0;
virtual void __stdcall Fx3() = 0;
virtual void __stdcall Fx4() = 0;
};
Defining a pure abstract class just defines the memory structure. Memory is not
allocated for the structure until the abstract base class is implemented in a derived class.
When a derived class inherits from an abstract base class, it inherits this memory structure,
as shown in Figure 8.11.
The virtual function table contains pointers to the member functions.

[Figure: the interface pointer pIX points to a vtbl pointer inside the object, which points to the vtbl, an array holding &Fx1, &Fx2, &Fx3, &Fx4]

Fig 8.11 An abstract base class defines a block of memory that is structured like this
The vtbl is an array of pointers that point to the implementations of the virtual functions.
The first entry in the vtbl contains the address of the function Fx1 as it is implemented in the
derived class. The second entry contains the address of Fx2 and so on. The pointer to the
abstract base class points to the vtbl pointer, which points to the vtbl.
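The structure in Figure 8.11 can be demonstrated by building such a table by hand in portable C++. This is an explicit simulation of what the compiler generates for a pure abstract class, with all names illustrative:

```cpp
#include <cassert>

// A hand-built "component" laid out the way Figure 8.11 describes:
// the object starts with a pointer to a table of function pointers (the vtbl).
struct Obj;                        // forward declaration

typedef int (*Fn)(Obj*);           // every entry takes 'this' as its first argument

struct VTable { Fn Fx1; Fn Fx2; }; // the vtbl: one slot per virtual function

struct Obj {
    const VTable* vtbl;            // first member: the vtbl pointer
    int state;                     // per-instance data follows
};

// The "derived class" implementations that the table points at.
int ObjFx1(Obj* self) { return self->state + 1; }
int ObjFx2(Obj* self) { return self->state + 2; }

static const VTable kObjVtbl = { ObjFx1, ObjFx2 };

// A client holding only an interface pointer calls through the table;
// this is what the compiler emits for pIX->Fx1().
int CallFx1(Obj* pIX) { return pIX->vtbl->Fx1(pIX); }
```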


8.7 PROXY AND STUB


The COM and CORBA architectures allow developers to treat distributed objects in
much the same manner as native objects. For example, invoking a method on a COM or
CORBA object in C++ is no different than invoking a method on a native C++ object. The
developer may need to address certain timing and error-handling issues, but the syntax for
the method invocation is identical in both the native and the remote case.
Both COM and CORBA rely on client-side and server-side mechanisms to manage
issues related to remoting. These mechanisms are referred to as proxies and stubs in
COM. In CORBA, the mechanisms are referred to as stubs and skeletons. When describing
both COM and CORBA, we will refer to the client-side mechanism as a client stub and the
server-side mechanism as a server stub.

[Figure: a client and a server connected through a client stub (COM proxy / CORBA stub) and a server stub (COM stub / CORBA skeleton) over the communication bus (DCOM or CORBA)]

Figure 8.12 Proxies, Stubs and Skeletons in COM and CORBA

8.7.1 Remote Method Invocation


A remote method invocation is implemented as follows:
 A client invokes a remote method. The remote method is actually invoked in the
client stub.
 The client stub creates a message containing information needed for the remote
invocation. (The message creation process is referred to as marshaling.)
 The client stub sends the message to the server stub using the communication bus.
 The server stub receives the message and unpacks it. (The unpacking process is
referred to as unmarshalling.)
 The server stub calls the appropriate server method based on the information provided
in the received message.
 The server stub creates a message based on the outputs of the call to the server
method (i.e., the return value and out parameters).


 The server stub sends the result message to the client stub using the communication
bus.
 The client stub receives the result message, unpacks the message, and returns the
result to the client.
From the steps outlined, it is obvious that the client stub, server stub and communication
bus do a lot of work. The communication bus is the generic name for the COM or CORBA
runtime system. In contrast, the client and server stubs must be created to support the
custom interfaces that are used in the system. Hand-coding client and server stubs for
every interface would be a tedious and error-prone task. COM and CORBA solve this
problem by providing tools to generate client and server stubs from IDL descriptions.
8.7.2 COM Proxies and Stubs
COM terminology refers to client stubs as proxies and to server stubs as stubs. In
COM, the proxy and stub are packaged in a single DLL. The DLL is associated with the
appropriate interfaces in the Windows system registry. The COM runtime system uses
the registry to locate the proxy-stub DLL associated with an interface when marshaling of the
interface is required.
After registering the proxy-stub DLL, the IAccount, IAccountInit and
ICheckingAccount interfaces are all associated with ComServer.dll in the registry. Fig 8.13
illustrates the structure of the COM client, server and proxy-stub DLL in the checking
account example. Note that the proxy-stub DLL must be installed on every client machine
so that the client application can properly marshal data.

[Figure: a client (Visual Basic or C++) talks to its proxy (ComServer.dll), which communicates over RPC with the stub (ComServer.dll) attached to the Visual C++ COM server; both sides run on COM/DCOM]

Figure 8.13 Using a COM Proxy-Stub DLL

COM is designed to allow clients to transparently communicate with components


regardless of where those components are running, be it the same process, the same machine,
or a different machine. This means that there is a single programming model for all
types of component objects, not only for clients of those component objects but also for
the servers of those component objects.


From a client’s point of view, all component objects are accessed through interface
pointers. A pointer must be in-process, and in fact, any call to an interface function always
reaches some piece of in-process code first. If the component object is in-process, the call
reaches it directly. If the component object is out-of-process, then the call first reaches
what is called a “proxy” object provided by COM itself, which generates the appropriate
remote procedure call to the other process or the other machine. Note that the client from
the start should be programmed to handle RPC exceptions; then it can transparently connect
to an object that is in-process, cross-process, or remote.
From a server’s point of view, all calls to a component object’s interface functions are
made through a pointer to that interface. Again, a pointer only has context in a single process,
and so the caller must always be some piece of in-process code. If the component object is
in-process, the caller is the client itself. Otherwise, the caller is a “stub” object provided by
COM that picks up the remote procedure call from the “proxy” in the client process and
turns it into an interface call to the server component object.
As far as both clients and servers know, they always communicate directly with some
other in-process code, as illustrated in Figure 8.14.

Figure 8.14 Clients, Server and COM


8.8 SERVERS IN EXES
Components can also be implemented as EXEs. The component and the client in
different EXEs will also be in different processes, since each EXE gets its own process.
Communication between the client and the component will need to cross the process
boundary.
8.8.1 Different Processes
Every EXE runs in a different process. Every process has a separate address space.
The logical address 0x0000ABBA in one process accesses a different physical memory
location than does the logical address 0x0000ABBA in another process. If one process


passed the address 0x0000ABBA to another process, the second process would access a
different piece of memory than the first process intended. This is illustrated in Fig 8.15.

[Figure: Process 1 address space: pFoo = 0x0000ABBA refers to memory holding 0x00001234. Process 2 address space: the same address pFoo = 0x0000ABBA refers to memory holding 0x0BAD0ADD.]

Figure 8.15 The same memory address in different processes

Because a DLL is loaded into the client's process, DLLs are referred to as in-process (in-proc)
servers, while EXEs, each of which gets its own process, are called out-of-process (out-of-proc) servers. EXEs are sometimes
called local servers to differentiate them from the other kind of out-of-process server, the
remote server. A remote server is an out-of-process server that resides in a different machine.
The component passes an interface to the client. An interface is basically an array of
function pointers. The client must be able to access the memory associated with the interface.
If a component is in a DLL, the client can easily access the memory because the component
and the client are in the same address space. But if the client and component are in different
address spaces, the client cannot access the memory in the component’s process. If the
client cannot even access the memory associated with an interface, there isn’t any way for
it to call the functions in the interface. If this were our situation, our interfaces would be
useless.
For an interface to cross a process boundary, we need to be able to count on the
following conditions:
 A process needs to be able to call a function in another process.


 A process must be able to pass data to another process.


NOTES  The client should not have to care whether it is accessing in-proc of out-of-proc
components.

8.8.2 Local Procedure Call (LPC)


There are many options for communicating between processes, including Dynamic
Data Exchange (DDE), named pipes, and shared memory. However, COM uses local
procedure calls (LPCs). LPCs are a means of communication between different
processes on the same machine. LPCs constitute a single-machine, inter-process
communication technique based on the remote procedure call (RPC) mechanism, as illustrated in Figure 8.16.
The standard RPC is defined in the Open Software Foundation's (OSF) Distributed
Computing Environment (DCE) RPC specification. RPCs allow applications on different
machines to communicate with each other using a variety of network transport mechanisms.
Distributed COM (DCOM) uses RPCs to communicate across the network.

[Figure: two EXEs separated by a process boundary; the client in one EXE calls the component in the other through a local procedure call]

Figure 8.16 Win32 LPC Mechanism


8.9 MARSHALING
To call functions in an EXE, the parameters of a function have to be passed from the
address space of the client to the address space of the component. This is known as
marshaling. The process of moving an object across a boundary is called marshal by
value. Boundaries exist at various levels of abstraction. The most obvious boundary is
between objects running on different machines.
The process of preparing an object to be remoted is called marshaling. On a single
machine, objects might need to be marshaled across context, app domain, or process
boundaries. If both processes are on the same machine, marshaling is a fairly straightforward
process. The data in one process needs to be copied to the address space of the other


process. If the processes are on different machines, the data has to be put into standard
format to account for the differences between machines, such as the order in which they
store bytes in a word.
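Byte order is the classic case: multi-byte values are converted to an agreed wire order before transmission and converted back on receipt. A minimal sketch, with big-endian chosen as the wire order here (actual RPC formats such as DCE NDR instead tag the data with the sender's format):

```cpp
#include <cassert>
#include <cstdint>

// Marshal a 32-bit value into big-endian wire order, byte by byte,
// so the buffer contents are identical on any host, whatever its native order.
void marshal_u32(uint32_t v, unsigned char out[4]) {
    out[0] = (unsigned char)(v >> 24);
    out[1] = (unsigned char)(v >> 16);
    out[2] = (unsigned char)(v >> 8);
    out[3] = (unsigned char)(v);
}

// Unmarshal the wire bytes back into the host's native representation.
uint32_t unmarshal_u32(const unsigned char in[4]) {
    return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
           ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
}
```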
To marshal a component, an interface named IMarshal has to be implemented. COM
queries your component for IMarshal as part of its creation strategy. It then calls member
functions in IMarshal to marshal and unmarshal the parameters before and after calling
functions. The COM library implements a standard version of IMarshal that will work for
most interfaces.

Figure 8.17 illustrates how the client communicates with a proxy DLL. The proxy
marshals the function parameters and calls the stub DLL using LPCs. The stub DLL
unmarshals the parameters and calls the correct interface function in the component, passing
it the parameters.

[Figure: a client EXE and a component EXE separated by a process boundary; a proxy DLL on the client side marshals the parameters, and a stub DLL on the component side unmarshals them]

Figure 8.17 Marshaling and Unmarshaling


8.10 CLIENT/SERVER IMPLEMENTATION
Dynamic linking can be explained by examining how the client creates a component
contained in a DLL.
8.10.1 Creating the Component
Before the client can get an interface pointer, it must load the DLL into its process and
create the component. The function CreateInstance creates an instance of the component
and returns the IUnknown interface pointer to the client. This is the only function in the
DLL to which the client must explicitly link. All the functions in the component that the


client needs can be reached from an interface pointer. Therefore, we need to export the
CreateInstance function so that the client can call it.

8.10.2 Exporting a Function from a DLL


A module definition (DEF) file lists the functions exported by a dynamic link library. The
names of the functions to be exported have to be listed in the EXPORTS section of the file. You
can add an ordinal number to each function if you want to. You should put the actual name
of the DLL on the LIBRARY line. Now we will load the DLL and call this function.
8.10.3 Loading the DLL

The files CREATE.H and CREATE.CPP implement the function CallCreateInstance.


CallCreateInstance takes the name of the DLL as a parameter, loads the DLL, and
attempts to call an exported function named CreateInstance. The code is shown below.
CREATE.CPP

// Create.cpp
//
#include <iostream.h>
#include <unknwn.h>    // Declares IUnknown.
#include "Create.h"

typedef IUnknown* (*CREATEFUNCPTR)();

IUnknown* CallCreateInstance(char* name)
{
    // Load dynamic link library into process.
    HINSTANCE hComponent = ::LoadLibrary(name);
    if (hComponent == NULL)
    {
        cout << "CallCreateInstance:\tError: Cannot load component." << endl;
        return NULL;
    }

    // Get address of the CreateInstance function.
    CREATEFUNCPTR CreateInstance
        = (CREATEFUNCPTR)::GetProcAddress(hComponent, "CreateInstance");
    if (CreateInstance == NULL)
    {
        cout << "CallCreateInstance:\tError: "
             << "Cannot find CreateInstance function." << endl;
        return NULL;
    }
    return CreateInstance();
}
To load the DLL, CallCreateInstance calls the Win32 function LoadLibrary:

HINSTANCE LoadLibrary(
    LPCTSTR lpLibFileName   // filename of DLL
);

LoadLibrary takes the DLL's filename and returns a handle to the loaded DLL. The
Win32 function GetProcAddress takes this handle and the name of a function
(CreateInstance) and returns a pointer to that function:

FARPROC GetProcAddress(
    HMODULE hModule,        // handle to DLL module
    LPCSTR lpProcName       // name of function
);
Using just these two functions, the client can load the DLL into its address space and
get the address of CreateInstance. With the address in hand, creating the component and
getting its IUnknown pointer is a simple process. CallCreateInstance casts the returned
pointer into a usable type and, true to its name, calls CreateInstance.
But CallCreateInstance binds the client too closely to the implementation of the
component. The client should not be required to know the name of the DLL in which the
component is implemented. We should be able to move the component from one DLL to
another or even from one directory to another.
The client is now in the file CLIENT1.cpp. The client includes the file CREATE.H
and links with CREATE.CPP. These two files encapsulate the creation of the component
contained in the DLL. The Component is now in a file named CMPNT1.CPP. Dynamic
linking requires a definition file listing the functions that are exported from the DLL. The
definition file is named CMPNT1.DEF. The component and the client share two files. The
file IFACE.H contains the declarations of all of the interfaces that CMPNT1 supports. The
file also contains the declarations of the interface IDs for these interfaces. The file
GUIDS.CPP contains the definitions of these interface IDs.
To build all the components and all of the clients, use the following command line:
nmake -f makefile
Given below is the code for implementing the client. The client asks the user for the
filename of the DLL to use. It passes this filename to CallCreateInstance, which loads the
DLL and calls the CreateInstance function exported from the DLL.
CLIENT1.CPP

//
// Client1.cpp
// To compile, use: cl Client1.cpp Create.cpp GUIDs.cpp UUID.lib
//
#include <iostream.h>
#include <objbase.h>
#include "Iface.h"
#include "Create.h"

void trace(const char* msg) { cout << "Client1:\t" << msg << endl; }

//
// Client1
//
int main()
{
    HRESULT hr;

    // Get the name of the component to use.
    char name[40];
    cout << "Enter the filename of a component to use [Cmpnt?.dll]: ";
    cin >> name;
    cout << endl;

    // Create the component by calling the CreateInstance function in the DLL.
    trace("Get an IUnknown pointer.");
    IUnknown* pIUnknown = CallCreateInstance(name);
    if (pIUnknown == NULL)
    {
        trace("CallCreateInstance failed.");
        return 1;
    }

    trace("Get interface IX.");
    IX* pIX = NULL;
    hr = pIUnknown->QueryInterface(IID_IX, (void**)&pIX);
    if (SUCCEEDED(hr))
    {
        trace("Succeeded getting IX.");
        pIX->Fx();          // Use interface IX.
        pIX->Release();
    }
    else
    {
        trace("Could not get interface IX.");
    }

    trace("Release IUnknown interface.");
    pIUnknown->Release();
    return 0;
}
The code that implements the component is given below. Except for the extern "C"
specification for CreateInstance, the component is basically the same as it was before.
Only now, it is in a file of its own, CMPNT1.CPP. CMPNT1.CPP is compiled using the /LD
switch. It is also linked with CMPNT1.DEF.
CMPNT1.CPP

//
// Cmpnt1.cpp
// To compile, use: cl /LD Cmpnt1.cpp GUIDs.cpp UUID.lib Cmpnt1.def
//
#include <iostream.h>
#include <objbase.h>
#include "Iface.h"

void trace(const char* msg) { cout << "Component1:\t" << msg << endl; }

//
// Component
//
class CA : public IX
{
    // IUnknown implementation
    virtual HRESULT __stdcall QueryInterface(const IID& iid, void** ppv);
    virtual ULONG __stdcall AddRef();
    virtual ULONG __stdcall Release();

    // Interface IX implementation
    virtual void __stdcall Fx() { cout << "Fx" << endl; }

public:
    // Constructor
    CA() : m_cRef(0) {}
    // Destructor
    ~CA() { trace("Destroy self."); }

private:
    long m_cRef;
};

HRESULT __stdcall CA::QueryInterface(const IID& iid, void** ppv)
{
    if (iid == IID_IUnknown)
    {
        trace("Return pointer to IUnknown.");
        *ppv = static_cast<IX*>(this);
    }
    else if (iid == IID_IX)
    {
        trace("Return pointer to IX.");
        *ppv = static_cast<IX*>(this);
    }
    else
    {
        trace("Interface not supported.");
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    reinterpret_cast<IUnknown*>(*ppv)->AddRef();
    return S_OK;
}

ULONG __stdcall CA::AddRef()
{
    return InterlockedIncrement(&m_cRef);
}

ULONG __stdcall CA::Release()
{
    if (InterlockedDecrement(&m_cRef) == 0)
    {
        delete this;
        return 0;
    }
    return m_cRef;
}

//
// Creation function
//
extern "C" IUnknown* CreateInstance()
{
    IUnknown* pI = static_cast<IX*>(new CA);
    pI->AddRef();
    return pI;
}
The code for the two shared files IFACE.H and GUIDS.CPP is given below. The file
IFACE.H declares all of the interfaces that the client and the component use.
IFACE.H

//
// Iface.h
//

// Interfaces
interface IX : IUnknown
{
    virtual void __stdcall Fx() = 0;
};

interface IY : IUnknown
{
    virtual void __stdcall Fy() = 0;
};

interface IZ : IUnknown
{
    virtual void __stdcall Fz() = 0;
};

// Forward references for GUIDs
extern "C"
{
    extern const IID IID_IX;
    extern const IID IID_IY;
    extern const IID IID_IZ;
}
As seen, the client and the component are still using the IX, IY, and IZ interfaces. The
IDs for the interfaces are declared at the end of IFACE.H. The definitions of the interface
IDs in GUIDS.CPP are shown in the code below. The client and the component link to
GUIDS.CPP.
GUIDS.CPP

//
// GUIDs.cpp - Interface IDs
//
#include <objbase.h>

extern "C"
{
    // {32bb8320-b41b-11cf-a6bb-0080c7b2d682}
    extern const IID IID_IX =
        {0x32bb8320, 0xb41b, 0x11cf,
        {0xa6, 0xbb, 0x0, 0x80, 0xc7, 0xb2, 0xd6, 0x82}};

    // {32bb8321-b41b-11cf-a6bb-0080c7b2d682}
    extern const IID IID_IY =
        {0x32bb8321, 0xb41b, 0x11cf,
        {0xa6, 0xbb, 0x0, 0x80, 0xc7, 0xb2, 0xd6, 0x82}};

    // {32bb8322-b41b-11cf-a6bb-0080c7b2d682}
    extern const IID IID_IZ =
        {0x32bb8322, 0xb41b, 0x11cf,
        {0xa6, 0xbb, 0x0, 0x80, 0xc7, 0xb2, 0xd6, 0x82}};

    // The extern is required to allocate memory for C++ constants.
}
8.11 CLASS FACTORY
A class factory (CFactory in the sample code below) is a component whose main purpose
is to create other components. The terms class object and class factory are interchangeable:
a class object creates other components by implementing the standard interface called
IClassFactory. It can also be considered a COM object that lives in a server and is
responsible for creating the objects registered by that server.
Figure 8.18 illustrates the steps involved.
 The client requests the creation of a component.
 A class factory (class object) is created for that specific class identifier (CLSID).
 The client creates an instance of the component by calling CreateInstance on the
IClassFactory interface and then releases the class object.
 The requested interface pointer is returned to the client.


Figure 8.18 Class Factory


IClassFactory provides more control over the creation process of the component.
CoCreateInstance is a wrapper over CoGetClassObject and the CreateInstance method
of the IClassFactory interface. CoCreateInstance internally creates the class factory for the
specified CLSID, gets the IClassFactory interface pointer, and then creates the component by
calling CreateInstance on the IClassFactory interface pointer; it then returns the requested
interface pointer to the client by calling the QueryInterface method within the CreateInstance
method of IClassFactory.
CoGetClassObject returns a pointer to the class factory for the requested
CLSID (component).
CoCreateInstance and CoGetClassObject functions

STDAPI CoCreateInstance(REFCLSID rclsid, LPUNKNOWN pUnkOuter,
    DWORD dwClsContext, REFIID riid, LPVOID *ppv);

STDAPI CoGetClassObject(
    REFCLSID rclsid,            // CLSID associated with the class object
    DWORD dwClsContext,         // Context for running executable code
    COSERVERINFO *pServerInfo,  // Pointer to machine on which the object is to be instantiated
    REFIID riid,                // Reference to the identifier of the interface
    LPVOID *ppv                 // Address of output variable that receives the interface pointer requested in riid
);


The CoCreateInstance function takes an IUnknown pointer as an argument, whereas
CoGetClassObject takes a COSERVERINFO pointer as an argument. The COSERVERINFO
structure is meant for identifying a remote machine in the object creation functions.
IClassFactory Interface
interface IClassFactory: IUnknown {
HRESULT __stdcall CreateInstance (IUnknown *pOuterUnk,
const IID& iid,
void ** ppv);
HRESULT __stdcall LockServer (BOOL bLock);
};
The class factory implements the IClassFactory interface. The class factory object
must implement these two methods of IClassFactory to support object creation. The
client can request an interface on the component at the same time as the component's
creation via the IClassFactory::CreateInstance method. The factory creates the component
corresponding to the single CLSID that was passed to CoGetClassObject. An instance of a
class factory corresponds to a single CLSID. All classes registered in the system with a
class identifier must implement IClassFactory. The IClassFactory::LockServer method
increments and decrements the lock count, which allows the client to keep the server
loaded in memory even when it is serving no components.
Class factory needs to be modified to support out-of-proc servers. Serving components
from an EXE is different from serving components from a DLL. However, the code for the
components themselves will remain unchanged. The code uses the symbol
_OUTPROC_SERVER_ to indicate sections that are specific to local servers (when the
symbol is defined) or specific to in-proc servers (when it isn’t defined).
8.11.1 Object Creation Process
CoGetClassObject, which provides an interface pointer on a class object associated
with a CLSID, needs an entry point in the DLL to create the component’s class factory. In
most cases the class factory is implemented along with the component in the same DLL.
The entry point is called DllGetClassObject, which creates the class factory. The client
initiates the creation process by calling the CoGetClassObject function. The
CoGetClassObject looks for a component in the system registry. If it finds the component, it
loads the COM server, i.e. DLL, into a memory and then calls DllGetClassObject. The
purpose of DllGetClassObject is to create the class object by calling the new operator for a
class object. DllGetClassObject queries the class object for the IClassFactory interface,
which is returned to the client.
The client, after receiving the IClassFactory interface pointer, calls the CreateInstance
method. The IClassFactory::CreateInstance method calls the new operator to create the
component. In addition to calling the new operator it also calls QueryInterface on the
component for the iid interface.
As soon as the component is created and the interface (requested) pointer is returned
to the client, the purpose of the class object (i.e. to create a component) is achieved and the
client can release the class factory for that specific component (CLSID). The client can use
the returned pointer on the component to call methods of that component.
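This creation chain can be mimicked in portable C++: a class-factory object whose only job is to manufacture components, located through a table keyed by a class id. All names here are illustrative; real COM adds IUnknown, registry lookup, and DLL loading:

```cpp
#include <cassert>
#include <string>

struct IComponent { virtual int Ping() = 0; virtual ~IComponent() {} };

struct CAccount : IComponent { virtual int Ping() { return 42; } };

// The class factory: its sole purpose is to create other components.
struct ClassFactory {
    IComponent* (*creator)();                  // per-class creation function
    IComponent* CreateInstance() { return creator(); }
};

IComponent* CreateAccount() { return new CAccount; }

// Stand-in for CoGetClassObject: map a class id to its factory.
ClassFactory* GetClassObject(const std::string& clsid) {
    static ClassFactory accountFactory = { CreateAccount };
    if (clsid == "CLSID_Account") return &accountFactory;
    return 0;                                  // plays the role of "Class Not Registered"
}
```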


Every COM DLL server should implement and export a function called
DllGetClassObject. If a COM DLL does not provide DllGetClassObject, then the call to
CoGetClassObject or CoCreateInstance returns an error "Class Not Registered".
DllGetClassObject is an entry point which basically creates the class factory for the specific
class (CLSID). CoGetClassObject looks for a component in the registry and, if found, the
component’s CLSID is passed as an argument to CoGetClassObject. If it finds the server,
then it loads the COM DLL server (calls DllMain of the COM Server) that encapsulates
that specific component. After loading the DLL, CoGetClassObject tries to get the address
of DllGetClassObject (exported by the COM DLL) by calling the GetProcAddress function.
If that fails, then the COM SCM returns an error called "Class Not Registered" because
there is no way to create a class factory for the requested component.
8.11.2 Sample code explanation
In the sample code, a single class object has been implemented for multiple COM
classes (COM components). There could be a one-to-one mapping between class object
and COM component supported. In that case, the DllGetClassObject should create a class
object corresponding to the requested CLSID. This case is easy, as we can have multiple
if cases in DllGetClassObject and call the new operator for the class object corresponding
to the requested CLSID.
The purpose of the class object is to create another object (the COM component), so a
different class factory for each class would cause unnecessary duplication of code and
make the code less readable. The COM server can instead be designed so that a single
class object supports multiple COM components. This is what has been implemented in
the sample code.
This has been achieved by the use of the helper function (creator function), which
every COM class must support for its creation. There is a structure called FactoryInfo,
which maps the creation function with the corresponding CLSID for all the COM classes
that are exposed by the COM server.
struct FactoryInfo
{
const CLSID *pCLSID;
FPCOMPCREATOR pFunc;
};
When the client calls CoGetClassObject, the DllGetClassObject function is called with the
requested CLSID as an argument. The DllGetClassObject function looks into the global
array of FactoryInfo structures and traverses the array to fetch the address of the creator
function, which is mapped to the requested CLSID. The class factory class stores this
address in one of its data members, called pCreator, of FPCOMPCREATOR type. This
stored address of the creation function in the CFactory class is used in the CreateInstance
method of the IClassFactory interface. The address of the creator function is passed to the
CFactory at the time of its creation by passing an argument of FPCOMPCREATOR type in
the constructor of CFactory class.
// Code snippet.
// Traverse a list to find the helper function which corresponds to the requested CLSID.
int iCount;
for (iCount = 0; iCount < 2; iCount++)   // two entries in gFactoryData
{
    if (*gFactoryData[iCount].pCLSID == clsid)
    {
        break;
    }
}
CFactory *pFactory = new CFactory(gFactoryData[iCount].pFunc);

The above code is a part of the DllGetClassObject function. It traverses the global
array of FactoryInfo structures and looks for the creation function's address corresponding
to the requested CLSID. Once it gets the address of the creator function, it stops traversing
the array and passes that address to the constructor of the CFactory class. The CFactory
stores this address of the creator function for further use. Once the IClassFactory interface
on the class object is returned to the client, the client can call the CreateInstance method of
IClassFactory to create an instance of the COM class. The CreateInstance call is the place
where the COM component is created and the requested interface is returned on that newly
created COM component's instance. The creator function, whose address has been stored
in CFactory's (the class object's) data member, is called in the CreateInstance method and
the requested interface is returned to the client.
// Code snippet.
typedef HRESULT (*FPCOMPCREATOR) (const IID&, void**);
class CFactory : public IClassFactory
{
public:
// Rest of the code has been removed from here to make it readable.
CFactory(FPCOMPCREATOR);
~CFactory();
private:
/* This is to store the address of the creator function of the
* COM component with the requested CLSID.
*/
FPCOMPCREATOR pCreator;
long m_cRef;
};
This is a call to the creator function in the CreateInstance method of the CFactory
class.
hResult = (*pCreator)(iid,ppv);
FPCOMPCREATOR is a synonym for a “pointer to a function which takes const IID
& and void** as an argument and returns an HRESULT”.
The code has been commented to make it self-explanatory. Using a single class object
for multiple COM classes helps in understanding the class factory concept in COM.
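The table-of-creator-functions idea can be illustrated outside COM with a small, self-contained C++ sketch. All names here (ClassId, ToyFactoryInfo, LookupCreator, the two shape classes) are hypothetical stand-ins for illustration, not part of the COM API:

```cpp
// Toy stand-ins for COM's CLSID and component classes (hypothetical names,
// not the real COM API).
enum class ClassId { Circle, Square };

struct IShape {
    virtual const char* Name() const = 0;
    virtual ~IShape() {}
};

struct Circle : IShape { const char* Name() const override { return "Circle"; } };
struct Square : IShape { const char* Name() const override { return "Square"; } };

// Creator-function type: the analogue of FPCOMPCREATOR in the sample code.
typedef IShape* (*CreatorFn)();

static IShape* CreateCircle() { return new Circle(); }
static IShape* CreateSquare() { return new Square(); }

// Analogue of the global FactoryInfo array: maps each id to its creator.
struct ToyFactoryInfo {
    ClassId   id;
    CreatorFn create;
};

static const ToyFactoryInfo gToyFactoryData[] = {
    { ClassId::Circle, CreateCircle },
    { ClassId::Square, CreateSquare },
};

// Analogue of DllGetClassObject's lookup: traverse the table and return the
// creator mapped to the requested id, or a null pointer when the id is not
// registered ("Class Not Registered" in COM terms).
static CreatorFn LookupCreator(ClassId id) {
    for (const ToyFactoryInfo& info : gToyFactoryData) {
        if (info.id == id) {
            return info.create;
        }
    }
    return nullptr;
}
```

Calling LookupCreator with a registered id and invoking the returned function creates the corresponding object, so one lookup routine serves any number of registered classes; this is the same reason the sample code prefers a single class object over one factory class per component.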
8.12 INTRODUCTION TO .NET PLATFORM


.NET is a collection of tools, technologies, and languages that all work together in a
framework to provide the solutions that are needed to easily build and deploy truly robust
enterprise applications. These .NET applications are also able to easily communicate with
one another and provide information and application logic, regardless of platforms and
languages.
The .NET (DOTNET) platform is a development framework that provides a new
Application Programming Interface (API) to new services and APIs for classic Windows
operating systems, while bringing together a number of technologies that emerged from
Microsoft during the late 1990s. These include COM+ Component Services, a commitment
to XML and object-oriented design, support for new web service protocols such as SOAP,
WSDL and UDDI, and a focus on the Internet, all integrated within the Distributed interNet
Applications (DNA) architecture.
The platform consists of three product groups:
 A set of languages, including C# and VB, a set of development tools including
Visual Studio .Net, a class library for building web services and web and windows
applications, as well as the Common Language Runtime (CLR) to execute objects
built within this framework.
 Two generations of .Net Enterprise servers
 New .Net-enabled non-PC devices, from cell phones to game boxes

8.12.1 .NET Framework - Overview


Microsoft .Net supports not only language independence, but also language integration.
This means that we can inherit from classes, catch exceptions, and take advantage of
polymorphism across different languages. The .Net framework makes this possible with a
specification called the Common Type System (CTS) that all .NET components must
obey. For example, everything in .NET is an object of a specific class that derives from
the root class called System.Object. The CTS supports the general concept of classes,
interfaces and delegates.

Additionally, .NET includes a Common Language Specification (CLS), which
provides a series of basic rules that are required for language integration. The CLS
determines the minimum requirements for being a .NET language. Compilers that conform
to the CLS create objects that can interoperate with one another. The entire Framework
Class Library (FCL) can be used by any language that conforms to the CLS.

8.12.2 .NET Framework Architecture

The .NET Framework sits on top of the operating system and the architecture is
given in Figure 8.19

.NET Framework

Web Services | Web Forms | Windows Forms
Data and XML Classes (ADO.NET, SQL, XSLT, XPath, XML, etc.)
Framework Base Classes (IO, string, net, security, threading, text, reflection, collections, etc.)
Common Language Runtime (debug, exception, type checking, JIT compilers)
Windows Platform

Figure 8.19 .NET Framework Architecture

The .NET Framework consists of a number of components as follows:
 Five official languages: C#, VB, Visual C++, Visual J# and JScript.NET
 The CLR, an object-oriented platform for Windows and web development that all
these languages share
 A number of related class libraries, collectively known as the Framework Class
Library.

Common Language Runtime (CLR)

The most important component of the .NET Framework is the CLR, which provides
the environment in which programs are executed. The CLR includes a virtual machine,
analogous in many ways to the Java Virtual Machine (JVM). The CLR
 Activates objects
 Performs security checks on objects
 Lays out objects in memory
 Executes objects
 Garbage-collects the objects


Framework Class Library (FCL)


The FCL is one of the largest class libraries and provides an object-oriented API for
all the functionality that the .NET platform encapsulates. It has more than 4000 classes, and
it facilitates rapid development of desktop, client/server, and other web services and
applications. In Figure 8.19, the layer on top of the CLR is a set of framework classes,
followed by an additional layer of Data and XML classes, and another layer of classes
intended for Web Services, Web Forms and Windows Forms. Collectively, these classes
make up the FCL.

Framework Base Class

The Framework Base classes form the lowest level of the FCL. These classes
support input and output, string manipulation, security management, network
communication, thread management, text manipulation, reflection, collections
functionality, etc.

Data and XML Classes

Above this level is a tier of classes that extend the base classes to support data
management and XML manipulation. The data classes support persistent management of
data that is maintained on backend databases. These classes include the Structured Query
Language (SQL) classes to let you manipulate persistent data stores through a standard
SQL interface. The .NET Framework also supports a number of classes to let us manipulate
XML data and perform XML searching and translations.

Web Services, Web Forms, Windows Forms

Extending the Framework Base classes and the data and XML classes is a tier of
classes geared toward building applications using three different technologies:
 Web Services
 Web Forms
 Windows Forms.
Web services include a number of classes that support the development of lightweight
distributed components, which will work even in the face of firewalls and NAT software.
Because web services employ standard HTTP and SOAP as underlying communications
protocols, these components support plug and play across cyberspace. Web Forms and
Windows Forms allow us to apply Rapid Application Development (RAD) techniques to
building web and windows applications. Simply drag and drop controls onto your form,
double-click a control and write the code to respond to the associated event.
8.13 MARSHALING IN .NET
To recap, marshaling is the process of preparing an object to be moved across a
boundary. Boundaries exist at various levels of abstraction in a program. The most obvious
boundary is between objects running on different machines; on a single machine, objects
might need to be marshaled across context, app domain, or process boundaries.
A process is essentially a running application. If an object in a word processor wants
to interact with an object in a spreadsheet, they must communicate across process boundaries.
Processes are divided into application domains (often called app domains); these in turn are
divided into various contexts. App domains act like lightweight processes, and contexts
create boundaries within which objects with similar rules can be contained. At times, objects
will be marshaled across context and app domain boundaries, as well as across process
and machine boundaries. When an object is marshaled by value, it appears to be sent through
the wire from one computer to another. A sink is an object whose job is to enforce policy,
and a formatter makes sure the message is in the right format.
This section demonstrates how objects can be marshaled across various boundaries
using proxies and stubs. In addition, this section explains the role of formatters, channels,
and sinks, and how to apply these concepts to programming.
8.13.1 Application Domains
Each .NET application runs in its own process. If you have Word, Excel, and Visual
Studio open, you have three processes running. If you open Outlook, another process starts
up. Each process is subdivided into one or more application domains. An app domain acts
like a process but uses fewer resources.
App domains can be independently started and halted. They are secure, lightweight,
and versatile. An app domain can provide fault tolerance; if you start an object in a second
app domain and the object crashes, it will bring down the app domain but not your entire
program. Hence, web servers might use app domains for running users’ code, so that, if the
code has a problem, the web server can maintain operations.

An app domain is encapsulated by an instance of the AppDomain class, which offers
a number of methods and properties. A few of the most important are listed in Table 8.3.


Table 8.3 Methods and properties of the AppDomain class

Method or Property          Details
CurrentDomain               Public static property that returns the application
                            domain for the current thread
CreateDomain( )             Overloaded public static method that creates a new
                            application domain
GetCurrentThreadId( )       Public static method that returns the current thread
                            identifier
Unload( )                   Public static method that unloads the specified app
                            domain
FriendlyName                Public property that returns the friendly name for
                            this app domain
DefineDynamicAssembly( )    Overloaded public method that defines a dynamic
                            assembly in the current app domain
ExecuteAssembly( )          Public method that executes the designated assembly
GetData( )                  Public method that gets the value stored in the
                            current application domain given a key
Load( )                     Public method that loads an assembly into the
                            current app domain
SetAppDomainPolicy( )       Public method that sets the security policy for the
                            current app domain
SetData( )                  Public method that puts data into the specified app
                            domain property



App domains also support a variety of events – including AssemblyLoad, AssemblyResolve,
ProcessExit, and ResourceResolve – that are fired as assemblies are found, loaded, run,
and unloaded. Every process has an initial, default app domain and can have additional app
domains as you create them. Each app domain exists in exactly one process.
There may be times when a single domain is insufficient. For example, you may need
to run a library written by another programmer. A second app domain can be used to isolate
the library in its own domain, so that if a method in the library crashes, only the isolated
domain is affected rather than the whole program. For example, a web server might create
a new app domain for each plug-in application; this provides fault tolerance, so that if one
web application crashed, it would not bring down the web server. The other library might
also require a different security environment; creating a second app domain allows the
two security environments to coexist. Each app domain has its own security, and the app
domain serves as a security boundary.

App domains should be distinguished from threads. A Win32 thread exists in one app
domain at a time, and a thread can access (and report) the app domain in which it is
executing. App domains are used to isolate applications; within an app domain there might
be multiple threads operating at any given moment.

8.13.2 Creating and Using APP Domains

A new app domain can be created by calling the static method CreateDomain( ) on
the AppDomain class:

AppDomain ad2 = AppDomain.CreateDomain("Shape Domain");

This creates a new app domain with the name Shape Domain. You can check the
name of the domain you're working in with the property System.AppDomain.
CurrentDomain.FriendlyName. Once an AppDomain object has been instantiated,
instances of classes, interfaces, etc. can be created using its CreateInstance( ) method,
given below:
public ObjectHandle CreateInstance (
string assemblyName,
string typeName,
bool ignoreCase,
BindingFlags bindingAttr,
Binder binder,
Object[ ] args,
CultureInfo culture,
Object[ ] activationAttributes,
Evidence securityAttributes
);
To use it:
ObjectHandle oh = ad2.CreateInstance(
"ProgCSharp", // the assembly name
"ProgCSharp.Shape", // the type name with namespace
false, // ignore case
System.Reflection.BindingFlags.CreateInstance, // flag
null, // binder
new object[ ] {3,5}, // args
null, // culture
null, // activation attributes
null ); // security attributes

The first parameter (ProgCSharp) is the name of the assembly, and the second
(ProgCSharp.Shape) is the name of the class. The class name must be fully qualified by
namespaces.
A binder is an object that enables dynamic binding of an assembly at runtime. Its job is
to allow you to pass in information about the object you want to create, to create that object
for you, and to bind your reference to that object. In the vast majority of cases, including
this example, you'll use the default binder, which is accomplished by passing in null. It is
possible to write your own binder: for example, one that checks your ID against special
permissions in a database and reroutes the binding to a different object, based on your
identity or your privileges.
Binding typically refers to attaching an object name to an object. Dynamic binding
refers to the ability to make that attachment when the program is running, as opposed to
when it is compiled. In this example, the Shape object is bound to the instance variable at
runtime, through the app domain’s CreateInstance( ) method. Binding flags help the binder
fine-tune its behavior at binding time. This example uses the BindingFlags enumeration
value CreateInstance. The default binder normally looks at public classes only for binding,
but you can add flags to have it look at private classes if you have the right permissions.
When you bind an assembly at runtime, don’t specify the assembly to load at compile
time; rather, determine which assembly you want programmatically, and bind your variable
to that assembly when the program is running.
The constructor you’re calling takes two integers, which must be put into an object
array (new object[ ] {3,5}). You can send null for the culture because you’ll use the default
(en) culture and won’t specify activation attributes or security attributes.
CreateInstance( ) returns an object handle, a type that is used to pass an object (in a
wrapped state) between multiple app domains without loading the metadata for the wrapped
object in each app domain through which the ObjectHandle travels. You can get the actual
object itself by calling Unwrap( ) on the object handle and casting the resulting object to
the actual type, in this case Shape.
The CreateInstance( ) method provides an opportunity to create the object in a new
app domain. If you were to create the object with new, it would be created in the current
app domain.

8.13.3 Marshaling Across App Domain Boundaries

You’ve created a shape object in the shape domain, but you’re accessing it through a
shape object in the original domain. To access the shape object in another domain, you must
marshal the object across the domain boundary.


Marshaling is the process of preparing an object to move across a boundary. Marshaling
is accomplished in two ways: by value or by reference. When an object is marshaled by
value, a copy is made and sent across the boundary. Marshaling by reference is almost like
sending your own calculator to do the calculations; however, it works using a proxy. When
a button is pressed on the proxy calculator, it sends a signal to the original calculator, and
the number appears over there. Pressing buttons on the proxy looks and feels just like using
the original calculator.
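The calculator analogy can be mimicked in a few lines of plain C++ (a toy sketch under the stated analogy, not the .NET marshaling machinery; ToyCalculator and ToyCalculatorProxy are hypothetical names): returning a copy models marshal by value, while a proxy that forwards to a pointer models marshal by reference.

```cpp
// Toy "remote" object standing in for the calculator in the analogy above.
struct ToyCalculator {
    int display;
    ToyCalculator() : display(0) {}
    void Press(int n) { display = n; }
};

// Marshal by value: the far side receives an independent copy, so changes
// made through the copy never reach the original.
static ToyCalculator MarshalByValue(const ToyCalculator& original) {
    return original;   // a copy crosses the "boundary"
}

// Marshal by reference: the far side receives a proxy; every button press
// is forwarded to the one original object.
struct ToyCalculatorProxy {
    ToyCalculator* target;
    explicit ToyCalculatorProxy(ToyCalculator* t) : target(t) {}
    void Press(int n) { target->Press(n); }    // forwarded to the original
    int Display() const { return target->display; }
};
```

Pressing a button on the copy leaves the original display unchanged, while pressing it on the proxy changes the original, which is exactly the behavioral difference the Shape example in Section 8.13.5 demonstrates.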
8.13.4 Understanding Marshaling with Proxies
What actually happens when you marshal by reference? The CLR provides your
calling object with a transparent proxy (TP). The job of the TP is to take everything known
about your method call (the return value, the parameters, etc.) off the stack and stuff it
into an object that implements the IMessage interface. That IMessage is passed to a
RealProxy object.
RealProxy is an abstract base class from which all proxies derive. You can implement
your own real proxy, or any of the other objects in this process except for the transparent
proxy. The default real proxy hands the IMessage to a series of sink objects.
Any number of sinks can be used depending on the number of policies you wish to
enforce, but the last sink in a chain will put the IMessage into a channel. Channels are split
into client-side and server-side channels, and their job is to move the message across the
boundary. Channels are responsible for understanding the transport protocol. The actual
format of a message as it moves across the boundary is managed by a formatter. The .NET
Framework provides two formatters: a SOAP formatter, which is the default for HTTP
channels, and a Binary formatter, which is the default for TCP/IP channels. You are free to
create your own formatters.
Once a message is passed across a boundary, it is received by the server-side channel
and formatter, which reconstitute the IMessage and pass it to one or more sinks on the
server side. The final sink in a sink chain is the StackBuilder, whose job is to take the
IMessage and turn it back into a stack frame so that it appears to be a function call to the
server.
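The pipeline described above can be roughly sketched in C++ (toy types only; nothing here comes from System.Runtime.Remoting): a message passes through a chain of policy sinks, is serialized by a "formatter" for the channel, and is parsed back on the server side, where the real StackBuilder would turn it into a stack frame.

```cpp
#include <functional>
#include <string>
#include <vector>

// A stand-in for IMessage: just a method name and one argument.
struct ToyMessage {
    std::string method;
    int argument;
};

// A sink enforces one policy; the chain is applied in order, the way the
// real proxy hands an IMessage down a chain of sink objects.
typedef std::function<ToyMessage(ToyMessage)> Sink;

static ToyMessage RunSinkChain(ToyMessage msg, const std::vector<Sink>& sinks) {
    for (const Sink& sink : sinks) {
        msg = sink(msg);
    }
    return msg;
}

// The "formatter": serialize the message into a wire string, as the SOAP or
// Binary formatter would, and parse it back on the server side.
static std::string Format(const ToyMessage& msg) {
    return msg.method + ":" + std::to_string(msg.argument);
}

static ToyMessage Parse(const std::string& wire) {
    std::size_t colon = wire.find(':');
    ToyMessage msg = { wire.substr(0, colon), std::stoi(wire.substr(colon + 1)) };
    return msg;
}
```

Each policy (logging, auditing, encryption) would be one entry in the sink vector; the last sink would hand the formatted string to a channel object that understands the transport protocol.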
8.13.5 Specifying the Marshaling Method
To illustrate the distinction between marshaling by value and marshaling by reference,
the next example will tell the Shape object to marshal by reference but give it a member
variable of type Point, which we'll specify as marshal by value. Note that each time you
create a class that might be used across a boundary, it is necessary to choose how it will be
marshaled. Normally, objects can't be marshaled at all; you must take action to indicate
that an object can be marshaled, either by value or by reference.
The easiest way to make an object marshal by value is to mark it with the Serializable
attribute. When an object is serialized, its internal state is written out to a stream, either for
marshaling or for storage.
[Serializable]
public class Point
The easiest way to make an object marshal by reference is to derive its class from
MarshalByRefObject:


public class Shape : MarshalByRefObject

The Shape class will have just one member variable, upperLeft. This variable will be
a Point object, which holds the coordinates of the upper-left corner of the shape. The
constructor for Shape will initialize its Point member:
public Shape(int upperLeftX, int upperLeftY)
{
Console.WriteLine("[{0}] {1}",
System.AppDomain.CurrentDomain.FriendlyName,
"Shape Constructor");
upperLeft = new Point(upperLeftX, upperLeftY);
}
// Shape with a method for displaying its position:
public void ShowUpperLeft()
{
Console.WriteLine("[{0}] Upper left: {1}, {2}",
System.AppDomain.CurrentDomain.FriendlyName,
upperLeft.X, upperLeft.Y);
}
// A second method for returning its upperLeft member variable:
public Point GetUpperLeft ( )
{
return upperLeft;
}
The Point class is very simple as well. It has a constructor that initializes its two
member variables and accessors to get their values. Once you create the Shape, ask it for
its coordinates:
s1.ShowUpperLeft( ); // ask the object to display
Then ask it to return its upperLeft coordinate as a Point object that you'll change:
Point localPoint = s1.GetUpperLeft( );
localPoint.X = 500;
localPoint.Y = 600;

Ask that point to print its coordinates, and then ask the Shape to print its coordinates.
So, will the change to the local Point object be reflected in the Shape? That depends on
how the Point object is marshaled. If it is marshaled by value, the local Point object will be
a copy, and the Shape object will be unaffected by changing the localPoint variables’
values. If, on the other hand, you change the Point object to marshal by reference, you’ll
have a proxy to the actual upperLeft variable, and changing that will change the Shape.


The example given below illustrates this point. Build the example in a project named
Marshaling; when Main( ) instantiates the Shape object in the second app domain,
CreateInstance( ) looks for the Marshaling assembly (Marshaling.exe).
#region Using directives
using System;
using System.Collections.Generic;
using System.Runtime.Remoting;
using System.Reflection;
using System.Text;
#endregion
namespace Marshaling
{
// for marshal by reference comment out
// the attribute and uncomment the base class
[Serializable]
public class Point // : MarshalByRefObject
{
private int x;
private int y;
public Point(int x, int y)
{
Console.WriteLine( “[{0}] {1}”,
System.AppDomain.CurrentDomain.FriendlyName,
“Point Constructor”);
this.x=x;
this.y=y;
}
public int X
{
get
{
Console.WriteLine( “[{0}] {1}”,
System.AppDomain.CurrentDomain.FriendlyName,
“Point x.get”);
return this.x;
}


set
{
Console.WriteLine( “[{0}] {1}”,
System.AppDomain.CurrentDomain.FriendlyName,
“point x.set”);
this.x=value;
}
}
public int Y
{
get
{
Console.WriteLine( “[{0}] {1}”,
System.AppDomain.CurrentDomain.FriendlyName,
“point y.get”);
return this.y;
}
set
{
Console.WriteLine( “[{0}] {1}”,
System.AppDomain.CurrentDomain.FriendlyName,
“point y.set”);
this.y=value;
}
}
}

// the shape class marshals by reference


public class Shape : MarshalByRefObject
{
private Point upperLeft;
public Shape(int upperLeftX, int upperLeftY)
{
Console.WriteLine( “[{0}] {1}”,
System.AppDomain.CurrentDomain.FriendlyName,
“Shape Constructor”);
upperLeft = new Point(upperLeftX, upperLeftY);

}
public Point GetUpperLeft ()
{
return upperLeft;
}
public void ShowUpperLeft ( )
{
Console.WriteLine( "[{0}] Upper left: {1}, {2}",
System.AppDomain.CurrentDomain.FriendlyName,
upperLeft.X, upperLeft.Y);
}
}
public class Tester
{
public static void Main( )
{
Console.WriteLine( “[{0}] {1}”,
System.AppDomain.CurrentDomain.FriendlyName,
“Entered Main”);
//create the new app domain
AppDomain ad2 =AppDomain.CreateDomain(“Shape Domain”);
// Assembly a= Assembly.LoadFrom(“ProgCSharp.exe”);
// Object theShape = a.CreateInstance(“Shape”);
// instantiate a Shape object
ObjectHandle oh = ad2.CreateInstance("Marshaling",
"Marshaling.Shape", false,
System.Reflection.BindingFlags.CreateInstance,
null, new object[ ] {3,5},
null, null, null);
Shape s1 = (Shape)oh.Unwrap( );
s1.ShowUpperLeft( ); // ask the object to display

// get a local copy? Proxy?


Point localPoint = s1.GetUpperLeft ( );
//assign new values
localPoint.X =500;
localPoint.Y = 600;
// display the value of the local point object
Console.WriteLine( “[{0}] localPoint: {1}, {2}”,


System.AppDomain.CurrentDomain.FriendlyName,
localPoint.X, localPoint.Y);
s1.ShowUpperLeft(); //show the value once more
}
}
}
Output:
[Marshaling.vshost.exe] Entered Main
[Shape Domain] Shape constructor
[Shape Domain] Point constructor
[Shape Domain] Point x.get
[Shape Domain] Point y.get
[Shape Domain] Upper left : 3,5
[Marshaling.vshost.exe] Point x.set
[Marshaling.vshost.exe] Point y.set
[Marshaling.vshost.exe] Point x.get
[Marshaling.vshost.exe] Point y.get
[Marshaling.vshost.exe] localPoint: 500, 600
[Shape Domain] Point x.get
[Shape Domain] Point y.get
[Shape Domain] Upper left : 3,5
The output reveals that the Shape and Point constructors run in the Shape domain, as
does the access of the value of the Point object in the Shape.
The property is set in the original app domain, setting the local copy of the Point object
to 500 and 600. Because Point is marshaled by value, however, you are setting a copy of
the Point object. When you ask the Shape to display its upperLeft member variable, it is
unchanged.
Now run the program again using marshaling by reference; comment out the Serializable
attribute and derive Point from MarshalByRefObject instead. The output is quite different.
//[Serializable]
public class Point : MarshalByRefObject
[Marshaling.vshost.exe] Entered Main
[Shape Domain] Shape constructor
[Shape Domain] Point constructor
[Shape Domain] Point x.get
[Shape Domain] Point y.get
[Shape Domain] upper left : 3,5

[Shape Domain] Point x.set
[Shape Domain] Point y.set
[Shape Domain] Point x.get
[Shape Domain] Point y.get
[Marshaling.vshost.exe] localPoint: 500, 600
[Shape Domain] Point x.get
[Shape Domain] Point y.get
[Shape Domain] upper left : 500, 600
This time you get a proxy for the point object and the properties are set through the
proxy on the original Point member variable. Thus, the changes are reflected within the
Shape itself.
8.14 REMOTING IN .NET
The process of remoting is shown in Figure 8.20. In addition to being
marshaled across app domain boundaries, objects can be marshaled across process
boundaries, and even across machine boundaries. When an object is marshaled, either by
value or by proxy, across a process or machine boundary, it is said to be remoted.

Figure 8.20 Remoting


8.14.1 Understanding Server Object Types
There are two types of server objects supported for remoting in .NET: well-known and
client-activated. The communication with well-known objects is established each time a
message is sent by the client; there is no permanent connection with a well-known object,
as there is with client-activated objects.
Well-known objects come in two varieties: Singleton and single-call. With a well-
known singleton object, all messages for the object, from all clients, are dispatched to a
single object running on the server. The object is created the first time a client attempts to

connect to it, and is there to provide service to any client that can reach it. Well-known
objects must have a parameterless constructor.
With a well-known single-call object, each new message from a client is handled by a
new object. This is highly advantageous on server farms, where a series of messages from
a given client might be handled in turn by different machines depending on load balancing.
Client-activated objects are typically used by programmers who are creating dedicated
servers, which provide services to a client they are also writing. In this scenario, the client
and the server create a connection, and they maintain that connection until the needs of the
client are fulfilled.
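The dispatch difference between the two well-known modes can be sketched in C++ (toy code with hypothetical names, not the .NET remoting API): a singleton dispatcher routes every message to one long-lived instance, while a single-call dispatcher constructs a fresh instance per message.

```cpp
// A toy server object that counts how many messages it has handled.
struct CounterServer {
    int handled;
    CounterServer() : handled(0) {}
    int Handle() { return ++handled; }
};

// Singleton mode: every message, from every client, reaches the same
// long-lived instance, so state accumulates across calls.
struct SingletonDispatcher {
    CounterServer instance;
    int Dispatch() { return instance.Handle(); }
};

// Single-call mode: each message is handled by a brand-new instance, which
// is why such objects must not rely on per-instance state between calls.
struct SingleCallDispatcher {
    int Dispatch() {
        CounterServer fresh;      // created per message, like a new
        return fresh.Handle();    // object per call on a server farm
    }
};
```

Because the single-call dispatcher never reuses an instance, its counter never advances past one; this is why well-known single-call objects must not keep per-client state between messages, and why they suit load-balanced server farms.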
8.14.2 Specifying a Server with an Interface
The best way to understand remoting is to use an example. In this example, a simple
four-function calculator class is built that implements the ICalc interface.
The Calculator interface
#region using directives
using System;
using System.Collections.Generic;
using System.Text;
#endregion
namespace Calculator
{
public interface ICalc
{
double Add(double x, double y);
double Sub(double x, double y);
double Mult(double x, double y);
double Div(double x, double y);
}
}
Save this in a file named ICalc.cs and compile it into a file named Calculator.dll. To
create and compile the source file in Visual Studio, create a new project of type C# Class
Library, enter the interface definition in the Edit window, and then select Build on the
Visual Studio menu bar. Alternatively, if you have entered the source code using Notepad or
another text editor, you can compile the file at the command line by entering:
csc /t:library ICalc.cs
There are tremendous advantages to implementing a server through an interface. If
you implement the calculator as a class, the client must link to that class to declare an instance
on the client. This greatly diminishes the advantages of remoting, because changes to the
server require the class definition to be updated on the client. In other words, the client and
server would be tightly coupled. Interfaces help decouple the two objects; in fact, you can
later update the implementation of the server, and as long as the server still fulfills the
contract implied by the interface, the client need not change at all.
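The decoupling argument is language-independent. Here is the same idea as a minimal C++ sketch (hypothetical names; the document's own ICalc is the C# interface above): the client is compiled against the abstract interface only, so the server implementation can be revised or replaced without touching client code.

```cpp
// The contract shared by client and server; mirrors the C# ICalc interface.
struct ICalc {
    virtual double Add(double x, double y) = 0;
    virtual ~ICalc() {}
};

// One server implementation; it can be revised or swapped freely as long
// as it still fulfills the ICalc contract.
struct CalcServerV1 : ICalc {
    double Add(double x, double y) override { return x + y; }
};

// The client depends only on the interface, never on a concrete class.
static double ClientUse(ICalc& calc) {
    return calc.Add(2.0, 3.0);
}
```

A later CalcServerV2 could replace CalcServerV1 without recompiling ClientUse, as long as it still fulfills the ICalc contract.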
8.14.3 Building a Server
To build the server used in Example, create CalculatorServer.cs in a new project of
type C# Console Application (be sure to include a reference to Calculator.dll) and then
compile it by selecting Build on the Visual Studio menu bar. The CalculatorServer class

implements ICalc. It derives from MarshalByRefObject so that it will deliver a proxy of the
calculator to the client application:
class CalculatorServer : MarshalByRefObject, Calculator.ICalc
The implementation consists of little more than a constructor and simple methods to
implement the four functions. In this example, the logic for the server is put into the Main()
method of CalculatorServer.cs.
The first task is to create a channel. Use HTTP as the transport mechanism. You can
use the HttpChannel type provided by .NET and register it on TCP/IP port 65100:
HttpChannel chan = new HttpChannel(65100);
Next, register the channel with the CLR ChannelServices using the static method
RegisterChannel:
ChannelServices.RegisterChannel(chan);
This step informs .NET that you will be providing HTTP services on port 65100. Because
you've registered an HTTP channel and not provided your own formatter, your method calls
will use the SOAP formatter by default.
Now you're ready to ask the RemotingConfiguration class to register your well-known
object. You must pass in the type of the object you want to register, along with an endpoint.
An endpoint is a name that RemotingConfiguration will associate with your type. It completes
the address: if the IP address identifies the machine and the port identifies the channel, the
endpoint indicates the exact service. To get the type of the object, you can use the static
Type.GetType() method, which returns a Type object. Pass in the fully qualified name of
the type:
Type calcType = Type.GetType("CalculatorServerNS.CalculatorServer");
Also, pass in the enumerated value that indicates whether you are registering a SingleCall
or Singleton object:
RemotingConfiguration.RegisterWellKnownServiceType
(calcType, "theEndPoint", WellKnownObjectMode.Singleton);
The call to RegisterWellKnownServiceType creates the server-side sink chain. The
example given below provides the entire source code for the server.
Example: The Calculator server
#region Using directives
using System;
using System.Collections.Generic;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;
using System.Text;
#endregion
namespace CalculatorServerNS
{
    class CalculatorServer : MarshalByRefObject, Calculator.ICalc
    {
        public CalculatorServer()
        {
            Console.WriteLine("CalculatorServer constructor");
        }
        // implement the four functions
        public double Add(double x, double y)
        {
            Console.WriteLine("Add {0} + {1}", x, y);
            return x + y;
        }
        public double Sub(double x, double y)
        {
            Console.WriteLine("Sub {0} - {1}", x, y);
            return x - y;
        }
        public double Mult(double x, double y)
        {
            Console.WriteLine("Mult {0} * {1}", x, y);
            return x * y;
        }
        public double Div(double x, double y)
        {
            Console.WriteLine("Div {0} / {1}", x, y);
            return x / y;
        }
    }
    public class ServerTest
    {
        public static void Main()
        {
            // create a channel and register it
            HttpChannel chan = new HttpChannel(65100);
            ChannelServices.RegisterChannel(chan);
            Type calcType =
                Type.GetType("CalculatorServerNS.CalculatorServer");
            // register our well-known type and tell the server
            // to connect the type to the endpoint "theEndPoint"
            RemotingConfiguration.RegisterWellKnownServiceType(
                calcType, "theEndPoint",
                WellKnownObjectMode.Singleton);
            // "They also serve who only stand and wait."
            Console.WriteLine("Press [enter] to exit...");
            Console.ReadLine();
        }
    }
}
When you run this program, it prints its self-deprecating message:
Press [enter] to exit...
and then waits for a client to ask for service.
8.14.4 Building the Client
While the CLR will pre-register the TCP and HTTP channels, you will need to register
a channel on the client if you want to receive callbacks or you are using a nonstandard
channel. For this example, you can use port 0:
HttpChannel chan = new HttpChannel(0);
ChannelServices.RegisterChannel(chan);
The client now need only connect through the remoting services, passing a Type object
representing the type of the object it needs (in our case, the ICalc interface) and the Uniform
Resource Identifier (URI) of the service:
Object obj = RemotingServices.Connect(typeof(Calculator.ICalc),
"http://localhost:65100/theEndPoint");
In this case, the server is assumed to be running on your local machine, so the URI is
http://localhost, followed by the port for the server (65100), followed in turn by the endpoint
you declared in the server (theEndPoint).
The remoting service should return an object representing the interface you’ve
requested. You can then cast that object to the interface and begin using it. Because
remoting can’t be guaranteed (the network might be down, the host machine may not be
available, and so forth), you should wrap the usage in a try block:
try
{
    Calculator.ICalc calc = obj as Calculator.ICalc;
    double sum = calc.Add(3, 4);
}
You now have a proxy of the calculator operating on the server, but usable on the
client, across the process boundary and, if you like, across the machine boundary. The
following example shows the entire client (to compile it, you must include a reference to
Calculator.dll as you did with CalculatorServer.cs).
Example: The remoting Calculator client
#region Using directives
using System;
using System.Collections.Generic;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;
using System.Text;
#endregion
namespace CalculatorClient
{
    class CalcClient
    {
        public static void Main()
        {
            // create an Http channel and register it
            // uses port 0 to indicate we won't be listening
            HttpChannel chan = new HttpChannel(0);
            ChannelServices.RegisterChannel(chan);
            Object obj = RemotingServices.Connect(
                typeof(Calculator.ICalc),
                "http://localhost:65100/theEndPoint");
            try
            {
                // cast the object to our interface
                Calculator.ICalc calc = obj as Calculator.ICalc;

                // use the interface to call methods
                double sum = calc.Add(3.0, 4.0);
                double difference = calc.Sub(3, 4);
                double product = calc.Mult(3, 4);
                double quotient = calc.Div(3, 4);

                // print the results
                Console.WriteLine("3+4 = {0}", sum);
                Console.WriteLine("3-4 = {0}", difference);
                Console.WriteLine("3*4 = {0}", product);
                Console.WriteLine("3/4 = {0}", quotient);
            }
            catch (System.Exception ex)
            {
                Console.WriteLine("Exception caught:");
                Console.WriteLine(ex.Message);
            }
        }
    }
}
Output on client:
3+4 = 7
3-4 = -1
3*4 = 12
3/4 = 0.75
Output on server:
CalculatorServer constructor
Press [enter] to exit …
Add 3 + 4
Sub 3 – 4
Mult 3 * 4
Div 3 / 4
The server starts up and waits for the user to press Enter to signal that it can shut
down. The client starts and displays a message to the console. The client then calls each of
the four operations. You see the server printing its message as each method is called, and
then the results are printed on the client. You now have code running on the server and
providing services to your client.
8.14.5 Using SingleCall
To see the difference that SingleCall makes versus Singleton, change one line in the
server's Main() method. Here's the existing code:
RemotingConfiguration.RegisterWellKnownServiceType
(calcType, "theEndPoint", WellKnownObjectMode.Singleton);
Change the mode to SingleCall:
RemotingConfiguration.RegisterWellKnownServiceType
(calcType, "theEndPoint", WellKnownObjectMode.SingleCall);
The output reflects that a new object is created to handle each request:
CalculatorServer constructor
Press [enter] to exit...
CalculatorServer constructor
Add 3 + 4
CalculatorServer constructor
Sub 3 - 4
CalculatorServer constructor
Mult 3 * 4
CalculatorServer constructor
Div 3 / 4
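The difference between the two activation modes can be mimicked outside .NET with a short simulation. The sketch below is hypothetical (Dispatcher and CalcService are illustrative names, not part of the .NET remoting API): a Singleton dispatcher constructs one service object and reuses it for every request, while a SingleCall dispatcher constructs a fresh object per call, which is why the constructor message repeats in the output above.

```cpp
#include <memory>

// Counts constructor invocations, standing in for the
// "CalculatorServer constructor" lines printed by the real server.
static int g_constructed = 0;

struct CalcService {
    CalcService() { ++g_constructed; }
    double Add(double x, double y) { return x + y; }
};

enum Mode { Singleton, SingleCall };

// Toy stand-in for the remoting layer's activation policy.
class Dispatcher {
    Mode m_mode;
    std::unique_ptr<CalcService> m_singleton;
public:
    explicit Dispatcher(Mode m) : m_mode(m) {}
    double Invoke(double x, double y) {
        if (m_mode == Singleton) {
            // one instance, created lazily and reused for all requests
            if (!m_singleton) m_singleton.reset(new CalcService());
            return m_singleton->Add(x, y);
        }
        // SingleCall: a brand-new instance services each request
        CalcService perCall;
        return perCall.Add(x, y);
    }
};
```

Three calls through a Singleton dispatcher construct CalcService once; three calls through a SingleCall dispatcher construct it three times.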
8.15 ADVANTAGES OF USING COM
Traditional operating systems (OS) dealt only with application binaries (EXEs) and not
with components. Objects in one process could not communicate with objects in another
process using their own defined methods; instead, the operating system defined certain
mechanisms of inter-process communication, such as DDE, TCP/IP sockets, memory-mapped
I/O and named pipes, and objects had to use these OS-defined mechanisms to communicate
with each other.
Components developed using Microsoft's COM provide a way by which two objects
in different object spaces or networks can talk to each other by calling each other's methods.
This technology forces the operating system to see applications as objects.
 COM forces the OS to act as a central registry for objects. The OS takes the
responsibility of creating objects when they are required, deleting them when they
are not, and handling communications between them, be it in the same or different

processes or machines. One major advantage of this mechanism is versioning. If
the COM object ever changes to a new version, the applications that use that object
need not be recompiled.
 COM components are never linked to any application. The only thing
that an application may know about a COM object is what functions it may or may
not support. In fact, the object model is so flexible that applications can query the
COM object at run-time as to what functionality it provides.
 Garbage Collection is one other major advantage to using COM. When there are
no outstanding references to an object, the COM object destroys itself.
 COM supports Marshalling. Marshalling is the process of packaging and transmitting
data between different address spaces, automatically resolving pointer problems,
preserving the data’s original form and integrity. Even though COM objects reside
in separate processes or address spaces or even different machines, the operating
system takes care of marshalling the call and calling objects running in a different
application (or address space) on a different machine.
 Over and above all this, COM is a binary standard. A fully compliant COM object
can be written in any language that can produce binary-compatible code, so you
can write them using C, C++, Java, J++ or Visual Basic. All of the Windows NT shell
has been written using COM.
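Several of the advantages listed above (the run-time QueryInterface contract, and reference counting that lets an object destroy itself when the last reference is released) can be illustrated with a small C++ sketch. This is not real COM: the interface IDs are plain strings rather than 128-bit GUIDs, and bool/long stand in for HRESULT/ULONG, but the mechanics mirror IUnknown.

```cpp
#include <cstring>

// Hypothetical interface IDs; real COM uses 128-bit GUIDs
// generated with a tool such as guidgen.exe.
typedef const char* IID_t;
static const IID_t IID_IUnknown = "IUnknown";
static const IID_t IID_ICalc    = "ICalc";

// Minimal IUnknown-style base: every object exposes these three methods.
struct ISimpleUnknown {
    virtual long AddRef() = 0;
    virtual long Release() = 0;
    virtual bool QueryInterface(IID_t iid, void** ppv) = 0;
    virtual ~ISimpleUnknown() {}
};

struct ICalc : ISimpleUnknown {
    virtual double Add(double x, double y) = 0;
};

// When the last reference is released, the object destroys itself,
// which is the "garbage collection" behavior described above.
class CoCalc : public ICalc {
    long m_refs;
public:
    CoCalc() : m_refs(1) {}
    long AddRef() { return ++m_refs; }
    long Release() {
        long n = --m_refs;
        if (n == 0) delete this;   // no outstanding references: self-destroy
        return n;
    }
    bool QueryInterface(IID_t iid, void** ppv) {
        if (std::strcmp(iid, IID_IUnknown) == 0 ||
            std::strcmp(iid, IID_ICalc) == 0) {
            *ppv = static_cast<ICalc*>(this);
            AddRef();              // every handed-out pointer is counted
            return true;
        }
        *ppv = 0;                  // interface not supported
        return false;
    }
    double Add(double x, double y) { return x + y; }
};
```

A client queries for ICalc at run-time, uses it through the returned pointer, and releases every reference it holds; the object cleans itself up when the count reaches zero.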
8.16 DISADVANTAGES OF COM
 Resiliency
Resiliency can be inconsistent with repackaged applications because the re-packager
utility may not fully understand the component dependencies or what the key paths of the
application should be. Therefore, an application may be packaged into one large feature that
gets entirely reinstalled if a component keypath is missing. If it were broken up into multiple
smaller features it would enable a more manageable resiliency.
 COM/ActiveX Registration
Component Object Model (COM) and ActiveX controls may not be properly registered.
Prior to Windows Installer, COM and ActiveX registration was a black box. Except for the
exported functions DllRegisterServer and DllUnregisterServer, COM and ActiveX
controls offered very few hints of their registration process. RegSvr32.exe was responsible
for calling the previously mentioned functions and then the DLL was responsible for
registering itself. There is no utility that can view a DLL, an OCX, or an EXE and figure out
what goes on inside DllRegisterServer and DllUnregisterServer for that file. There
are standard registry entries that most COM and ActiveX controls register, such as
HKCR\CLSID, HKCR\ProgID, and HKCR\TypeLib. Information on COM registration
may or may not get entered into the appropriate MSI tables by the re-packager.
 Isolated Components
The only way to take advantage of isolated components is to author a new MSI
package.
8.17 COMPARISON OF COM AND CORBA
COM/DCOM and CORBA are two of the most popular distributed object models that
have brought distributed computing into the mainstream. Both of these technologies have
been discussed in great detail in the current and previous units.
Both the DCOM and CORBA frameworks provide client-server communication.
To request a service, a client invokes a method implemented by a remote object, which acts
as the server in the client-server model. The interface in both cases is described using an
Interface Definition Language (IDL). However, there are differences in the way the
technologies are implemented.


Their main differences are summarized below:
 DCOM supports objects with multiple interfaces and provides a standard
QueryInterface() method to navigate among the interfaces. This also introduces
the notion of an object proxy/stub dynamically loading multiple interface proxies/
stubs in the remoting layer. CORBA allows an interface to inherit from multiple
interfaces.
 Every CORBA interface inherits from CORBA::Object, the constructor of which
implicitly performs such common tasks as object registration, object reference
generation, skeleton instantiation, etc. In DCOM, such tasks are either explicitly
performed by the server programs or handled dynamically by DCOM run-time
system.
 DCOM’s wire protocol is strongly tied to RPC, but CORBA’s is not. CORBA does
not specify a protocol for communication between a client and an object server
running on ORBs provided by the same vendor. The protocol for inter-ORB
communication between the same vendor ORBs is vendor dependent. However, in
order to support the interoperability of different ORB products, a General Inter-
ORB Protocol (GIOP) is specified. A specific mapping of the GIOP on TCP/IP
connections is defined, and known as the Internet Inter-ORB Protocol (IIOP).
The summary of corresponding terms and entities in the three layers is given in Table
8.4. The top layer is the programmers' view of DCOM and CORBA. This describes
how a client requests an object and invokes its methods, and how a server creates an object
instance and makes it available to the client. Exactly how the client is connected to the
server is totally hidden from the programmers. The middle layer consists of the infrastructure
necessary for providing the client and the server with the illusion that they are in the same
address space. The main differences between DCOM and CORBA at this layer include
how server objects are registered and when proxy/stub/skeleton instances are created. The
bottom layer specifies the wire protocol for supporting the client and the server running on
different machines. The main differences between DCOM and CORBA at this layer include
how remote interface pointers or object references are represented to convey the server
endpoint information to the client, and the standard format in which the data is marshaled
for transmission in a heterogeneous environment.

The architectures of CORBA and DCOM provide mechanisms for transparent
invocation and accessing of remote distributed objects. Though the mechanisms that they
employ to achieve remoting may be different, the approach each of them takes is broadly
similar. A more detailed comparison is given in Table 8.5.

Table 8.4 Summary of corresponding terms and entities

                                   DCOM                 CORBA
Top layer: Basic programming architecture
  Common base class                IUnknown             CORBA::Object
  Object class identifier          CLSID                interface name
  Interface identifier             IID                  interface name
  Client-side object activation    CoCreateInstance()   a method call/bind()
  Object handle                    interface pointer    object reference
Middle layer: Remoting architecture
  Name-to-implementation mapping   Registry             Implementation Repository
  Type information for methods     Type library         Interface Repository
  Locate implementation            SCM                  ORB
  Activate implementation          SCM                  OA
  Client-side stub                 proxy                stub/proxy
  Server-side stub                 stub                 skeleton
Bottom layer: Wire protocol architecture
  Server endpoint resolver         OXID resolver        ORB
  Server endpoint                  object exporter      OA
  Object reference                 OBJREF               IOR (object reference)
  Object reference generation     object exporter      OA
  Marshaling data format           NDR                  CDR
  Interface instance identifier    IPID                 object_key
Table 8.5 Comparison of COM and CORBA

DCOM: Supports multiple interfaces for objects and uses the QueryInterface() method to navigate among interfaces. This means that a client proxy dynamically loads multiple server stubs in the remoting layer, depending on the number of interfaces being used.
CORBA: Supports multiple inheritance at the interface level.

DCOM: Every object implements IUnknown.
CORBA: Every interface inherits from CORBA::Object.

DCOM: Uniquely identifies a remote server object through its interface pointer, which serves as the object handle at run-time.
CORBA: Uniquely identifies remote server objects through object references (objrefs), which serve as the object handles at run-time. These object references can be externalized into strings, which can then be converted back into an objref.

DCOM: Uniquely identifies an interface using the concept of Interface IDs (IIDs) and a named implementation of the server object using the concept of Class IDs (CLSIDs), the mapping of which is found in the registry.
CORBA: Uniquely identifies an interface using the interface name and a named implementation of the server object by its mapping to a name in the Implementation Repository.

DCOM: Remote server object reference generation is performed on the wire protocol by the object exporter.
CORBA: Remote server object reference generation is performed on the wire protocol by the Object Adapter.

DCOM: Tasks like object registration and skeleton instantiation are either explicitly performed by the server program or handled dynamically by the COM run-time system.
CORBA: The constructor implicitly performs common tasks like object registration and skeleton instantiation.

DCOM: Uses the Object Remote Procedure Call (ORPC) as its underlying remoting protocol.
CORBA: Uses the Internet Inter-ORB Protocol (IIOP) as its underlying remoting protocol.

DCOM: When a client object needs to activate a server object, it can call CoCreateInstance() or use other ways to get a server's interface pointer.
CORBA: When a client object needs to activate a server object, it binds to a naming or a trader service or uses other ways to get a server reference.

DCOM: The object handle that the client uses is the interface pointer.
CORBA: The object handle that the client uses is the object reference.

DCOM: The mapping of an object name to its implementation is handled by the registry.
CORBA: The mapping of an object name to its implementation is handled by the Implementation Repository.

DCOM: The type information for methods is held in the Type Library.
CORBA: The type information for methods is held in the Interface Repository.

DCOM: The responsibility of locating an object implementation falls on the Service Control Manager (SCM).
CORBA: The responsibility of locating an object implementation falls on the Object Request Broker (ORB).

DCOM: The responsibility of activating an object implementation falls on the Service Control Manager (SCM).
CORBA: The responsibility of activating an object implementation falls on the Object Adapter (OA), either the Basic Object Adapter (BOA) or the Portable Object Adapter (POA).

DCOM: The client-side stub is called a proxy.
CORBA: The client-side stub is called a proxy or stub.

DCOM: The server-side stub is called a stub.
CORBA: The server-side stub is called a skeleton.

DCOM: All parameters passed between the client and server objects are defined in the interface definition file; depending on what the IDL specifies, parameters are passed either by value or by reference.
CORBA: When passing parameters between the client and the remote server object, all interface types are passed by reference; all other objects, including highly complex data types, are passed by value.

DCOM: Attempts to perform distributed garbage collection on the wire by pinging. The DCOM wire protocol uses a pinging mechanism to garbage-collect remote server object references; this is encapsulated in the IOXIDResolver interface.
CORBA: Does not attempt to perform general-purpose distributed garbage collection.

DCOM: Allows you to define arbitrarily complex structs, discriminated unions and conformant arrays in IDL and pass these as method parameters. Complex types that will cross interface boundaries must be declared in the IDL.
CORBA: Complex types that will cross interface boundaries must be declared in the IDL.

DCOM: Will run on any platform as long as there is a COM service implementation for that platform.
CORBA: Will run on any platform as long as there is a CORBA ORB implementation for that platform.

DCOM: Since the specification is at the binary level, diverse programming languages like C++, Java, Object Pascal (Delphi), Visual Basic and even COBOL can be used to code these objects.
CORBA: Since this is just a specification, diverse programming languages can be used to code these objects, as long as ORB libraries are available for that language.

8.18 CONCLUSION
This unit described the Component Object Model (COM), a software architecture
that allows components made by different software vendors to be combined into a
variety of applications. COM defines a standard for component interoperability, is not
dependent on any particular programming language, is available on multiple platforms, and is
extensible. COM allows applications and systems to be built from components supplied by
different software vendors. COM is the underlying architecture that forms the foundation
for higher-level software services, like those provided by OLE. OLE services span various
aspects of component software, including compound documents, custom controls, inter-
application scripting, data transfer, and other software interactions.
HAVE YOU UNDERSTOOD QUESTIONS
1. What is COM architecture?
2. What are the benefits of COM?
3. How does COM support a binary standard?
4. What are interfaces? How are they used in COM?
5. What is IUnknown? What methods are provided by IUnknown?
6. What are the purposes of AddRef, Release and QueryInterface functions?
7. How can you create an instance of the object in COM?
8. What is marshalling?
9. What is marshalling by value and reference? What is the difference?
10. What’s the difference between COM and DCOM?
11. What is the use of QueryInterface?
12. Explain the RPC and LPC mechanisms.
13. What is the use of IUnknown Interface in COM?
14. What is Remoting?
SUMMARY
The Component Object Model (COM):
 Defines a binary standard for component interoperability
 Is programming-language-independent
 Is provided on multiple platforms
 Provides for robust evolution of component-based applications and systems
 Is extensible by developers in a consistent manner
 Uses a single programming model for components to communicate within the same
process, and also across process and network boundaries
 Allows for shared memory management between components
 Provides rich error and status reporting
 Allows dynamic loading and unloading of components
EXERCISES
Part I
1. What is IUnknown? What methods are provided by IUnknown?


2. What are the purposes of AddRef, Release and QueryInterface functions?


3. What are the three types of servers supported by COM?
4. What are Globally Unique Identifiers (GUIDs)? Where are they used?
5. Explain the role of CLR component in .NET architecture.

Part II
6. Explain QueryInterface in detail. What should a QueryInterface function do if the
requested interface is not found?
7. What is a COM interface? Discuss the attributes of the interface.
8. Discuss the advantages and disadvantages of COM architecture
9. Explain the terms proxy and stub with reference to COM. Discuss how they
support remote method invocation.
10. What is marshalling? Explain the difference between marshalling by value and by
reference.

Part III
11. Build a COM object and an MFC client to access that COM object.
COM Server: Implement a Student object with the following functions:
a. Set/Get Student ID
b. Set/Get Name
c. Set/Get numeric grade (integer) for mid term exam
d. Set/Get numeric grade (integer) for final exam
e. Set/Get numeric grade (integer) for term project
f. Calculate the grade for the course based on the weighting 50% term project, 20%
mid-term exam and 30% final exam

MFC Client: Write an MFC client that allows you to use your COM object.
12. Creation of the project
Create a new Win32 Console Application project and name it CoCarApp. Create a
new file called CoCarApp.cpp and add it to the project; make an empty main function
in it.

13. Interface Design
Create a file called interfaces.h and use the macro DECLARE_INTERFACE
to declare three interfaces: IEngine, IStatus and IConfig. All three should inherit
from IUnknown.
IEngine should have the methods Run, Start, and Stop.
IStatus should have the methods GetCurrentSpeed, GetMaxSpeed and GetOwner.

IConfig should have the methods SetMaxSpeed and SetOwner.
Select fitting parameters for all methods. Include windows.h to get the IUnknown
definition. Create another file called iid.h where you put the GUID definitions for your
interfaces (your interface identifiers). Use guidgen.exe to produce GUIDs with the
DEFINE_GUID macro. DEFINE_GUID is defined in initguid.h, which needs
windows.h to be included before it.
14. Implementing IUnknown
Make your CoClass, name it CoCar. It should inherit from all the custom interfaces.
Put it in a file called CoCar.h. Use the macros STDMETHOD and
STDMETHOD_ when declaring methods. Implement AddRef, Release and
QueryInterface in CoCar.cpp.
15. Implementing the custom interfaces
Implement the custom interfaces, with member variables whose initialization,
allocation and deallocation are done in the constructor and destructor.
16. Create the class factory
Make a global function named CarFactory (put it in CoCarApp.cpp), it should
return an HRESULT and take a void** as parameter. The function should create
an instance of the CoClass dynamically using new and query for the IUnknown
interface. If the interface is found it should return a pointer to the interface and
S_OK, otherwise NULL and E_FAIL.
17. COM Class factory
Develop a COM class factory for your CoClass. Create a new file called
CoCarClassFactory.cpp and name the class CoCarClassFactory. Your class should
inherit from IClassFactory and implement LockServer, CreateInstance as well as
qualified constructor and destructors.

REFERENCES
1. Microsoft web site and MSDN web site
2. Jason Pritchard, COM and CORBA Side by Side, Addison-Wesley
