UNIT I
NOTES
CHAPTER - 1
CLIENT / SERVER ARCHITECTURE
1.1 INTRODUCTION
The actual client/server model started gaining acceptance in the late 1980s. The term
client/server was first used in the 1980s in reference to personal computers (PCs) on a
network. The client/server software architecture provides a versatile, message-based and
modular infrastructure that is intended to improve usability, flexibility, interoperability and
scalability as compared to centralized, mainframe, time sharing computing.
This unit introduces the reader to the client / server architecture. The usage and functionality of the different types of servers: file server, database server, group server and, more recently, the object server, are explained. This unit also covers the different types of client / server models, illustrated using examples.
1.2 LEARNING OBJECTIVES
At the end of this unit, the reader must be familiar with the following concepts:
Evolution of client / server architecture
Client / Server architecture
Characteristics of client / server model
Different types of servers
Client / server on the Internet
Different types of client/server models
1.3 EVOLUTION OF CLIENT / SERVER ARCHITECTURE
Before the advent of client / server architecture, computing environments consisted of
mainframes hooked to dumb terminals that only did processing at the mainframe. In mainframe
software architectures, all intelligence (processing, data) was within the central host computer. Users interacted with the host through a terminal that captured keystrokes and sent that information to the host. As the number of users increased, the power of the mainframes also had to increase to cope with the increased processing requirements and user connectivity.
This era saw the development of very powerful mainframe computers capable of immense
processing power and providing support to hundreds of users. Computers were generally
large, costly systems owned by large corporations, universities, government agencies and
similar-sized institutions.
The advent of the Personal Computer (PC) brought drastic changes to the computing scenario. Personal computers are normally operated by one user at a time to perform general-purpose tasks such as word processing and data analysis using spreadsheet software. PCs were also widely used for multimedia applications and other entertainment such as games.
The software industry provided a wide range of products for use in personal computers,
targeted at both the expert and the non-expert user. Like the telephone, automobile, and
television before it, the PC changed the way people communicate, shop, retrieve information
and entertain themselves.
The personal computers started to replace these dumb terminals but the processing
continued to be done on the mainframe. The improved capacity of personal computers was largely ignored or used only at an individual level. With so much computing power idle, many
organizations started thinking about sharing, or splitting, some of the processing demands
between the mainframe and the PC.
Client/server technology evolved out of this movement for greater computing control
and more computing value. Client/server refers to the way in which software components
interact to form a system that can be designed for multiple users. This technology is a
computing architecture that forms a composite system allowing distributed computation,
analysis, and presentation between PCs and one or more larger computers on a network.
Each function of an application resides on the computer most capable of managing that
particular function. There is no requirement that the client and server must reside on the
same machine. In practice, it is quite common to place a server at one site in a local area
network (LAN) and the clients at the other sites. The client, a PC or workstation, is the
requesting machine and the server, a LAN file server, mini or mainframe, is the supplying
machine. Clients may be running on heterogeneous operating systems and networks to
make queries to the server(s).
1.4 CLIENT / SERVER MODEL
Client/server describes the relationship between two computer programs in which one
program, the client, makes a service request from another program, the server, which fulfills
the request. The client/server idea can be used by programs within a single computer, but it finds greater application in a network. In a network, the client/server model
provides a convenient way to interconnect programs that are distributed efficiently across
different locations.
Businesses of various sizes have various computing needs. Larger businesses may
therefore need to use more computers than smaller businesses do. This type of architecture
provides a division of labor for the computing functions required by a large business. Under
the structure of the client-server architecture, a business’s computer network will have a
server computer, which functions as the “brains” of the organization, and a group of client
computers, which are commonly called “workstations”. The server part of the client-server
architecture will be a large-capacity computer, perhaps even a mainframe, which supports multiple users and usually has a large amount of data and functionality stored on it. The client
portions of the client-server architecture are smaller computers that employees use to perform
their computer-based responsibilities.
Servers commonly contain data files and applications that can be accessed across the
network by workstations or employee computers. The server can be used to store the
organization’s data which could be accessed by the client computers. For example, client
requests for files from servers may be implemented using the File Transfer Protocol (FTP).
The server is not used just for storage only. Many networks have a client-server
architecture in which the server acts as a processing power source as well. In this scenario,
the client computers are virtually “plugged in” to the server and gain their processing power
from it. In this way, a client computer can simulate the greater processing power of a server
without having the requisite processor stored within its framework. Alternatively, the clients
may also access applications available in the server. Examples of such applications are
numerous; among the most popular are word processors and spreadsheets.
The client/server model has become one of the central ideas of network computing. In
a true client / server environment, both clients and servers must share in the business
processing. Most business applications being written today use the client/server model. In
the usual client/server model, one server is developed and activated to await client requests.
Typically, multiple client programs share the services of a common server program. Both
client programs and server programs are often part of a larger program or application.
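A minimal sketch of this request/reply relationship, using Python sockets on the local machine; the echo service, host and message format are invented purely for illustration:

```python
import socket
import threading

def serve(listener):
    # Server: accept one connection, service one request, reply.
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(("echo: " + request).encode())

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# Client: the requesting program. It connects, sends a request and
# waits for the server's reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024).decode()

listener.close()
print(reply)  # echo: hello
```

In practice the client and server would run on different machines; here both ends share one process only so the exchange is self-contained.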
1.5 CHARACTERISTICS OF CLIENT / SERVER MODEL
A client is defined as a requester of services and a server is defined as the provider of
services. Figure 1.1 illustrates a simple client-server architecture.
[Figure 1.1: A simple client-server architecture]
the bank. The request would be serviced by a server program in the main server and the
information is returned to the client, which displays the information to the user.
Consider a more complex scenario, as illustrated in Figure 1.2:
[Figure 1.2: Multiple clients connected to a main server]
Depending on the size of the organization, either a single database server or multiple database servers could be used to service users' requests. Depending on the business requirement, individual database servers could be used for each department. For a marketing department, the database would store data concerning customers, orders, sales persons and so on. Such departmental database servers would be linked to enable access to data stored in any of the servers from anywhere in the organization.
The two tier client/server architecture is a good solution for distributed computing when work groups are defined with a small number of users, limited to say about 100 people interacting on a LAN simultaneously. It does, however, have a number of limitations. When the number of users exceeds this limit, the performance begins to deteriorate.
In the Internet processing environment, the first tier, the client, generally operates in a web browser environment. The server side is where the functionality of the information service is supported; the information service provides data and responds to user queries. The client / server architecture for this type of environment is given in Figure 1.10.
[Figure 1.10: A web client sending requests to a web server and receiving responses]
Three tier with TP monitor
When the transaction processing (TP) monitor technology is embedded in the DBMS (and could be considered a two tier architecture), it is referred to as "TP Lite", because experience has shown performance degradation when over 100 clients are connected. TP monitor technology also provides
the ability to update multiple different DBMS in a single transaction
connectivity to a variety of data sources including flat files, non-relational DBMS,
and the mainframe
the ability to attach priorities to transactions
robust security
Using a three tier client/server architecture with TP monitor technology results in an
environment that is considerably more scalable than a two tier architecture with direct client
to server connection. For systems with thousands of users, TP monitor technology (not
embedded in the DBMS) has been reported as one of the most effective solutions.
Three tier with message server
Messaging is another way to implement three tier architectures. Messages are prioritized
and processed asynchronously. Messages consist of headers that contain priority information,
and the address and identification number. The message server connects to the relational
DBMS and other data sources. The difference between TP monitor technology and message
server is that the message server architecture focuses on intelligent messages, whereas the
TP Monitor environment has the intelligence in the monitor, and treats transactions as dumb
data packets. Messaging systems are good solutions for wireless infrastructures.
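The intelligent-message idea can be sketched with Python's standard priority queue; the header fields (priority, message id, address) and the sample messages are hypothetical:

```python
import queue

# Each message carries a header (priority, msg_id, address) and a body.
# Lower numbers mean higher priority.
mq = queue.PriorityQueue()
mq.put((2, 101, "billing", "generate invoice"))
mq.put((1, 102, "alerts",  "disk almost full"))
mq.put((3, 103, "reports", "nightly summary"))

# The message server drains the queue in priority order, asynchronously
# from the clients that enqueued the messages.
processed = []
while not mq.empty():
    priority, msg_id, address, body = mq.get()
    processed.append(msg_id)

print(processed)  # [102, 101, 103]
```

The high-priority alert is processed first even though it was enqueued second, which is the point of prioritized, asynchronous messaging.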
Three tier with an application server
The three tier application server architecture allocates the main body of an application
to run on a shared host rather than in the user system interface client environment. The
application server does not drive the GUIs; rather it shares business logic, computations,
and a data retrieval engine. Advantages are that with less software on the client there is less
security to worry about, applications are more scalable, and support and installation costs
are less on a single server than maintaining each on a desktop client. The application server design should be used when security, scalability, and cost are major considerations.
Three tier with an Object Server Architecture
The middle tier can be designed to be an Object Server that clients can interface to
access application objects for business processing. The server objects provide an integrated
model of the disparate data sources and back-end applications. The client objects can be
insulated from the need to know about stored procedures and databases that are present in
the third tier. The server objects communicate with the third tier to process and deliver the
client requests.
Figure 1.12 illustrates a three-tier model in the Internet. The first tier is the web client
(browser), the second tier is the Web server and the third tier is the Application server.
1.8.3 Multi-tier Model
Multi-tier models have four or more tiers. Figures 1.13, 1.14 and 1.15 illustrate different
types of four tier model over the Internet.
1.10 CONCLUSION
In a client / server architecture, clients request information or a service from a server
and that server responds to the client by acting on that request and returning the results. The
client and server could be on the same computer system or more generally the client and
server applications reside on different computer systems accessed over a network. The
client and server are two separate devices which can work over a LAN, long-distance
WANs or the Internet.
This approach to networking has proven to be a cost-effective way to share data
between tens or hundreds of clients. Client/server is just one approach to distributed
computing. The client/server model has been popular for a long time, but peer-to-peer
networking and grid technology have emerged as viable alternatives for distributed computing.
HAVE YOU UNDERSTOOD QUESTIONS?
a) What is a client / server model?
b) How did the client / server architecture evolve?
c) What are the roles and functions of client and server?
d) What are the important characteristics of the client / server architecture?
e) What are the different types of servers and what are the functionalities provided by
each type of server?
f) What are the different types of client/server models?
g) What are the advantages and disadvantages of client/server architecture?
SUMMARY
In a client/server model (also known as client/server architecture), processing is
shared between the client and server
The client issues the request and the server services the request
The client/server technology has evolved from mainframe computing environment.
The advent of PC (of low cost and with processing power) saw the replacement of
dumb terminals of mainframes with PC’s as clients capable of sharing the processing
load.
The client and server can be on the same computer or different computer systems.
Typically, the client and server are connected over a LAN or WAN using standard
communication protocols.
The important characteristics of a client / server environment include location transparency of files, data and applications, message-based exchanges, and modular, extensible designs that can be scaled to support numerous clients and multiple servers.
Servers can be classified based on the type of service they provide: file servers, database servers, application servers, object servers, etc.
Different types of client / server models can be deployed to match the processing needs of an organization or institution. These are typically 2-tier, 3-tier and multi-tier models.
Some of the key advantages of client / server architecture include sharing of processing load, sharing of resources between multiple users, easy maintenance and security of access.
Part II
11. What are the various functions of Client/Server?
12. What is the difference between file and database servers?
13. What are the objectives of a file server?
14. What are the features of public web servers?
15. Explain the role played by middle tier architecture?
Part III
16. What are the benefits provided by the server in client/server model?
17. What are the advantages / disadvantages of client / server model?
18. List out and explain the various types of server?
19. Explain the important characteristics of Client/Server architecture?
20. Explain three-tier architecture and the functions of each tier in detail with examples?
Part I
Answers:
1) a 2) d 3) b 4) c 5) c 6) a 7) a 8) b 9) b 10) d
REFERENCES
1. Websites: http://www.sei.cmu.edu/str/descriptions/clientserver_body.html, http://www.webdevelopersnotes.com/basics/client_server_architecture.php3, wikipedia.org, http://faqs.org/faqs/client-server-faq/
CHAPTER - 2
REMOTE PROCEDURE CALL / PEER-TO-PEER
2.1 INTRODUCTION
Remote Procedure Call (RPC) is a client/server infrastructure that increases the
interoperability, portability, and flexibility of an application by allowing the application to
be distributed over multiple heterogeneous platforms. It reduces the complexity of developing
applications that span multiple operating systems and network protocols by insulating the
application developer from the details of the various operating system and network interfaces.
Peer-to-peer, also known by the acronym P2P describes a type of network in which
each workstation (peer or computer system) has equivalent capabilities and responsibilities.
This differs from client/server architectures, in which some computers are dedicated as servers that service client requests.
RPC and Peer-to-Peer networking have been explained in this unit. The comparison
of peer-to-peer and client /server architecture has been discussed.
2.2 LEARNING OBJECTIVES
Overview of Remote Procedure Call
How RPC works
RPC Implementation Issues
RPC Usage Considerations
Peer-to-Peer Networking
Common Peer-to-Peer Applications
Comparison of Client/Server and Peer-to-Peer
2.3 REMOTE PROCEDURE CALL
2.3.1 RPC Overview
Remote Procedure Call (RPC) is a powerful technique for constructing distributed,
client-server based applications. The idea of RPC is quite simple. It is based on the observation that procedure calls are a well-known and well-understood mechanism for transfer of control and data within a program running on a single computer. RPC extends this mechanism to provide for transfer of control and data across a communication network.
Hence, the called procedure need not exist in the same address space as the calling procedure.
The two processes may be on the same system, or they may be on different systems with a
network connecting them. By using RPC, programmers of distributed applications avoid the
details of the interface with the network. The transport independence of RPC isolates the
application from the physical and logical elements of the data communications mechanism
and allows the application to use a variety of transports.
Remote Procedure Call is implemented as a protocol (a set of rules) that one program (the client) can use to request a service from a program (the server) located in another computer in a network, without having to understand network details. A procedure call is also sometimes
known as a function call or a subroutine call. RPC uses the client/server model of distributed
computing. An RPC is initiated by the client sending a request message to a known remote
server in order to execute a specified procedure using supplied parameters. A response is returned to the client.
It is useful to use Asynchronous RPC on the client when the remote procedure call takes a while to complete and the client can do other work on the thread before it needs the results of this RPC. The client can also make simultaneous calls to one or more servers.
For example, if a client wants to make simultaneous synchronous RPC calls to four servers,
it cannot do so with one thread. It has to spin off at least three threads and make an RPC
call in each thread. However, if it is using Asynchronous RPC, it can make all four calls on
the same thread and then wait for all of them.
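The four-call example above can be sketched in Python; a thread pool stands in for the asynchronous RPC runtime, and `fake_rpc` is a hypothetical stub whose sleep simulates a slow remote call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_rpc(server_name):
    # Stand-in for a remote call: the sleep simulates network latency
    # plus server processing time.
    time.sleep(0.1)
    return "reply from " + server_name

servers = ["server1", "server2", "server3", "server4"]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    # Issue all four calls first, then wait for all of the results,
    # instead of blocking on each call in turn.
    futures = [pool.submit(fake_rpc, s) for s in servers]
    replies = [f.result() for f in futures]
elapsed = time.monotonic() - start

print(replies)
# The four 0.1 s calls overlap, so the total is close to 0.1 s,
# not the 0.4 s that four sequential synchronous calls would take.
```

The calling thread issues everything and then waits once, which is the behaviour the asynchronous model makes possible on a single client thread.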
It is useful to use Asynchronous RPC on the server when the processing of the call will take a long time to complete. Instead of processing the call immediately, as with synchronous RPC, the server can add it to the work queue and process it later. If synchronous RPC is used, the server will have to start a thread for every RPC call. The server application can
notify the client on completion of the task.
2.3.3 ONC RPC Protocol
The Open Network Computing (ONC) Remote Procedure Call (RPC) protocol is
documented in RFC 1831. It is based on the remote procedure call model. One thread of
control logically winds through two processes: the caller’s (client) process, and a server’s
process. The caller process first sends a call message to the server process and waits
(blocks) for a reply message. The call message includes the procedure’s parameters, and
the reply message includes the procedure’s results. Once the reply message is received,
the results of the procedure are extracted, and the caller's execution is resumed. On the server
side, a process is dormant awaiting the arrival of a call message. When one arrives, the
server process extracts the procedure’s parameters, computes the results, sends a reply
message and then awaits the next call message.
However, this model is only given as an example. The ONC RPC protocol makes no
restrictions on the concurrency model implemented and others are possible. For example,
an implementation may choose to have RPC calls be asynchronous, so that the client may
do useful work while waiting for the reply from the server. Another possibility is to have the
server create a separate task to process an incoming call, so that the original server can be
free to receive other requests.
There are a few important ways in which remote procedure calls differ from local
procedure calls:
Error handling: failures of the remote server or network must be handled when
using remote procedure calls
Global variables and side-effects: since the server does not have access to the
client’s address space, hidden arguments cannot be passed as global variables or
returned as side effects.
Performance : remote procedures usually operate one or more orders of magnitude
slower than local procedure call
Authentication: since remote procedure calls can be transported over unsecured
networks, authentication may be necessary. Authentication prevents one entity
from masquerading as some other entity.
The RPC protocol can be implemented on several different transport protocols like
TCP or UDP. The RPC protocol does not care how a message is passed from one process
to another, but only with specification and interpretation of messages. However, the application
may wish to obtain information about (and perhaps control over) the transport layer through
an interface not specified in this document. For example, the transport protocol may impose
a restriction on the maximum size of RPC messages, or it may be stream-oriented like TCP
with no size limit. The client and server must agree on their transport protocol choices.
It is important to point out that RPC does not try to implement any kind of reliability and
that the application may need to be aware of the type of transport protocol underneath
RPC. If it knows it is running on top of a reliable transport such as TCP then most of the
work is already done for it. On the other hand, if it is running on top of an unreliable
transport such as UDP, it must implement its own time-out, retransmission, and duplicate
detection policies as the RPC protocol does not provide these services.
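As a sketch of what such client-side policies look like over UDP, the following example sends a request, waits with a time-out, and retransmits on silence; the one-shot echo server and the "ok:" reply format are invented for the illustration (real duplicate detection would additionally match reply xids to request xids):

```python
import socket
import threading

# A one-shot UDP echo service standing in for an RPC server on an
# unreliable transport.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server_addr = server.getsockname()

def serve_once():
    data, addr = server.recvfrom(1024)
    server.sendto(b"ok:" + data, addr)

threading.Thread(target=serve_once, daemon=True).start()

def call_with_retries(request, addr, timeout=0.5, retries=3):
    # Client-side reliability policy: send, wait with a time-out, and
    # retransmit on silence, since UDP itself gives no guarantees.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(timeout)
    try:
        for _ in range(retries):
            client.sendto(request, addr)
            try:
                reply, _ = client.recvfrom(1024)
                return reply
            except socket.timeout:
                continue               # no reply in time: retransmit
        raise TimeoutError("no reply after %d attempts" % retries)
    finally:
        client.close()

reply = call_with_retries(b"ping", server_addr)
print(reply)  # b'ok:ping'
```

Over TCP none of this retry machinery would be needed, which is exactly the distinction the text draws.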
The RPC protocol provides the fields necessary for a client to identify itself to a service,
and vice-versa, in each call and reply message. Security and access control mechanisms
can be built on top of this message authentication. Several different authentication protocols
can be supported. A field in the RPC header indicates which protocol is being used.
To summarize, RPC protocol implementations must provide for the following:
Unique specification of a procedure to be called.
Provisions for matching response messages to request messages.
Provisions for authenticating the caller to service and vice-versa.
Besides these requirements, features that detect the following are worth supporting
because of protocol roll-over errors, implementation bugs, user error, and network
administration:
RPC protocol mismatches
Remote program protocol version mismatches.
Protocol errors (such as misspecification of a procedure’s parameters).
Reasons why remote authentication failed.
Any other reasons why the desired procedure was not called.
2.3.4 RPC Implementation Issues
RPC provides a simple means for an application programmer to construct distributed
programs because it abstracts away from the details of communication and transmission.
However, the achievement of true transparency is a problem which needs to be resolved.
The following issues regarding the properties of remote procedure calls need to be considered
in the design of an RPC system if the distributed system is to achieve transparency.
Messages
The semantics of RPC are the same as those of a local procedure call. The calling
process calls and passes arguments to the procedure and it blocks while the procedure
executes.
The ONC RPC message protocol consists of two distinct structures: the call message
and the reply message. A client makes a remote procedure call to a network server and
receives a reply containing the results of the procedure’s execution. By providing a unique
specification for the remote procedure, RPC can match a reply message to each call (or
request) message. The RPC message protocol is defined using the eXternal Data
Representation (XDR) data description, which includes structures, enumerations, and unions.
The initial structure of an RPC message is as follows:
struct rpc_msg {
    unsigned int xid;
    union switch (msg_type mtype) {
    case CALL:
        call_body cbody;
    case REPLY:
        reply_body rbody;
    } body;
};
All RPC call and reply messages start with a transaction identifier, xid, which is
followed by a two-armed discriminated union. The union’s discriminant is msg_type, which
switches to one of the following message types: CALL or REPLY. The msg_type has the
following enumeration:
enum msg_type {
CALL = 0,
REPLY = 1
};
The xid parameter is used by clients matching a reply message to a call message or
by servers detecting retransmissions. The initial structure of an RPC message is followed
by the body of the message. The body of a call message has one form. The body of a reply
message, however, takes one of two forms, depending on whether a call is accepted or
rejected by the server.
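Since XDR encodes unsigned integers and enums as 4-byte big-endian values, the start of an RPC message (the xid followed by the msg_type) can be packed and unpacked as in the following sketch; it covers only these first two fields, not the full call or reply body:

```python
import struct

CALL, REPLY = 0, 1   # the msg_type enumeration from the protocol

def pack_rpc_header(xid, msg_type):
    # XDR represents unsigned ints and enums as 4-byte big-endian
    # values, so xid followed by msg_type is eight bytes on the wire.
    return struct.pack(">II", xid, msg_type)

def unpack_rpc_header(data):
    return struct.unpack(">II", data[:8])

header = pack_rpc_header(0x1234, CALL)
print(header.hex())               # 0000123400000000
print(unpack_rpc_header(header))  # (4660, 0)
```

A client would remember the xid it sent and compare it against the xid in each incoming reply, which is how replies are matched to calls and retransmitted duplicates are detected.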
The RPC protocol for a reply message varies depending on whether the call message
is accepted or rejected by the network server. A call message can be rejected by the server
for two reasons: either the server is not running a compatible version of the RPC protocol,
or there is an authentication failure. The reply message to a request contains information to
distinguish the following conditions:
RPC executed the call message successfully.
The remote program is not available on the remote system.
The remote program does not support the requested version number. The lowest
and highest supported remote program version numbers are returned.
The requested procedure number does not exist. This is usually a caller-side protocol
or programming error.
Communication transparency
The users should be unaware that the procedure they are calling is remote. The three
difficulties when attempting to achieve transparency are:
the detection and correction of errors due to communication and site failures
the passing of parameters
exception handling
Communication and site failures can result in inconsistent data because of partially
completed processes. The solution to this problem is often left to the application programmer.
Parameter passing in most systems is restricted to the use of value parameters. Exception
handling is a problem also associated with heterogeneity. The exceptions available in different
languages vary and have to be limited to the lowest common denominator.
For example, if a request is sent, but no response is received, what should the requestor
do?
ANNA UNIVERSITY CHENNAI
MIDDLE-WARE TECHNOLOGIES
If the request is blindly retransmitted, the remote procedure might be executed
twice (or more)
If the request is not retransmitted, the remote procedure might not be executed at
all
It may be possible for some remote procedures to be safely executed twice. Such procedures are said to be idempotent. Remote procedures must therefore be designed so that retransmission does not produce unintended behavior.
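The distinction can be illustrated with a hypothetical bank-balance example: setting the balance to a value is idempotent, while adding to it is not, so a retransmitted request changes the outcome:

```python
balance = {"amount": 100}

def set_balance(value):
    # Idempotent: executing it twice leaves the same state as once.
    balance["amount"] = value

def add_to_balance(delta):
    # Not idempotent: a retransmitted request changes the result.
    balance["amount"] += delta

set_balance(150)
set_balance(150)            # duplicate delivery: harmless
assert balance["amount"] == 150

add_to_balance(10)
add_to_balance(10)          # duplicate delivery: credited twice
print(balance["amount"])    # 170, not the intended 160
```

This is why non-idempotent remote procedures need duplicate detection (for example, matching xids) before it is safe to retransmit their requests.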
Location of services
In a distributed environment, servers need to advertise their services and clients need to identify compatible servers. Hence some type of directory or registry service must be implemented for registration and location of available services, as shown in Figure 2.4. The RPC runtime can access the directory service to locate the server.
Batching
The RPC architecture is designed so that clients send a call message and then wait for servers to reply that the call succeeded. This implies that clients do not compute while
servers are processing a call. However, the client may not want or need an acknowledgment
for every message sent. Therefore, clients can use RPC batch facilities to continue computing
while they wait for a response.
Batching can be thought of as placing RPC messages in a pipeline of calls to a desired
server. Batching assumes the following:
Each remote procedure call in the pipeline requires no response from the server,
and the server does not send a response message.
The pipeline of calls is transported on a reliable byte stream transport such as TCP/
IP.
Because the server sends no message, the clients are not notified of any failures that
occur. Therefore, clients must handle their own errors. Because the server does not respond
to every call, the client can generate new calls that run parallel to the server’s execution of
previous calls. Furthermore, the TCP/IP implementation can buffer many call messages,
and send them to the server with one write subroutine. This overlapped execution decreases
the inter-process communication overhead of the client and server processes as well as the
total elapsed time of a series of calls. Batched calls are buffered, so the client should eventually
perform a non-batched remote procedure call to flush the pipeline with positive
acknowledgment.
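A rough sketch of the batching idea; `BatchingClient` and its methods are hypothetical names, and a direct function call stands in for the reliable transport that would carry the pipeline:

```python
class BatchingClient:
    # Batched calls are buffered on the client and handed to the
    # server only when the pipeline is flushed.
    def __init__(self, transport):
        self.transport = transport
        self.pipeline = []

    def batch_call(self, proc, *args):
        # No response is expected for an individual batched call.
        self.pipeline.append((proc, args))

    def flush(self):
        # A final non-batched step delivers the buffer in one go and
        # returns the server's positive acknowledgments.
        acks = [self.transport(proc, args) for proc, args in self.pipeline]
        self.pipeline.clear()
        return acks

def server(proc, args):
    # Stand-in for the remote server executing one call.
    return "done: %s%r" % (proc, args)

client = BatchingClient(server)
client.batch_call("insert", 1)
client.batch_call("insert", 2)
acks = client.flush()
print(len(acks))  # 2
```

The client keeps computing between `batch_call`s; only `flush` blocks, mirroring the final non-batched call that drains the pipeline in the text above.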
Broadcasting Calls
In broadcast RPC-based protocols, the client sends a broadcast packet to the network
and waits for numerous replies. Broadcast RPC uses only packet-based protocols, such as
User Datagram Protocol/Internet Protocol (UDP/IP), for its transports. Servers that support
broadcast protocols respond only when the request is successfully processed and remain
silent when errors occur. Broadcast RPC requires the RPC port map service to achieve its
semantics. The port map daemon converts RPC program numbers into Internet protocol
port numbers. The main differences between broadcast RPC and normal RPC are as follows:
Normal RPC expects only one answer, while broadcast RPC expects one or more
answers from each responding machine.
The implementation of broadcast RPC treats unsuccessful responses as garbage
by filtering them out. Therefore, if there is a version mismatch between the
broadcaster and a remote service, the user of broadcast RPC may never know.
All broadcast messages are sent to the port-mapping port. As a result, only services
that register themselves with their port mapper are accessible through the broadcast
RPC mechanism.
Broadcast requests are limited in size to the maximum transfer unit (MTU) of the
local network. For the Ethernet system, the MTU is 1500 bytes.
Broadcast RPC is supported only by packet-oriented (connectionless) transport
protocols such as UDP/IP.
Call-back Procedures
Occasionally, the server may need to become a client by making an RPC callback to
the client’s process. To make an RPC callback, the user needs a program number on which
to make the call. The program number is dynamically generated.
Authentication
The server may require the client to identify itself before being allowed access to
services. Remote Procedure Call (RPC) authentication provides a certain degree of security.
RPC deals only with authentication and not with access control of individual services.
Each service must implement its own access control policy and reflect this policy as return
statuses in its protocol. The programmer can build additional security and access controls on
top of the message authentication. The authentication subsystem of the RPC package is
open-ended. Different forms of authentication can be associated with RPC clients. That is,
multiple types of authentication are easily supported at one time. Examples of authentication
types include UNIX®, DES, and NULL. The default authentication type is none.
The RPC protocol provisions for authentication of the caller to the server, and vice
versa, are provided as part of the RPC protocol. Every remote procedure call is authenticated
by the RPC package on the server. Similarly, the RPC client package generates and sends
authentication parameters. The call message has two authentication fields: credentials and
verifier. The reply message has one authentication field: response verifier.
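The call and reply message layout just described can be sketched as plain data holders; the class and field names below are illustrative, not taken from any real RPC library:

```java
// Sketch of the authentication-related parts of RPC call and reply
// messages. Names are illustrative, not from a real RPC implementation.
public class RpcAuthSketch {
    // Authentication flavors; AUTH_NULL corresponds to "no authentication".
    public static final int AUTH_NULL = 0;
    public static final int AUTH_UNIX = 1;
    public static final int AUTH_DES  = 3;

    // An opaque authentication blob: a flavor plus uninterpreted bytes.
    public static class OpaqueAuth {
        public final int flavor;
        public final byte[] body;
        public OpaqueAuth(int flavor, byte[] body) {
            this.flavor = flavor;
            this.body = body;
        }
    }

    // Call message: credentials identify the caller; the verifier
    // lets the server check that the credentials are genuine.
    public static class CallMessage {
        public final OpaqueAuth credentials;
        public final OpaqueAuth verifier;
        public CallMessage(OpaqueAuth cred, OpaqueAuth verf) {
            this.credentials = cred;
            this.verifier = verf;
        }
    }

    // Reply message: a single response verifier authenticates the
    // server back to the client.
    public static class ReplyMessage {
        public final OpaqueAuth responseVerifier;
        public ReplyMessage(OpaqueAuth verf) { this.responseVerifier = verf; }
    }

    public static void main(String[] args) {
        OpaqueAuth none = new OpaqueAuth(AUTH_NULL, new byte[0]);
        // Default: no authentication on either field.
        CallMessage call = new CallMessage(none, none);
        System.out.println("call flavor = " + call.credentials.flavor);
    }
}
```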
2.3.6 RPC Application Development
RPC is typically implemented in one of two ways:
within a broader, more encompassing proprietary product
by a programmer using a proprietary tool to create client/server RPC stubs
For example, a client/server application can be developed to lookup a database located
on a remote machine. A server has to be established on the remote machine that can
respond to queries. The client can retrieve information by sending a query to the remote
server for processing and obtaining the reply.
To develop an RPC application, therefore, the following steps are needed:
Specify the protocol for client server communication
Develop the client program
Develop the server program
The programs will be compiled separately. The communication protocol is achieved by
generated stubs and these stubs and RPC (and other libraries) will need to be linked in.
When program statements that use RPC are compiled into an executable program, a stub is
included in the compiled code to act as the representative of the remote procedure code.
When the program is run and the procedure call is issued, the stub receives the request and
forwards it to a client runtime program in the local computer. The client runtime program
has the knowledge of how to address the remote computer and server application and sends
the message across the network that requests the remote procedure. Similarly, the server
includes a runtime program and stub that interface with the remote procedure itself. Results
are returned the same way.
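The stub mechanism described above can be sketched in miniature. Here the "generated" stubs are written by hand, the wire format is a made-up string encoding, and an in-process function stands in for the network transport; real RPC systems generate stubs from a protocol specification:

```java
import java.util.function.Function;

// Hand-rolled sketch of the stub mechanism: the client stub marshals
// the call, a transport function stands in for the network, and the
// server stub unmarshals and invokes the real procedure.
public class StubSketch {
    // The "remote" procedure itself, living on the server side.
    public static int lookupStock(String symbol) {
        return symbol.equals("ACME") ? 42 : -1; // toy database
    }

    // Server stub: decodes the wire format and calls the procedure.
    public static String serverStub(String request) {
        String[] parts = request.split(":");       // e.g. "lookup:ACME"
        if (parts[0].equals("lookup")) {
            return Integer.toString(lookupStock(parts[1]));
        }
        return "error:unknown-procedure";
    }

    // Client stub: presents a normal local call, but really encodes
    // the request and ships it through the transport.
    public static int lookupStockRemote(Function<String, String> transport,
                                        String symbol) {
        String reply = transport.apply("lookup:" + symbol);
        return Integer.parseInt(reply);
    }

    public static void main(String[] args) {
        // In-process "network": deliver the request straight to the
        // server stub. A real transport would cross machine boundaries.
        Function<String, String> network = StubSketch::serverStub;
        System.out.println(lookupStockRemote(network, "ACME")); // prints 42
    }
}
```

From the caller's point of view, lookupStockRemote looks like an ordinary local call, which is exactly the illusion the generated stubs provide.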
Some of the terms and definitions associated with RPC are:
Client: A process such as a program or task that requests a service provided by another
program. The client process uses the requested service without having to “deal” with many
working details about the other program or the service.
Server: A process, such as a program or task, that responds to requests from a client.
Endpoint: The name, port, or group of ports on a host system that is monitored by a server
program for incoming client requests. The endpoint is a network-specific address of a server
process for remote procedure calls. The name of the endpoint depends on the protocol
sequence being used.
Endpoint Mapper (EPM): Part of the RPC subsystem that resolves dynamic endpoints in
response to client requests and, in some configurations, dynamically assigns endpoints to
servers.
Client Stub: Module within a client application containing all of the functions necessary for
the client to make remote procedure calls using the model of a traditional function call in a
standalone application. The client stub is responsible for invoking the marshalling engine and
some of the RPC application programming interfaces (APIs).
Server Stub: Module within a server application or service that contains all of the functions
necessary for the server to handle remote requests using local procedure calls.
The sequence of steps of a client / server interchange is depicted in Figure 2.3.
[Figure 2.3: Sequence of steps of a client/server RPC interchange, showing the client and server kernels and their network routines.]
2.3.8 XML-RPC
XML-RPC is a specification and a set of implementations that allow software running
on disparate operating systems, in different environments, to make procedure calls over the
Internet. It is a remote procedure call protocol that uses HTTP as the transport and XML
(Extensible Markup Language) to encode the calls, as shown in Figure 2.8.
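As a rough illustration of the encoding, a client call becomes an XML document POSTed to the server over HTTP. The sketch below only builds the request body; the method name and parameter are invented for illustration:

```java
// XML-RPC encodes a procedure call as an XML document POSTed over
// HTTP with Content-Type: text/xml. This sketch only builds the
// request body; the method name and parameter are made up.
public class XmlRpcRequest {
    public static String encodeCall(String method, int param) {
        return "<?xml version=\"1.0\"?>\n"
             + "<methodCall>\n"
             + "  <methodName>" + method + "</methodName>\n"
             + "  <params>\n"
             + "    <param><value><int>" + param + "</int></value></param>\n"
             + "  </params>\n"
             + "</methodCall>\n";
    }

    public static void main(String[] args) {
        // A client would POST this body to the server's HTTP endpoint
        // and parse the returned <methodResponse> document.
        System.out.println(encodeCall("examples.getStateName", 41));
    }
}
```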
sharing) and Kazaa (free music downloads). SETI@home, another example of P2P, is a
scientific experiment that uses Internet-connected computers in the Search for Extraterrestrial
Intelligence (SETI). You can participate by running a free program that downloads and
analyzes radio telescope data. However, it is increasingly becoming an important technique
in various areas, such as distributed and collaborative computing both on the Web and in ad-
hoc networks.
Although Peer-to-Peer networking is still an emerging area, some Peer-to-Peer concepts
are already applied successfully in different contexts. Good examples are Internet routers,
which deliver IP packets along paths that are considered efficient. These routers form a
decentralized, hierarchical network. They consider each other as peers that collaborate
in the routing process and in updating each other. Unlike centralized networks, they can
compensate node failures and remain functional as a network. But, unlike a typical P2P
application, a router by itself does not change how resources within the network are shared.
Peer-to-Peer as it has evolved today takes these concepts from the network to the application
layer, where software defines purpose and algorithms of virtual (non-physical) Peer-to-
Peer networks.
There also exist countless hybrid peer-to-peer systems. Such systems normally have a
central server that keeps information on peers and responds to requests for that information.
Peers are responsible for hosting available resources (as the central server does not have
them), for letting the central server know what resources they want to share, and for making
their shareable resources available to peers that request them.
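As a minimal sketch of this hybrid arrangement (all names and addresses invented), the central server can be little more than an index from resource names to the peers that host them:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of the hybrid peer-to-peer pattern: a central server
// only indexes which peer offers which resource; the resources
// themselves stay on the peers. All names here are made up.
public class HybridP2PRegistry {
    // resource name -> addresses of peers that share it
    private final Map<String, Set<String>> index = new HashMap<>();

    // A peer tells the central server what it is willing to share.
    public void announce(String peerAddress, String resource) {
        index.computeIfAbsent(resource, r -> new HashSet<>()).add(peerAddress);
    }

    // A peer asks where a resource can be fetched; the actual transfer
    // then happens peer-to-peer, bypassing the central server.
    public Set<String> locate(String resource) {
        return index.getOrDefault(resource, Set.of());
    }

    public static void main(String[] args) {
        HybridP2PRegistry server = new HybridP2PRegistry();
        server.announce("10.0.0.5:7000", "song.mp3");
        server.announce("10.0.0.9:7000", "song.mp3");
        System.out.println(server.locate("song.mp3").size()); // prints 2
    }
}
```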
CHAPTER - 3
MIDDLEWARE
3.1 INTRODUCTION
Middleware is the software layer that functions between the client and the server. In
the distributed computing system, middleware is defined as the software layer that lies
above the operating system and the networking software and below the applications.
Middleware consists of a set of enabling services that allow multiple processes running on
one or more computer systems to interact across a network. This technology evolved to
provide for interoperability in support of the move from mainframe computing to client/
server architecture. The role of middleware is to ease the task of designing, programming
and managing distributed applications by providing a simple, consistent and integrated
distributed programming environment.
This unit discusses the middleware architecture and the services provided by middleware.
The categories and the different types of middleware have been discussed.
3.2 LEARNING OBJECTIVES
At the end of this unit, the reader must be familiar with the following concepts:
Evolution towards middleware
Middleware architecture
Services offered by middleware
Types of middleware
General and special purpose middleware
3.3 EVOLUTION TOWARDS MIDDLEWARE
The evolution towards middleware is depicted in the illustrations given in Figures 3.1 to
3.5. From programming and working with a single computer, the IT industry has progressed
towards distributed computing and middleware. While the term middleware has been around
for a long time, middleware technology as it is used now evolved in the 1990s to
provide for interoperability in support of the move to client/server architecture.
Using the client/server architecture, the computing facilities of large scale enterprises
evolved into an enterprise wide network of information services, including applications and
databases, on the local area and wide area networks. Servers on the local area network
typically supported files and file based applications, such as electronic mail, bulletin boards,
document preparation, and printing. Local area servers also supported a directory service,
to help a desktop user to find other users and to find and connect to services of interest.
Servers on the wide area network generally supported database access, such as corporate
directories and electronic libraries, or transaction processing applications, such as purchasing,
billing, and inventory control. Some servers also acted as gateways to services offered
outside the enterprise, such as travel or information retrieval services, news feeds (weather,
stock prices, etc.), and electronic document interchange with business partners.
[Figure: The seven layers of the OSI reference model: Application, Presentation, Session, Transport, Network, Data Link, Physical.]
To generalize, middleware services are sets of distributed software that exist between
the application and the operating system and network services as shown in Figure 3.8.
[Figure 3.8: Middleware as a layer between client applications and the network.]
curves, makes infrastructure support easier, and leverages the use of a common
database, if needed.
Middleware performs the following functions.
Hiding distribution, which is the fact that an application is usually made up of many
interconnected parts running in distributed locations.
Hiding the heterogeneity of the various hardware components, operating systems
and communication protocols that are used by the different parts of an application.
Providing uniform, standard, high-level interfaces to the application developers and
integrators, so that applications can easily interoperate and be reused, ported, and
composed.
Supplying a set of common services to perform various general purpose functions,
in order to avoid duplicating efforts and to facilitate collaboration between
applications.
Using middleware has many benefits, most of which derive from abstraction:
hiding low-level details
providing language and platform independence
reusing expertise and possibly code
easing application development
As a consequence, one may expect a reduction in application development cost and
time, better quality (since most efforts may be devoted to application specific problems),
and better portability and interoperability. A potential drawback is the possible performance
penalty linked to the use of multiple software layers.
events that occur in some other component. An example is a distributed stock ticker application
where an event, such as a share price update, needs to be communicated to multiple distributed
display components, to inform traders about the update. Although the basic mechanisms for
this push-style communication are available in multicast networking protocols, additional
support is needed to achieve reliable delivery and marshalling of complex request parameters.
A slightly different coordination problem arises due to the sheer number of components
that a distributed system may have. The components, i.e. modules or libraries, of a centralized
application reside in virtual memory while the application is executing. This is inappropriate
for distributed components for the following reasons:
Hosts sometimes have to be shut down and components hosted on these machines
have to be stopped and restarted when the host resumes operation
The resources required by all components on a host may be greater than the resources
the host can provide
Depending on the nature of the application, components may be idle for long periods,
thus wasting resources if they were kept in virtual memory all the time.
For these reasons, distributed systems need to use a concept called activation that
allows for component executing processes to be started (activated) and terminated
(deactivated) independently from the applications that they execute.
The middleware should manage persistent storage of components’ state prior to
deactivation and restore components’ state during activation. Middleware should also enable
application programmers to determine the activation policies that define when components
are activated and de-activated. Given that components execute concurrently on distributed
hosts, a server component may be requested from different client components at the same
time. The middleware should support different mechanisms called threading policies to
control how the server component reacts to such concurrent requests. The server component
may be single-threaded, queue requests and process them in the order of their arrival.
Alternatively, the component may also spawn new threads and execute each request in its
own thread. Finally the component may use a hybrid threading policy that uses a pool with
a fixed number of threads to execute requests, but starts queuing once there are no free
threads in the pool.
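The three threading policies map naturally onto standard thread-pool abstractions; a sketch in Java (the serve helper is illustrative, not part of any middleware API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// The threading policies described above, expressed as executors:
// a single-threaded executor queues requests and runs them in order,
// a cached pool starts a thread per request, and a fixed pool queues
// requests once all pool threads are busy.
public class ThreadingPolicies {
    // Run n dummy "client requests" under the given policy and return
    // how many completed before shutdown.
    public static int serve(ExecutorService policy, int n)
            throws InterruptedException {
        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            policy.execute(() -> handled.incrementAndGet());
        }
        policy.shutdown();
        policy.awaitTermination(5, TimeUnit.SECONDS);
        return handled.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Single-threaded: requests queue and run in arrival order.
        System.out.println(serve(Executors.newSingleThreadExecutor(), 3));
        // Thread-per-request: a fresh (or recycled idle) thread per call.
        System.out.println(serve(Executors.newCachedThreadPool(), 3));
        // Hybrid: a fixed pool; requests queue once all threads are busy.
        System.out.println(serve(Executors.newFixedThreadPool(2), 3));
    }
}
```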
Reliability
Network protocols have varying degrees of reliability. Protocols that are used in practice
do not necessarily guarantee that every packet that a sender transmits is actually received
by the receiver and that the order in which they are sent is preserved. Thus, distributed
system implementations have to put error detection and correction mechanisms in place to
cope with these unreliabilities. Unfortunately, reliable delivery of service requests and service
results does not come for free. Reliability has to be paid for with decreases in performance.
To allow engineers to trade-off reliability and performance in a flexible manner, different
degrees of service request reliability are needed in practice.
For communication about service requests between two components, the reliabilities
that have been suggested for a distributed system are best effort, at-most-once, at-least-once
and exactly-once. Best effort service requests do not give any assurance about the
execution of the request. At-most-once requests are guaranteed to execute no more than once.
It may happen that they are not executed at all, but then the requester is notified about the failure.
At-least-once service requests are guaranteed to be executed, possibly more than once.
The highest degree of reliability is provided by exactly-once requests, which are guaranteed
to be executed once and only once. Additional reliabilities can be defined for group requests.
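At-most-once semantics can be sketched as a server that remembers which request identifiers it has already executed and suppresses duplicates caused by client retransmissions; the deposit operation below is invented for illustration:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of server-side at-most-once semantics: remember the request
// identifiers already executed and refuse duplicates, which
// retransmitting clients may produce. The operation is invented.
public class AtMostOnceServer {
    private final Set<Long> executed = new HashSet<>();
    private int balance = 0;

    // Returns true if the request was executed, false if it was a
    // duplicate and therefore suppressed.
    public boolean deposit(long requestId, int amount) {
        if (!executed.add(requestId)) {
            return false;        // already ran once: drop the duplicate
        }
        balance += amount;       // execute this request exactly once
        return true;
    }

    public int balance() { return balance; }

    public static void main(String[] args) {
        AtMostOnceServer server = new AtMostOnceServer();
        server.deposit(7L, 100);
        server.deposit(7L, 100);              // client retransmission, ignored
        System.out.println(server.balance()); // prints 100, not 200
    }
}
```

Combining this duplicate filter with client-side retries until an acknowledgement arrives is one way implementations approach the stronger exactly-once guarantee.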
The above reliability discussion applies to individual requests. It can be extended to
consider more than one request. Transactions are important primitives that are used in
reliable distributed systems. Transactions have ACID properties, which means they enable
multiple requests to be executed in an atomic, consistency-preserving, isolated and durable
manner. This means that the sequence of requests is either performed completely, or not at
all; that every completed transaction leaves the system consistent; that a transaction
is isolated from concurrent transactions; and, finally, that once the transaction is completed its
effect cannot be undone. Every middleware that is used in critical applications needs to
support distributed transactions.
Reliability may also be increased by replicating components, that is, components are
available in multiple copies on different hosts. If one component is unavailable, for example
because its host needs to be rebooted, a replica on a different host can take over and
provide the requested service. Sometimes components have an internal state and then the
middleware should support replication in such a way that these states are kept in sync.
Scalability
Scalability denotes the ability to accommodate a growing future load. In centralized or
client/server systems, scalability is limited by the load that the server host can bear. This can
be overcome by distributing the load across several hosts. The challenge of building a scalable
distributed system is to support changes in the allocation of components to hosts without
changing the architecture of the system or the design and code of any component. This can
only be achieved by respecting the different dimensions of transparency identified in the
ISO Open Distributed Processing (ODP) reference model in the architecture and design of
the system.
Access transparency, for example, demands that the way a component accesses the
services of another component is independent of whether it is local or remote. Another
example is location transparency, which demands that components do not know the physical
location of the components they interact with. If components can access services without
knowing the physical location and without changing the way they request it, load balancing
mechanisms can migrate components between machines in order to reduce the load on one
host and increase it on another host. It should again be transparent to users whether or not
such a migration occurred. This is referred to as migration transparency.
Replication can also be used for load balancing. Components whose services are in high
demand may have to exist in multiple copies. Replication transparency means that it is
transparent for the requesting components, whether they obtain a service from the master
component itself or from a replicated site.
The different transparency criteria that will lead to scalable systems are very difficult
to achieve if distributed systems are built directly on network operating system primitives.
To overcome these difficulties, middleware must support access, location, migration and
replication transparency.
Heterogeneity
The components of distributed systems may be procured off-the-shelf, may include
legacy systems and new components. As a result they are often rather heterogeneous. This
Message queues provide temporary storage when the destination program is busy or
not connected. MOM reduces the involvement of application developers with the complexity
of the master-slave nature of the client/server mechanism. MOM increases the flexibility
of architecture by enabling applications to exchange messages with other programs without
having to know what platform or processor the other application resides on within the network.
[Figure: MOM-based communication: each application calls an API, which hands messages to the MOM layer for delivery across the network transport to the remote application.]
A MOM implementation may cost more if multiple kernels are required for a
heterogeneous system, especially when a system is maintaining kernels for old
platforms and new platforms simultaneously.
MOM can be effectively combined with remote procedure call (RPC) technology:
RPC can be used for synchronous support by a MOM.
Products in this category include IBM’s MQSeries and Sun’s Java Message Queue.
A strength of MOM is that this paradigm supports asynchronous message delivery
very naturally. The client can continue processing as soon as the middleware has taken the
message. Eventually the server will send a message including the result and the client will
be able to collect that message at an appropriate time. This achieves de-coupling of client
and server and leads to more scalable systems. The weakness, at the same time, is that the
implementation of synchronous requests is cumbersome as the synchronization needs to be
implemented manually in the client. A further strength of MOM is that it supports group
communication by distributing the same message to multiple receivers in a transparent way.
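The de-coupling described above can be sketched in-process with two queues standing in for the middleware's message queues; a real MOM product would persist such queues across processes and hosts:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// In-process sketch of the MOM pattern: the client enqueues a message
// and continues; the server consumes it when ready and queues a reply
// that the client collects at a convenient time.
public class MomSketch {
    public static String sendAndCollect(String msg) throws InterruptedException {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        BlockingQueue<String> replies  = new LinkedBlockingQueue<>();

        // Server side: take a request from the queue, enqueue the result.
        Thread server = new Thread(() -> {
            try {
                replies.put("result-of-" + requests.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        server.start();

        requests.put(msg);   // hand the message to the "middleware"...
        // ...the client is free to do other work here, decoupled from
        // the server's pace...
        String reply = replies.take();  // ...and collects the reply later
        server.join();
        return reply;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sendAndCollect("query-42")); // prints result-of-query-42
    }
}
```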
However, asynchronous message passing can also introduce other problems. What
happens if a message cannot be delivered? The sender may never wait for delivery of the
message, and thus never hear about the error. Similarly, a mechanism is needed to notify an
asynchronous receiver that a message has arrived. The operation invoker could learn about
completion/errors by polling, getting a software interrupt, or by waiting explicitly for completion
later using a special synchronous wait call. An asynchronous operation needs to return a
call/transaction ID (identification) if the application needs to be later notified about the
operation. At notification time, this ID would be placed in some global location or passed as
an argument to a handler or wait call.
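The call/transaction-ID scheme, combined with the explicit synchronous wait call mentioned above, can be sketched as follows (the class and operation names are invented):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the ID-based notification scheme: an asynchronous
// invocation returns an ID immediately; the caller later waits
// explicitly on that ID (the "synchronous wait call" option).
public class AsyncIds {
    private final Map<Long, CompletableFuture<String>> pending =
        new ConcurrentHashMap<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private long nextId = 0;

    // Fire off the operation; return its ID without blocking.
    public synchronized long invokeAsync(String op) {
        long id = nextId++;
        pending.put(id, CompletableFuture.supplyAsync(() -> "done:" + op, worker));
        return id;
    }

    // Explicit synchronous wait for a previously issued operation.
    public String waitFor(long id) throws Exception {
        return pending.remove(id).get(5, TimeUnit.SECONDS);
    }

    public void shutdown() { worker.shutdown(); }

    public static void main(String[] args) throws Exception {
        AsyncIds rt = new AsyncIds();
        long id = rt.invokeAsync("transfer");
        // ... the client does other work here ...
        System.out.println(rt.waitFor(id)); // prints done:transfer
        rt.shutdown();
    }
}
```

Polling and software interrupts (callbacks) are alternative ways to learn of completion; the future held under the ID supports those styles too.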
MOMs do not support access transparency very well, because client components use
message queues for communication with remote components, while it does not make sense
to use queues for local communication. This lack of access transparency disables migration
and replication transparency, which complicates scalability. Moreover, queues need to be
set up by administrators and the use of queues is hard-coded in both client and server
components, which leads to rather inflexible and poorly adaptable architectures.
MOM does not support data heterogeneity very well either, as the application engineers
have to write the marshalling code themselves. With most products, different programming
language bindings are available.
MOM is most appropriate for event-driven applications. When an event occurs, the
client application hands over to the messaging middleware the responsibility of
notifying a server that some action needs to be taken. However, message oriented middleware
also has some weaknesses, as it only supports at-least-once reliability. Thus the same message
could be delivered more than once. Moreover, MOM does not support transaction properties,
such as atomic delivery of messages to all or none of the receivers. MOM is also well-suited for
object-oriented systems because it furnishes a conceptual mechanism for peer-to-peer
communications between objects. MOM insulates developers from connectivity concerns:
the application developers write to APIs that handle the complexity of the specific interfaces.
Implementations of MOM first became available in the mid-to-late 1980s. Many MOM
implementations currently exist that support a variety of protocols and operating systems.
Many implementations support multiple protocols and operating systems simultaneously.
Some vendors provide tool sets to help extend existing inter-process communication across
a heterogeneous network.
[Figure: RPC-based communication: each application is linked with an RPC stub program that exchanges messages with its peer across the network transport.]
CORBA 3.0, for example, supports both deferred synchronous and asynchronous object
requests. Object middleware supports different activation policies. These include whether
server objects are active all the time or started on demand. Threading policies are available
that determine whether new threads are started if more than one operation is requested by
concurrent clients, or whether they are queued and executed sequentially. CORBA also
supports group communication through its Event and Notification services. This service can
be used to implement push-style architectures.
The default reliability for object requests is at-most-once. Object middleware supports
exceptions, which clients catch in order to detect that a failure occurred during execution of
Part III
16. Discuss the role of middleware and how it helps in the design, development and
management of distributed applications.
17. Explain in detail the different requirements that must be satisfied by middleware.
18. Discuss and explain which type of middleware supports transaction processing.
19. Explain in detail the differences between message-oriented middleware and RPC.
20. Explain the evolution of object-oriented middleware. Discuss how an ORB implements
object-oriented middleware.
UNIT II
CHAPTER - 4
EJB ARCHITECTURE
4.1. INTRODUCTION
Enterprise JavaBeans (EJB) represents a new direction in the development, installation,
and management of distributed Java applications in the enterprise. EJB is a server side
component architecture that simplifies the process of building distributed component
applications in Java. EJB technology enables rapid and simplified development of distributed,
transactional, secure and portable applications based on Java technology. EJB is designed to
support application portability and reusability over any vendor’s middleware services. Hence,
the componential architecture of EJB has greatly simplified the development and management
of corporate applications.
This unit introduces the reader to the EJB architecture. The history of EJB and the
components of EJB system have been dealt with in this unit.
4.2 LEARNING OBJECTIVES
At the end of this Unit, the reader must be familiar with the following concepts:
EJB Architecture
EJB Technology Design goals
Features of the EJB architecture
4.3. THE HISTORY OF EJB
Applications have evolved over the past few decades and more so in the last ten years.
In the beginning, applications were complete entities, sometimes including an operating
system, but most probably managing data storage. Because of the repetitious task of storing
and retrieving data, and the complexity involved in transaction management and
synchronization, database technology evolved to provide an application-independent
interface to an application’s data. As applications grew complicated and required more
resources to do the processing, applications came to be distributed across multiple processes
that were responsible for a certain part of the application’s business logic.
The advent of distributed programming was followed shortly by the birth of distributed
component models. A distributed component model can be as simple as defining a
mechanism for one component to locate and use the services of another component (also
referred to as an Object Request Broker) or as complicated as managing transactions,
distributed objects, concurrency, security, persistence, and resource management (also
referred to as Component Transaction Monitors, or CTMs). CTMs are by far the most
complicated of these component models because they manage not only components but
also database transactions, resources, and so on; they are also referred to as application
servers. The Enterprise JavaBeans technology is Sun Microsystems' answer to the application
server.
With the prevalence of the Internet, distributed technologies have added another tier to
enterprise applications. In this case, Web browsers are thin clients that talk to Web servers.
[Figure: The middle tier, consisting of an application server hosting multiple EJB instances.]
The middle tier contains an application server and instances of Enterprise JavaBeans,
called simply enterprise beans (or EJBs).
In the context of an enterprise bean, an application server provides basic resource-
allocation services to enterprise beans, which access enterprise information systems (EISs).
The benefit of accessing an EIS through an application server is that the client component
does not need to know the details of connection management, security management, and
transaction management. The client component is a client-side module that communicates
with an application server to access various components (such as EJBs) that the application
server manages.
The EJB specification was originally developed in 1997 by IBM and later adopted by
Sun Microsystems (EJB 1.0 and 1.1) and enhanced under the Java Community Process as
JSR 19 (EJB 2.0), JSR 153 (EJB 2.1) and JSR 220 (EJB 3.0).
The EJB specification provides a standard way to implement the back-end ‘business’
code typically found in enterprise applications. Enterprise Java Beans were intended to
handle such common concerns as persistence, transactional integrity, and security in a standard
way. It details how an application server provides support for:
Transaction Processing
Persistence
Concurrency control
Security
Naming and Directory Service (JNDI)
Events using Java Message Service (JMS)
Security (Java Cryptography Extension (JCE) and Java Authentication and
Authorization Service (JAAS))
Deployment of software components in an application server
Remote procedure calls using RMI-IIOP.
Exposing business methods as Web Services.
4.5. EJB TECHNOLOGY DESIGN GOALS
The goals for the EJB architecture are:
Enterprise JavaBeans architecture will be the standard component architecture for
building distributed object-oriented business applications in the Java programming language.
[Figure: EJB architecture: the client creates or locates beans through the home interface (home object) and invokes business methods through the remote interface (EJB object); the enterprise Java bean runs inside the EJB container within the EJB server, backed by a database.]
Home Interface defines the create, delete (remove), and query methods for an enterprise
bean type and Deployment Descriptor is used to describe the enterprise bean’s runtime
behaviour to the container. As explained later, EJB 3.0 uses metadata annotations as an
alternative to Deployment Descriptors.
The remote and home interfaces are used by applications to access enterprise beans
at runtime. The home interface allows the application to create or locate the bean, while the
remote interface allows the application to invoke a bean’s business methods. The bean class
runs in the environment provided by the EJB Server and Container. More details are given
in the following sections.
4.6.1 Bean Class
Bean Class is the implementation class of the bean that defines its business, persistence,
and passivation logic. The bean class runs inside the EJB container. Instances of the bean
class service the client request indirectly; instances of the bean class are not visible to the
client. An entity bean must implement javax.ejb.EntityBean and a session bean must implement
javax.ejb.SessionBean. Both EntityBean and SessionBean extend javax.ejb.EnterpriseBean.
The bean class has to implement the bean's business methods in the remote interface apart
from some other callback methods.
4.6.2 Remote Interface
Remote Interface defines the business methods that will be visible to the clients that
use the enterprise bean. The remote interface extends the javax.ejb.EJBObject interface
and is implemented by a remote (distributed object) reference. Client applications interact
with the enterprise bean through its remote interface.
4.6.3 Home Interface
This interface defines the bean’s life cycle methods such as creation of new beans,
removal of beans, and locating beans. The home interface extends the javax.ejb.EJBHome
interface which in turn extends java.rmi.Remote. The client application will use the home
interface to create beans, find existing beans, and remove specific beans.
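A remote and home interface pair for a hypothetical Account bean might look as follows. In a real EJB 2.x application these would extend javax.ejb.EJBObject and javax.ejb.EJBHome; since those types ship with an application server rather than the JDK, minimal stand-ins are declared here so the sketch is self-contained:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

public class EjbInterfaceSketch {
    // Stand-ins for javax.ejb.EJBObject / javax.ejb.EJBHome, which
    // are provided by an EJB container, not by the JDK.
    public interface EJBObject extends Remote {}
    public interface EJBHome extends Remote {}

    // Remote interface: the business methods visible to clients.
    // "Account" and its methods are invented for illustration.
    public interface Account extends EJBObject {
        void deposit(int amount) throws RemoteException;
        int balance() throws RemoteException;
    }

    // Home interface: life-cycle methods such as create and find;
    // the container implements this and hands out remote references.
    public interface AccountHome extends EJBHome {
        Account create(String owner) throws RemoteException;
        Account findByOwner(String owner) throws RemoteException;
    }

    public static void main(String[] args) {
        // A client would look up AccountHome via JNDI, call create(),
        // then invoke business methods through the Account reference.
        System.out.println(EJBObject.class.isAssignableFrom(Account.class));
    }
}
```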
4.6.4 Deployment Descriptors
Deployment descriptor is used to describe the enterprise bean’s runtime behaviour to
the container. Among other things the deployment descriptor allows the transaction,
persistence, and authorization security behaviour of a bean to be defined using declarative
attributes. This greatly simplifies the programming model when developing beans.
The deployment descriptor tells the EJB server how to apply the primary services to each
bean class at runtime. Deployment Descriptors are used to specify the following
requirements of a bean:
Bean management and lifecycle requirements
Persistence requirements
Transaction Requirements
Security Requirements
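An EJB 2.x deployment descriptor expressing such declarative requirements might look roughly as follows; the bean name and classes are invented, and only a few of the possible elements are shown:

```xml
<!-- Sketch of an EJB 2.x deployment descriptor (ejb-jar.xml);
     names and classes are invented for illustration. -->
<ejb-jar>
  <enterprise-beans>
    <session>
      <ejb-name>AccountBean</ejb-name>
      <home>com.example.AccountHome</home>
      <remote>com.example.Account</remote>
      <ejb-class>com.example.AccountBean</ejb-class>
      <session-type>Stateless</session-type>
      <!-- The container, not the bean code, manages transactions. -->
      <transaction-type>Container</transaction-type>
    </session>
  </enterprise-beans>
  <assembly-descriptor>
    <container-transaction>
      <method>
        <ejb-name>AccountBean</ejb-name>
        <method-name>*</method-name>
      </method>
      <!-- Every business method runs in a transaction, started
           by the container if the caller has none. -->
      <trans-attribute>Required</trans-attribute>
    </container-transaction>
  </assembly-descriptor>
</ejb-jar>
```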
4.6.5 EJB Server
The EJB server provides an environment that supports the execution of applications
developed using EJB components. It manages and coordinates the allocation of resources
to the applications.
[Figure: A client request passing through the EJB container's transaction management, persistence management, and security management services before reaching the bean and its callback methods.]
4.6.6 EJB Container
The EJB container provides the runtime environment for the EJB. An EJB server can
have more than one container and each container in turn can accommodate more than one
enterprise bean.
The container hosts and manages an enterprise bean in the same manner that the Java
Web Server hosts a servlet or an HTML browser hosts a Java applet. An enterprise bean
cannot function outside of an EJB container. The EJB container manages every aspect of
an enterprise bean at runtime, including remote access to the bean, security, persistence,
transactions, concurrency, and access to and pooling of resources.
The container isolates the enterprise bean from direct access by client applications.
When a client application invokes a remote method on an enterprise bean, the container first
intercepts the invocation to ensure persistence, transactions, and security are applied properly
to every operation a client performs on the bean. The container provides various services
for the EJB to relieve the developer from having to implement such services in the bean
code itself, namely:
Distribution via proxies : The container will generate a client-side stub and server-
side skeleton for the EJB. The stub and skeleton may use either CORBA’s IIOP
(Internet Inter-ORB Protocol) or Java Remote Method Protocol (JRMP) to
communicate.
Lifecycle Management : Bean initialization, state management, and destruction is
driven by the container, all the developer must do is implement the appropriate
methods.
Naming and Registration : The EJB container and server will provide the EJB with
access to naming services. These services are used by local and remote clients to
look up the EJB and by the EJB itself to look up resources it may need.
Transaction Management : Declarative transactions provide a means for the
developer to easily delegate the creation and control of transactions to the container.
Security and Access Control : Again, declarative security provides a means for the
developer to easily delegate the enforcement of security to the container.
Persistence : Using the Entity EJB’s container-managed persistence mechanism,
state can be saved and restored without having to write a single line of code.
The EJB specification defines a bean-container contract, and a strict set of rules that
describe how enterprise beans and their containers will behave at runtime, how security
access is checked, how transactions are managed, how persistence is applied, etc. The
bean-container contract is designed to make enterprise beans portable between EJB containers
so that enterprise beans can be developed once then run in any EJB container.
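The interception idea above can be sketched in plain Java with a dynamic proxy. This is a simplified stand-in for the container-generated stub and skeleton, not actual EJB container code; the Greeter interface and the transaction log are invented for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical business interface; in real EJB the container generates
// the client-side stub and server-side skeleton for the remote interface.
interface Greeter {
    String greet(String name);
}

public class ContainerProxyDemo {
    // The "container" wraps the bean instance in a proxy that runs
    // service logic (here, a mock transaction) around every call.
    static Greeter wrap(Greeter bean, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("begin-tx;");             // stand-in for transaction start
            Object result = method.invoke(bean, args);
            log.append("commit-tx;");            // stand-in for transaction commit
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] {Greeter.class}, handler);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Greeter bean = name -> "Hello, " + name;  // the actual bean instance
        Greeter proxy = wrap(bean, log);          // clients only ever see the proxy
        System.out.println(proxy.greet("EJB"));
        System.out.println(log);                  // shows the intercepted services
    }
}
```

The client holds only the proxy, so the service logic runs on every invocation without the bean or the client being aware of it, which is the essence of the bean-container contract.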
4.6.6.1. Callback Methods
Every bean implements a subtype of the EnterpriseBean interface which defines several
methods, called callback methods. Each callback method alerts the bean to a different event
in its lifecycle and the container will invoke these methods to notify the bean when it’s about
to activate the bean, persist its state to the database, end a transaction, remove the bean
from memory, etc. The callback methods give the bean a chance to do some housework
immediately before or after some event.
4.6.6.2. EJBContext
Every bean obtains an EJBContext object, which is a reference directly to the container.
The EJBContext interface provides methods for interacting with the container so that bean
can request information about its environment like the identity of its client or the status of a
transaction.
A bean needs the EJB context when it wants to perform the operations listed in Table 4.1.
Table 4.1 EJBContext Methods
getEnvironment - Get the values of properties for the bean.
getUserTransaction - Get a transaction context, which allows the coder to demarcate transactions programmatically when using bean-managed transactions (BMT). This is valid only for beans that have been designated transactional.
setRollbackOnly - Set the current transaction so that it cannot be committed. Applicable only to container-managed transactions.
getRollbackOnly - Check whether the current transaction is marked for rollback only. Applicable only to container-managed transactions.
getEJBHome - Retrieve the object reference to the corresponding EJBHome (home interface) of the bean.
lookup - Use JNDI to retrieve the bean by environment reference name. When using this method, you do not prefix the bean reference with "java:comp/env".
Deployment is the process of reading the bean’s JAR file, changing or adding properties
to the deployment descriptor, mapping the bean to the database, defining access control in
the security domain, and generating vendor-specific classes needed to support the bean in
the EJB environment. Every EJB server product comes with its own deployment tools
containing a graphical user interface and a set of command-line programs.
For clients like enterprise bean itself, Java RMI or CORBA client, to locate enterprise
beans on the net, Java EJB specifications specify the clients to use Java Naming and Directory
Interface (JNDI). JNDI is a standard Java extension that provides a uniform Application
Programming Interface (API) for accessing a wide range of naming and directory services.
The communication protocol may be Java RMI-IIOP or CORBA’s IIOP.
There are some special integrated application development tools available commercially
such as Inprise’s JBuilder, Sun’s Forte and IBM’s VisualAge, for designing EJBs in the
market.
4.7 THE EJB ECOSYSTEM
To have an EJB deployment up and running, one needs more than an application server
and components. EJB encourages collaboration among more than six different parties.
Together, these parties are called the EJB Ecosystem.
4.7.1 The Bean provider
The bean provider supplies the business components to the enterprise applications.
These business components are not complete applications but can be combined to form
complete enterprise applications. These bean providers could be an ISV (Independent Software
Vendor) selling components or an internal component provider. There are three different
types of EJB: the Session Bean, which is transaction-aware and models processes, services and
client-side sessions; the Entity Bean, which models persistent business data; and the Message-
Driven EJB, which is used for asynchronous message interchange between sender and receiver. As
an application designer, you should choose the most appropriate type of EJB based on the
task to be accomplished.
4.7.2 The Application Assembler
The application assembler is the overall application architect. This party is responsible
for understanding how various components fit together and writing the applications that
combine components. The application assembler is the consumer of the beans supplied by
the Bean provider. The application assembler could perform any or all of the following
tasks:
From knowledge of the business problem, decide which combination of existing
component and new enterprise beans are needed to provide an effective solution.
Supply a user interface or Web Service
Write new enterprise beans to solve some problems specific to your business problem
Write the code that calls on components supplied by bean providers.
Write integration code that maps data between components supplied by different
bean providers.
4.7.3 EJB Deployer
After the application developer builds the application, the application must be deployed
on the server. Some challenges are:
Securing the deployment with a hardware or software firewall and other protective
measures.
[Figure 4.5: the EJB bean sits inside the EJB container, which sits inside the EJB server, which runs on the operating system.]
Figure 4.5 The relationship among the EJB Server, container and bean
In EJB 3.0, bean developers do not have to implement unnecessary callback methods
and can instead designate any arbitrary method as a callback method to receive notifications
for lifecycle events for a SessionBean or MessageDrivenBean (MDB). Callback methods
can be indicated using callback annotations. Also, a callback listener class can be designed
instead of writing callback methods in the bean class itself. The annotations used for callback
methods are the same in both cases—only the method signatures are different. A callback
method defined in a listener class must take an Object as a parameter, which is not needed
when the callback is in the bean itself.
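The annotation-driven mechanism can be sketched in plain Java. The @PostConstruct-style annotation below is defined locally as a stand-in for the real EJB 3.0 annotation, and the container-like helper that finds and invokes it reflectively is an assumption for illustration:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class CallbackDemo {
    // Hypothetical stand-in for an EJB 3.0 lifecycle annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface PostConstruct {}

    // The bean designates an arbitrary method as its callback.
    static class CartBean {
        final StringBuilder log = new StringBuilder();

        @PostConstruct
        void init() { log.append("initialized"); }
    }

    // What a container does: scan for annotated methods and invoke
    // them when the matching lifecycle event occurs.
    static void firePostConstruct(Object bean) throws Exception {
        for (Method m : bean.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(PostConstruct.class)) {
                m.setAccessible(true);
                m.invoke(bean);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        CartBean bean = new CartBean();
        firePostConstruct(bean);      // invoked by the "container", not the client
        System.out.println(bean.log);
    }
}
```

Because the method name is arbitrary and discovered by annotation, the bean class no longer has to implement a fixed callback interface, which is exactly the EJB 3.0 simplification described above.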
Interceptors
Runtime services like transactions and security are applied to bean objects at
method invocation time. These services are often implemented as interceptor
methods managed by the container. EJB 3.0, however, allows developers to write
custom interceptor methods that are called before and after a bean method. This
gives the developer control over actions such as committing a transaction or
performing a security check. Developers can develop, reuse, and execute their own
services, or can re-implement the transaction and security services to override the
container’s default behaviour.
Interceptors offer fine-grained control over method invocation flow. They can be used
on SessionBeans (stateful and stateless) and MessageDrivenBeans. They can be defined in
the same bean class or in an external class. The interceptor’s methods will be called before
the actual bean class methods are called.
The new persistence model for entity beans
The new entity beans are also just POJOs with a few annotations and are not persistent
entities by birth. An entity instance becomes persistent once it is associated with an
EntityManager and becomes part of a persistence context.
Dependency Injection
Dependency injection is a term used to describe a separation between the implementation
of an object and the construction of an object it depends upon. Instead of complicated
XML ejb-refs or resource refs, one can use the @Inject annotation to set the value of
a field or to call a setter method within your session bean with anything registered
within JNDI. EJB 3.0 facilitates this feature by providing annotations to inject the
dependencies into the bean class itself. Dependency annotation may be attached to the
bean class, instance variables, or methods. The main reason for introducing @Inject is
to avoid JNDI lookups for resources set in the JNDI tree. Another benefit of using
@Inject is that it allows a bean to be tested outside of the container.
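A rough sketch of how such injection can work under the hood is shown below. The @Inject-style annotation is defined locally as a stand-in for the real EJB annotation, and a Map plays the role of the JNDI tree; both are assumptions for illustration:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

public class InjectionDemo {
    // Hypothetical stand-in for an @Inject-style annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Inject {
        String name();   // the JNDI-style name of the resource
    }

    // A bean whose dependency is declared, not looked up by hand.
    static class OrderBean {
        @Inject(name = "jdbc/OrdersDS")
        String dataSource;   // a String stands in for a real resource
    }

    // What the "container" does: resolve each @Inject field from a
    // registry (the Map stands in for the JNDI tree).
    static void inject(Object bean, Map<String, Object> registry) throws Exception {
        for (Field f : bean.getClass().getDeclaredFields()) {
            Inject ann = f.getAnnotation(Inject.class);
            if (ann != null) {
                f.setAccessible(true);
                f.set(bean, registry.get(ann.name()));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        OrderBean bean = new OrderBean();
        inject(bean, Map.of("jdbc/OrdersDS", "orders-datasource"));
        System.out.println(bean.dataSource);  // set by the container, not the bean
    }
}
```

Because the registry is passed in, a test can inject stub resources directly, which illustrates why injection makes beans testable outside the container.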
EntityBeans Made Easy
To create an EntityBean, a developer only needs to code a bean class and annotate it
with appropriate metadata annotations. The bean class is a POJO.
Security Annotations
EJB 3.0 provides annotations to specify security options. The following are the security-
related annotations defined in EJB 3.0:
@SecurityRoles
@MethodPermissions
@Unchecked
@Exclude
@RunAs
Annotations applied for package-level elements are called package-level annotations.
These annotations are placed in the file package-info.java. The security roles are applied
to the entire EJB module. The @SecurityRoles annotation must be placed in the package-
info.java file with the package information. When the compiler parses package-info.java,
it will create a synthetic interface. It does not have any source code, because it is created by
the compiler. This interface makes package-level annotations available at runtime. The file
package-info.java is created and stored inside of every package. For example, if the bean
class is inside the package ejb3.login, then the security-role details must be placed in the
package-info.java file inside the ejb3.login package.
The stub returns the result to the application that invoked its remote interface method.
The stub is just a dumb network object that sends the requests across the network to the
skeleton, which in turn invokes the method on the actual instance. The instance does all the
work, the stub and skeleton just pass the method identity and arguments back and forth
across the network.
In EJB, the skeletons for the remote and home interfaces are implemented by the
container, not the bean class. Every method invoked on the reference types by a client
application is first handled by the container and then delegated to the bean instance. The
container must intercept those requests intended for the bean so that it can apply persistence
(entity beans), transactions, and access control automatically.
Distributed object protocols define the format of network messages sent between
address spaces. Most EJB servers support either the Java Remote Method Protocol (JRMP)
or CORBA’s Internet Inter-ORB Protocol (IIOP). The bean and application programmer
only see the bean class and its remote interface, the details of the network communication
are hidden.
With respect to the EJB API, it is not necessary for the programmer to know whether
the EJB server uses JRMP or IIOP as the API is the same. The EJB specification requires
a specialized version of the Java RMI API when working with a bean remotely. Java RMI is
an API for accessing distributed objects and is somewhat protocol-agnostic in the same way
that JDBC is database agnostic. So, an EJB server will support JRMP or IIOP, but the bean
and application developer always uses the same Java RMI API. In order for the EJB server
to have the option of supporting IIOP, a specialized version of Java RMI, called Java RMI-
IIOP was developed. Java RMI-IIOP uses IIOP as the protocol and the Java RMI API.
EJB servers don’t have to use IIOP, but they do have to respect Java RMI-IIOP restrictions,
so EJB 1.1 uses the specialized Java RMI-IIOP conventions and types, but the underlying
protocol can be anything.
4.10 EJB ARCHITECTURE VIEWS
4.10.1 The client’s view of an EJB is defined strictly by interfaces
Synchronous clients can call only those methods exposed by the EJB’s interfaces. In
the EJB Specification, the interfaces are collectively referred to as the client view. Each
EJB publishes ‘factory’ interfaces and ‘business method’ interfaces. The factory interfaces
expose methods that clients can use to create, locate, and remove EJBs of that type. The
business method interfaces define all the methods that clients can call on a specific EJB
after it has been located or created through the factory interface.
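The split between factory and business-method interfaces can be sketched in plain Java. The Cart names are invented for illustration and no real EJB container is involved; the point is that the client programs only against the two interfaces:

```java
public class ClientViewDemo {
    // Business-method interface: what clients call after creation.
    interface Cart {
        void add(String item);
        int size();
    }

    // Factory ("home") interface: how clients obtain a Cart.
    interface CartHome {
        Cart create();
    }

    // The implementation is hidden behind the interfaces.
    static class CartBean implements Cart {
        private final java.util.List<String> items = new java.util.ArrayList<>();
        public void add(String item) { items.add(item); }
        public int size() { return items.size(); }
    }

    public static void main(String[] args) {
        CartHome home = CartBean::new;   // the "container" supplies the factory
        Cart cart = home.create();       // client uses only the client view
        cart.add("book");
        System.out.println(cart.size());
    }
}
```

The client never names CartBean directly, mirroring how an EJB client sees only the home and business interfaces, never the bean class.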
[Figure: the client accesses EJB1 and EJB2 only through their factory (F) and business-method (B) interfaces; the EJBs in turn access a data source.]
[Figure: inside the container, the client reaches the EJB through its proxies, the home object and the EJB object.]
Figure 4.8 EJB Container and its Proxies
The client calls methods on the home object and EJB object, which delegate to the
implementation itself. The process is transparent to the client. There are different home
objects and EJB objects for local and remote access, but the purpose of these objects is
essentially the same. Because the methods on the EJB proxies will delegate to methods on
the EJB itself, the proxies must be generated to match the EJB—that is, the proxies will be
specific to the EJB they serve. The vendor of the EJB server will provide tools to support
this generation, which will typically take place when the EJB is deployed to the server.
4.11 ADVANTAGES OF USING EJB
The EJB architecture provides the following benefits to the application developer:
simplicity, application portability, component reusability, ability to build complex applications,
separation of business logic from presentation logic, deployment in many operating
environments, distributed deployment, application interoperability, integration with non-Java
systems, and educational resources and development tools.
Simplicity
Because the EJB architecture helps the application developer access and utilize
enterprise services with minimal effort and time, writing an enterprise bean is almost as
simple as writing a Java class. The application developer does not have to be concerned
with system-level issues, such as security, transactions, multi-threading, security protocols,
distributed programming, connection resource pooling, and so forth. As a result, the application
developer can concentrate on the business logic for the domain-specific application.
Application portability
An EJB application can be deployed on any J2EE compliant server.
Component reusability
An EJB application consists of enterprise bean components. Each enterprise bean is a
reusable building block.
Part I
Answers:
1) d 2) c 3) a 4) a 5) b 6) c 7) d 8)c 9)a 10) d
REFERENCES
1. java.sun.com/developer/onlineTraining/EJBIntro/EJBIntro.html
2. java.sun.com/products/ejb
3. www.developer.com/ejb
4. www.roseindia.net/javabeans/javabeans.shtml
5. www.techseasaw.com/articles/11111/EJB_part2.htm
6. www.wikipedia.org
7. www.jguru.com
8. Mastering Enterprise JavaBeans Third Edition by Rima Patel Sriganesh and Gerald
Brose
9. Enterprise JavaBeans by Tom Valesky
5.1 INTRODUCTION
Enterprise beans are meant to perform server-side operations, such as executing
complex algorithms or performing high volume business transactions. The server side has
different kinds of needs than the client-side applications. Server side components need to
run in a highly available, fault tolerant, transactional, and multi-user secure environment.
The application server provides this high end server-side environment for the enterprise
beans and it provides the run time containment necessary to manage enterprise beans. This
unit explains about the different type of beans in detail and also gives the procedure for
building and deploying Enterprise Java Beans.
5.2 LEARNING OBJECTIVES
At the end of this unit, the reader must be familiar with the following concepts:
EJB roles
Types of Beans
How to run EJB
5.3 EJB’S ROLES
EJB specifications handle the encapsulation and isolation of system interfaces by clearly
defining the roles for EJB developers. These roles were introduced earlier in the EJB ecosystem.
The relationship between the different parties and their roles is given in Figure 5.1.
a) Enterprise Bean Provider
The Enterprise Bean Provider or Component provider is the producer of enterprise
beans. The output is an ejb-jar file that contains one or more enterprise beans. The Bean
Provider is responsible for the Java classes that implement the enterprise beans’ business
methods; the definition of the beans’ client view interfaces; and declarative specification of
the beans’ metadata. The beans’ metadata may take the form of metadata annotations
applied to the bean classes and/or an external XML deployment descriptor. The beans’
metadata - whether expressed in metadata annotations or in the deployment descriptor -
includes the structural information of the enterprise beans and declares all the enterprise
beans’ external dependencies (e.g. the names and types of resources that the enterprise
beans use).
The Enterprise Bean Provider is typically an application domain expert. The Bean
Provider develops reusable enterprise beans that typically implement business tasks or business
entities. The Bean Provider is not required to be an expert at system - level programming.
Therefore, the Bean Provider usually does not program transactions, concurrency, security,
distribution, or other services into the enterprise beans. The Bean Provider relies on the
EJB container for these services.
A Bean Provider of multiple enterprise beans often performs the EJB role of the
Application Assembler.
[Figure 5.1: the Tool Provider supplies tools; the Bean Provider develops the EJB; the Application Assembler builds the application; the Application Server Provider supplies the application server; the Deployer deploys the application into the operational environment; and the Systems Administrator maintains the system.]
[Figure: the three types of EJB are Entity, Message-Driven, and Session.]
Entity beans can be uniquely identified by a primary key. A primary key is an object that
uniquely identifies the entity bean. According to the specification, the primary key must be unique
for each entity bean within a container. Hence the bean’s primary key usually maps to the PK in the
database (provided it’s persisted to a database). However, it is not necessary that a primary key has
to be present in the database. As long as the bean’s primary key (which maps to a column or set of
columns) can uniquely identify the bean, it should work. It may however be created for the sake of
referential integrity. For example, an “employee” entity bean may use the employee’s social security
number or ID as its primary key.
Unlike Java objects that are used only by one program, an entity bean can be used by any
program on the network. Client programs just need to find the entity bean via JNDI in order to use it.
Methods of an entity bean run on a “server” machine. When a client program calls an entity bean’s
method, the client program’s thread stops executing and control passes over to the server. When the
method returns from the server, the local thread resumes execution.
The characteristics of Entity beans, that is - are persistent, allow shared access, have primary
keys, and may participate in relationships with other entity beans - are given below in greater detail.
Persistence
Because the state of an entity bean is saved in a storage mechanism, it is persistent. Persistence
means that the entity bean’s state exists beyond the lifetime of the application or the J2EE server
process. For example, the data in a database is persistent because it still exists even if the database
server or the applications it services are shut down.
There are two types of persistence for entity beans: bean-managed and container-managed.
With bean-managed persistence (BMP), the entity bean code contains the calls that access the
database. In container-managed persistence (CMP), the EJB container automatically generates the
necessary database access calls. The code that you write for the entity bean does not include these
calls.
Shared Access
Entity beans may be shared by multiple clients. Because the clients might want to change the
same data, it’s important that entity beans work within transactions. Typically, the EJB container
provides transaction management.
Primary Key
Each entity bean has a unique object identifier. A customer entity bean, for example, might be
identified by a customer number. The unique identifier, or primary key, enables the client to locate a
particular entity bean.
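A primary key is usually represented by its own class. The sketch below is a hypothetical key class for a customer entity bean (the names are invented); EJB primary-key classes must be serializable and implement equals and hashCode so the container can compare and index bean identities:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical primary-key class for a customer entity bean.
public class CustomerPK implements java.io.Serializable {
    public final String customerNumber;

    public CustomerPK(String customerNumber) {
        this.customerNumber = customerNumber;
    }

    // Two keys with the same customer number identify the same bean.
    @Override
    public boolean equals(Object o) {
        return o instanceof CustomerPK
                && ((CustomerPK) o).customerNumber.equals(customerNumber);
    }

    @Override
    public int hashCode() {
        return Objects.hash(customerNumber);
    }

    public static void main(String[] args) {
        // A container can index bean identities by key, so a freshly
        // constructed key with the same value must locate the same entry.
        Map<CustomerPK, String> identities = new HashMap<>();
        identities.put(new CustomerPK("C-100"), "customer bean identity");
        System.out.println(identities.get(new CustomerPK("C-100")));
    }
}
```

Without correct equals and hashCode, the lookup above would fail, which is why the specification insists on them for primary-key classes.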
Relationships
Like a table in a relational database, an entity bean may be related to other entity beans. For
example, in a college enrollment application, StudentEJB and CourseEJB would be related because
students enroll in classes. Relationships are implemented differently for entity beans with bean-
managed persistence and those with container-managed persistence.
With bean-managed persistence, the code is written to implement the relationships. But with
container-managed persistence, the EJB container takes care of the relationships.
Container-Managed Persistence
The term container-managed persistence means that the EJB container handles all database
access required by the entity bean. The bean’s code contains no database access (SQL) calls. As a
result, the bean’s code is not tied to a specific persistent storage mechanism (database). Because of
this flexibility, even if you redeploy the same entity bean on different J2EE servers that use different
databases, it is not necessary to modify or recompile the bean’s code. In short, entity beans are more
At the end of the life cycle, the client invokes the remove method and the EJB
container calls the bean’s ejbRemove method. The bean’s instance is ready for garbage
collection. Code is written to control the invocation of only two life-cycle methods—the
create and remove methods in the client. All other methods are invoked by the EJB container.
The ejbCreate method, for example, is inside the bean class, allowing one to perform certain
operations right after the bean is instantiated. For instance, it could be used to connect to a
database in the ejbCreate method.
The lifecycles for EJB 3.0 and EJB 2.1 stateful session beans are identical. The difference
is in how you register lifecycle callback methods.
Table 5.1 lists the EJB 2.1 lifecycle methods, as specified in the javax.ejb.SessionBean
interface, that a stateful session bean must implement. For EJB 2.1 stateful session beans,
the developer must at the least provide an empty implementation for all callback methods.
Table 5.1 Lifecycle Methods for an EJB 2.1 Stateful Session Bean
ejbRemove - A container invokes this method before it ends the life of the session object. This method performs any required clean-up, for example, closing external resources such as file handles.
setSessionContext - The container invokes this method after it first instantiates the bean. Use this method to obtain a reference to the context of the bean.
Table 5.2 lists the optional EJB 3.0 stateful session bean lifecycle callback methods you can
define using annotations. For EJB 3.0 stateful session beans, you do not need to implement
these methods.
Table 5.2 Lifecycle Methods for an EJB 3.0 Stateful Session Bean
Annotation Description
@PostConstruct This optional method is invoked for a stateful session
bean before the first business method invocation on the
bean. This is at a point after which any dependency
injection has been performed by the container.
@PreDestroy This optional method is invoked for a stateful session
bean when the instance is in the process of being
removed by the container. The instance typically releases
any resources that it has been holding.
@PrePassivate The container invokes this method right before it
passivates a stateful session bean.
Table 5.3 Lifecycle Methods for an EJB 2.1 Stateless Session Bean
Table 5.4 Lifecycle Methods for an EJB 3.0 Stateless Session Bean
Annotation Description
@PostConstruct This optional method is invoked for a stateless session bean
before the first business method invocation on the bean. This
is at a point after which any dependency injection has been
performed by the container.
There are two paths from the pooled stage to the ready stage. On the first path, the
client invokes the create method, causing the EJB container to call the ejbCreate and
ejbPostCreate methods. On the second path, the EJB container invokes the ejbActivate
method. While in the ready stage, an entity bean’s business methods may be invoked.
There are also two paths from the ready stage to the pooled stage. First, a client may
invoke the remove method, which causes the EJB container to call the ejbRemove method.
Second, the EJB container may invoke the ejbPassivate method. At the end of the life
cycle, the EJB container removes the instance from the pool and invokes the
unsetEntityContext method.
In the pooled state, an instance is not associated with any particular EJB object
identity. With bean-managed persistence, when the EJB container moves an instance from
the pooled state to the ready state, it does not automatically set the primary key. Therefore,
the ejbCreate and ejbActivate methods must assign a value to the primary key. If the
primary key is incorrect, the ejbLoad and ejbStore methods cannot synchronize the instance
variables with the database. In the pooled state, the values of the instance variables are
not needed. You can make these instance variables eligible for garbage collection by setting
them to null in the ejbPassivate method.
Figure 5.6 illustrates the stages in the life cycle of a message-driven bean. The EJB
container usually creates a pool of message-driven bean instances. For each instance, the
EJB container instantiates the bean and performs these tasks:
It calls the setMessageDrivenContext() method to pass the context object to the
instance.
It calls the instance’s ejbCreate() method.
Like a stateless session bean, a message-driven bean is never passivated, and it has
only two states: nonexistent and ready to receive messages. At the end of the life cycle, the
container calls the ejbRemove() method. The bean’s instance is then ready for garbage
collection.
Session Beans
Stateless session beans are useful mainly in middle-tier application servers that provide
a pool of beans to process frequent and brief requests. Table 5.6 provides a definition for
both BMP and CMP, and a summary of the programmatic and declarative differences
between them.
Persistence management
Bean-Managed Persistence: The user has to implement the persistence management within the ejbStore, ejbLoad, ejbCreate, and ejbRemove EntityBean methods. These methods must contain logic for saving and restoring the persistent data. For example, the ejbStore method must have logic in it to store the entity bean’s data to the appropriate database. If it does not, the data can be lost.
Container-Managed Persistence: The management of the persistent data is done for the user. That is, the container invokes a persistence manager on behalf of the bean. ejbStore and ejbLoad can be used for preparing the data before the commit or for manipulating the data after it is refreshed from the database. The container always invokes the ejbStore method right before the commit. In addition, it always invokes the ejbLoad method right after reinstating CMP data from the database.
Finder methods allowed
Bean-Managed Persistence: The findByPrimaryKey method and other finder methods are allowed.
Container-Managed Persistence: The findByPrimaryKey method and other finder methods are allowed.
Defining CMP fields
Bean-Managed Persistence: N/A
Container-Managed Persistence: Required within the EJB deployment descriptor. The primary key must also be declared as a CMP field.
Mapping CMP fields to resource destination
Bean-Managed Persistence: N/A
Container-Managed Persistence: Required. Dependent on persistence manager.
Definition of persistence manager
Bean-Managed Persistence: N/A
Container-Managed Persistence: Required within the Oracle-specific deployment descriptor. By default, OC4J uses the TopLink persistence manager.
With CMP, it is possible to build components to the EJB 2.0 specification that can
save the state of EJB to any J2EE supporting application server and database without
having to create user’s own low-level JDBC-based persistence system.
The major differences between session and entity beans are that entity beans involve
a framework for persistent data management, a persistent identity, and complex business
logic. Table 4.8 illustrates the different interfaces for session and entity beans. Notice that
the difference between the two types of EJBs exists within the bean class and the primary
key. All of the persistent data management is done within the bean class methods.
The Remote Interface supports every business method of the bean. The class diagram
of the remote interface is as shown in Figure 5.8.
J2EE interfaces for entity and session beans:
Local interface - Entity Bean: extends javax.ejb.EJBLocalObject; Session Bean: extends javax.ejb.EJBLocalObject
Remote interface - Entity Bean: extends javax.ejb.EJBObject; Session Bean: extends javax.ejb.EJBObject
Local Home interface - Entity Bean: extends javax.ejb.EJBLocalHome; Session Bean: extends javax.ejb.EJBLocalHome
Remote Home interface - Entity Bean: extends javax.ejb.EJBHome; Session Bean: extends javax.ejb.EJBHome
Bean class - Entity Bean: extends javax.ejb.EntityBean; Session Bean: extends javax.ejb.SessionBean
Primary key - Entity Bean: used to identify and retrieve specific bean instances; Session Bean: not used. Stateful session beans do have an identity, but it is not externalized.
Step 3: Compile the .java files from step 1 into .class files
Step 4: Using the jar utility, create an EJB-jar file containing the deployment descriptor
and .class files.
Step 5: Deploy the Ejb-jar file into your container in a vendor specific manner, perhaps by
running a vendor-specific tool or perhaps by copying your Ejb-jar file into a folder where
your container looks to load Ejb-jar files.
Step 6: Configure your EJB server so that it is properly configured to host your ejb-jar
file.
Step 7: Start your EJB container and confirm that it has loaded your Ejb-jar file.
Step 8: Optionally write a standalone test client.java file. Compile the test client into a
.class file. Run the test client.
Figure 5.7 shows the class diagram for the Hello World example and its base class.
[Figure 5.7: the Hello remote interface extends javax.ejb.EJBObject, which extends java.rmi.Remote.]
Create a file Hello.java to store the java code. The source code for Remote interface
for Hello World is given below :
package examples;
import java.rmi.RemoteException;
import java.rmi.Remote;
import javax.ejb.*;
/* This is the HelloBean remote interface. This interface is what clients operate on when
they interact with EJB objects. The container vendor will implement this interface; the
implemented object is the EJB object, which delegates invocations to the actual bean. */
public interface Hello extends javax.ejb.EJBObject
{
/** The one method, hello, returns a greeting to the client. **/
public String hello() throws java.rmi.RemoteException;
}
The remote interface also inherits methods from javax.ejb.EJBObject, such as:
public Object getPrimaryKey() throws javax.ejb.EJBException;
The home interface has methods to create and destroy EJB objects. The
implementation of the home interface, the home object, will be generated by the container
tools. The class diagram for the home interface is as shown in Figure 5.9.
<< Interface >> java.rmi.Remote
<< Interface >> javax.ejb.EJBHome
<< Interface >> HelloHome

<< Interface >> javax.ejb.EnterpriseBean
<< Interface >> javax.ejb.SessionBean
<< Interface >> HelloBean
public void ejbRemove()
{
System.out.println("ejbRemove()");
}
public void ejbActivate()
{
System.out.println("ejbActivate()");
}
public void ejbPassivate()
{
System.out.println("ejbPassivate()");
}
public void setSessionContext(javax.ejb.SessionContext ctx)
{
this.ctx=ctx;
}
5.7.6.2 ControlDescriptor
The ControlDescriptor provides accessor methods for defining the security and
transactional attributes of a bean at runtime. ControlDescriptors can be applied to the
bean as a whole, or to specific methods of the bean. Any method that doesn't have a
ControlDescriptor uses the default properties defined by the ControlDescriptor for the
bean itself. Security properties in the ControlDescriptor indicate how AccessControlEntry
objects are applied at runtime. Transactional properties indicate how the bean or a specific
method will be involved in transactions at runtime.
5.7.6.3 AccessControlEntry
Each AccessControlEntry identifies a person, group, or role that can access the bean
or one of its methods. Like the ControlDescriptor, the AccessControlEntry can be applied
to the bean as a whole or to a specific method. An AccessControlEntry that is specific to a
method overrides the default AccessControlEntry objects set for the bean. The
AccessControlEntry objects are used in combination with the security properties in the
ControlDescriptor to provide more control over runtime access to the bean and its methods.
5.7.6.4 EntityDescriptor
The EntityDescriptor extends the DeploymentDescriptor to provide properties specific to
an EntityBean class. Entity bean properties include the name of the primary key class and
what instance variables are managed automatically by the container.
5.7.6.5 SessionDescriptor
The SessionDescriptor extends the DeploymentDescriptor to provide properties specific
to a SessionBean class. Session bean properties include a timeout setting and a stateless
session property. The stateless session property indicates whether the session is a stateless
session bean or a stateful session bean.
<ejb-name> The name for this bean
<home> The fully qualified name of the home interface
<remote> The fully qualified name of the remote interface
<local-home> The fully qualified name of the local home interface
<local> The fully qualified name of the local interface
<ejb-class> The fully qualified name of the enterprise bean class
<session-type> Whether the bean is a stateless or stateful session bean
The ejb-jar.xml file is given below:
<!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans
2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
<ejb-jar>
<enterprise-beans>
<session>
<ejb-name> Hello</ejb-name>
<home>examples.HelloHome</home>
7. Which is the abstract superclass for both the EntityDescriptor and SessionDescriptor?
a) The SessionDescriptor
b) EntityDescriptor
c) DeploymentDescriptor
8. What kind of bean requires a primary key?
a) Message-driven bean
b) Entity bean
c) Session bean
9. A stateful session bean
a) Can be shared among multiple clients
b) Cannot be shared among multiple clients
10. At what point, precisely, in the life-cycle is a container-managed entity bean considered
created?
a) Immediately prior to the execution of its ejbCreate() method
b) Immediately after the execution of its ejbCreate() method
c) After the CMP bean’s data has been committed to the underlying persistent datastore
d) During the execution of its ejbPostCreate() method
PART II
11. Write the Differences between Session and Entity Beans
12. Explain with examples, the usage of stateful and stateless Session beans?
13. Compare Bean-Managed and Container-Managed Persistence of Entity beans
14. Explain the difference between Remote and Local Interface. Explain their usage
with an example
15. What are the two paths from the pooled stage to the ready stage in the life cycle
of an entity bean?
16. Explain and contrast uses for Entity Beans, Entity Classes, Stateful and Stateless
Session Beans, and Message Driven Beans and understand the advantages and
disadvantages of each type
17. Explain the life cycle of Entity and Session Beans.
18. What are the various steps in building an EJB component?
19. What are the different callback methods in Entity Bean? What is the purpose and
usage of each of these methods?
PART III
20. Write a simple program to display “No Man is an Island” on the client side.
Represent the object model as a block diagram
21. Write the following for a Bubble sort:
a) Bean class
b) Home Interface
c) Remote Interface
d) Local interface
REFERENCES
1. java.sun.com/developer/onlineTraining/EJBIntro/EJBIntro.html
2. java.sun.com/products/ejb
3. www.developer.com/ejb
4. www.roseindia.net/javabeans/javabeans.shtml
5. www.wikipeida.org
6. www.jguru.com
7. Mastering Enterprise JavaBeans Third Edition by Rima Patel Sriganesh and Gerald
Brose
8. Enterprise JavaBeans by Tom Valesky
UNIT III
CHAPTER - 6
EJB APPLICATIONS
6.1 INTRODUCTION
In the previous unit, the basic concepts of the Enterprise JavaBeans programming
model have been covered. The different types of beans and their life cycle have been
discussed in detail.
This unit demonstrates how to build server-side Java components using the Enterprise
JavaBeans component model, using a sample program that adds, subtracts, multiplies and
divides two numbers. It illustrates the implementation of the Enterprise JavaBeans model by
providing concrete examples and step-by-step guidelines for building and using Enterprise
JavaBeans applications.
This unit shows how to program Enterprise JavaBeans, and how to install, or deploy,
them in an Enterprise JavaBeans container.
6.2 LEARNING OBJECTIVES
At the end of this Unit, the reader must be familiar with the following concepts:
How to program Session Beans
How to program Entity Beans
How to write a deployment descriptor
How to program and deploy EJB’s
6.3 SESSION BEANS
A session bean instance is a relatively short-lived object. It has roughly the lifetime
of a session, or of the client code that is calling the session bean. A session bean
can be one of two types:
Stateful session bean—maintains state information, which can be accessed across
methods and transactions
Stateless session bean—does not maintain a state that can be accessed across
methods and transactions; however, it can maintain an internal state.
A session bean must provide the following information to an application server for the
bean’s deployment within an EJB container:
Definitions of the session bean’s home and remote interfaces
A Java class that implements the SessionBean interface
A deployment descriptor called ejb-jar.xml
Figure 6.2 shows a client component using the home and remote interfaces of a session
bean deployed in an EJB container within the application server. The session bean exposes
its business methods in its remote interface; the client component can interact with this
session-bean instance through these business methods.
Figure 6.2 Providing the Home and Remote Interfaces of a Session Bean
For an EJB container to communicate with the session bean, an EJB provider must
provide a Java class that implements the SessionBean interface. This class contains
implementations of the following methods:
An ejbCreate() method for each create() method in the home interface
One or more business methods for the session bean
The SessionBean methods use the Enterprise Information System (EIS) specific API
to communicate directly with the EIS. By isolating the EIS-specific API calls to the session
bean, neither client components nor the application server need to know this API. Instead,
the client component uses calls in the home and remote interfaces to request EIS services through
the session bean.
6.3.1 Stateless Session Beans
Stateless session beans hold no conversational state; all instances of the same stateless
session bean are equivalent and indistinguishable to a client. Stateless session beans can be
pooled, reused and swapped from one client to another on each method call. Stateless
session bean pooling is illustrated in Figure 6.4.
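The pooling idea can be sketched in plain Java. This is illustrative code, not container internals; the MathBean class and the pool size are invented for the sketch. Because a stateless bean holds no per-client state, the "container" below can hand any pooled instance to any caller:

```java
import java.util.ArrayDeque;

// Sketch: why stateless session beans can be pooled. Any instance can
// serve any client's call, so the "container" borrows an instance,
// invokes it, and immediately returns it to the pool.
public class StatelessPoolSketch {
    static class MathBean {                // stands in for a stateless session bean
        int add(int a, int b) { return a + b; }
    }

    static final ArrayDeque<MathBean> pool = new ArrayDeque<>();
    static { pool.push(new MathBean()); pool.push(new MathBean()); }

    static int invokeAdd(int a, int b) {
        MathBean bean = pool.pop();        // borrow any available instance
        try {
            return bean.add(a, b);         // no per-client state to worry about
        } finally {
            pool.push(bean);               // instance is immediately reusable
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeAdd(10, 20)); // 30
    }
}
```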
6.3.1.1 Implementation Details
Implementing a stateless session bean is explained using a Java program, which can
add, subtract, multiply and divide two numbers.
The first step is to write the .java files that compose the bean – the remote interface,
home interface and the client code.
The source code for the remote interface is as shown below. It exposes the four
business methods add, subtract, multiply and divide for access by the client.
Figure 6.4 Stateless session bean pooling: the EJB object dispatches each client
invocation to any available bean instance in the pool.
//Remote interface
//mathOperationRemote.java
import javax.ejb.EJBObject;
import java.rmi.RemoteException;
public interface mathOperationRemote extends EJBObject
{
public int add(int a, int b) throws RemoteException;
public int sub(int a, int b) throws RemoteException;
public int mul(int a, int b) throws RemoteException;
public int div(int a, int b) throws RemoteException;
}
The source code for the home interface is as shown below. The home interface has
methods to create the EJB Objects.
//Home Interface
//mathOperationHome.java
import java.io.Serializable;
import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
public interface mathOperationHome extends EJBHome
{
mathOperationRemote create() throws RemoteException,
CreateException;
}
The source code for the mathOperation bean, which implements the four methods
add, subtract, multiply and divide, is given below.
//mathOperationBeans.java
import java.rmi.RemoteException;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
public class mathOperationBeans implements SessionBean
{
public int add(int a, int b)
{
return (a+b);
}
public int sub(int a, int b)
{
return (a-b);
}
public int mul(int a, int b)
{
return (a*b);
}
public int div(int a, int b)
{
return (a/b);
}
public mathOperationBeans() { }
public void ejbCreate() { }
public void ejbRemove() { }
public void ejbActivate() { }
public void ejbPassivate() { }
public void setSessionContext(SessionContext sc) { }
}
The source code for the client, which uses the business functions add, subtract, multiply
and divide, is given below.
//mathOperationClient.java
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;
import mathOperationRemote;
import mathOperationHome;
public class mathOperationClient
{
public static void main(String args[]) throws Exception
{
try
{
Context initial = new InitialContext();
Object objref=initial.lookup("mathOperJndi");
System.out.println("after jndi calling");
mathOperationHome home=
(mathOperationHome)PortableRemoteObject.narrow(objref,
mathOperationHome.class);
mathOperationRemote mathremote = home.create();
int no1 = 10;
int no2 = 20;
int result=0;
result = mathremote.add(no1, no2);
System.out.println("Sum of given numbers = "+result);
result = mathremote.sub(no1, no2);
System.out.println("Difference of given numbers = "+result);
result = mathremote.mul(no1, no2);
System.out.println("Multiplication of given numbers = "+result);
result = mathremote.div(no1, no2);
System.out.println("Division of given numbers = "+result);
}
catch (Exception e)
{
System.out.println("Exception occurred = "+e);
}
}
}
Let us assume the Java 2 SDK Enterprise Edition server is installed in c:\j2sdkee1.2.1
folder and Java 2 Standard Edition is installed in c:\jdk1.3.1_11 folder. The following
configuration is required after installation of J2EE and JDK1.3.1_11. Assume that the java
programs are present in the folder c:\java\ejb directory.
To limit the number of stateful session beans instances in memory, the container can
swap out a stateful bean saving its conversational state to a hard disk or other storage. This
is called passivation. After passivating a stateful bean, the conversational state is safely
stored away, allowing resources like memory to be reclaimed. When the original client
invokes a method, the passivated conversational state is swapped back into a bean. This is
called activation. The bean then resumes the conversation with the original client. Thus, EJB does
indeed support the effect of pooling stateful session beans. Only a few instances can be in
memory when there are actually many clients. The container decides which beans to activate
and which beans to passivate. Most containers employ a Least Recently Used (LRU)
passivation strategy, which means to passivate the beans that have been called the least
recently. If a bean hasn’t been invoked in a while, the container writes it to disk. Passivation
can occur at any time, as long as a bean is not involved in a method call. To activate beans
most containers commonly use a just-in-time algorithm, which activates the bean on demand
as client requests come in. If a client request comes in, but that client’s conversation has
been passivated, the container activates the beans on demand, reading the passivated state
back into memory.
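The LRU policy itself can be sketched in plain Java. This is not container code; the session names and the capacity of two are made-up values, and an access-ordered java.util.LinkedHashMap stands in for the container's bookkeeping of which bean was called least recently:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: an access-ordered LinkedHashMap evicts the least-recently-used
// entry, mimicking an LRU passivation strategy for stateful beans.
public class LruPassivationSketch {
    static final int CAPACITY = 2; // pretend only 2 bean instances fit in memory

    static List<String> survivors() {
        Map<String, String> inMemory = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                // A real container would serialize the eldest bean's state to disk here
                return size() > CAPACITY;
            }
        };
        inMemory.put("session-A", "state-A");
        inMemory.put("session-B", "state-B");
        inMemory.get("session-A");            // touch A: B becomes least recently used
        inMemory.put("session-C", "state-C"); // "passivates" session-B
        return new ArrayList<>(inMemory.keySet());
    }

    public static void main(String[] args) {
        System.out.println(survivors()); // [session-A, session-C]
    }
}
```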
Activation and Passivation Callbacks
Figure: Passivation of a stateful session bean. The client invokes a business method on
the EJB object; the container calls the bean's ejbPassivate() method and writes its
conversational state to secondary storage.
When an EJB container passivates a bean, the container writes the bean’s
conversational state to secondary storage, such as a file or a database. The container informs
the beans that it’s about to perform passivation by calling the bean’s required ejbPassivate()
callback method. ejbPassivate() is a warning to the bean that its held conversational state is
about to be swapped out. It’s important that the container informs the bean using ejbPassivate()
so that the bean can relinquish held resources. These held resources include database
connections, open sockets, open files, or other resources that it does not make sense to save
to disk or that cannot be transparently saved using object serialization.
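The mechanics can be sketched with plain Java object serialization. This is an illustration, not a real container: serializable conversational state survives the passivation/activation round trip, while a transient field, standing in for a resource that ejbPassivate() must release, does not.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch: passivation as object serialization. The transient field mirrors
// the rule that ejbPassivate() must release resources (connections, sockets)
// that cannot be saved, to be reacquired in ejbActivate().
public class PassivationSketch {
    static class ConversationState implements Serializable {
        int itemsInCart = 3;                             // serializable state survives
        transient Object openConnection = new Object();  // released on passivation
    }

    static ConversationState passivateThenActivate(ConversationState s) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(s);                      // "write to secondary storage"
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (ConversationState) in.readObject(); // "read back on activation"
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ConversationState restored = passivateThenActivate(new ConversationState());
        System.out.println(restored.itemsInCart);            // 3
        System.out.println(restored.openConnection == null); // true: must be reacquired
    }
}
```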
Figure 6.9 Activation of a passivated bean: 1. The client invokes a business method on
the EJB object; 2. the container retrieves the passivated bean state; 3. the container
reconstructs the bean in memory; 4. the container calls ejbActivate(); 5. the container
invokes the business method on the bean.
The client has invoked a method on an EJB object that does not have a bean tied to
it in memory. The container needs to activate the required bean. The serialized conversational
state is read back into memory, and the container reconstructs the in-memory state using
object serialization or the equivalent. The container then calls the bean's required ejbActivate()
method. ejbActivate() gives the bean a chance to restore the open resources it released
during ejbPassivate(). Figure 6.9 illustrates how the client has invoked a method on an
EJB object whose stateful bean has been passivated.
EJB 2.1 Stateful Session Bean Example
It is necessary to create the home interfaces – remote home interface and local home
interface for the bean.
Implementing the remote home interface
A remote client invokes the EJB through its remote interface. The client invokes the
create method that is declared within the remote home interface. The container passes the
client call to the ejbCreate method, with the appropriate parameter signature, within the
bean implementation. The requirements for developing the remote home interface include:
The remote home interface must extend the javax.ejb.EJBHome interface.
All create methods may throw the following exceptions:
javax.ejb.CreateException
javax.ejb.RemoteException
optional application exceptions
The code below shows a local home interface called HelloLocalHome for a stateful
session bean. You use the arguments passed into the various create methods to initialize the
session bean’s state.
// Local Home Interface for a Stateful Session Bean
package hello;
import javax.ejb.*;
public interface HelloLocalHome extends EJBLocalHome {
public HelloLocal create() throws CreateException;
public HelloLocal create(String message) throws CreateException;
}
When the entity bean data is loaded into an in-memory entity bean instance, the data
stored in the database is read and can be manipulated within a Java Virtual Machine.
The in-memory entity bean is simply a view or lens into the database. There are multiple
physical copies of the same data: the in-memory entity bean instance and the entity data
itself stored in the database. Therefore there must be a mechanism to transfer information
back and forth between the Java object and the database. This data transfer is accomplished
with two special methods that the entity bean class must implement, called ejbLoad() and
ejbStore().
ejbLoad() reads the data from persistent storage into the entity bean's in-memory fields.
ejbStore() saves the bean instance's current fields to the underlying data storage. It is
the complement of ejbLoad().
ejbLoad() and ejbStore() are callback methods that the container invokes. They
are management methods required by EJB.
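The load/store pattern can be sketched by analogy in plain Java. This is not the javax.ejb API; the Map-backed "database", the AccountBean class and its field names are invented for illustration, with load() and store() playing the roles of ejbLoad() and ejbStore():

```java
import java.util.HashMap;
import java.util.Map;

// Analogy sketch: an in-memory "bean" whose fields are synchronized with
// an underlying store via load()/store(), mirroring ejbLoad()/ejbStore().
public class LoadStoreSketch {
    static final Map<String, Integer> database = new HashMap<>(); // stands in for the real DB

    static class AccountBean {
        String id;      // primary key
        int balance;    // persistent field

        void load(String key) {   // like ejbLoad(): DB -> in-memory fields
            id = key;
            balance = database.get(key);
        }
        void store() {            // like ejbStore(): in-memory fields -> DB
            database.put(id, balance);
        }
    }

    public static void main(String[] args) {
        database.put("acct-1", 100);
        AccountBean bean = new AccountBean();
        bean.load("acct-1");   // container would call ejbLoad() before business methods
        bean.balance += 50;    // a business method mutates the in-memory view
        bean.store();          // container would call ejbStore() after the transaction
        System.out.println(database.get("acct-1")); // 150
    }
}
```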
6.4.1.3 Several Entity Bean Instances May Represent The Same Underlying Data
Let’s consider the scenario in which many threads of execution want to access the
same database simultaneously. In banking, interest might be applied to a bank account,
while at the same time a company directly deposits a check into that account. In E-commerce,
many different client browsers may be simultaneously interacting with a catalog of products.
To facilitate many clients accessing the same data, there is a need to design a high-
performance access system to the entity beans. One possibility is to allow many clients to
share the same entity bean instance; that way, an entity bean could service many client
requests simultaneously. While this is an interesting idea, it is inappropriate for EJB, for
two reasons. First, writing thread-safe code is difficult and error prone. Mandating that
component vendors produce stable, thread-safe code would discourage component
development. Second, having multiple threads of execution makes transactions almost
impossible to control by the underlying transaction system. For these reasons, EJB dictates
that only a single thread can ever be running within a bean instance. With session beans
and message-driven beans, as well as entity beans, all bean instances are single threaded.
But mandating that each bean service only one client at a time could result in performance
bottlenecks: because each instance is single threaded, clients would need to run in lockstep,
each waiting their turn to use a bean. This would easily grind performance to a halt in any
large enterprise deployment. To boost performance, containers are allowed to instantiate
multiple instances of the same entity bean class. This allows many clients to interact
concurrently with separate instances, each representing the same underlying entity data.
Indeed, this is exactly what EJB allows containers to do. Thus client requests do not need
to be processed sequentially, but can be processed concurrently.
Having multiple instances of the same data gives rise to a data corruption problem. If
many bean instances representing the same underlying data are cached, multiple in-memory
replicas are created. To achieve entity bean instance cache consistency, each entity bean
instance needs to be routinely synchronized with the underlying storage by calling the
bean's ejbLoad() and ejbStore().
Figure: Three clients (William, Hellen and Allen) each access their own bank account
through a separate EJB object (EJB Object 1, 2 and 3), each backed by an entity bean
instance.
The container may pool and reuse entity bean instances to represent different instances
of the same type of data in an underlying storage. For example, a container could use a
single bank account entity bean instance to represent different bank account records. When
an entity bean instance is no longer in use, the instance may be assigned to handle a different
client's request and may represent different data. The container performs this by dynamically
assigning the entity bean instances to different client-specific EJB objects. Not only does
this save the container from unnecessarily instantiating bean instances, but this scheme
also saves on the total amount of resources held by the system.
Instance pooling is an interesting optimization that containers may provide, and it is not
at all unique to entity beans. However, complications arise when reassigning entity bean
instances to different EJB objects. When an entity bean is assigned to a particular object,
it may be holding resources such as socket connections. When it's in the pool, it may not
need that socket. Thus, to allow the bean to release and acquire resources, the entity bean
class implements two callback methods. ejbActivate() is the callback that the container
invokes on a bean instance when transitioning it out of a generic instance pool. This process
is called activation, and it indicates that the container is associating the bean with a specific
EJB object and a specific primary key. The ejbActivate() method should acquire resources,
such as sockets, that the bean needs when assigned to a particular EJB object.
ejbPassivate() is the callback that the container invokes when transitioning the
bean into a generic instance pool. This process is called passivation, and it indicates that the
container is disassociating the bean from a specific EJB object and a specific primary key.
<local-home>CatalogLocalHome</local-home>
<local>CatalogLocal</local>
<ejb-class>CatalogBean</ejb-class>
<persistence-type>Container</persistence-type>
<prim-key-class>String</prim-key-class>
<reentrant>False</reentrant>
<cmp-version>2.x</cmp-version>
<abstract-schema-name>Catalog</abstract-schema-name>
<cmp-field>
<field-name>catalogId</field-name>
</cmp-field>
<cmp-field>
<field-name>journal</field-name>
</cmp-field>
<cmp-field>
<field-name>publisher</field-name>
</cmp-field>
<query>
<query-method>
<method-name>findByJournal</method-name>
<method-params>
<method-param>java.lang.String</method-param>
</method-params>
</query-method>
<ejb-ql>
<![CDATA[
SELECT DISTINCT OBJECT(obj) FROM Catalog obj WHERE obj.journal = ?1
]]>
</ejb-ql>
</query>
</entity>
</enterprise-beans>
<relationships>
<ejb-relation>
<ejb-relation-name>Catalog-Editions</ejb-relation-name>
<ejb-relationship-role>
<ejb-relationship-role-name>Catalog-Has-Editions</ejb-relationship-role-name>
<multiplicity>One</multiplicity>
<relationship-role-source>
<ejb-name>Catalog</ejb-name>
</relationship-role-source>
<cmr-field>
<cmr-field-name>editions</cmr-field-name>
<cmr-field-type>java.util.Collection</cmr-field-type>
</cmr-field>
</ejb-relationship-role>
<ejb-relationship-role>
<ejb-relationship-role-name>Editions-Belong-To-Catalog</ejb-relationship-role-name>
<multiplicity>One</multiplicity>
<cascade-delete />
<relationship-role-source>
<ejb-name>Edition</ejb-name>
</relationship-role-source>
</ejb-relationship-role>
</ejb-relation>
</relationships>
</ejb-jar>
EJB 3.0 Code Example
In comparison, an EJB 3.0 entity bean class is a POJO that does not implement the
EntityBean interface. The callback methods ejbCreate and ejbPostCreate are not
required in the EJB 3.0 entity bean class. Also, the component and home interfaces and
deployment descriptors are not required in EJB 3.0. The values specified in the EJB 2.1
deployment descriptor are included in EJB 3.0 bean class with JDK 5.0 annotations. Thus,
the number of classes/interfaces/deployment descriptors is reduced in the EJB 3.0
specification.
//CatalogBean.java
import javax.persistence.Entity;
import javax.persistence.NamedQuery;
import javax.persistence.Id;
import javax.persistence.Column;
import javax.persistence.OneToMany;
@Entity
@NamedQuery(name="findByJournal", queryString="SELECT DISTINCT OBJECT(obj)
FROM Catalog obj WHERE obj.journal = ?1")
public class CatalogBean{
public CatalogBean(){}
}
The query and remove operations are provided through an EntityManager, em (assumed
to be available to the enclosing session bean):
public java.util.Collection findByJournal(String journal){
Query query=em.createNamedQuery("findByJournal");
query.setParameter(1, journal);
return query.getResultList();
}
public void remove(CatalogBean catalogBean){
em.remove(catalogBean);
}
6.5 DEPLOYMENT
Before you can successfully run your enterprise beans on either a test or production
server, you need to generate deployment code for the enterprise beans. You can do this
using the EJB deployment tool or use the command-line interface.
Using the command line, you can run a build process overnight and have the deployment
tool automatically invoked to generate your deployment code in batch mode.
The EJB deployment tool accepts an input EJB JAR or EAR file that contains one or
more enterprise beans. It then generates an output deployed JAR or EAR file (depending
on the type of the input file) that contains deployment code in the form of .class files.
Jar files are ZIP files that are used specifically for packaging Java classes that are
ready to be used in some type of application. A Jar file containing one or more enterprise
beans includes the bean classes, remote interfaces, home interfaces, and primary keys for
each bean. It also contains one deployment descriptor.
Deployment is the process of reading the bean’s JAR file, changing or adding properties
to the deployment descriptor, mapping the bean to the database, defining access control in
the security domain, and generating vendor-specific classes needed to support the bean in
the EJB environment. Every EJB server product comes with its own deployment tools
containing a graphical user interface and a set of command-line programs.
The javax.ejb.deployment package defines classes used by EJB containers to encapsulate
information about EJB objects. An EJB container should provide a tool that creates an
instance of the EntityDescriptor or SessionDescriptor class for a bean, initializes its fields,
and then serializes that initialized instance. Then, when the bean is deployed into the EJB
container, the container reads the serialized deployment descriptor class and its properties to
obtain configuration information for the bean. Figure 6.11 shows the class hierarchy of this
package.
6.5.1 Deployment Descriptor Class
An enterprise bean is deployed within an EJB container. At deployment of the enterprise
bean, the container generates implementations for both the home and remote interfaces of
the enterprise bean. The container reads a deployment descriptor to obtain the EJB-
specific information it needs. This deployment descriptor, called ejb-jar.xml, is an XML
file that the EJB provider initializes with information about its enterprise bean. This information
includes the names of the Java interfaces that define the home and remote interfaces. In
this way, the EJB container can build the custom interfaces that the client component needs
to access the enterprise bean.
The DeploymentDescriptor class is the base class used by both the SessionDescriptor
and the EntityDescriptor classes. It provides functionality that is common to all types of
deployment descriptor. This class is the main way in which information is communicated
from the EJB developer to the deployer and to the container in which the bean will be
deployed. Typically, the bean developer uses the "setter" methods of this class to initialize
the various properties of the class, and the deployment environment uses the "getter" methods
to read these values at deployment time. All currently available EJB tools provide graphical
user interface (GUI) tools to allow the developer and deployer to generate Deployment-
Descriptors and their associated classes via pointing and clicking. The source for the
DeploymentDescriptor class is given below.
public class javax.ejb.deployment.DeploymentDescriptor extends java.lang.Object
implements java.io.Serializable
{
protected int versionNumber;
public DeploymentDescriptor();
public AccessControlEntry[] getAccessControlEntries();
public AccessControlEntry getAccessControlEntries(int index);
public Name getBeanHomeName();
public ControlDescriptor[] getControlDescriptors();
public String getHomeInterfaceClassName();
public boolean getReentrant();
public String getRemoteInterfaceClassName();
public boolean isReentrant();
public void setAccessControlEntries(AccessControlEntry values[]);
public void setAccessControlEntries(int index, AccessControlEntry value);
EJB 3.0 continues to support the use of deployment descriptors. You may use Java
language metadata annotations or deployment descriptors. You may also combine the use of
deployment descriptors with Java language metadata annotations to override the values of
annotations or to supplement the use of annotations.
6.6 CONCLUSION
This unit has explained the theoretical concepts behind entity beans and session beans.
The concepts have been illustrated using sample programs.
HAVE YOU UNDERSTOOD QUESTIONS
1. Write the differences between Entity and session beans?
2. What are the two types of session beans?
3. What is ejbActivate() Method?
4. What is the SessionDescriptor class?
5. What is EntityDescriptor class?
6. How to write code for entity and session beans?
7. How to deploy EJB?
SUMMARY
A session bean instance is a relatively short-lived object. It has roughly the lifetime
of a session, or of the client code that is calling the session bean.
The two subtypes of session beans are
stateless session beans
stateful session beans
ejbActivate() is the callback that the container invokes on a bean instance when
transitioning it out of a generic instance pool. This process is called activation.
ejbPassivate() is the callback that the container invokes when transitioning the
bean into a generic instance pool. This process is called passivation.
The DeploymentDescriptor class is the base class used by both the SessionDescriptor
and the EntityDescriptor classes. It provides functionality that is common to all
types of deployment descriptor.
An AccessControlEntry is another class in the javax.ejb.deployment package; the
purpose of this class is to pair a given method in the bean with a list of identities.
EXERCISES
Part I
1. Pooling is simplest for
a. stateful session beans
b. stateless session beans
c. entity beans
d. All of the above
2. Containers commonly employ a _______________ passivation strategy.
a. First in First out
b. Least Recently Used
c. Last in First Out
d. Round Robin
3. Multiple bean instances representing the same data raises a problem called
a. Data corruption
b. Data consistency
c. Overflow of data
d. All the above
4. What kind of beans hold no conversational state?
a. stateless session beans
b. stateful session beans
c. entity beans
d. All of the above
5. Passivation and Activation are not useful for what kind of beans?
a. stateless session beans
b. stateful session beans
c. entity beans
d. All of the above
Part II
6. Explain the difference between entity beans and session beans ?
7. Explain the use of the ejbPassivate() method?
8. What is the function of a Deployment Descriptor?
9. What is AccessControlEntry class?
10. Explain where and when ejbLoad() method is used?
11. What are Activation and Passivation Callbacks?
12. How can Entity beans instances be pooled?
13. Explain the difference between stateless and stateful session beans? Where would
you use which bean? Explain with a code example.
14. Explain the features of Entity beans? Where would you use them? Explain with a
code example.
15. Explain the creation and removal of Entity beans?
Part III
16. Write code to develop and deploy the following EJB applications.
a. The Fibonacci series 0, 1, 1, 2, 3, 5, 8, 13, …, n.
b. An EJB bean that takes an integer value and returns the number with its digits
reversed (for example, given the number 78981, the output should be 18987).
c. Print the students mark list for ‘n’ number of students. Include student Register
number, name, marks for 5 subjects and total marks for each student.
d. Maintain and update a sorted List of Names of Countries and their Capitals. Query
on the Country name should return the name of the Capital.
e. Print the account balance of a customer in a bank. Use the customer code and
account number to display the account balance.
Part I - Answers
1) b 2) b 3) a 4) a 5) a
REFERENCES
1. Rima Patel Sriganesh and Gerald Brose, Mastering Enterprise JavaBeans, Third Edition
2. Tom Valesky, Enterprise JavaBeans
3. EJB Overview: http://publib.boulder.ibm.com/infocenter/wbihelp/v6rxmx/index.jsp?topic=/com.ibm.wics_developer.doc/doc/access_dev_ejb/access_dev_ejb16.htm
4. Deepak Vohra, Migrating EJB 2.1 Entity and Session Beans to EJB 3.0
5. http://www.regdeveloper.co.uk/2006/04/25/ejb3_migration/
UNIT IV
CHAPTER - 7
CORBA
7.1 INTRODUCTION
The previous units have discussed the evolution of business applications from the
monolithic mainframe architecture to the highly decentralized distributed architecture. This
unit discusses CORBA or Common Object Request Broker Architecture. CORBA is a
standard architecture for distributed object systems. Distributed object systems are distributed
systems in which all entities are modeled as objects, and the CORBA architecture allows a
distributed, heterogeneous collection of objects to interoperate. CORBA is just a specification,
not a programming language.
CORBA architecture is both platform independent and language independent. Hence
CORBA is an open architecture that provides for interoperability of distributed objects on
different platforms, under different operating systems and implemented in different
programming languages. Furthermore, CORBA objects need not know which language was
used to implement other CORBA objects that they talk to.
Distributed systems rely on the definition of interfaces between components and on
the existence of various services, such as directory registration and lookup that are available
to an application. CORBA provides a standard mechanism for defining the interfaces between
components as well as some tools to facilitate the implementation of those interfaces using
the developer’s choice of languages. In addition, a wealth of standard services, such as
directory and naming services, persistent object services, and transaction services, has
been defined.
7.2 LEARNING OBJECTIVES
At the end of this unit, the reader must be familiar with the following concepts:
History of CORBA
OMG’s Object Management Architecture
ORB Architecture and its Principal components
Static and Dynamic invocation
Advantages of CORBA architecture
Developing and deploying a CORBA application
7.3 HISTORY OF CORBA
7.3.1 Object Management Group
The Object Management Group (OMG) is responsible for defining CORBA. The
OMG is an international independent not-for-profit corporation. It was founded in April
1989 by eleven companies, including 3Com Corporation, American Airlines, Canon Inc.,
Data General, Hewlett-Packard, Philips Telecommunications N.V., Sun Microsystems and
Unisys Corporation. That same year, the OMG converted to a consortium with open
membership. Presently, there are over 800 companies with membership in the OMG, and its
members include almost all the major vendors and developers of distributed object technology,
including platform, database, and application vendors as well as software tool and corporate
developers. The mission of the OMG is to provide a common framework for object-oriented
application development through establishing industry guidelines and detailed object
management specifications. Several well-known specifications, including UML, CORBA and
the OMA (Object Management Architecture), are managed by this group. The OMG continually
makes improvements to these specifications.
The OMG has developed a conceptual model, known as the core object model, and
reference architecture Object Management Architecture (OMA). The OMG OMA attempts
to define, at a high level of abstraction, the various facilities necessary for distributed object-
oriented computing. These components define the composition of objects and their interfaces.
The core of the OMA is the Object Request Broker (ORB) which is a common communication
bus for objects. The technology adopted for ORBs is known as the Common Object Request
Broker Architecture (CORBA).
7.3.2 Corba
The Common Object Request Broker: Architecture and Specifications has evolved
over the past several years. Many versions of CORBA were released starting from 1991.
The specifications are aimed at software designers and developers who want to produce
applications that comply with OMG standards for the Object Request Broker (ORB). The
benefit of compliance is, in general, to be able to produce interoperable applications that are
based on distributed, interoperating objects.
CORBA 1.0 was introduced and adopted in October 1991. It was followed in 1992 by
CORBA 1.1 and then in 1993 by CORBA 1.2. The specifications defined the Interface
Definition Language (IDL) as well as the API for applications to communicate with an
Object Request Broker (ORB). The CORBA 1.x versions made an important first step
toward object interoperability, allowing objects on different machines, on different
architectures, and written in different languages to communicate with each other.
CORBA 1.x was an important first step in providing distributed object interoperability,
but it wasn’t a complete specification. Its chief limitation was that it did not specify a standard
protocol through which ORBs could communicate with each other. As a result, a CORBA
ORB from one vendor could not communicate with an ORB from another vendor, a restriction
that severely limited interoperability among distributed objects.
Released in 1996, CORBA 2.0 defined standard protocols by which ORBs from
various CORBA vendors could communicate. The General Inter-ORB Protocol / Internet Inter-
ORB Protocol (GIOP/IIOP) were added to solve the interoperation problem between
CORBA platforms from different vendors. The introduction of these protocols made CORBA
applications more vendor-independent. The CORBA 2.x revisions introduced evolutionary
advancements in the CORBA architecture.
CORBA 2.1, released in August 1997, added further security features (secure IIOP
and IIOP over SSL), added two language mappings (COBOL and Ada) and included
interoperability revisions and IDL type extensions. CORBA 2.2, released in February 1998,
included the server portability enhancements (POA), DCOM interworking, and the IDL/
Java language mapping specification. The POA gave an explicit standard specification for
server portability across CORBA platforms from different vendors. In 1999 and 2000,
CORBA versions 2.3 and 2.4 were released, and versions 2.5 and 2.6 were released in
2001. These versions included specifications relating to ORB security and Quality of Service
(QoS). They contained the Asynchronous Messaging, Minimum CORBA, and Real-Time
CORBA specifications as well as revisions for the Interoperable Name Service,
Components, Notification Service, and Firewall specifications.
CORBA 3.0 released in July 2002 is an important version in the history of CORBA.
The CORBA Core specification, v3.0 includes updates based on output from the Core RTF
(Revision Task Force), the Interop RTF and the Object Reference Template. The CORBA
Component Model, v3.0 released simultaneously as a stand-alone specification, enables
tighter integration with Java and other component technologies, making it easier for
programmers to use CORBA. Also with this release, Minimum CORBA and Real-time
CORBA (both added to CORBA Core in Release 2.4) became separate documents. CORBA
3.0.1 (November 2002) and CORBA 3.0.2 (December 2002) contained editorial updates to
the 3.0 version.
7.4 OMA REFERENCE MODEL
The Object Management Architecture (OMA) is OMG’s vision for the component
software environment. The architecture provides guidance on how standardization of
component interfaces penetrates up to but not including applications in order to create a
plug-and-play component software environment based on object technology. Figure 7.1
illustrates the primary components in the OMG Object Management Architecture (OMA).
The CORBA specification must have software to implement it. The software that implements
the CORBA specification is called the ORB. The ORB, which is the heart of CORBA, is
responsible for all the mechanisms required to perform these tasks: find the object
implementation for the request, prepare the object implementation to receive the request,
and communicate the data making up the request.
Figure 7.2 illustrates the primary components in the CORBA ORB architecture. The
Client is the entity that wishes to perform an operation on the object and the Object
Implementation is the code and data that actually implements the object.
GIOP, the General Inter-ORB Protocol, is a protocol that was specified to standardize
the interfaces for request interchange between ORBs and thus allows different ORB
implementations to communicate. More specifically, it standardizes the transfer syntax for
requests and the messages that can be used. IIOP is the Internet-compliant
implementation of GIOP, and is the one that is of relevance in practice.
7.6 OBJECT REQUEST BROKER ARCHITECTURE
Figure 7.3 shows a request being sent by a client to an object implementation.
The Client is the entity that wishes to perform an operation on the object and the
Object Implementation is the code and data that actually implements the object. The ORB
is responsible for all of the mechanisms required to find the object implementation for the
request, to prepare the object implementation to receive the request and to communicate the
data making up the request. The interface the client sees is completely independent of
where the object is located, what programming language it is implemented in, or any other
aspect that is not reflected in the object’s interface.
The key feature of the ORB is the transparency of how it facilitates client/object
communication. ORB hides the following:
Object location: The client does not know where the target object resides. It
could reside in a different process on another machine across the network, on the
same machine but in a different process, or within the same process.
Figure 7.3 A Request Being Sent Through the Object Request Broker
Object implementation: The client does not know how the target object is
implemented, what programming or scripting language it was written in, or the
operating system and hardware it executes on.
Object execution state: When it makes a request on a target object, the client
does not need to know whether that object is currently activated (in an executing
process) and ready to accept requests. The ORB transparently starts the object if
necessary before delivering the request to it.
Object communication mechanisms: The client does not know what
communication mechanisms (e.g., TCP/IP, shared memory, local method call, etc.)
the ORB uses to deliver the request to the object and return the response to the
client.
These ORB features allow application developers to worry more about their own
application domain issues and less about low-level distributed system programming issues.
Figure 7.4 shows the structure of an individual Object Request Broker (ORB). To
make a request, the Client can use the Dynamic Invocation Interface (the same interface
independent of the target object’s interface) or an OMG IDL stub (the specific stub depending
on the interface of the target object).
Dynamic Invocation is used when at compile time a client does not have knowledge
about an object it wants to invoke. Once an object is discovered, the client program can
obtain a definition of it, issue a parameterized call to it, and receive a reply from it, all
without having a type-specific client stub for the remote object. The Client can also directly
interact with the ORB for some functions.
Definitions of the interfaces to objects can be defined in two ways. Interfaces can be
defined statically in an IDL. This language defines the types of objects according to the
operations that may be performed on them and the parameters to those operations.
Alternatively, or in addition, interfaces can be added to an Interface Repository service; this
service represents the components of an interface as objects, permitting run-time access to
these components. In any ORB implementation, IDL and the Interface Repository have
equivalent expressive power.
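As a sketch, a static interface definition in OMG IDL might look as follows; the BankApp module, the Account interface and its operations are hypothetical, invented here for illustration:

```idl
// Hypothetical module defining one object type and its operations.
module BankApp {
    exception InsufficientFunds { double shortfall; };

    interface Account {
        readonly attribute string owner;
        double balance();
        void deposit(in double amount);
        void withdraw(in double amount) raises (InsufficientFunds);
    };
};
```

An IDL compiler turns such a definition into client stubs and server skeletons for each mapped language, and the same definition can also be loaded into the Interface Repository for run-time access.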
Figure 7.5 shows how a client can initiate a request to the ORB in both ways.
ORB Core
As shown in Figure 7.6, the ORB locates the appropriate implementation code,
transmits parameters, and transfers control to the Object Implementation through an IDL
skeleton or a dynamic skeleton.
Skeletons are specific to the interface and the object adapter. In performing the request,
the object implementation may obtain some services from the ORB through the Object
Adapter. When the request is complete, control and output values are returned to the
client. The Object Implementation may choose which Object Adapter to use. This decision
is based on what kind of services the Object Implementation requires.
As long as interface definitions are available in the form of stub routines or a run-time
interface repository, a particular ORB may be able to function correctly.
IDL is the means by which a particular object implementation tells its potential clients
what operations are available and how they should be invoked. From the IDL definitions, it
is possible to map CORBA objects into particular programming languages or object systems.
7.6.6 Mapping of IDL to Programming Languages
Different object-oriented or non-object-oriented programming languages may prefer
to access CORBA objects in different ways. For object-oriented languages, it may be desirable
to see CORBA objects as programming language objects. Even for non-object-oriented
languages, it is a good idea to hide the exact ORB representation of the object reference,
method names, etc. A particular mapping of OMG IDL to a programming language should
be the same for all ORB implementations. Language mapping includes definition of the
language-specific data types and procedure interfaces to access objects through the ORB.
It includes the structure of the client stub interface (not required for object-oriented languages),
the dynamic invocation interface, the implementation skeleton, the object adapters, and the
direct ORB interface.
A language mapping also defines the interaction between object invocations and the
threads of control in the client or implementation. The most common mappings provide
synchronous calls, in that the routine returns when the object operation completes. Additional
mappings may be provided to allow a call to be initiated and control returned to the program.
In such cases, additional language-specific routines must be provided to synchronize the
program’s threads of control with the object invocation.
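As an illustration of a synchronous language mapping, the following is roughly what an IDL-to-Java compiler produces for a hypothetical IDL interface Account with a readonly attribute and two operations. The real mapping also generates stub, helper and holder classes and ties the interface to org.omg.CORBA.Object; those parts are omitted so the sketch stays self-contained:

```java
// IDL operations map onto a Java "Operations" interface; IDL "in" parameters
// become ordinary Java parameters, and a readonly attribute becomes an accessor.
interface AccountOperations {
    String owner();               // from: readonly attribute string owner;
    double balance();             // from: double balance();
    void deposit(double amount);  // from: void deposit(in double amount);
}

// The signature interface combines the operations with the ORB base types
// (here simplified to a plain extends).
interface Account extends AccountOperations { }
```

Client code programs against the Account interface; whether the implementation behind it is local or remote is hidden by the ORB.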
7.6.7 Client Stubs
A client stub is a small piece of code that allows a client component to access a server
component. The remote object reference that is held by the client points to the client stub.
This stub is specific to the IDL interface from which it was generated, and it contains the
information needed for the client to invoke a method on the CORBA object that was defined
in the IDL interface.
Generally, the client stubs will present access to the OMG IDL-defined operations on
an object in a way that is easy for programmers to predict once they are familiar with OMG
IDL and the language mapping for the particular programming language. The stubs make
calls on the rest of the ORB using interfaces that are private to, and presumably optimized
for, the particular ORB Core. If more than one ORB is available, there may be different
stubs corresponding to the different ORBs. In this case, it is necessary for the ORB and
language mapping to cooperate to associate the correct stubs with the particular object
reference.
7.6.8 Dynamic Invocation Interface
An interface is also available that allows the dynamic construction of object invocations,
that is, rather than calling a stub routine that is specific to a particular operation on a
particular object, a client may specify the object to be invoked, the operation to be performed,
and the set of parameters for the operation through a call or sequence of calls. The client
code must supply information about the operation to be performed and the types of the
parameters being passed (perhaps obtaining it from an Interface Repository or other run-
time source). The nature of the dynamic invocation interface may vary substantially from
one programming language mapping to another.
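The CORBA DII API itself is not shown here; as a rough stdlib analogy, Java reflection captures the same idea of naming the operation and supplying its parameters at run time, without a type-specific stub (the Greeter class is invented for illustration):

```java
import java.lang.reflect.Method;

public class DiiAnalogy {
    // A target object whose interface the "client" discovers at run time.
    public static class Greeter {
        public String sayHello(String name) { return "Hello, " + name; }
    }

    public static void main(String[] args) throws Exception {
        Object target = new Greeter();
        // Operation name and parameter types are supplied dynamically,
        // much as a DII client builds a request after consulting the
        // Interface Repository.
        Method op = target.getClass().getMethod("sayHello", String.class);
        Object result = op.invoke(target, "CORBA");
        System.out.println(result);  // prints: Hello, CORBA
    }
}
```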
7.6.9 Implementation Skeleton
A server skeleton is the server side analog to a client stub, and these two classes are
used by ORBs in static invocation. For a particular language mapping, and possibly depending
on the object adapter, there will be an interface to the methods that implement each type of
object. The interface will generally be an up-call interface, in that the object implementation
writes routines that conform to the interface and the ORB calls them through the skeleton.
The existence of a skeleton does not imply the existence of a corresponding client stub
(clients can also make requests via the dynamic invocation interface). It is possible to write
an object adapter that does not use skeletons to invoke implementation methods. For example,
it may be possible to create implementations dynamically for languages such as Smalltalk.
7.6.10 Dynamic Skeleton Interface
An interface is available, which allows dynamic handling of object invocations. That is,
rather than being accessed through a skeleton that is specific to a particular operation, an
object’s implementation is reached through an interface that provides access to the operation
name and parameters in a manner analogous to the client side’s Dynamic Invocation Interface.
Purely static knowledge of those parameters may be used, or dynamic knowledge (perhaps
determined through an Interface Repository) may be also used, to determine the parameters.
The implementation code must provide descriptions of all the operation parameters to
the ORB, and the ORB provides the values of any input parameters for use in performing
the operation. The implementation code provides the values of any output parameters, or an
exception, to the ORB after performing the operation. The nature of the dynamic skeleton
interface may vary substantially from one programming language mapping or object adapter
to another, but will typically be an up-call interface.
Dynamic skeletons may be invoked both through client stubs and through the dynamic
invocation interface; either style of client request construction interface provides identical
results.
7.6.11 Object Adaptors
An object adapter is the primary way that an object implementation accesses services
provided by the ORB. There are expected to be a few object adapters that will be widely
available, with interfaces that are appropriate for specific kinds of objects. Services provided
by the ORB through an Object Adapter often include: generation and interpretation of object
references, method invocation, security of interactions, object and implementation activation
and deactivation, mapping object references to implementations, and registration of
implementations.
The wide range of object granularities, lifetimes, policies, implementation styles, and
other properties make it difficult for the ORB Core to provide a single interface that is
convenient and efficient for all objects. Thus, through Object Adapters, it is possible for the
ORB to target particular groups of object implementations that have similar requirements
with interfaces tailored to them.
7.6.12 ORB Interface
An ORB is an abstraction that can be implemented in various ways, e.g., as one or more
processes or a set of libraries. To decouple applications from implementation details, the
CORBA specification defines an interface to an ORB. The ORB Interface is the interface
that goes directly to the ORB, which is the same for all ORBs and does not depend on the
object’s interface or object adapter. Because most of the functionality of the ORB is provided
through the object adapter, stubs, skeleton, or dynamic invocation, there are only a few
operations that are common across all objects.
This ORB interface provides standard operations that (1) initialize and shut down the
ORB, (2) convert object references to strings and back, and (3) create argument lists for
requests made through the dynamic invocation interface (DII). For example, the
interface could provide access to services such as Naming Service, Trader Service and
others. These operations are useful to both clients and implementations of objects.
7.6.13 Interface Repository
The Interface Repository is a service that provides persistent objects that represent
the IDL information in a form available at run-time. The Interface Repository information
may be used by the ORB to perform requests. Moreover, using the information in the
Interface Repository, it is possible for a program to encounter an object whose interface
was not known when the program was compiled, yet, be able to determine what operations
are valid on the object and make an invocation on it at run-time. In addition to its role in the
functioning of the ORB, the Interface Repository is a common place to store additional
information associated with interfaces to ORB objects. For example, debugging information,
libraries of stubs or skeletons, routines that can format or browse particular kinds of objects
might be associated with the Interface Repository.
7.6.14 Implementation Repository
The Implementation Repository contains information that allows the ORB to locate
and activate implementations of objects. Although most of the information in the
Implementation Repository is specific to an ORB or operating environment, the
Implementation Repository is the conventional place for recording such information. Ordinarily,
installation of implementations and control of policies related to the activation and execution
of object implementations is done through operations on the Implementation Repository.
In addition to its role in the functioning of the ORB, the Implementation Repository is a
common place to store additional information associated with implementations of ORB objects.
For example, debugging information, administrative control, resource allocation, security,
etc., might be associated with the Implementation Repository.
7.7 EXAMPLE ORBS
There are a wide variety of ORB implementations possible within the Common ORB
Architecture. Some of the different options are explained below. Note that a particular
ORB might support multiple options and protocols for communication.
Client- and Implementation-resident ORB
If there is a suitable communication mechanism present, an ORB can be implemented
in routines resident in the clients and implementations. The stubs in the client either use a
location-transparent IPC mechanism or directly access a location service to establish
communication with the implementations. Code linked with the implementation is responsible
for setting up appropriate databases for use by clients.
Server-based ORB
To centralize the management of the ORB, all clients and implementations can
communicate with one or more servers whose job it is to route requests from clients to
implementations. The ORB could be a normal program as far as the underlying operating
system is concerned, and normal IPC could be used to communicate with the ORB.
System-based ORB
To enhance security, robustness, and performance, the ORB could be provided as a
basic service of the underlying operating system.
Library-based ORB
For objects that are light-weight and whose implementations can be shared, the
implementation might actually be in a library. In this case, the stubs could be the actual
methods. This assumes that it is possible for a client program to get access to the data for
the objects and that the implementation trusts the client not to damage the data.
When a new object is created, the ORB may be notified so that it knows where to find
the implementation for that object. Usually, the implementation also registers itself as
implementing objects of a particular interface, and specifies how to start up the implementation
if it is not already running.
Most object implementations provide their behaviour using facilities in addition to the
ORB and object adapter. For example, although the Portable Object Adapter provides some
persistent data associated with an object (its OID or Object ID), that relatively small amount
of data is typically used as an identifier for the actual object data stored in a storage service
of the object implementation’s choosing. With this structure, it is not only possible for different
object implementations to use the same storage service, it is also possible for objects to
choose the service that is most appropriate for them.
7.10 STRUCTURE OF AN OBJECT ADAPTER
An object adapter, as illustrated in Figure 7.10, is the primary means for an object
implementation to access ORB services such as object reference generation.
An object adapter exports a public interface to the object implementation, and a private
interface to the skeleton. It is built on a private ORB-dependent interface.
Object adapters are responsible for the following functions:
Generation and interpretation of object references
Method invocation
Security of interactions
Object and implementation activation and deactivation
Mapping object references to implementations
Registration of implementations
and unnecessary for the object adapter to maintain any per-object state. By using an object
adapter interface that is tuned towards such object implementations, it is possible to take
advantage of particular ORB Core details to provide the most effective access to the ORB.
Nested POAs: The POA allows multiple distinct, nested instances of the POA to
exist in a server. Each POA in the server provides a namespace for all the objects
registered with that POA and all the child POAs that are created by this POA. The
POA supports recursive deletes, i.e., destroying a POA destroys all its child POAs.
SSI and DSI support: The POA allows programmers to construct servants that
inherit from (1) static skeleton classes (SSI) generated by OMG IDL compilers or
(2) a Dynamic Skeleton Interface (DSI). Clients need not be aware that a CORBA
object is serviced by a DSI servant or an IDL servant. Two CORBA objects
supporting the same interface can be serviced one by a DSI servant and the other
with an IDL servant. Furthermore, a CORBA object may be serviced by a DSI
servant during some period of time, while the rest of the time it is serviced by an IDL
servant.
7.11.1 POA Architecture
The ORB is an abstraction visible to both the client and server. In contrast, the POA is
an ORB component visible only to the server, i.e., clients are not directly aware of the
POA’s existence or structure. The architecture of the request dispatching model defined by
the POA and the interactions between its standard components and the ORB Core are
described in this section.
User-supplied servants are registered with the POA. Clients hold object references
upon which they make requests, which the POA ultimately dispatches as operations on a
servant. The ORB, POA, servant, and skeleton all collaborate to (1) determine which servant
the operation should be invoked on and (2) dispatch the invocation. Figure 7.11 shows the
POA architecture.
A distinguished POA, called the Root POA, is created and managed by the ORB. The
Root POA is always available to an application through the ORB initialization interface
operation resolve_initial_references. The application developer can register servants with the Root
POA if the policies of the Root POA specified in the POA specification are suitable for the
application.
A server application may want to create multiple POAs to support different kinds of
CORBA objects and/or different kinds of servant styles. For example, a server application
might have two POAs: one supporting transient CORBA objects and the other supporting
persistent CORBA objects. A nested POA can be created by invoking the create_POA
factory operation on a parent POA.
Consult its Active Object Map only – If the Object Id is not found in the Active
Object Map, the POA returns a CORBA::OBJECT_NOT_EXIST exception to
the client.
Use a default servant – If the Object Id is not found in the Active Object Map,
the request is dispatched to the default servant (if available).
Invoke a servant manager – If the Object Id is not found in the Active Object
Map, the servant manager (if available) is given the opportunity to locate a servant
or raise an exception. The servant manager is an application supplied object that
can incarnate or activate a servant and return it to the POA for continued request
processing. Two forms of servant manager are supported: ServantActivator, which
is used for a POA with the RETAIN policy, and ServantLocator, which is used
with the NON_RETAIN policy. Combining these policies with the retention policies
described above provides the POA with a great deal of flexibility.
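The three request processing policies above can be modelled with a toy dispatcher in plain Java. This is not the POA API: ToyPoa and its function-valued "servants" are hypothetical simplifications that only show the lookup order (Active Object Map, then default servant, then an OBJECT_NOT_EXIST failure); servant managers are left out for brevity.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy model of POA request dispatch; a "servant" is just a function
// from operation name to result string.
class ToyPoa {
    private final Map<String, Function<String, String>> activeObjectMap = new HashMap<>();
    private Function<String, String> defaultServant;  // null unless configured

    void activateObject(String objectId, Function<String, String> servant) {
        activeObjectMap.put(objectId, servant);
    }

    void setDefaultServant(Function<String, String> servant) {
        defaultServant = servant;
    }

    String dispatch(String objectId, String operation) {
        // 1. Consult the Active Object Map.
        Function<String, String> servant = activeObjectMap.get(objectId);
        // 2. Fall back to the default servant, if one is available.
        if (servant == null) servant = defaultServant;
        // 3. Otherwise the object does not exist as far as this POA knows.
        if (servant == null) throw new IllegalStateException("OBJECT_NOT_EXIST: " + objectId);
        return servant.apply(operation);
    }
}
```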
7.11.3 The POA Semantics
The POA is used primarily in two modes: (1) request processing and (2) the activation
and deactivation of servants and objects. This section describes these two modes and outlines
the semantics and behaviour of the interactions that occur between the components in the
POA architecture.
Request Processing
Each client request contains an Object Key. The Object Key conveys the Object Id of
the target object and the identity of the POA that created the target object reference. The
end-to-end processing of a client request occurs in the following steps:
Locate the server process: When a client issues a request, the ORB first locates
an appropriate server process, using the Implementation Repository to create a
new process if necessary. In an ORB that uses IIOP, the host name and port
number in the Interoperable Object Reference (IOR) identifies the communication
endpoint of the server process.
Locate the POA: Once the server process has been located, the ORB locates the
appropriate POA within that server. If the designated POA does not exist in the
server process, the server has the opportunity to re-create the required POA by
using an adapter activator. The name of the target POA is specified by the IOR in
a manner that is opaque to the client.
Locate the servant: Once the ORB has located the appropriate POA, it delivers
the request to that POA. The POA finds the appropriate servant by following its
servant retention and request processing policies, which have been described earlier.
Locate the skeleton: The final step the POA performs is to locate the IDL skeleton
that will transform the parameters in the request into arguments. The skeleton then
passes the de-marshalled arguments as parameters to the correct servant operation.
Handling replies, exceptions and location forwarding: The skeleton marshals
any exceptions, return values, in-out, and out parameters returned by the servant so
that they can be sent to the client. The only exception that is given special treatment
is the ForwardRequest exception. It causes the ORB to deliver the current request
and subsequent requests to the object denoted in the forward reference member of
the exception.
One-way
The creators of the first version of CORBA intended ORBs (Object Request Brokers)
to deliver one-way invocations over unreliable transports and protocols such as UDP. However,
The Java class called HelloHolder holds a public instance member of type Hello.
Whenever the IDL type is an out or an inout parameter, the Holder class is used. It provides
operations for org.omg.CORBA.portable.OutputStream and
org.omg.CORBA.portable.InputStream arguments, which CORBA allows, but which do
not map easily to Java’s semantics. The Holder class delegates to the methods in the
Helper class for reading and writing. It implements org.omg.CORBA.portable.Streamable.
HelloApp/HelloHolder.java
package HelloApp;
/**
* HelloApp/HelloHolder.java
* Generated by the IDL-to-Java compiler (portable), version "3.0"
* from Hello.idl
* Thursday, March 22, 2001 2:17:15 PM PST
*/
public final class HelloHolder implements org.omg.CORBA.portable.Streamable
{
public HelloApp.Hello value = null;
public HelloHolder ()
{
}
public HelloHolder (HelloApp.Hello initialValue)
{
value = initialValue;
}
public void _read (org.omg.CORBA.portable.InputStream i)
{
value = HelloApp.HelloHelper.read (i);
}
public void _write (org.omg.CORBA.portable.OutputStream o)
{
HelloApp.HelloHelper.write (o, value);
}
public org.omg.CORBA.TypeCode _type ()
{
return HelloApp.HelloHelper.type ();
}
}
The Java class HelloPOA is the skeleton for the server-side mapping, providing
basic CORBA functionality for the server. It extends org.omg.PortableServer.Servant,
and implements the InvokeHandler interface and the HelloOperations interface. The servant
class, HelloImpl, extends HelloPOA.
HelloApp/HelloPOA.java
package HelloApp;
/**
* HelloApp/HelloPOA.java
* Generated by the IDL-to-Java compiler
* from Hello.idl
*/
public abstract class HelloPOA extends org.omg.PortableServer.Servant
implements HelloApp.HelloOperations, org.omg.CORBA.portable.InvokeHandler
{
// Constructors
private static java.util.Hashtable _methods = new java.util.Hashtable ();
static
{
_methods.put ("sayHello", new java.lang.Integer (0));
To complete the application, the developer must write the client and server code.
The example server consists of two classes, the servant and the server. The servant,
HelloImpl, is the implementation of the Hello IDL interface; each Hello instance is
implemented by a HelloImpl instance. The servant is a subclass of HelloPOA, which is
generated by the idlj compiler from the example IDL.
The servant contains one method for each IDL operation, in this example, the sayHello()
and shutdown() methods. Servant methods are just like ordinary Java methods; the extra
code to deal with the ORB, with marshaling arguments and results, and so on, is provided
by the skeleton.
This example shows the code for a transient server. The "Hello World" application can
also be written for a persistent server.
import org.omg.CosNaming.*;
import org.omg.CosNaming.NamingContextPackage.*;
import org.omg.CORBA.*;
import org.omg.PortableServer.*;
import org.omg.PortableServer.POA;
import java.util.Properties;
class HelloImpl extends HelloPOA {
private ORB orb;
public void setORB(ORB orb_val) {
orb = orb_val;
}
// implement sayHello() method
public String sayHello() {
return "\nHello world !!\n";
}
// implement shutdown() method
public void shutdown() {
orb.shutdown(false);
}
}
public class HelloServer {
public static void main(String args[]) {
try{
// create and initialize the ORB
ORB orb = ORB.init(args, null);
// get reference to rootpoa & activate the POAManager
POA rootpoa = POAHelper.narrow(orb.resolve_initial_references("RootPOA"));
rootpoa.the_POAManager().activate();
// create servant and register it with the ORB
HelloImpl helloImpl = new HelloImpl();
helloImpl.setORB(orb);
// get object reference from the servant
org.omg.CORBA.Object ref = rootpoa.servant_to_reference(helloImpl);
Hello href = HelloHelper.narrow(ref);
// get the root naming context
[Figure: passing an object by reference — the second process holds only an object reference; method invocations travel back to the process that owns the object]
In passing by reference method, the first process, Process A, passes an object reference
to the second process, Process B. When Process B invokes a method on that object, the
method is executed by Process A because that process owns the object. Process B only has
visibility to the object (through the object reference), and thus can only request that Process
A execute methods on Process B’s behalf. Passing an object by reference means that a
process grants visibility of one of its objects to another process while retaining ownership of
that object. When an object is passed by reference, the object itself remains “in place”
while an object reference for that object is passed. Operations on the object through the
object reference are actually processed by the object itself.
Figure 7.15 illustrates the method of passing an object between application components
using passing by value.
In this method, the actual state of the object (such as the values of its member variables)
is passed to the requesting component through serialization. When methods of the object are
invoked by Process B, they are executed by Process B instead of Process A, where the
original object resides. Furthermore, because the object is passed by value, the state of the
original object is not changed; only the copy (now owned by Process B) is modified. Generally,
it is the responsibility of the developer to write the code that serializes and deserializes
objects. When an object is passed by value, the object’s state is copied and passed to its
destination, where a new copy of the object is instantiated. Operations on that object’s copy
are processed by the copy, not by the original object.
[Figure 7.15: passing an object by value — Step 2: a copy of the object is created in Process B; Step 3: processing occurs locally on the copy]
Serialization refers to the encoding of an object’s state into a stream, such as a disk
file or network connection. When an object is serialized, it can be written to such a stream
and subsequently read and deserialized, a process that converts the serialized data containing
the object’s state back into an instance of the object.
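The serialization-based pass-by-value described above can be sketched with Java's built-in serialization, which plays the role of the hand-written serialization code the text mentions. The Message class and copyByValue helper are hypothetical names for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class PassByValueDemo {
    // A simple serializable object standing in for an application object.
    static class Message implements Serializable {
        private static final long serialVersionUID = 1L;
        String text;
        Message(String text) { this.text = text; }
    }

    // Serialize the object's state to a stream and deserialize a new copy,
    // as would happen when the state crosses a network connection.
    static Message copyByValue(Message original) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(original);
        out.flush();
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        return (Message) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Message original = new Message("hello");
        Message copy = copyByValue(original);
        copy.text = "changed";             // "Process B" modifies its copy...
        System.out.println(original.text); // ...the original is untouched: hello
    }
}
```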
One important aspect of the CORBA object model is that all objects are passed by
reference. In order to facilitate passing objects by value in a distributed application, in addition
to passing the state of the object across the network, it is also necessary to ensure that the
component receiving the object has implementations for the methods supported by that
object.
There are a few issues associated with passing objects by reference only. Remember
that when passing by reference is the only option, methods invoked on an object are always
executed by the component that has created that object. An object cannot migrate from one
application component to another. Hence all method calls are remote method calls. But if a
component invokes a lengthy series of method calls on a remote object, a great deal of
overhead can be consumed by the communication between the two components. For this
reason, it might be more efficient to pass an object by value so the component using that
object can manipulate it locally.
7.15.3 Basic Object Adapters (BOAs)
The BOA provides CORBA objects with a common set of methods for accessing
ORB functions. These functions range from user authentication to object activation to object
persistence. The BOA is, in effect, the CORBA object’s interface to the ORB. According
to the CORBA specification, the BOA should be available in every ORB implementation,
and this seems to be the case with most (if not all) CORBA products available.
One particularly important feature of the BOA is its object activation and deactivation
capability. The BOA supports four types of activation policies, which indicate how application
components are to be initialized. These activation policies include the following:
The shared server policy, in which a single server (which in this context usually
means a process running on a machine) is shared between multiple objects
The unshared server policy, in which a server contains only one object
The server-per-method policy, which automatically starts a server when an object
method is invoked and exits the server when the method returns
The persistent server policy, in which the server is started manually (by a user,
batch job, system daemon, or some other external agent)
7.16 THE CORBA COMPONENT MODEL (CCM)
Increasingly, over the last decade, CORBA has formed the basis of several of the
leading Enterprise Application Integration (EAI) solutions. In fact, it is one of the most
widely deployed and well-proven mechanisms for software objects to communicate with one another.
As the complexity of solutions increased, there was a growing need to extend to a universal
component model, enabling inter-working with EJBs and support for SOAP/WSDL web
services. The CORBA Component Model (CCM) specification was crafted carefully over
three years to assure full integration with J2EE, .NET, ActiveX, and the CORBA 2 object
model. CCM was adopted as an OMG standard by its Board of Directors at the Technical
Meeting in Yokohama, Japan, in April 2002. CCM is a standard for integrating components across both
multiple programming languages and multiple operating systems.
The Object Management Architecture (OMA) in the CORBA 2.x specification defines
an advanced Distributed Object Computing (DOC) middleware standard for building portable
distributed applications. The CORBA 2.x specification focuses on interfaces, which are
essentially contracts between clients and servers that define how clients view and access
object services provided by a server. Despite its advanced capabilities, however, the CORBA
2.x standard has the following limitations:
Lack of functional boundaries. The CORBA 2.x object model treats all interfaces
as client/server contracts. Inter-dependencies between collaborating object
implementations were left to the application developers to handle.
Lack of generic application server standards. CORBA 2.x does not specify a
generic application server framework to perform common server configuration work,
including initializing a server and its QoS policies, providing common services (such
as notification or naming services), and managing the runtime environment of each
component. Although CORBA 2.x standardized the interactions between object
implementations and object request brokers (ORBs), server developers must still
determine how: (1) object implementations are installed in an ORB; and (2) the
ORB and object implementations interact.
Lack of software configuration and deployment standards. There is no standard
way to distribute and start up object implementations remotely in CORBA 2.x
specifications. Application administrators must therefore resort to in-house scripts
and procedures to deliver software implementations to target machines, configure
the target machine and software implementations for execution, and then instantiate
software implementations to make them ready for clients.
What is the CORBA Component Model (CCM)?
The CORBA Component Model (CCM) is a component middleware that addresses
limitations with earlier generations of DOC middleware. CCM is a multi-language, multi-
platform component standard from OMG, which represents a major extension for enterprise
computing. The CCM specification extends the CORBA object model to support the concept
of components and establishes standards for implementing, packaging, assembling, and
deploying component implementations. From a client perspective, a CCM component is an
extended CORBA object that encapsulates various interaction models via different interfaces
and connection operations. From a server perspective, components are units of implementation
that can be installed and instantiated independently in standard application server runtime
environments stipulated by the CCM specification. Components are larger building blocks
than objects, with more of their interactions managed to simplify and automate key aspects
of construction, composition, and configuration into applications.
It has been argued that CCM is a technical improvement on EJB. In fact, CCM joins
together the best of the .NET and J2EE component models: .NET is multi-language but
single-platform, while J2EE is single-language but multi-platform.
Key terms
A component is an implementation entity that exposes a set of ports, which are named
interfaces and connection points that components use to collaborate with each other. The
CCM specification introduces the concept of components and the definition of a
comprehensive set of interfaces and techniques for specifying implementation, packaging,
and deployment of components. The interfaces and connection points are illustrated in Figure
7.14.
An OMG IDL specification logically consists of one or more files. A file is conceptually
translated in several phases. The first phase is preprocessing, which performs file inclusion
and macro substitution. Preprocessing is controlled by directives introduced by lines having
# as the first character other than white space. The result of preprocessing is a sequence of
tokens. Such a sequence of tokens, that is, a file after preprocessing, is called a translation
unit.
Tokens
There are five kinds of tokens: identifiers, keywords, literals, operators, and other
separators. Blanks, horizontal and vertical tabs, newlines, form feeds, and comments
(collectively, white space) are ignored except as they serve to separate tokens. If
the input stream has been parsed into tokens up to a given character, the next token is taken
to be the longest string of characters that could possibly constitute a token.
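The "longest match" rule above can be sketched with a tiny greedy matcher. This is an illustrative sketch only: the operator set is a small hypothetical sample, not the full IDL token set.

```java
import java.util.Arrays;
import java.util.List;

public class MaximalMunch {
    static final List<String> OPERATORS = Arrays.asList("<<", ">>", "<", ">", "=");

    // Return the longest operator that begins at position pos, or null.
    static String nextToken(String input, int pos) {
        String best = null;
        for (String op : OPERATORS) {
            if (input.startsWith(op, pos)
                    && (best == null || op.length() > best.length())) {
                best = op;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(nextToken("<<3", 0)); // "<<", not "<"
        System.out.println(nextToken("<3", 0));  // "<"
    }
}
```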
Comments
The characters /* start a comment, which terminates with the characters */. These
comments do not nest. The characters // start a comment, which terminates at the end of
the line on which they occur. The comment characters //, /*, and */ have no special meaning
within a // comment and are treated just like other characters. Similarly, the comment
characters // and /* have no special meaning within a /* comment. Comments may contain
alphabetic, digit, graphic, space, horizontal tab, vertical tab, form feed, and newline characters.
Identifiers
Identifiers are an arbitrarily long sequence of ASCII alphabetic, digit, and underscore
(“_”) characters. The first character must be an ASCII alphabetic character. When
comparing two identifiers to see if they collide:
Upper- and lower-case letters are treated as the same letter.
All characters are significant.
Identifiers that differ only in case collide, and will yield a compilation error under certain
circumstances. An identifier for a given definition must be spelled identically (e.g., with
respect to case) throughout a specification. There is only one namespace for OMG IDL
identifiers in each scope. Using the same identifier for a constant and an interface, for
example, produces a compilation error.
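The case-collision rule can be demonstrated with a small scope checker. IdlScope is a hypothetical helper for illustration, not part of any IDL compiler API; it keeps one namespace per scope and treats identifiers that differ only in case as the same name.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class IdlScope {
    private final Map<String, String> declared = new HashMap<>();

    /** Returns true if the identifier is accepted; false if it collides. */
    public boolean declare(String identifier) {
        String key = identifier.toLowerCase(Locale.ROOT);
        if (declared.containsKey(key)) {
            return false; // redefinition or case-only difference: a collision
        }
        declared.put(key, identifier);
        return true;
    }

    public static void main(String[] args) {
        IdlScope scope = new IdlScope();
        System.out.println(scope.declare("myObject")); // true
        System.out.println(scope.declare("MyObject")); // false: case collision
    }
}
```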
Escaped Identifiers
As IDL evolved, new keywords were added to the language that may inadvertently
collide with identifiers used in existing IDL and in programs that use that IDL.
Fixing such collisions would require not only the IDL to be modified, but also the
programming-language code that depends upon it, because the language mapping rules
for the renamed IDL identifiers would cause the mapped identifier names (e.g., method
names) to change. To avoid this, an identifier may be prefixed with an underscore (_)
to indicate that it should be treated as an identifier rather than as a keyword; such
identifiers are called escaped identifiers. The following is a non-exhaustive list of
implications of these rules:
The underscore does not appear in the Interface Repository.
The underscore is not used in the DII and DSI.
The underscore is not transmitted over “the wire.”
Case sensitivity rules are applied to the identifier after stripping off the leading
underscore.
The different types of literals include integer, character, floating-point, string and
fixed-point.
Integer Literals
An integer literal consisting of a sequence of digits is taken to be decimal (base ten),
unless it begins with 0 (digit zero). A sequence of digits starting with 0 is taken to be an octal
integer (base eight). The digits 8 and 9 are not octal digits. A sequence of digits preceded by
0x or 0X is taken to be a hexadecimal integer (base sixteen). The hexadecimal digits include
a or A through f or F with decimal values ten through fifteen, respectively. For example, the
number twelve can be written as 12, 014, or 0XC.
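Java's Integer.decode happens to follow the same C-style prefix rules described above (no prefix means decimal, a leading 0 means octal, 0x or 0X means hexadecimal), so the "twelve three ways" example can be checked directly:

```java
public class IdlIntegerLiterals {
    // Integer.decode applies the same base rules as IDL integer literals.
    static int decode(String literal) {
        return Integer.decode(literal);
    }

    public static void main(String[] args) {
        // The number twelve, written three ways:
        System.out.println(decode("12"));  // 12 (decimal)
        System.out.println(decode("014")); // 12 (octal)
        System.out.println(decode("0XC")); // 12 (hexadecimal)
    }
}
```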
Character Literals
A character literal is one or more characters enclosed in single quotes, as in 'x'.
Character literals have type char. A character is an 8-bit quantity with a numerical value
between 0 and 255 (decimal).
Floating-point Literals
A floating-point literal consists of an integer part, a decimal point, a fraction part, an e
or E, and an optionally signed integer exponent. The integer and fraction parts both consist
of a sequence of decimal (base ten) digits. Either the integer part or the fraction part (but
not both) may be missing; either the decimal point or the letter e (or E) and the exponent (but
not both) may be missing.
String Literals
A string literal is a sequence of characters, with the exception of the character with
numeric value 0, surrounded by double quotes, as in "...". Adjacent string literals are
concatenated. Characters in concatenated strings are kept distinct.
For example, "\xA" "B" contains the two characters '\xA' and 'B' after concatenation
(and not the single hexadecimal character '\xAB').
The size of a string literal is the number of character literals enclosed by the quotes
after concatenation. Within a string, the double quote character " must be preceded by a \.
A string literal may not contain the character '\0'. Wide string literals have an L prefix, for
example:
const wstring S1 = L"Hello";
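The concatenation rule above — two characters stay distinct rather than merging into one — can be checked in Java. Java has no adjacent-literal concatenation, so + stands in for it here, and '\n' stands in for the 0x0A character written as \xA in IDL.

```java
public class StringConcatDemo {
    // Models IDL's "\xA" "B": the result is TWO characters, 0x0A and 'B',
    // never the single character 0xAB.
    static String concatenated() {
        return "\n" + "B";
    }

    public static void main(String[] args) {
        String s = concatenated();
        System.out.println(s.length());        // 2: the characters stay distinct
        System.out.println((int) s.charAt(0)); // 10, i.e., 0x0A
    }
}
```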
Fixed-Point Literals
A fixed-point decimal literal consists of an integer part, a decimal point, a fraction part
and a d or D. The integer and fraction parts both consist of a sequence of decimal (base 10)
digits. Either the integer part or the fraction part (but not both) may be missing; the decimal
point (but not the letter d (or D)) may be missing.
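A fixed-point literal is characterized by its total number of digits and its number of fraction digits. As an informal check of that rule, Java's BigDecimal exposes the same two numbers as precision() and scale(); the mapping of a literal like 123.45d to fixed<5,2> is used here purely as an illustration.

```java
import java.math.BigDecimal;

public class FixedPointDigits {
    public static void main(String[] args) {
        // An IDL literal such as 123.45d has 5 total digits, 2 of them
        // after the decimal point (it fits fixed<5,2>).
        BigDecimal d = new BigDecimal("123.45");
        System.out.println(d.precision()); // 5 total digits
        System.out.println(d.scale());     // 2 fraction digits
    }
}
```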
7.18.3 Pre-Processing
OMG IDL is preprocessed according to the specification of the preprocessor in
“International Organization for Standardization. 1998. ISO/IEC 14882 Standard for the C++
Programming Language. Geneva: International Organization for Standardization.”
What is CORBA?
Can you trace the history and development of CORBA?
Explain OMG’s Object Management Architecture.
What is the architecture of the ORB and what are the roles of its principal components?
What is the difference between static and dynamic invocation of objects? How
does CORBA implement both types of invocation?
What are the advantages of the CORBA architecture?
How do you develop and deploy a CORBA application?
SUMMARY
7. Location Transparency of CORBA means that it does not matter to the client if the
requested object is:
a) On the same processor but different process
b) In a different process in another processor
c) Both a and b are true
d) Both are false
8. Which of the following contains functionality that is required by both client and server?
a) Dynamic Invocation Interface
b) Dynamic Skeleton Interface
c) Object Adapter
d) ORB Interface
9. An Object Reference :
a) Provides the interface through which a method receives a request
b) Needed to invoke an object by a client
c) Interface an object implementation with the ORB
d) Makes a call on the ORB using the interface
10. A binary protocol used for communication between ORBs
a) IDL
b) DII
c) GIOP
d) Stubs and skeletons
Part – II
11. What is the role of OMA?
12. What is meant by language mapping?
13. Explain the difference between static and dynamic invocation. Explain how CORBA
handles the two types of invocation.
14. What are the functions of Interface and Implementation Repositories?
15. What are the advantages of CORBA CCM model over the Object Model?
16. Briefly explain about CORBA Alternatives.
17. Describe the Overview of CORBA in detail
18. Explain in detail about Object implementation.
19. Discuss the working of Portable Object Adapter.
20. Briefly explain the steps to build an application using CORBA.
Part III
21. Develop Client and Server Programs for the following. You can use Java or C++
to write the programs.
a) The server simply executes and listens for client requests. The client connects to
the server and sends it a string. The server then echoes the string back to the client,
which displays it at the prompt.
b) Write a server program to reverse the string of words sent by the client program.
For example, if the client sends the string "good", the server should send the response
"doog", which is displayed by the client.
c) The server provides two functions that can be called by a client. (a) add two numbers
(b) subtract two numbers. Write a client program to call these two functions with
suitable parameters and display the results.
d) Write a server program that converts temperature expressed in Fahrenheit to
Centigrade and vice-versa. The client program will send as parameters the
temperature and type as Centigrade or Fahrenheit.
e) Develop a client that can receive messages from a server. The client has to register
itself with the server and then listen in to receive messages from the server.
Part I – Answers
REFERENCES
UNIT V
CHAPTER 8
COMPONENT OBJECT MODEL (COM)
8.1 INTRODUCTION
This unit introduces and explains COM (Component Object Model), which is a platform-
independent, distributed, object-oriented system for creating binary software components
that can interact. COM is the foundation technology for Microsoft’s OLE (Object Linking
and Embedding) and ActiveX (Internet-enabled components) technologies, DCOM
(Distributed Component Object Model) as well as others.
A client that needs to communicate with a component in another process cannot call
the component directly, but has to use some form of inter-process communication provided
by the operating system. COM provides this communication in a completely transparent
fashion: it intercepts calls from the client and forwards them to the component in another
process. These components can be within a single process, in other processes, or even on
remote machines.
COM objects can be created with a variety of programming languages, and COM is
implemented in a language-neutral way. Object implementations can be used in environments
different from the one in which they were created, even across machine boundaries.
COM allows reuse of objects with no knowledge of their internal implementation because it
forces component implementers to provide well-defined interfaces that are separate from
the implementation.
8.2 LEARNING OBJECTIVES
At the end of this Unit, the reader would be familiar with the following concepts
COM Architecture
COM Interfaces
Building up COM client and Server
Marshaling and Remoting
8.3 COM OVERVIEW
The Component Object Model (COM) is a software architecture that allows the
components made by different software vendors to be combined into a variety of applications.
COM defines a standard for component interoperability; it is not dependent on any particular
programming language, is available on multiple platforms, and is extensible.
COM allows applications and systems to be built from components supplied by different
software vendors. COM is the underlying architecture that forms the foundation for higher-
level software services, like those provided by OLE, ActiveX, DCOM and others.
COM is one of the dominant component architectures in use today. The primary focus
in desktop software development is to create front-end applications with which users interact.
An optimized development environment for creating such applications requires pre-built
high-level components.
COM supports a language-neutral interface definition language (IDL) that is used to
describe the interface of a COM component. Using IDL, the designer of a COM component
can describe the interfaces, methods and properties that are supported by the component.
Client applications rely on the IDL definition of the COM component rather than on
implementation-specific details such as programming language and implementation platform.
8.3.1 COM Evolution
In the early 1990s, Microsoft made a strong commitment to Object Linking and
Embedding (OLE). Microsoft quickly recognized that to effectively evolve OLE, it needed
a standard mechanism for packaging components. OLE services span various aspects of
component software, including compound documents, custom controls, inter-application
scripting, data transfer, and other software interactions. Cross-language interoperability was
also crucial so that those components could be implemented in a variety of languages and
then combined in an arbitrary fashion. Microsoft created the Component Object Model
(COM) to provide the infrastructure that was needed to realize its vision for OLE. COM
became the foundation for a wide range of technologies that included but were not exclusive
to OLE. One of the most important new technologies that relied on COM was the OLE
Control Extension (OCX).
In 1996, Microsoft announced that ActiveX would be the new name for those technologies
based primarily on COM. ActiveX controls enable the developer to build sophisticated
controls based on the Component Object Model (COM). These controls can be developed for
many uses, such as database access, data monitoring, or graphing. By the end of 1996,
Microsoft introduced DCOM, a set of RPC-based extensions to COM that allow COM
objects to be distributed. COM has been very slow to appear on non-Windows platforms.
Because of limited platform support, COM is still identified as being more of a component
architecture than a remoting architecture. COM has since been superseded by
Microsoft’s .NET.
8.3.2 Component Benefits
Component architectures are used for building applications out of components. They also
provide the ability to make convenient and flexible upgrades of existing applications. The
benefits include:
Application Customization
Users often want to customize their applications. End users like to make an application
work the way they work. Component architecture helps customization because each
component can be replaced with a different component that better meets the needs of the
user.
[Figure 8.1: remote components — Component A and Component B on the local machine call Remoting C and Remoting D, which forward requests across the network to Component C and Component D]
Figure 8.1 shows an example using remote components. Component C and Component
D have been located on different remote machines on the network. On the local machine,
they have been replaced by two new components, Remoting C and Remoting D. These
new components forward requests from the other components, across the network to
Component C and Component D.
8.3.3 What is not COM?
COM is not a computer language. COM tells us how to write components. Any
language can be chosen to write components.
COM does not compete with or replace DLLs. COM uses DLLs to provide
components with the ability to dynamically link.
COM is not primarily an API or a set of functions like the Win32 API. COM does
not provide services. Instead, COM is primarily a way to write components that
can provide services in the form of object-oriented APIs.
COM is also not a C++ class library like the Microsoft Foundation Classes (MFC).
COM provides a way to develop language-independent component libraries,
but it does not provide any implementation.
8.3.4 Weaknesses of COM
Platform Limitations:
[Figure: the OLE technology stack — reusable programmable controls, compound documents, automation and data transfer, storage and naming, and tools]
COM supports the following three types of servers, as illustrated in Figure 8.3, for
implementing components:
In-Process server : An in-process server is implemented as a dynamic-link
library (DLL) that executes within the same process space as the application. The
performance overhead of invoking an in-process server is minimal because the server
is located in the same process as the client. An in-process server is commonly
referred to as an ActiveX control.
Local Server : A local server executes in a separate process space on the same
computer. Communication between an application and a local server is accomplished
by the COM runtime system using a high-speed inter-process communication protocol.
The performance overhead of using a local server is typically an order of magnitude
greater than that of using an in-process server.
Remote Server : A remote server executes on a remote computer. DCOM extends
COM by providing an RPC-based infrastructure that is used to manage
communication between the application and the remote server. The performance
overhead of using a remote server is typically an order of magnitude greater than
that of using a local server.
An application that uses a COM component is not required to know what type of
server it is using. After a COM object instance handle has been obtained by a client, a client
interaction with the COM object instance is the same regardless of server location.
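The location transparency described above can be sketched with a small interface-based example. Java is used here for consistency with the chapter's earlier examples, and all names (Greeter, RemotingProxy, useComponent) are hypothetical; a real COM proxy would marshal the call across process or machine boundaries rather than forward it in-process.

```java
public class LocationTransparency {
    interface Greeter { String greet(String name); }

    // In-process implementation (analogous to an in-process server).
    static class LocalGreeter implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Stand-in for a remoting proxy: same interface, forwards the call.
    static class RemotingProxy implements Greeter {
        private final Greeter target; // would be a network stub in reality
        RemotingProxy(Greeter target) { this.target = target; }
        public String greet(String name) { return target.greet(name); }
    }

    // The client codes only against the interface: it cannot tell
    // whether the implementation is local or behind a proxy.
    static String useComponent(Greeter g) { return g.greet("COM"); }

    public static void main(String[] args) {
        System.out.println(useComponent(new LocalGreeter()));
        System.out.println(useComponent(new RemotingProxy(new LocalGreeter())));
    }
}
```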
8.4.1 COM Fundamentals
The Component Object Model defines several fundamental concepts that provide the
model’s structural underpinnings. These include:
[Figure 8.3: COM server types — an in-process server (DLL) and a local server on Computer A, and a remote server on Computer B]
8.4.3 Interfaces
In COM, applications interact with each other and with the system through collections
of functions called interfaces. A COM interface is a strongly-typed contract between
software components to provide a small but useful set of semantically related operations
(methods). An interface is the definition of an expected behaviour and expected
responsibilities. A good example of this is OLE’s drag-and-drop support: all of the
functionality needed to implement drag-and-drop is expressed through the methods of a
small set of interfaces.
COM allows for a rich set of data types. This includes support for constants, enumerated
types, structures and arrays in addition to common base types like long and short. The
integral and floating point types are the same as in any programming language and hence
self-explanatory. All characters in COM are represented using the OLECHAR data type.
Win32 platforms use the wchar_t data type to represent 16-bit Unicode characters.
Because pointer types in IDL are assumed to point to single instances, not arrays, IDL
introduces the [string] attribute to indicate that a pointer points to a null-terminated array of
characters.
A somewhat more complex case is conversion between OLECHAR and the Win32
TCHAR data type, as TCHAR is conditionally compiled to either char or wchar_t. The
header file ustring.h contains a family of string library routines that parallel the standard C
library routines found in string.h. For example, the strncpy function has four corresponding
routines based on either parameter being of either of the two possible character types
(wchar_t or char):
inline bool ustrncpy(char *p1, const wchar_t *p2, size_t c)
{
size_t cb=wcstombs(p1,p2,c);
return cb != c && cb != (size_t)-1;
}
BSTR
[Figure 8.5: the BSTR "Hi" in memory: a four-byte length prefix (04 00 00 00), the characters 'H' 00 'i' 00, and the null terminator 00 00]
Figure 8.5 shows the string “Hi” as a BSTR. COM provides several API functions for
managing BSTRs:
//allocate and initialize a BSTR
BSTR SysAllocString(const OLECHAR *psz);
BSTR SysAllocStringLen(const OLECHAR *psz, UINT cch);
//Reallocate and initialize a BSTR
INT SysReAllocString(BSTR *pbstr, const OLECHAR *psz);
INT SysReAllocStringLen(BSTR *pbstr, const OLECHAR *psz, UINT cch);
//free a BSTR
void SysFreeString(BSTR bstr);
//peek at length-prefix as characters or bytes
UINT SysStringLen(BSTR bstr);
UINT SysStringByteLen(BSTR bstr);
Example:
//convert raw OLECHAR string to a BSTR
BSTR bstr = SysAllocString(OLESTR("Hello"));
//invoke method
HRESULT hr= p->SetString(bstr);
//free BSTR
SysFreeString(bstr);
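The length-prefix layout in Figure 8.5 can be modeled in portable C++. This is only a sketch of the layout, not the real SysAllocString (which differs in allocator, and whose characters are 16-bit OLECHARs rather than the platform wchar_t):

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <cwchar>

// Illustrative model of a BSTR: a 4-byte length prefix counting
// BYTES, followed by the characters and a null terminator. The
// BSTR pointer itself points just past the prefix, so it can be
// used like an ordinary wide-character string.
typedef wchar_t* MyBSTR;

MyBSTR MySysAllocString(const wchar_t* psz)
{
    uint32_t cch = (uint32_t)wcslen(psz);
    uint32_t cb  = cch * sizeof(wchar_t);              // byte count for prefix
    char* block  = (char*)malloc(4 + cb + sizeof(wchar_t));
    memcpy(block, &cb, 4);                             // store length prefix
    MyBSTR bstr  = (MyBSTR)(block + 4);                // pointer past the prefix
    memcpy(bstr, psz, cb + sizeof(wchar_t));           // characters + terminator
    return bstr;
}

uint32_t MySysStringLen(MyBSTR bstr)                   // length in characters
{
    uint32_t cb;
    memcpy(&cb, (char*)bstr - 4, 4);                   // peek at the prefix
    return cb / sizeof(wchar_t);
}

void MySysFreeString(MyBSTR bstr)
{
    if (bstr) free((char*)bstr - 4);                   // free the whole block
}
```

Because the length is stored up front, MySysStringLen never scans the string, which is why real BSTRs may legally contain embedded nulls.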
Structures
The primitive types can be composed using C-style structures. IDL follows the C rules for the tag namespace, which means most IDL interface definitions use typedef statements:
typedef struct tagCOLOR { double red;
double green;
double blue;
} COLOR;
HRESULT SetColor([in] const COLOR *pColor);
where HRESULT is the data type of the result. Alternatively, the struct keyword can be used to qualify the tag name:
struct COLOR { double red;
double green;
double blue;
};
HRESULT SetColor([in] const struct COLOR *pColor);
Simple structures like the one shown can be used from both Visual Basic and Java.
However, some versions of Visual Basic can only access interfaces that use structures;
they cannot implement interfaces that take structures as method parameters.
Unions
IDL and COM also support unions. To ensure that the actual interpretation of the union
is unambiguous, IDL expects that a discriminator will be provided along with the union that
indicates which union member is actually in use. The discriminator must be of an integral
type and must appear at the same logical level as the union.
The [case] attribute is used to match the actual union member in use to its discriminator.
To associate a discriminator with the usage of a non-encapsulated union, the [switch_is]
attribute must be used.
union NUMBER {
[case(1)] long i;
[case(2)] float f;
}
HRESULT Add([in,switch_is(t)] union NUMBER *pn,[in] short t);
When the union is bundled with its discriminator in a surrounding structure, the aggregate
type is called an encapsulated or discriminated union:
struct UNUMBER {
short t;
[switch_is(t)] union VALUE {
[case(1)] long i;
[case(2)] float f;
};
};
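The role of the discriminator can be seen in plain C++ as well; this sketch mirrors the NUMBER union above, with the callee trusting the tag t to know which member is live (the dispatch logic is illustrative, not generated MIDL code):

```cpp
// A discriminated union in plain C++: without the tag t, the
// callee could not know whether the bits in the union are a
// long or a float.
union NUMBER {
    long  i;
    float f;
};

// Interpret the union according to its discriminator, as the
// [switch_is(t)] attribute instructs the IDL-generated code to do.
long Add(const NUMBER* pn, short t, long addend)
{
    if (t == 1) return pn->i + addend;      // member i is live
    return (long)(pn->f + addend);          // member f is live
}
```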
8.5 A DISTRIBUTED OBJECT EXAMPLE
A distributed object implementing checking account has been considered as an example
and is illustrated in Figure 8.6. This example implements a COM object and two COM
client applications.
[Figure 8.6: the checking account COM (ATL C++) object, exposing the interfaces IAccount, IAccountInit and ICheckingAccount (which derives from IAccount)]
The COM object will support three interfaces: IAccount, IAccountInit and
ICheckingAccount; the details of the interfaces are summarized in Table 8.2.
Table 8.2 (in part): IAccountInit provides init(name); ICheckingAccount provides withdrawUsingCheck(checkNumber, amount).
The ICheckingAccount interface will inherit from the IAccount interface. The choices
of client platform are not arbitrary. The Visual Basic environment provides excellent support
for COM object integration. The Visual C++ environment will allow us to see how COM
works at the C/C++ API level. The Visual Basic client application is shown in Figure 8.7.
The COM client application is divided into two sections: initialization and account
management. The initialization section is used to specify the name of the person for whom
the account will be created and the server name of the host on which the COM checking
account object is implemented. The COM client will attempt to run against a local server
if no server name is specified. The account management section allows the user to perform
withdrawals and deposits.
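As a sketch of the interface hierarchy in Figure 8.6 and Table 8.2, here is a plain C++ rendering; the signatures, and the balance/deposit methods, are assumptions (the real object is an ATL C++ COM object with reference counting, omitted here):

```cpp
#include <string>

// IAccount: the base account interface (methods assumed).
struct IAccount {
    virtual double balance() const = 0;
    virtual void   deposit(double amount) = 0;
    virtual void   withdraw(double amount) = 0;
    virtual ~IAccount() {}
};

// IAccountInit: init(name), per Table 8.2.
struct IAccountInit {
    virtual void init(const std::string& name) = 0;
    virtual ~IAccountInit() {}
};

// ICheckingAccount inherits from IAccount, as in Figure 8.6,
// and adds withdrawUsingCheck(checkNumber, amount).
struct ICheckingAccount : IAccount {
    virtual void withdrawUsingCheck(int checkNumber, double amount) = 0;
};

// The checking account class implements all three interfaces.
class CheckingAccount : public ICheckingAccount, public IAccountInit {
    std::string m_name;
    double      m_balance;
public:
    CheckingAccount() : m_balance(0) {}
    void   init(const std::string& name) { m_name = name; }
    double balance() const               { return m_balance; }
    void   deposit(double a)             { m_balance += a; }
    void   withdraw(double a)            { m_balance -= a; }
    void   withdrawUsingCheck(int, double a) { m_balance -= a; }
};
```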
8.6 INTERFACES
An interface provides a connection between two different objects. The set of functions
define the interface between different parts of a computer program. The interface to a DLL
is the set of functions exported by the DLL. The interface to a C++ class is the set of
members of the class. For COM, an interface is a specific memory structure containing an
array of function pointers. Each array element contains the address of a function implemented
by the component. The implementation can be made in C++.
Interfaces are everything in COM. To the client, a component is a set of interfaces.
The client can communicate with the COM component only through an interface. The
client has very little knowledge of a component as a whole. Often the client does not even
know of all of the interfaces that a component supports.
8.6.1 Attributes of interfaces
Given that an interface is a contractual way for a component object to expose its
services, there are four very important points to understand:
An interface is not a class. While a class can be instantiated to form a component
object, an interface cannot be instantiated by itself because it carries no
implementation. A component object must implement that interface and that
component object must be instantiated for there to be an interface. Furthermore,
different component object classes may implement an interface differently, so long
as the behavior conforms to the interface definition. For example, consider two objects that
implement IStack, where one uses an array and the other a linked list. Thus the
basic principle of polymorphism fully applies to component objects.
An interface is not a component object. An interface is just a related group of
functions and is the binary standard through which clients and component objects
communicate. The component object can be implemented in any language with any
internal state representation, so long as it can provide pointers to interface member
functions.
Clients only interact with pointers to interfaces. When a client has access to a
component object, it has nothing more than a pointer through which it can access
the functions in the interface, called simply an interface pointer. The pointer is
opaque; it hides all aspects of internal implementation. You cannot see any of the
component object’s data, as opposed to C++ object pointers through which a client
may directly access the object’s data. In COM, the client can call only methods of
the interface to which it has a pointer. This encapsulation is what allows COM to
provide the efficient binary standard that enables local/remote transparency.
Component objects can implement multiple interfaces. A component object
can and typically does implement more than one interface. That is, the class has
more than one set of services to provide. For example, a class might support the
ability to exchange data with clients as well as the ability to save its persistent state
information (the data it would need to reload to return to its current state) into a file
at the client’s request. Each of these abilities is expressed through a different interface
(IDataObject and IPersistFile), so the component object must implement two
interfaces.
Interfaces are strongly typed. Every interface has its own interface identifier, a
globally unique ID (GUID) described below, thereby eliminating any chance of
collision that would occur with human-readable names. The difference between
components and interfaces has two important implications. If a developer creates a
new interface, a new identifier must also be created for that interface. When a
developer uses an interface, the identifier for the interface must be used to request
a pointer to the interface. This explicit identification improves robustness by
eliminating naming conflicts that would result in run-time failure.
Interfaces are immutable. COM interfaces are never versioned, which means
that version conflicts between new and old components are avoided. A new version
of an interface, created by adding more functions or changing semantics, is an
entirely new interface and is assigned a new unique identifier. Therefore, a new
interface does not conflict with an old interface even if all that changed is the
semantics (and not even the syntax) of an existing method. For example,
if a new interface adds only one method to an existing interface, and the component
author wishes to support both old-style and new-style clients, both collections of
capabilities can be expressed through two interfaces, with the old interface internally
implemented as a proper subset of the implementation of the new.
It is convenient to adopt a standard pictorial representation for objects and their
interfaces. The adopted convention is to draw each interface on an object as a "plug-in
jack". Examples of interfaces are illustrated in Figures 8.8, 8.9 and 8.10.
object that is running in another process or on another machine. A key point is that
the caller makes this call exactly as it would for an object in the same process. The
binary standard enables COM to perform inter-process and cross-network function
calls transparently. While there is, of course, more overhead in making a remote
procedure call, no special code is necessary in the client to differentiate an in-
process object from out-of-process objects. This means that as long as the client is
written from the start to handle remote procedure call (RPC) exceptions, all objects
(in-process, cross-process, and remote) are available to clients in a uniform,
transparent fashion. DCOM, Microsoft's distributed version of COM, requires no
modification to existing components in order to gain distributed capabilities. In other
words, programmers are completely isolated from networking issues.
Programming language independence. Any programming language that can
create structures of pointers and explicitly or implicitly call functions through pointers
can create and use component objects. Component objects can be implemented in
a number of different programming languages and used from clients that are written
using completely different programming languages. Again, this is because COM,
unlike an object-oriented programming language, represents a binary object standard,
not a source code standard.
8.6.2 Globally Unique Identifiers (GUIDs)
COM uses globally unique identifiers—128-bit integers that are guaranteed to be unique
in the world across space and time—to identify every interface and every component object
class. These globally unique identifiers are UUIDs (universally unique IDs) as defined by
the Open Software Foundation’s Distributed Computing Environment. Human-readable
names are assigned only for convenience and are locally scoped. This helps ensure that
COM components do not accidentally connect to “the wrong” component, interface, or
method, even in networks with millions of component objects. CLSIDs are GUIDs that
refer to component object classes, and IIDs are GUIDs that refer to interfaces. Microsoft
supplies a tool (uuidgen) that automatically generates GUIDs. Additionally, the
CoCreateGuid function is part of the COM API. Thus, developers create their own GUIDs
when they develop component objects and custom interfaces. Through the use of defines,
developers don’t need to be exposed to the actual 128-bit GUID.
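Concretely, a GUID is just a 128-bit structure. Here is a sketch of its layout (field names as in the Windows headers) and an equality test; the specific hex values below are made up for illustration:

```cpp
#include <cstdint>
#include <cstring>

// The 128-bit GUID layout: 4 + 2 + 2 + 8 bytes = 16 bytes total.
struct GUID {
    uint32_t Data1;
    uint16_t Data2;
    uint16_t Data3;
    uint8_t  Data4[8];
};

// Comparing two GUIDs is simply a 16-byte comparison; there is no
// padding in the structure, so memcmp is safe here.
inline bool IsEqualGUID(const GUID& a, const GUID& b)
{
    return memcmp(&a, &b, sizeof(GUID)) == 0;
}

// Two distinct illustrative IIDs, e.g. as emitted by uuidgen and
// pasted into source as initializers.
const GUID IID_IX = { 0x32bb8320, 0xb41b, 0x11cf,
                      { 0xa6, 0xbb, 0x00, 0x80, 0xc7, 0xb2, 0xd6, 0x82 } };
const GUID IID_IY = { 0x32bb8321, 0xb41b, 0x11cf,
                      { 0xa6, 0xbb, 0x00, 0x80, 0xc7, 0xb2, 0xd6, 0x82 } };
```

A single differing bit (here, the low digit of Data1) is enough to make two identifiers unequal, which is why name collisions simply cannot occur.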
8.6.3 An Example for Interface
The code that implements some simple interfaces is given below. In the code given,
Component CA uses IX and IY to implement two interfaces.
class IX //First Interface
{
public:
virtual void Fx1() = 0;
virtual void Fx2() = 0;
};
class IY //Second Interface
{
public:
virtual void Fy1() = 0;
};
Note that AddRef is not explicitly called in this case because the QueryInterface
implementation increments the reference count before it returns an interface pointer.
A Complete Example
The following code shows the complete implementation of interfaces IX and IY.
IFACE.CPP
// Iface.cpp
// To compile, use: cl Iface.cpp
//
#include <iostream.h>
#include <objbase.h> // Define interface
void trace(const char* pMsg)
{ cout <<pMsg << endl; }
//Abstract Interfaces
interface IX
{
virtual void __stdcall Fx1() = 0;
virtual void __stdcall Fx2() = 0;
};
interface IY
{
virtual void __stdcall Fy1() = 0;
virtual void __stdcall Fy2() = 0;
};
// Interface Implementation
class CA : public IX, public IY
{
public:
//Implement interface IX
virtual void __stdcall Fx1() { cout << "CA::Fx1" << endl; }
virtual void __stdcall Fx2() { cout << "CA::Fx2" << endl; }
//Implement interface IY
virtual void __stdcall Fy1() { cout << "CA::Fy1" << endl; }
virtual void __stdcall Fy2() { cout << "CA::Fy2" << endl; }
};
//Client
int main()
{
trace("Client: Create an instance of the component");
CA* pA = new CA;
//Get an IX pointer
IX* pIX = pA;
trace("Client: Use the IX interface");
pIX->Fx1();
pIX->Fx2();
//Get an IY pointer
IY* pIY = pA;
trace("Client: Use the IY interface");
pIY->Fy1();
pIY->Fy2();
trace("Client: Delete the component");
delete pA;
return 0;
}
The output from this program is
Client: Create an instance of the component
Client: Use the IX interface
CA::Fx1
CA::Fx2
Client: Use the IY interface
CA::Fy1
CA::Fy2
Client: Delete the component
The client and the component communicate through two interfaces. The interfaces
are implemented using the two pure abstract base classes IX and IY. The component is
implemented by the class CA, which inherits from both IX and IY. Class CA implements
the members of both interfaces.
8.6.6 Behind the Interface
Pure abstract base class defines the specific memory structure that COM requires for
an interface.
Virtual Function Tables
When we define a pure abstract class, we are actually defining the layout for a block
of memory. All implementations of pure abstract classes are blocks of memory that have
the same basic layout. The figure 8.11 shows the memory layout for the abstract base class
defined in the following code:
interface IX
{
virtual void __stdcall Fx1() = 0;
virtual void __stdcall Fx2() = 0;
virtual void __stdcall Fx3() = 0;
virtual void __stdcall Fx4() = 0;
};
Defining a pure abstract class just defines the memory structure. Memory is not
allocated for the structure until the abstract base class is implemented in a derived class.
When a derived class inherits from an abstract base class, it inherits this memory structure
as shown in the Figure 8.11.
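The memory structure Figure 8.11 describes can be written out by hand: the vtable is literally an array of function pointers, and an interface pointer points to a location whose first entry is a pointer to that table. This portable sketch mimics what the C++ compiler generates (all names are illustrative):

```cpp
struct IXVtbl;                     // forward declaration

// An "interface pointer" points at this block; its first (and
// only) member is the vtable pointer.
struct IX {
    const IXVtbl* pVtbl;
};

// The vtable itself: pointers to member functions, each taking
// the object pointer explicitly (the hidden "this" parameter).
struct IXVtbl {
    int (*Fx1)(IX* pThis);
    int (*Fx2)(IX* pThis);
};

// A concrete implementation fills in the table once.
static int Fx1Impl(IX*) { return 1; }
static int Fx2Impl(IX*) { return 2; }
static const IXVtbl g_vtbl = { Fx1Impl, Fx2Impl };
```

Calling through the interface then means: follow the interface pointer to the vtable pointer, index into the table, and call through the function pointer, passing the object pointer along.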
[Figure 8.11: the virtual function table contains pointers to the member functions of IX]

[Figure: a client and a server communicating over a communication bus (DCOM or CORBA)]
The server stub sends the result message to the client stub using the communication
bus.
The client stub receives the result message, unpacks the message, and returns the
result to the client.
From the steps outlined, it is obvious that the client stub, server stub and communication
bus do a lot of work. The communication bus is the generic name for the COM or CORBA
runtime system. In contrast, the client and server stubs must be created to support the
custom interfaces that are used in the system. Hand-coding client and server stubs for
every interface would be tedious and an error-prone task. COM and CORBA solve this
problem by providing tools to generate client and server stubs from IDL descriptions.
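The division of labor in the steps above can be modeled in a few lines. Everything here — the message format, the method, the names — is invented for illustration, and the "communication bus" is reduced to an ordinary function call:

```cpp
#include <sstream>
#include <string>

// The real object living in the server process.
struct Account {
    int balance = 100;
    int Withdraw(int amt) { balance -= amt; return balance; }
};

// Server stub: unpack the request message, call the real object,
// pack the result into a reply message.
std::string ServerStub(Account& obj, const std::string& msg)
{
    std::istringstream in(msg);
    std::string method; int amt;
    in >> method >> amt;                               // unmarshal arguments
    int result = obj.Withdraw(amt);                    // the actual call
    std::ostringstream out; out << result;
    return out.str();                                  // marshal the result
}

// Client stub (proxy): presents the same signature as the real
// method, but packs the call into a message and ships it across
// the "bus" (here, a direct function call).
int ProxyWithdraw(Account& remote, int amt)
{
    std::ostringstream out; out << "Withdraw " << amt; // marshal
    std::string reply = ServerStub(remote, out.str()); // bus
    return std::stoi(reply);                           // unmarshal result
}
```

The client calls ProxyWithdraw exactly as it would call Withdraw; all the packing and unpacking is hidden in the stubs, which is precisely the code COM and CORBA generate from IDL.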
8.7.2 COM Proxies and Stubs
COM terminology refers to client stubs as proxies and to server stubs as stubs. In
COM, the proxy and stub are packaged in a single DLL. The DLL is associated with the
appropriate interfaces in the Windows system registry. The COM runtime system uses
the registry to locate the proxy-stub DLL associated with an interface when marshaling of
that interface is required.
After registering the proxy-stub DLL, the IAccount, IAccountInit and
ICheckingAccount interfaces are all associated with ComServer.dll in the registry. Fig 8.13
illustrates the structure of the COM client, server and proxy-stub DLL in the checking
account example. Note that the proxy-stub DLL must be installed on every client machine
so that the client application can properly marshal data.
[Figure 8.13: a Visual Basic or Visual C++ client calls into the proxy (ComServer.dll); the call travels over RPC through COM/DCOM to the stub (ComServer.dll) and on to the COM server]
From a client’s point of view, all component objects are accessed through interface
pointers. A pointer has meaning only within a single process, and in fact, any call to an interface function always
reaches some piece of in-process code first. If the component object is in-process, the call
reaches it directly. If the component object is out-of-process, then the call first reaches
what is called a “proxy” object provided by COM itself, which generates the appropriate
remote procedure call to the other process or the other machine. Note that the client from
the start should be programmed to handle RPC exceptions; then it can transparently connect
to an object that is in-process, cross-process, or remote.
From a server’s point of view, all calls to a component object’s interface functions are
made through a pointer to that interface. Again, a pointer only has context in a single process,
and so the caller must always be some piece of in-process code. If the component object is
in-process, the caller is the client itself. Otherwise, the caller is a “stub” object provided by
COM that picks up the remote procedure call from the “proxy” in the client process and
turns it into an interface call to the server component object.
As far as both clients and servers know, they always communicate directly with some
other in-process code, as illustrated in Figure 8.14.
passed the address 0x0000ABBA to another process, the second process would access a
different piece of memory than the first process intended. This is illustrated in Fig 8.15.

[Figure 8.15: the same pointer value pFoo (0x0000ABBA) refers to different memory locations in the two processes]
Because a DLL shares its client's address space while each EXE gets its own process,
DLLs are referred to as in-process (in-proc) servers and EXEs are called out-of-process
(out-of-proc) servers. EXEs are sometimes
called local servers to differentiate them from the other kind of out-of-process server, the
remote server. A remote server is an out-of-process server that resides in a different machine.
The component passes an interface to the client. An interface is basically an array of
function pointers. The client must be able to access the memory associated with the interface.
If a component is in a DLL, the client can easily access the memory because the component
and the client are in the same address space. But if the client and component are in different
address spaces, the client cannot access the memory in the component’s process. If the
client cannot even access the memory associated with an interface, there isn’t any way for
it to call the functions in the interface. If this were our situation, our interfaces would be
useless.
For an interface to cross a process boundary, we need to be able to count on the
following conditions:
A process needs to be able to call a function in another process.
[Figure: a client in one EXE calling a component in another EXE, across a process boundary]
process. If the processes are on different machines, the data has to be put into standard
format to account for the differences between machines, such as the order in which they
store bytes in a word.
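The byte-ordering concern mentioned above can be illustrated with a minimal sketch (not the actual NDR wire format COM uses): a 32-bit value is packed into a fixed big-endian byte sequence before crossing machines, and reconstructed on the receiving side regardless of either machine's native ordering:

```cpp
#include <cstdint>

// Pack a 32-bit value into a standard (big-endian) byte sequence.
void PackBE32(uint32_t v, uint8_t out[4])
{
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
}

// Reconstruct the value from the standard byte sequence; this
// works identically on little- and big-endian receivers because
// it never reinterprets raw memory.
uint32_t UnpackBE32(const uint8_t in[4])
{
    return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
           ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
}
```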
To marshal a component, an interface named IMarshal has to be implemented. COM
queries your component for IMarshal as part of its creation strategy. It then calls member
functions in IMarshal to marshal and unmarshal the parameters before and after calling
functions. The COM library implements a standard version of IMarshal that will work for
most interfaces.
Figure 8.17 illustrates how the client communicates with a proxy DLL. The proxy
marshals the function parameters and calls the stub DLL using LPCs. The stub DLL
unmarshals the parameters and calls the correct interface function in the component, passing
it the parameters.
[Figure 8.17: the client and its proxy DLL in one EXE, the component and its stub DLL in another, separated by the process boundary]
client needs can be reached from an interface pointer. Therefore, we need to export the
CreateInstance function so that the client can call it.
}
return CreateInstance();
}
To load the DLL, CallCreateInstance calls the Win32 function LoadLibrary:
HINSTANCE LoadLibrary(
LPCTSTR lpLibFileName // filename of DLL
);
LoadLibrary takes the DLL's filename and returns a handle to the loaded DLL. The
Win32 function GetProcAddress takes this handle and the name of a function
(CreateInstance) and returns a pointer to that function:
FARPROC GetProcAddress(
HMODULE hModule, //handle to DLL module
LPCSTR lpProcName //name of function
);
Using just these two functions, the client can load the DLL into its address space and
get the address of CreateInstance. With the address in hand, creating the component and
getting its IUnknown pointer is a simple process. CallCreateInstance casts the returned
pointer into a usable type and, true to its name, calls CreateInstance.
But CallCreateInstance binds the client too closely to the implementation of the
component. The client should not be required to know the name of the DLL in which the
component is implemented. We should be able to move the component from one DLL to
another or even from one directory to another.
The client is now in the file CLIENT1.cpp. The client includes the file CREATE.H
and links with CREATE.CPP. These two files encapsulate the creation of the component
contained in the DLL. The Component is now in a file named CMPNT1.CPP. Dynamic
linking requires a definition file listing the functions that are exported from the DLL. The
definition file is named CMPNT1.DEF. The component and the client share two files. The
file IFACE.H contains the declarations of all of the interfaces that CMPNT1 supports. The
file also contains the declarations of the interface IDs for these interfaces. The file
GUIDS.CPP contains the definitions of these interface IDs.
To build all the components and all of the clients, use the following command line:
nmake -f makefile
Given below is the code for implementing the client. The client asks the user for the
filename of the DLL to use. It passes this filename on to CallCreateInstance, which loads the
DLL and calls the CreateInstance function exported from the DLL.
CLIENT1.CPP
//
// Client1.CPP
// To compile, use : cl Client1.cpp Create.cpp GUIDs.cpp UUID.lib
//
#include <iostream.h>
{
trace("Return pointer to IUnknown.");
*ppv = static_cast<IX*>(this);
}
else if (iid == IID_IX)
{
trace("Return pointer to IX.");
*ppv = static_cast<IX*>(this);
}
else
{
trace("Interface not supported.");
*ppv = NULL;
return E_NOINTERFACE;
}
reinterpret_cast<IUnknown*>(*ppv)->AddRef();
return S_OK;
}
ULONG __stdcall CA::AddRef()
{
return InterlockedIncrement(&m_cRef);
}
ULONG __stdcall CA::Release()
{
if (InterlockedDecrement(&m_cRef) == 0)
{
delete this;
return 0;
}
return m_cRef;
}
// Creation function
//
extern "C" IUnknown* CreateInstance()
{
IUnknown* pI = static_cast<IX*>(new CA);
pI->AddRef();
return pI;
}
The code for the two shared files IFACE.H and GUIDS.CPP is given below. The file
IFACE.H declares all of the interfaces that the client and the component use.
IFACE.H
//
Every COM DLL server should implement and export a function called
DllGetClassObject. If a COM DLL doesn't provide DllGetClassObject, then the call to
CoGetClassObject or CoCreateInstance returns the error "Class Not Registered".
DllGetClassObject is an entry point which basically creates the class factory for the specific
class (CLSID). The component's CLSID is passed as an argument to CoGetClassObject,
which looks for the component in the registry. If it finds the server,
it loads the COM DLL server (calling DllMain of the COM server) that encapsulates
that specific component. After loading the DLL, CoGetClassObject tries to get the address
of DllGetClassObject (exported by the COM DLL) by calling the GetProcAddress function.
If that fails, then the COM SCM returns the error "Class Not Registered", because
there is no way to create a class factory for the requested component.
8.11.2 Sample code explanation
In the sample code, a single class object has been implemented for multiple COM
classes (COM components). There could instead be a one-to-one mapping between class
objects and the COM components supported. In that case, DllGetClassObject should create
a class object corresponding to the requested CLSID. This case is easy, as we can have
multiple if cases in DllGetClassObject and call the new operator for the class object
corresponding to the requested CLSID.
The purpose of the class object is to create another object (the COM component), so a
different class factory for each class would cause unnecessary duplication of code
and make the code less readable. The COM server can instead be designed so that a
single class object is able to support multiple COM components. This is what has
been implemented in the sample code.
This has been achieved by the use of the helper function (creator function), which
every COM class must support for its creation. There is a structure called FactoryInfo,
which maps the creation function with the corresponding CLSID for all the COM classes
that are exposed by the COM server.
struct FactoryInfo
{
const CLSID *pCLSID;
FPCOMPCREATOR pFunc;
};
When the client calls CoGetClassObject, the DllGetClassObject function is called with the
requested CLSID as an argument. The DllGetClassObject function looks into the global
array of FactoryInfo structures and traverses the array to fetch the address of the creator
function, which is mapped to the requested CLSID. The class factory class stores this
address in one of its data members, called pCreator, of FPCOMPCREATOR type. This
stored address of the creation function in the CFactory class is used in the CreateInstance
method of the IClassFactory interface. The address of the creator function is passed to the
CFactory at the time of its creation by passing an argument of FPCOMPCREATOR type in
the constructor of CFactory class.
// Code snippet.
// Traverse a list to find the helper function which corresponds to the requested CLSID.
int iCount;
for (iCount = 0; iCount < 2; iCount++)
{
if (*gFactoryData[iCount].pCLSID == clsid)
{
break;
}
}
CFactory *pFactory = new CFactory(gFactoryData[iCount].pFunc);
The above code is a part of the DllGetClassObject function. This function traverses
the global array of FactoryInfo structures and looks for the creation function's address
corresponding to the requested CLSID. Once it gets the address of the creator function, it
stops traversing the array and passes that address to the constructor of the CFactory class.
The CFactory stores this address of the creator function for further use. Once the
IClassFactory interface on the class object is returned to the client, the client can call the
CreateInstance method of IClassFactory to create an instance of the COM class. The
CreateInstance call is the place where the COM component is created and the requested
interface is returned on that newly created COM component's instance. The creator function,
whose address has been stored in CFactory's (the class object's) data member, is called in the
CreateInstance method and the requested interface is returned to the client.
// Code snippet.
typedef HRESULT (*FPCOMPCREATOR) (const IID&, void**);
class CFactory : public IClassFactory
{
public:
// Rest of the code has been removed from here to make it readable.
CFactory(FPCOMPCREATOR);
~CFactory();
private:
/* This is to store the address of the creator function of the
* COM component with the requested CLSID.
*/
FPCOMPCREATOR pCreator;
long m_cRef;
};
This is a call to the creator function in the CreateInstance method of the CFactory
class.
hResult = (*pCreator)(iid,ppv);
FPCOMPCREATOR is a synonym for a "pointer to a function which takes const IID&
and void** as arguments and returns an HRESULT".
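Under the assumptions of the sample code described above, the whole lookup pattern can be reduced to a portable miniature: CLSIDs are strings here instead of 128-bit GUIDs, reference counting and the IClassFactory interface are omitted, and all names are illustrative:

```cpp
#include <cstring>

// A creator function makes one kind of component; the factory
// only ever holds a pointer to one of these.
typedef void* (*FPCOMPCREATOR)();

// Maps a CLSID to the creator function for that class.
struct FactoryInfo {
    const char*   pCLSID;
    FPCOMPCREATOR pFunc;
};

// Two "COM classes" served by the same lookup machinery.
struct CompA { int id = 1; };
struct CompB { int id = 2; };
static void* CreateA() { return new CompA; }
static void* CreateB() { return new CompB; }

// The global array the DllGetClassObject role traverses.
static const FactoryInfo gFactoryData[] = {
    { "CLSID_A", CreateA },
    { "CLSID_B", CreateB },
};

// Find the creator for the requested CLSID; a null return plays
// the part of "Class Not Registered".
FPCOMPCREATOR FindCreator(const char* clsid)
{
    for (int i = 0; i < 2; i++)
        if (strcmp(gFactoryData[i].pCLSID, clsid) == 0)
            return gFactoryData[i].pFunc;
    return 0;
}
```

Adding a third component class requires only a new entry in gFactoryData, which is the code-duplication saving the text describes.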
The code has been commented to make it self-explanatory. The single class object for
multiple COM classes will help to understand the Class Factory concept in COM.
8.12 INTRODUCTION TO .NET PLATFORM
.NET is a collection of tools, technologies, and languages that all work together in a
framework to provide the solutions that are needed to easily build and deploy truly robust
enterprise applications. These .NET applications are also able to easily communicate with
one another and provide information and application logic, regardless of platforms and
languages.
The .NET platform is a development framework that provides a new Application
Programming Interface (API) to the services and APIs of the classic Windows operating
systems, while bringing together a number of technologies that emerged from Microsoft
during the late 1990s. These include COM+ Component Services, a commitment to XML
and object-oriented design, support for new web service protocols such as SOAP, WSDL
and UDDI, and a focus on the Internet, all integrated within the Distributed interNet
Applications (DNA) architecture.
The platform consists of three product groups:
A set of languages, including C# and VB, a set of development tools including
Visual Studio .Net, a class library for building web services and web and windows
applications, as well as the Common Language Runtime (CLR) to execute objects
built within this framework.
Two generations of .Net Enterprise servers
New .Net-enabled non-PC devices, from cell phones to game boxes
The .NET Framework sits on top of the operating system and the architecture is
given in Figure 8.19
[Figure 8.19: the .NET Framework, with the Common Language Runtime (debugging, exception handling, type checking, JIT compilers) sitting on top of the Windows platform]
The most important component of the .NET Framework is the CLR, which provides
the environment in which programs are executed. The CLR includes a virtual machine,
analogous in many ways to the Java Virtual Machine (JVM). The CLR
Activates objects
Performs security checks on objects
Lays out objects in memory
Executes objects
Garbage-collects the objects
The set of Framework Base Classes, the lowest level of the Framework Class Library
(FCL), supports input and output, string manipulation, security management, network
communication, thread management, text manipulation, reflection, collections
functionality, and more.
Above this level is a tier of classes that extend the base classes to support data
management and XML manipulation. The data classes support persistent management of
data that is maintained on backend databases. These classes include the Structured Query
Language (SQL) classes to let you manipulate persistent data stores through a standard
SQL interface. The .NET Framework also supports a number of classes to let us manipulate
XML data and perform XML searching and translations.
Extending the Framework Base classes and the data and XML classes is a tier of
classes aimed at building applications using three different technologies:
Web Services
Web Forms
Windows Forms.
Web services include a number of classes that support the development of lightweight
distributed components, which will work even in the face of firewalls and NAT software.
Because web services employ standard HTTP and SOAP as underlying communications
protocols, these components support plug and play across cyberspace. Web Forms and
Windows Forms allow us to apply Rapid Application Development (RAD) techniques to
building web and windows applications. Simply drag and drop controls onto your form,
double-click a control and write the code to respond to the associated event.
8.13 MARSHALING IN .NET
To do a recap, the process of moving an object across a boundary is called remoting.
Boundaries exist at various levels of abstraction in your program. The most obvious
boundary is between objects running on different machines. The process of preparing an
object to be remoted is called marshaling. On a single machine, objects might need to be
marshaled across context, app domain, or process boundaries.
A process is essentially a running application. If an object in a word processor wants
to interact with an object in a spreadsheet, they must communicate across process boundaries.
Processes are divided into application domains (often called app domains); these in turn are
divided into various contexts. App domains act like lightweight processes, and contexts
create boundaries that objects with similar rules can be contained within. At times, objects
will be marshaled across both context and app domain boundaries, as well as cross process
and machine boundaries. When an object is marshaled by value, it appears to be sent through
the wire from one computer to another. A sink is an object whose job is to enforce policy.
The Formatter makes sure the message is in the right format.
This section demonstrates how objects can be marshaled across various boundaries
using proxies and stubs. In addition, this section explains the role of formatters, channels,
and sinks, and how to apply these concepts to programming.
8.13.1 Application Domains
Each .NET application runs in its own process. If you have Word, Excel, and Visual
Studio open, you have three processes running. If you open Outlook, another process starts
up. Each process is subdivided into one or more application domains. An app domain acts
like a process but uses fewer resources.
App domains can be independently started and halted. They are secure, lightweight,
and versatile. An app domain can provide fault tolerance; if you start an object in a second
app domain and the object crashes, it will bring down the app domain but not your entire
program. Hence, web servers might use app domains for running users’ code, so that, if the
code has a problem, the web server can maintain operations.
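The fault-tolerance idea can be sketched as follows. This is a minimal illustration, not code from the text: the "PlugIn" domain name is hypothetical, and AppDomain.CreateDomain()/AppDomain.Unload() are .NET Framework APIs (removed in .NET Core and later).

```csharp
using System;

class PlugInHost
{
    static void Main()
    {
        // Isolate untrusted plug-in code in its own app domain.
        AppDomain pluginDomain = AppDomain.CreateDomain("PlugIn");
        try
        {
            // ... load and run the plug-in inside pluginDomain ...
        }
        finally
        {
            // Tearing down the plug-in's domain leaves the
            // host process (e.g., the web server) running.
            AppDomain.Unload(pluginDomain);
        }
    }
}
```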
App domains also support a variety of events, including AssemblyLoad, AssemblyResolve,
ProcessExit, and ResourceResolve, that are fired as assemblies are found, loaded, run,
and unloaded. Every process starts with a default app domain and can have additional app
domains as you create them; each app domain exists in exactly one process.
There may be times when a single domain is insufficient. For example, your program
may need to run a library written by another programmer. A second app domain can be
used to isolate the library in its own domain so that, if a method in the library crashes the
program, only the isolated domain is affected. For example, a web server might create a
new app domain for each plug-in application. This provides fault tolerance, so that if one
web application crashed, it would not bring down the web server. The other library might
also require a different security environment; creating a second app domain allows the
two security environments to coexist. Each app domain has its own security, and the app
domain serves as a security boundary.
App domains should be distinguished from threads. A Win32 thread exists in one app
domain at a time, and a thread can access (and report) the app domain in which it is
executing. App domains are used to isolate applications; within an app domain there might
be multiple threads operating at any given moment.
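For instance, a thread can report its current app domain via Thread.GetDomain(). A minimal sketch (this example is not from the text):

```csharp
using System;
using System.Threading;

class DomainReport
{
    static void Main()
    {
        // Thread.GetDomain() returns the AppDomain in which
        // the current thread is executing.
        Console.WriteLine(Thread.GetDomain().FriendlyName);
    }
}
```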
A new app domain can be created by calling the static method CreateDomain( ) on
the AppDomain class.
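The call itself is missing from the text. Based on the ad2 variable and the "Shape Domain" name that appear later in the example, it would look something like this (a sketch; AppDomain.CreateDomain() is a .NET Framework API):

```csharp
// Create a second app domain named "Shape Domain".
AppDomain ad2 = AppDomain.CreateDomain("Shape Domain");
Console.WriteLine(ad2.FriendlyName);
```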
This creates a new app domain with the name Shape Domain. You can check the
name of the domain you are working in with the property
System.AppDomain.CurrentDomain.FriendlyName. Once an AppDomain object has been
instantiated, instances of classes, interfaces, and so on can be created using its
CreateInstance( ) method, shown below:
public ObjectHandle CreateInstance(
    string assemblyName,
    string typeName,
    bool ignoreCase,
    BindingFlags bindingAttr,
    Binder binder,
    Object[] args,
    CultureInfo culture,
    Object[] activationAttributes,
    Evidence securityAttributes
);
To use it:
ObjectHandle oh = ad2.CreateInstance(
    "ProgCSharp",                                  // the assembly name
    "ProgCSharp.Shape",                            // the type name with namespace
    false,                                         // ignore case
    System.Reflection.BindingFlags.CreateInstance, // flag
    null,                                          // binder
    new object[] { 3, 5 },                         // constructor arguments
    null,                                          // culture
    null,                                          // activation attributes
    null );                                        // security attributes
The first parameter (ProgCSharp) is the name of the assembly, and the second
(ProgCSharp.Shape) is the name of the class. The class name must be fully qualified by
namespaces.
A binder is an object that enables dynamic binding of an assembly at runtime. Its job is
to let you pass in information about the object you want to create, to create that object
for you, and to bind your reference to that object. In the vast majority of cases, including
this example, you'll use the default binder, which is selected by passing in null. It is
possible to write your own binder, for example, one that checks your ID against special
permissions in a database and reroutes the binding to a different object based on your
identity or privileges.
Binding typically refers to attaching an object name to an object. Dynamic binding
refers to the ability to make that attachment when the program is running, as opposed to
when it is compiled. In this example, the Shape object is bound to the instance variable at
runtime, through the app domain’s CreateInstance( ) method. Binding flags help the binder
fine-tune its behavior at binding time. In this example, use the BindingFlags enumeration
value CreateInstance. The default binder normally looks at public classes only for binding,
but you can add flags to have it look at private classes if you have the right permissions.
When you bind an assembly at runtime, you don't specify the assembly to load at compile
time; rather, you determine which assembly you want programmatically and bind your
variable to that assembly while the program is running.
The constructor you’re calling takes two integers, which must be put into an object
array (new object[ ] {3,5}). You can send null for the culture because you’ll use the default
(en) culture and won’t specify activation attributes or security attributes.
CreateInstance( ) returns an object handle: a type that is used to pass an object (in a
wrapped state) between multiple app domains without loading the metadata for the wrapped
object in each app domain through which the ObjectHandle travels. You can get the actual
object itself by calling Unwrap( ) on the object handle and casting the resulting object to
the actual type, in this case, Shape.
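The unwrap step can be sketched in isolation. Since the example's Shape class is not fully listed here, this self-contained sketch uses a framework type (StringBuilder) instead; Activator.CreateInstance(assemblyName, typeName) returns an ObjectHandle just as AppDomain.CreateInstance does, and this form runs on the .NET Framework:

```csharp
using System;
using System.Runtime.Remoting;
using System.Text;

class UnwrapDemo
{
    static void Main()
    {
        // Create an instance by assembly and type name; the result
        // comes back wrapped in an ObjectHandle.
        ObjectHandle oh = Activator.CreateInstance(
            "mscorlib", "System.Text.StringBuilder");

        // Unwrap() returns a plain System.Object; cast it to the
        // actual type before using it.
        StringBuilder sb = (StringBuilder)oh.Unwrap();
        sb.Append("unwrapped");
        Console.WriteLine(sb.ToString());
    }
}
```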
The CreateInstance( ) method provides an opportunity to create the object in a new
app domain. If you were to create the object with new, it would be created in the current
app domain.
You've created a Shape object in the Shape domain, but you're accessing it through a
Shape reference in the original domain. To access the Shape object in another domain, you
must marshal the object across the domain boundary.
localPoint.Y = 600;
Ask that Point to print its coordinates, and then ask the Shape to print its coordinates.
Will the change to the local Point object be reflected in the Shape? That depends on
how the Point object is marshaled. If it is marshaled by value, the local Point object is a
copy, and the Shape object is unaffected by changing the localPoint variable's values.
If, on the other hand, you change the Point object to marshal by reference, you have a
proxy to the actual upperLeft variable, and changing that changes the Shape.
The example given below illustrates this point. Build the example in a project named
ProgCSharp; when Main( ) instantiates the Shape object, the method looks for
ProgCSharp.exe.
#region Using directives
using System;
using System.Collections.Generic;
using System.Runtime.Remoting;
using System.Reflection;
using System.Text;
#endregion
namespace Marshaling
{
    // for marshal by reference, comment out
    // the attribute and uncomment the base class
    [Serializable]
    public class Point // : MarshalByRefObject
    {
        private int x;
        private int y;
        public Point(int x, int y)
        {
            Console.WriteLine("[{0}] {1}",
                System.AppDomain.CurrentDomain.FriendlyName,
                "Point constructor");
            this.x = x;
            this.y = y;
        }
        public int X
        {
            get
            {
                Console.WriteLine("[{0}] {1}",
                    System.AppDomain.CurrentDomain.FriendlyName,
                    "Point x.get");
                return this.x;
            }
            set
            {
                Console.WriteLine("[{0}] {1}",
                    System.AppDomain.CurrentDomain.FriendlyName,
                    "Point x.set");
                this.x = value;
            }
        }
        public int Y
        {
            get
            {
                Console.WriteLine("[{0}] {1}",
                    System.AppDomain.CurrentDomain.FriendlyName,
                    "Point y.get");
                return this.y;
            }
            set
            {
                Console.WriteLine("[{0}] {1}",
                    System.AppDomain.CurrentDomain.FriendlyName,
                    "Point y.set");
                this.y = value;
            }
        }
    }
    // ... (the Shape class and the beginning of Main are missing
    // from the source; Main ends as follows) ...
            Console.WriteLine("[{0}] localPoint: {1}, {2}",
                System.AppDomain.CurrentDomain.FriendlyName,
                localPoint.X, localPoint.Y);
            s1.ShowUpperLeft();   // show the value once more
        }
    }
}
Output:
[Marshaling.vshost.exe] Entered Main
[Shape Domain] Shape constructor
[Shape Domain] Point constructor
[Shape Domain] Point x.get
[Shape Domain] Point y.get
[Shape Domain] Upper left : 3,5
[Marshaling.vshost.exe] Point x.set
[Marshaling.vshost.exe] Point y.set
[Marshaling.vshost.exe] Point x.get
[Marshaling.vshost.exe] Point y.get
[Marshaling.vshost.exe] localPoint: 500, 600
[Shape Domain] Point x.get
[Shape Domain] Point y.get
[Shape Domain] Upper left : 3,5
The output reveals that the Shape and Point constructors run in the Shape domain, as
does the access of the value of the Point object in the Shape.
The property is set in the original app domain, setting the local copy of the Point object
to 500 and 600. Because Point is marshaled by value, however, you are setting a copy of
the Point object. When you ask the Shape to display its upperLeft member variable, it is
unchanged.
Now run the program again using marshaling by reference. To do so, comment out the
[Serializable] attribute and uncomment the base class:
//[Serializable]
public class Point : MarshalByRefObject
The output is quite different:
[Marshaling.vshost.exe] Entered Main
[Shape Domain] Shape constructor
[Shape Domain] Point constructor
[Shape Domain] Point x.get
[Shape Domain] Point y.get
[Shape Domain] Upper left : 3,5
                                 DCOM                    CORBA
Top layer: Basic programming architecture
  Common base class              IUnknown                CORBA::Object
  Object class identifier        CLSID                   interface name
  Interface identifier           IID                     interface name
  Client-side object activation  CoCreateInstance()      a method call/bind()
  Object handle                  interface pointer       object reference
Middle layer: Remoting architecture
  Name-to-implementation mapping Registry                Implementation Repository
DCOM:  Supports multiple interfaces for objects and uses the QueryInterface()
       method to navigate among interfaces. This means that a client proxy
       dynamically loads multiple server stubs in the remoting layer, depending
       on the number of interfaces being used.
CORBA: Supports multiple inheritance at the interface level.

DCOM:  Every object implements IUnknown.
CORBA: Every interface inherits from CORBA::Object.

DCOM:  Uniquely identifies a remote server object through its interface pointer,
       which serves as the object handle at run-time.
CORBA: Uniquely identifies remote server objects through object references
       (objref), which serve as the object handle at run-time. These object
       references can be externalized into strings, which can then be converted
       back into an objref.

DCOM:  Uniquely identifies an interface using the concept of Interface IDs (IID)
       and uniquely identifies a named implementation of the server object using
       the concept of Class IDs (CLSID), the mapping of which is found in the
       registry.
CORBA: Uniquely identifies an interface using the interface name and uniquely
       identifies a named implementation of the server object by its mapping to
       a name in the Implementation Repository.

DCOM:  The remote server object reference generation is performed on the wire
       protocol by the Object Exporter.
CORBA: The remote server object reference generation is performed on the wire
       protocol by the Object Adapter.

DCOM:  Tasks like object registration, skeleton instantiation, etc. are either
       explicitly performed by the server program or handled dynamically by the
       COM run-time system.
CORBA: The constructor implicitly performs common tasks like object
       registration, skeleton instantiation, etc.

DCOM:  Uses the Object Remote Procedure Call (ORPC) as its underlying remoting
       protocol.
CORBA: Uses the Internet Inter-ORB Protocol (IIOP) as its underlying remoting
       protocol.

DCOM:  When a client object needs to activate a server object, it can do a
       CoCreateInstance() or use other ways to get a server's interface pointer.
CORBA: When a client object needs to activate a server object, it binds to a
       naming or a trader service or uses other ways to get a server reference.

DCOM:  The object handle that the client uses is the interface pointer.
CORBA: The object handle that the client uses is the object reference.

DCOM:  The mapping of an object name to its implementation is handled by the
       Registry.
CORBA: The mapping of an object name to its implementation is handled by the
       Implementation Repository.

DCOM:  The type information for methods is held in the Type Library.
CORBA: The type information for methods is held in the Interface Repository.
8.18 CONCLUSION
This unit described the Component Object Model (COM), a software architecture
that allows components made by different software vendors to be combined into a
variety of applications. COM defines a standard for component interoperability, is not
dependent on any particular programming language, is available on multiple platforms, and is
extensible. COM allows applications and systems to be built from components supplied by
different software vendors. COM is the underlying architecture that forms the foundation
for higher-level software services, like those provided by OLE. OLE services span various
aspects of component software, including compound documents, custom controls, inter-
application scripting, data transfer, and other software interactions.
HAVE YOU UNDERSTOOD QUESTIONS
1. What is COM architecture?
2. What are the benefits of COM?
3. How does COM support a binary standard?
4. What are interfaces? How are they used in COM?
5. What is IUnknown? What methods are provided by IUnknown?
6. What are the purposes of AddRef, Release and QueryInterface functions?
7. How can you create an instance of the object in COM?
8. What is marshalling?
9. What are marshalling by value and marshalling by reference? What is the difference?
10. What’s the difference between COM and DCOM?
11. What is the use of QueryInterface?
12. Explain the RPC and LPC mechanisms.
13. What is the use of IUnknown Interface in COM?
14. What is Remoting?
SUMMARY
The Component Object Model (COM):
Defines a binary standard for component interoperability
Is programming-language-independent
Is provided on multiple platforms
Provides for robust evolution of component-based applications and systems
Is extensible by developers in a consistent manner
Uses a single programming model for components to communicate within the same
process, and also across process and network boundaries
Allows for shared memory management between components
Provides rich error and status reporting
Allows dynamic loading and unloading of components
EXERCISES
Part I
1. What is IUnknown? What methods are provided by IUnknown?
Part II
6. Explain QueryInterface in detail. What should QueryInterface do if the
requested interface is not found?
7. What is a COM interface? Discuss the attributes of the interface.
8. Discuss the advantages and disadvantages of COM architecture
9. Explain the terms proxy and stub with reference to COM. Discuss how they
support remote method invocation.
10. What is marshalling? Explain the difference between marshalling by value and by
reference.
Part III
11. Build a COM object and a MFC client to access that COM object.
MFC Client: Write an MFC client that uses your COM object.
12. Creation of the project
REFERENCES
1. Microsoft web site and MSDN web site
2. Jason Pritchard, COM and CORBA Side by Side, Addison-Wesley